SCSI misc on 20190507

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
 "This is mostly update of the usual drivers: qla2xxx, qedf, smartpqi,
  hpsa, lpfc, ufs, mpt3sas, ibmvfc and hisi_sas. Plus a number of minor
  changes, spelling fixes and other trivia"

Signed-off-by: James E.J. Bottomley <jejb@linux.ibm.com>

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (298 commits)
  scsi: qla2xxx: Avoid that lockdep complains about unsafe locking in tcm_qla2xxx_close_session()
  scsi: qla2xxx: Avoid that qlt_send_resp_ctio() corrupts memory
  scsi: qla2xxx: Fix hardirq-unsafe locking
  scsi: qla2xxx: Complain loudly about reference count underflow
  scsi: qla2xxx: Use __le64 instead of uint32_t[2] for sending DMA addresses to firmware
  scsi: qla2xxx: Introduce the dsd32 and dsd64 data structures
  scsi: qla2xxx: Check the size of firmware data structures at compile time
  scsi: qla2xxx: Pass little-endian values to the firmware
  scsi: qla2xxx: Fix race conditions in the code for aborting SCSI commands
  scsi: qla2xxx: Use an on-stack completion in qla24xx_control_vp()
  scsi: qla2xxx: Make qla24xx_async_abort_cmd() static
  scsi: qla2xxx: Remove unnecessary locking from the target code
  scsi: qla2xxx: Remove qla_tgt_cmd.released
  scsi: qla2xxx: Complain if a command is released that is owned by the firmware
  scsi: qla2xxx: target: Fix offline port handling and host reset handling
  scsi: qla2xxx: Fix abort handling in tcm_qla2xxx_write_pending()
  scsi: qla2xxx: Fix error handling in qlt_alloc_qfull_cmd()
  scsi: qla2xxx: Simplify qlt_send_term_imm_notif()
  scsi: qla2xxx: Fix use-after-free issues in qla2xxx_qpair_sp_free_dma()
  scsi: qla2xxx: Fix a qla24xx_enable_msix() error path
  ...
commit d1cd7c85f9
@@ -5,8 +5,9 @@ Each UFS controller instance should have its own node.
 Please see the ufshcd-pltfrm.txt for a list of all available properties.
 
 Required properties:
-- compatible        : Compatible list, contains the following controller:
-                      "cdns,ufshc"
+- compatible        : Compatible list, contains one of the following controllers:
+                      "cdns,ufshc" - Generic CDNS HCI,
+                      "cdns,ufshc-m31-16nm" - CDNS UFS HC + M31 16nm PHY
                       complemented with the JEDEC version:
                       "jedec,ufs-2.0"
 
@@ -0,0 +1,43 @@
+* Mediatek Universal Flash Storage (UFS) Host Controller
+
+UFS nodes are defined to describe on-chip UFS hardware macro.
+Each UFS Host Controller should have its own node.
+
+To bind UFS PHY with UFS host controller, the controller node should
+contain a phandle reference to UFS M-PHY node.
+
+Required properties for UFS nodes:
+- compatible        : Compatible list, contains the following controller:
+                      "mediatek,mt8183-ufshci" for MediaTek UFS host controller
+                      present on MT81xx chipsets.
+- reg               : Address and length of the UFS register set.
+- phys              : phandle to m-phy.
+- clocks            : List of phandle and clock specifier pairs.
+- clock-names       : List of clock input name strings sorted in the same
+                      order as the clocks property. "ufs" is mandatory.
+                      "ufs": ufshci core control clock.
+- freq-table-hz     : Array of <min max> operating frequencies stored in the same
+                      order as the clocks property. If this property is not
+                      defined or a value in the array is "0" then it is assumed
+                      that the frequency is set by the parent clock or a
+                      fixed rate clock source.
+- vcc-supply        : phandle to VCC supply regulator node.
+
+Example:
+
+	ufsphy: phy@11fa0000 {
+		...
+	};
+
+	ufshci@11270000 {
+		compatible = "mediatek,mt8183-ufshci";
+		reg = <0 0x11270000 0 0x2300>;
+		interrupts = <GIC_SPI 104 IRQ_TYPE_LEVEL_LOW>;
+		phys = <&ufsphy>;
+
+		clocks = <&infracfg_ao INFRACFG_AO_UFS_CG>;
+		clock-names = "ufs";
+		freq-table-hz = <0 0>;
+
+		vcc-supply = <&mt_pmic_vemc_ldo_reg>;
+	};
@@ -31,7 +31,6 @@ Optional properties:
 - vcc-max-microamp  : specifies max. load that can be drawn from vcc supply
 - vccq-max-microamp : specifies max. load that can be drawn from vccq supply
 - vccq2-max-microamp : specifies max. load that can be drawn from vccq2 supply
 - <name>-fixed-regulator : boolean property specifying that <name>-supply is a fixed regulator
 
 - clocks            : List of phandle and clock specifier pairs
 - clock-names       : List of clock input name strings sorted in the same
@@ -65,7 +64,6 @@ Example:
 		interrupts = <0 28 0>;
 
 		vdd-hba-supply = <&xxx_reg0>;
 		vdd-hba-fixed-regulator;
 		vcc-supply = <&xxx_reg1>;
-		vcc-supply-1p8;
 		vccq-supply = <&xxx_reg2>;
@@ -16057,6 +16057,13 @@ L:	linux-scsi@vger.kernel.org
 S:	Supported
 F:	drivers/scsi/ufs/*dwc*
 
+UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER MEDIATEK HOOKS
+M:	Stanley Chu <stanley.chu@mediatek.com>
+L:	linux-scsi@vger.kernel.org
+L:	linux-mediatek@lists.infradead.org (moderated for non-subscribers)
+S:	Maintained
+F:	drivers/scsi/ufs/ufs-mediatek*
+
 UNSORTED BLOCK IMAGES (UBI)
 M:	Artem Bityutskiy <dedekind1@gmail.com>
 M:	Richard Weinberger <richard@nod.at>
@@ -642,7 +642,7 @@ mptbase_reply(MPT_ADAPTER *ioc, MPT_FRAME_HDR *req, MPT_FRAME_HDR *reply)
 			freereq = 0;
 		if (event != MPI_EVENT_EVENT_CHANGE)
 			break;
-		/* else: fall through */
+		/* fall through */
 	case MPI_FUNCTION_CONFIG:
 	case MPI_FUNCTION_SAS_IO_UNIT_CONTROL:
 		ioc->mptbase_cmds.status |= MPT_MGMT_STATUS_COMMAND_GOOD;
@@ -565,7 +565,7 @@ mptctl_event_process(MPT_ADAPTER *ioc, EventNotificationReply_t *pEvReply)
 	 * TODO - this define is not in MPI spec yet,
 	 * but they plan to set it to 0x21
 	 */
-	if (event == 0x21 ) {
+	if (event == 0x21) {
 		ioc->aen_event_read_flag=1;
 		dctlprintk(ioc, printk(MYIOC_s_DEBUG_FMT "Raised SIGIO to application\n",
 		    ioc->name));
@@ -2928,27 +2928,27 @@ mptsas_exp_repmanufacture_info(MPT_ADAPTER *ioc,
 	if (ioc->sas_mgmt.status & MPT_MGMT_STATUS_RF_VALID) {
 		u8 *tmp;
 
-	smprep = (SmpPassthroughReply_t *)ioc->sas_mgmt.reply;
-	if (le16_to_cpu(smprep->ResponseDataLength) !=
-	    sizeof(struct rep_manu_reply))
+		smprep = (SmpPassthroughReply_t *)ioc->sas_mgmt.reply;
+		if (le16_to_cpu(smprep->ResponseDataLength) !=
+		    sizeof(struct rep_manu_reply))
 			goto out_free;
 
-	manufacture_reply = data_out + sizeof(struct rep_manu_request);
-	strncpy(edev->vendor_id, manufacture_reply->vendor_id,
-		SAS_EXPANDER_VENDOR_ID_LEN);
-	strncpy(edev->product_id, manufacture_reply->product_id,
-		SAS_EXPANDER_PRODUCT_ID_LEN);
-	strncpy(edev->product_rev, manufacture_reply->product_rev,
-		SAS_EXPANDER_PRODUCT_REV_LEN);
-	edev->level = manufacture_reply->sas_format;
-	if (manufacture_reply->sas_format) {
-		strncpy(edev->component_vendor_id,
-			manufacture_reply->component_vendor_id,
+		manufacture_reply = data_out + sizeof(struct rep_manu_request);
+		strncpy(edev->vendor_id, manufacture_reply->vendor_id,
+			SAS_EXPANDER_VENDOR_ID_LEN);
+		strncpy(edev->product_id, manufacture_reply->product_id,
+			SAS_EXPANDER_PRODUCT_ID_LEN);
+		strncpy(edev->product_rev, manufacture_reply->product_rev,
+			SAS_EXPANDER_PRODUCT_REV_LEN);
+		edev->level = manufacture_reply->sas_format;
+		if (manufacture_reply->sas_format) {
+			strncpy(edev->component_vendor_id,
+				manufacture_reply->component_vendor_id,
 				SAS_EXPANDER_COMPONENT_VENDOR_ID_LEN);
-		tmp = (u8 *)&manufacture_reply->component_id;
-		edev->component_id = tmp[0] << 8 | tmp[1];
-		edev->component_revision_id =
-			manufacture_reply->component_revision_id;
+			tmp = (u8 *)&manufacture_reply->component_id;
+			edev->component_id = tmp[0] << 8 | tmp[1];
+			edev->component_revision_id =
+				manufacture_reply->component_revision_id;
 		}
 	} else {
 		printk(MYIOC_s_ERR_FMT
@@ -786,6 +786,7 @@ mptscsih_io_done(MPT_ADAPTER *ioc, MPT_FRAME_HDR *mf, MPT_FRAME_HDR *mr)
 			/*
 			 * Allow non-SAS & non-NEXUS_LOSS to drop into below code
 			 */
+			/* Fall through */
 
 		case MPI_IOCSTATUS_SCSI_TASK_TERMINATED:	/* 0x0048 */
 			/* Linux handles an unsolicited DID_RESET better
@@ -882,6 +883,7 @@ mptscsih_io_done(MPT_ADAPTER *ioc, MPT_FRAME_HDR *mf, MPT_FRAME_HDR *mr)
 
 		case MPI_IOCSTATUS_SCSI_DATA_OVERRUN:		/* 0x0044 */
 			scsi_set_resid(sc, 0);
+			/* Fall through */
 		case MPI_IOCSTATUS_SCSI_RECOVERED_ERROR:	/* 0x0040 */
 		case MPI_IOCSTATUS_SUCCESS:			/* 0x0000 */
 			sc->result = (DID_OK << 16) | scsi_status;
@@ -1934,7 +1936,7 @@ mptscsih_host_reset(struct scsi_cmnd *SCpnt)
 	/*  If our attempts to reset the host failed, then return a failed
 	 *  status.  The host will be taken off line by the SCSI mid-layer.
 	 */
-    retval = mpt_Soft_Hard_ResetHandler(ioc, CAN_SLEEP);
+	retval = mpt_Soft_Hard_ResetHandler(ioc, CAN_SLEEP);
 	if (retval < 0)
 		status = FAILED;
 	else
@@ -258,8 +258,6 @@ mptspi_writeIOCPage4(MPT_SCSI_HOST *hd, u8 channel , u8 id)
 	IOCPage4_t		*IOCPage4Ptr;
 	MPT_FRAME_HDR		*mf;
 	dma_addr_t		 dataDma;
-	u16			 req_idx;
-	u32			 frameOffset;
 	u32			 flagsLength;
 	int			 ii;
 
@@ -276,9 +274,6 @@ mptspi_writeIOCPage4(MPT_SCSI_HOST *hd, u8 channel , u8 id)
 	 */
 	pReq = (Config_t *)mf;
 
-	req_idx = le16_to_cpu(mf->u.frame.hwhdr.msgctxu.fld.req_idx);
-	frameOffset = ioc->req_sz - sizeof(IOCPage4_t);
-
 	/* Complete the request frame (same for all requests).
 	 */
 	pReq->Action = MPI_CONFIG_ACTION_PAGE_WRITE_CURRENT;
@@ -14,7 +14,7 @@
 #include "fabrics.h"
 #include <linux/nvme-fc-driver.h>
 #include <linux/nvme-fc.h>
 
 #include <scsi/scsi_transport_fc.h>
 
 /* *************************** Data Structures/Defines ****************** */
@@ -272,9 +272,8 @@ mrs[] = {
 static void NCR5380_print(struct Scsi_Host *instance)
 {
 	struct NCR5380_hostdata *hostdata = shost_priv(instance);
-	unsigned char status, data, basr, mr, icr, i;
+	unsigned char status, basr, mr, icr, i;
 
-	data = NCR5380_read(CURRENT_SCSI_DATA_REG);
 	status = NCR5380_read(STATUS_REG);
 	mr = NCR5380_read(MODE_REG);
 	icr = NCR5380_read(INITIATOR_COMMAND_REG);
@@ -1933,13 +1932,13 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
 				if (!hostdata->connected)
 					return;
 
-				/* Fall through to reject message */
-
+				/* Reject message */
+				/* Fall through */
+			default:
 				/*
 				 * If we get something weird that we aren't expecting,
-				 * reject it.
+				 * log it.
 				 */
-			default:
 				if (tmp == EXTENDED_MESSAGE)
 					scmd_printk(KERN_INFO, cmd,
 						    "rejecting unknown extended message code %02x, length %d\n",
@@ -3,7 +3,7 @@
 # $Id: //depot/linux-aic79xx-2.5.0/drivers/scsi/aic7xxx/Kconfig.aic7xxx#7 $
 #
 config SCSI_AIC7XXX
-	tristate "Adaptec AIC7xxx Fast -> U160 support (New Driver)"
+	tristate "Adaptec AIC7xxx Fast -> U160 support"
 	depends on (PCI || EISA) && SCSI
 	select SCSI_SPI_ATTRS
 	---help---
@@ -1666,7 +1666,7 @@ ahc_handle_scsiint(struct ahc_softc *ahc, u_int intstat)
 				printk("\tCRC Value Mismatch\n");
 			if ((sstat2 & CRCENDERR) != 0)
 				printk("\tNo terminal CRC packet "
-				       "recevied\n");
+				       "received\n");
 			if ((sstat2 & CRCREQERR) != 0)
 				printk("\tIllegal CRC packet "
 				       "request\n");
@@ -194,12 +194,11 @@ static irqreturn_t atp870u_intr_handle(int irq, void *dev_id)
 			((unsigned char *) &adrcnt)[2] = atp_readb_io(dev, c, 0x12);
 			((unsigned char *) &adrcnt)[1] = atp_readb_io(dev, c, 0x13);
 			((unsigned char *) &adrcnt)[0] = atp_readb_io(dev, c, 0x14);
-			if (dev->id[c][target_id].last_len != adrcnt)
-			{
-				k = dev->id[c][target_id].last_len;
+			if (dev->id[c][target_id].last_len != adrcnt) {
+				k = dev->id[c][target_id].last_len;
 				k -= adrcnt;
 				dev->id[c][target_id].tran_len = k;
-			dev->id[c][target_id].last_len = adrcnt;
+				dev->id[c][target_id].last_len = adrcnt;
 			}
 #ifdef ED_DBGP
 			printk("dev->id[c][target_id].last_len = %d dev->id[c][target_id].tran_len = %d\n",dev->id[c][target_id].last_len,dev->id[c][target_id].tran_len);
@@ -963,7 +963,7 @@ int beiscsi_cmd_q_destroy(struct be_ctrl_info *ctrl, struct be_queue_info *q,
  * @ctrl: ptr to ctrl_info
  * @cq: Completion Queue
  * @dq: Default Queue
- * @lenght: ring size
+ * @length: ring size
  * @entry_size: size of each entry in DEFQ
  * @is_header: Header or Data DEFQ
  * @ulp_num: Bind to which ULP
@@ -1083,7 +1083,6 @@ int bnx2fc_eh_device_reset(struct scsi_cmnd *sc_cmd)
 static int bnx2fc_abts_cleanup(struct bnx2fc_cmd *io_req)
 {
 	struct bnx2fc_rport *tgt = io_req->tgt;
-	int rc = SUCCESS;
 	unsigned int time_left;
 
 	io_req->wait_for_comp = 1;
|
|||
kref_put(&io_req->refcount, bnx2fc_cmd_release);
|
||||
|
||||
spin_lock_bh(&tgt->tgt_lock);
|
||||
return rc;
|
||||
return SUCCESS;
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
|
@@ -474,13 +474,39 @@ csio_reduce_sqsets(struct csio_hw *hw, int cnt)
 	csio_dbg(hw, "Reduced sqsets to %d\n", hw->num_sqsets);
 }
 
+static void csio_calc_sets(struct irq_affinity *affd, unsigned int nvecs)
+{
+	struct csio_hw *hw = affd->priv;
+	u8 i;
+
+	if (!nvecs)
+		return;
+
+	if (nvecs < hw->num_pports) {
+		affd->nr_sets = 1;
+		affd->set_size[0] = nvecs;
+		return;
+	}
+
+	affd->nr_sets = hw->num_pports;
+	for (i = 0; i < hw->num_pports; i++)
+		affd->set_size[i] = nvecs / hw->num_pports;
+}
+
 static int
 csio_enable_msix(struct csio_hw *hw)
 {
 	int i, j, k, n, min, cnt;
 	int extra = CSIO_EXTRA_VECS;
 	struct csio_scsi_cpu_info *info;
-	struct irq_affinity desc = { .pre_vectors = 2 };
+	struct irq_affinity desc = {
+		.pre_vectors = CSIO_EXTRA_VECS,
+		.calc_sets = csio_calc_sets,
+		.priv = hw,
+	};
+
+	if (hw->num_pports > IRQ_AFFINITY_MAX_SETS)
+		return -ENOSPC;
 
 	min = hw->num_pports + extra;
 	cnt = hw->num_sqsets + extra;
@@ -979,14 +979,17 @@ static int init_act_open(struct cxgbi_sock *csk)
 	csk->atid = cxgb3_alloc_atid(t3dev, &t3_client, csk);
 	if (csk->atid < 0) {
 		pr_err("NO atid available.\n");
-		goto rel_resource;
+		return -EINVAL;
 	}
 	cxgbi_sock_set_flag(csk, CTPF_HAS_ATID);
 	cxgbi_sock_get(csk);
 
 	skb = alloc_wr(sizeof(struct cpl_act_open_req), 0, GFP_KERNEL);
-	if (!skb)
-		goto rel_resource;
+	if (!skb) {
+		cxgb3_free_atid(t3dev, csk->atid);
+		cxgbi_sock_put(csk);
+		return -ENOMEM;
+	}
 	skb->sk = (struct sock *)csk;
 	set_arp_failure_handler(skb, act_open_arp_failure);
 	csk->snd_win = cxgb3i_snd_win;
@@ -1007,11 +1010,6 @@ static int init_act_open(struct cxgbi_sock *csk)
 	cxgbi_sock_set_state(csk, CTP_ACTIVE_OPEN);
 	send_act_open_req(csk, skb, csk->l2t);
 	return 0;
-
-rel_resource:
-	if (skb)
-		__kfree_skb(skb);
-	return -EINVAL;
 }
 
 cxgb3_cpl_handler_func cxgb3i_cpl_handlers[NUM_CPL_CMDS] = {
@@ -60,7 +60,7 @@ MODULE_PARM_DESC(dbg_level, "Debug flag (default=0)");
 #define CXGB4I_DEFAULT_10G_RCV_WIN (256 * 1024)
 static int cxgb4i_rcv_win = -1;
 module_param(cxgb4i_rcv_win, int, 0644);
-MODULE_PARM_DESC(cxgb4i_rcv_win, "TCP reveive window in bytes");
+MODULE_PARM_DESC(cxgb4i_rcv_win, "TCP receive window in bytes");
 
 #define CXGB4I_DEFAULT_10G_SND_WIN (128 * 1024)
 static int cxgb4i_snd_win = -1;
@@ -282,7 +282,6 @@ struct cxgbi_device *cxgbi_device_find_by_netdev_rcu(struct net_device *ndev,
 }
 EXPORT_SYMBOL_GPL(cxgbi_device_find_by_netdev_rcu);
 
-#if IS_ENABLED(CONFIG_IPV6)
 static struct cxgbi_device *cxgbi_device_find_by_mac(struct net_device *ndev,
 						     int *port)
 {
@@ -315,7 +314,6 @@ static struct cxgbi_device *cxgbi_device_find_by_mac(struct net_device *ndev,
 			  ndev, ndev->name);
 	return NULL;
 }
-#endif
 
 void cxgbi_hbas_remove(struct cxgbi_device *cdev)
 {
@@ -653,6 +651,8 @@ cxgbi_check_route(struct sockaddr *dst_addr, int ifindex)
 	}
 
 	cdev = cxgbi_device_find_by_netdev(ndev, &port);
+	if (!cdev)
+		cdev = cxgbi_device_find_by_mac(ndev, &port);
 	if (!cdev) {
 		pr_info("dst %pI4, %s, NOT cxgbi device.\n",
 			&daddr->sin_addr.s_addr, ndev->name);
@@ -2310,7 +2310,6 @@ int cxgbi_get_ep_param(struct iscsi_endpoint *ep, enum iscsi_param param,
 {
 	struct cxgbi_endpoint *cep = ep->dd_data;
 	struct cxgbi_sock *csk;
-	int len;
 
 	log_debug(1 << CXGBI_DBG_ISCSI,
 		  "cls_conn 0x%p, param %d.\n", ep, param);
@@ -2328,9 +2327,9 @@ int cxgbi_get_ep_param(struct iscsi_endpoint *ep, enum iscsi_param param,
 		return iscsi_conn_get_addr_param((struct sockaddr_storage *)
 						 &csk->daddr, param, buf);
 	default:
-		return -ENOSYS;
+		break;
 	}
-	return len;
+	return -ENOSYS;
 }
 EXPORT_SYMBOL_GPL(cxgbi_get_ep_param);
 
@@ -2563,13 +2562,9 @@ struct iscsi_endpoint *cxgbi_ep_connect(struct Scsi_Host *shost,
 			pr_info("shost 0x%p, priv NULL.\n", shost);
 			goto err_out;
 		}
 
-		rtnl_lock();
-		if (!vlan_uses_dev(hba->ndev))
-			ifindex = hba->ndev->ifindex;
-		rtnl_unlock();
 	}
 
 check_route:
 	if (dst_addr->sa_family == AF_INET) {
 		csk = cxgbi_check_route(dst_addr, ifindex);
 #if IS_ENABLED(CONFIG_IPV6)
@@ -2590,6 +2585,13 @@ struct iscsi_endpoint *cxgbi_ep_connect(struct Scsi_Host *shost,
 	if (!hba)
 		hba = csk->cdev->hbas[csk->port_id];
 	else if (hba != csk->cdev->hbas[csk->port_id]) {
+		if (ifindex != hba->ndev->ifindex) {
+			cxgbi_sock_put(csk);
+			cxgbi_sock_closed(csk);
+			ifindex = hba->ndev->ifindex;
+			goto check_route;
+		}
+
 		pr_info("Could not connect through requested host %u"
 			"hba 0x%p != 0x%p (%u).\n",
 			shost->host_no, hba,
@@ -835,8 +835,8 @@ static void adpt_i2o_sys_shutdown(void)
 	adpt_hba *pHba, *pNext;
 	struct adpt_i2o_post_wait_data *p1, *old;
 
-	printk(KERN_INFO"Shutting down Adaptec I2O controllers.\n");
-	printk(KERN_INFO"   This could take a few minutes if there are many devices attached\n");
+	printk(KERN_INFO "Shutting down Adaptec I2O controllers.\n");
+	printk(KERN_INFO "   This could take a few minutes if there are many devices attached\n");
 	/* Delete all IOPs from the controller chain */
 	/* They should have already been released by the
 	 * scsi-core
@@ -859,7 +859,7 @@ static void adpt_i2o_sys_shutdown(void)
 //	spin_unlock_irqrestore(&adpt_post_wait_lock, flags);
 	adpt_post_wait_queue = NULL;
 
-	 printk(KERN_INFO "Adaptec I2O controllers down.\n");
+	printk(KERN_INFO "Adaptec I2O controllers down.\n");
 }
 
 static int adpt_install_hba(struct scsi_host_template* sht, struct pci_dev* pDev)
@@ -3390,7 +3390,7 @@ static int adpt_i2o_issue_params(int cmd, adpt_hba* pHba, int tid,
 		return -((res[1] >> 16) & 0xFF); /* -BlockStatus */
 	}
 
-	 return 4 + ((res[1] & 0x0000FFFF) << 2); /* bytes used in resblk */
+	return 4 + ((res[1] & 0x0000FFFF) << 2); /* bytes used in resblk */
 }
 
 
@@ -3463,8 +3463,8 @@ static int adpt_i2o_enable_hba(adpt_hba* pHba)
 
 static int adpt_i2o_systab_send(adpt_hba* pHba)
 {
-	 u32 msg[12];
-	 int ret;
+	u32 msg[12];
+	int ret;
 
 	msg[0] = I2O_MESSAGE_SIZE(12) | SGL_OFFSET_6;
 	msg[1] = I2O_CMD_SYS_TAB_SET<<24 | HOST_TID<<12 | ADAPTER_TID;
@@ -3697,8 +3697,9 @@ static int ioc_general(void __user *arg, char *cmnd)
 
 	rval = 0;
 out_free_buf:
-	dma_free_coherent(&ha->pdev->dev, gen.data_len + gen.sense_len, buf,
-			  paddr);
+	if (buf)
+		dma_free_coherent(&ha->pdev->dev, gen.data_len + gen.sense_len,
+				  buf, paddr);
 	return rval;
 }
 
@@ -170,6 +170,7 @@ struct hisi_sas_phy {
 	u32		code_violation_err_count;
 	enum sas_linkrate	minimum_linkrate;
 	enum sas_linkrate	maximum_linkrate;
+	int enable;
 };
 
 struct hisi_sas_port {
@@ -551,6 +552,8 @@ extern int hisi_sas_slave_configure(struct scsi_device *sdev);
 extern int hisi_sas_scan_finished(struct Scsi_Host *shost, unsigned long time);
 extern void hisi_sas_scan_start(struct Scsi_Host *shost);
 extern int hisi_sas_host_reset(struct Scsi_Host *shost, int reset_type);
+extern void hisi_sas_phy_enable(struct hisi_hba *hisi_hba, int phy_no,
+				int enable);
 extern void hisi_sas_phy_down(struct hisi_hba *hisi_hba, int phy_no, int rdy);
 extern void hisi_sas_slot_task_free(struct hisi_hba *hisi_hba,
 				    struct sas_task *task,
@@ -10,7 +10,6 @@
  */
 
 #include "hisi_sas.h"
-#include "../libsas/sas_internal.h"
 #define DRV_NAME "hisi_sas"
 
 #define DEV_IS_GONE(dev) \
@@ -171,7 +170,7 @@ void hisi_sas_stop_phys(struct hisi_hba *hisi_hba)
 	int phy_no;
 
 	for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++)
-		hisi_hba->hw->phy_disable(hisi_hba, phy_no);
+		hisi_sas_phy_enable(hisi_hba, phy_no, 0);
 }
 EXPORT_SYMBOL_GPL(hisi_sas_stop_phys);
 
@@ -684,7 +683,7 @@ static void hisi_sas_bytes_dmaed(struct hisi_hba *hisi_hba, int phy_no)
 		id->initiator_bits = SAS_PROTOCOL_ALL;
 		id->target_bits = phy->identify.target_port_protocols;
 	} else if (phy->phy_type & PORT_TYPE_SATA) {
-		/*Nothing*/
+		/* Nothing */
 	}
 
 	sas_phy->frame_rcvd_size = phy->frame_rcvd_size;
@@ -755,7 +754,8 @@ static int hisi_sas_init_device(struct domain_device *device)
 		 * STP target port
 		 */
 		local_phy = sas_get_local_phy(device);
-		if (!scsi_is_sas_phy_local(local_phy)) {
+		if (!scsi_is_sas_phy_local(local_phy) &&
+		    !test_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags)) {
 			unsigned long deadline = ata_deadline(jiffies, 20000);
 			struct sata_device *sata_dev = &device->sata_dev;
 			struct ata_host *ata_host = sata_dev->ata_host;
@@ -770,8 +770,7 @@ static int hisi_sas_init_device(struct domain_device *device)
 		}
 		sas_put_local_phy(local_phy);
 		if (rc) {
-			dev_warn(dev, "SATA disk hardreset fail: 0x%x\n",
-				 rc);
+			dev_warn(dev, "SATA disk hardreset fail: %d\n", rc);
 			return rc;
 		}
 
@@ -976,6 +975,30 @@ static void hisi_sas_phy_init(struct hisi_hba *hisi_hba, int phy_no)
 	timer_setup(&phy->timer, hisi_sas_wait_phyup_timedout, 0);
 }
 
+/* Wrapper to ensure we track hisi_sas_phy.enable properly */
+void hisi_sas_phy_enable(struct hisi_hba *hisi_hba, int phy_no, int enable)
+{
+	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
+	struct asd_sas_phy *aphy = &phy->sas_phy;
+	struct sas_phy *sphy = aphy->phy;
+	unsigned long flags;
+
+	spin_lock_irqsave(&phy->lock, flags);
+
+	if (enable) {
+		/* We may have been enabled already; if so, don't touch */
+		if (!phy->enable)
+			sphy->negotiated_linkrate = SAS_LINK_RATE_UNKNOWN;
+		hisi_hba->hw->phy_start(hisi_hba, phy_no);
+	} else {
+		sphy->negotiated_linkrate = SAS_PHY_DISABLED;
+		hisi_hba->hw->phy_disable(hisi_hba, phy_no);
+	}
+	phy->enable = enable;
+	spin_unlock_irqrestore(&phy->lock, flags);
+}
+EXPORT_SYMBOL_GPL(hisi_sas_phy_enable);
+
 static void hisi_sas_port_notify_formed(struct asd_sas_phy *sas_phy)
 {
 	struct sas_ha_struct *sas_ha = sas_phy->ha;
@@ -1112,10 +1135,10 @@ static int hisi_sas_phy_set_linkrate(struct hisi_hba *hisi_hba, int phy_no,
 	sas_phy->phy->maximum_linkrate = max;
 	sas_phy->phy->minimum_linkrate = min;
 
-	hisi_hba->hw->phy_disable(hisi_hba, phy_no);
+	hisi_sas_phy_enable(hisi_hba, phy_no, 0);
 	msleep(100);
 	hisi_hba->hw->phy_set_linkrate(hisi_hba, phy_no, &_r);
-	hisi_hba->hw->phy_start(hisi_hba, phy_no);
+	hisi_sas_phy_enable(hisi_hba, phy_no, 1);
 
 	return 0;
 }
@@ -1133,13 +1156,13 @@ static int hisi_sas_control_phy(struct asd_sas_phy *sas_phy, enum phy_func func,
 		break;
 
 	case PHY_FUNC_LINK_RESET:
-		hisi_hba->hw->phy_disable(hisi_hba, phy_no);
+		hisi_sas_phy_enable(hisi_hba, phy_no, 0);
 		msleep(100);
-		hisi_hba->hw->phy_start(hisi_hba, phy_no);
+		hisi_sas_phy_enable(hisi_hba, phy_no, 1);
 		break;
 
 	case PHY_FUNC_DISABLE:
-		hisi_hba->hw->phy_disable(hisi_hba, phy_no);
+		hisi_sas_phy_enable(hisi_hba, phy_no, 0);
 		break;
 
 	case PHY_FUNC_SET_LINK_RATE:
@@ -1264,8 +1287,7 @@ static int hisi_sas_exec_internal_tmf_task(struct domain_device *device,
 				/* no error, but return the number of bytes of
 				 * underrun
 				 */
-				dev_warn(dev, "abort tmf: task to dev %016llx "
-					 "resp: 0x%x sts 0x%x underrun\n",
+				dev_warn(dev, "abort tmf: task to dev %016llx resp: 0x%x sts 0x%x underrun\n",
 					 SAS_ADDR(device->sas_addr),
 					 task->task_status.resp,
 					 task->task_status.stat);
@@ -1280,10 +1302,16 @@ static int hisi_sas_exec_internal_tmf_task(struct domain_device *device,
 				break;
 			}
 
-			dev_warn(dev, "abort tmf: task to dev "
-				 "%016llx resp: 0x%x status 0x%x\n",
-				 SAS_ADDR(device->sas_addr), task->task_status.resp,
-				 task->task_status.stat);
+			if (task->task_status.resp == SAS_TASK_COMPLETE &&
+			    task->task_status.stat == SAS_OPEN_REJECT) {
+				dev_warn(dev, "abort tmf: open reject failed\n");
+				res = -EIO;
+			} else {
+				dev_warn(dev, "abort tmf: task to dev %016llx resp: 0x%x status 0x%x\n",
+					 SAS_ADDR(device->sas_addr),
+					 task->task_status.resp,
+					 task->task_status.stat);
+			}
 			sas_free_task(task);
 			task = NULL;
 		}
@@ -1427,9 +1455,9 @@ static void hisi_sas_rescan_topology(struct hisi_hba *hisi_hba, u32 old_state,
 				sas_ha->notify_port_event(sas_phy,
 						PORTE_BROADCAST_RCVD);
 			}
-		} else if (old_state & (1 << phy_no))
-			/* PHY down but was up before */
+		} else {
 			hisi_sas_phy_down(hisi_hba, phy_no, 0);
+		}
 
 	}
 }
@@ -1711,7 +1739,7 @@ static int hisi_sas_abort_task_set(struct domain_device *device, u8 *lun)
 	struct hisi_hba *hisi_hba = dev_to_hisi_hba(device);
 	struct device *dev = hisi_hba->dev;
 	struct hisi_sas_tmf_task tmf_task;
-	int rc = TMF_RESP_FUNC_FAILED;
+	int rc;
 
 	rc = hisi_sas_internal_task_abort(hisi_hba, device,
 					  HISI_SAS_INT_ABT_DEV, 0);
@@ -1803,7 +1831,7 @@ static int hisi_sas_I_T_nexus_reset(struct domain_device *device)
 
 	if (dev_is_sata(device)) {
 		rc = hisi_sas_softreset_ata_disk(device);
-		if (rc)
+		if (rc == TMF_RESP_FUNC_FAILED)
 			return TMF_RESP_FUNC_FAILED;
 	}
 
@@ -2100,10 +2128,8 @@ _hisi_sas_internal_task_abort(struct hisi_hba *hisi_hba,
 	}
 
 exit:
-	dev_dbg(dev, "internal task abort: task to dev %016llx task=%p "
-		"resp: 0x%x sts 0x%x\n",
-		SAS_ADDR(device->sas_addr),
-		task,
+	dev_dbg(dev, "internal task abort: task to dev %016llx task=%p resp: 0x%x sts 0x%x\n",
+		SAS_ADDR(device->sas_addr), task,
 		task->task_status.resp, /* 0 is complete, -1 is undelivered */
 		task->task_status.stat);
 	sas_free_task(task);
@@ -2172,16 +2198,18 @@ static void hisi_sas_phy_disconnected(struct hisi_sas_phy *phy)
 {
 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
 	struct sas_phy *sphy = sas_phy->phy;
-	struct sas_phy_data *d = sphy->hostdata;
+	unsigned long flags;
 
 	phy->phy_attached = 0;
 	phy->phy_type = 0;
 	phy->port = NULL;
 
-	if (d->enable)
+	spin_lock_irqsave(&phy->lock, flags);
+	if (phy->enable)
 		sphy->negotiated_linkrate = SAS_LINK_RATE_UNKNOWN;
 	else
 		sphy->negotiated_linkrate = SAS_PHY_DISABLED;
+	spin_unlock_irqrestore(&phy->lock, flags);
 }
 
 void hisi_sas_phy_down(struct hisi_hba *hisi_hba, int phy_no, int rdy)
@@ -2234,6 +2262,19 @@ void hisi_sas_kill_tasklets(struct hisi_hba *hisi_hba)
 }
 EXPORT_SYMBOL_GPL(hisi_sas_kill_tasklets);
 
+int hisi_sas_host_reset(struct Scsi_Host *shost, int reset_type)
+{
+	struct hisi_hba *hisi_hba = shost_priv(shost);
+
+	if (reset_type != SCSI_ADAPTER_RESET)
+		return -EOPNOTSUPP;
+
+	queue_work(hisi_hba->wq, &hisi_hba->rst_work);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(hisi_sas_host_reset);
+
 struct scsi_transport_template *hisi_sas_stt;
 EXPORT_SYMBOL_GPL(hisi_sas_stt);
 
@@ -2491,22 +2532,19 @@ int hisi_sas_get_fw_info(struct hisi_hba *hisi_hba)
 
 		if (device_property_read_u32(dev, "ctrl-reset-reg",
 					     &hisi_hba->ctrl_reset_reg)) {
-			dev_err(dev,
-				"could not get property ctrl-reset-reg\n");
+			dev_err(dev, "could not get property ctrl-reset-reg\n");
 			return -ENOENT;
 		}
 
 		if (device_property_read_u32(dev, "ctrl-reset-sts-reg",
 					     &hisi_hba->ctrl_reset_sts_reg)) {
-			dev_err(dev,
-				"could not get property ctrl-reset-sts-reg\n");
+			dev_err(dev, "could not get property ctrl-reset-sts-reg\n");
 			return -ENOENT;
 		}
 
 		if (device_property_read_u32(dev, "ctrl-clock-ena-reg",
 					     &hisi_hba->ctrl_clock_ena_reg)) {
-			dev_err(dev,
-				"could not get property ctrl-clock-ena-reg\n");
+			dev_err(dev, "could not get property ctrl-clock-ena-reg\n");
 			return -ENOENT;
 		}
 	}
@@ -798,16 +798,11 @@ static void start_phy_v1_hw(struct hisi_hba *hisi_hba, int phy_no)
 	enable_phy_v1_hw(hisi_hba, phy_no);
 }
 
-static void stop_phy_v1_hw(struct hisi_hba *hisi_hba, int phy_no)
-{
-	disable_phy_v1_hw(hisi_hba, phy_no);
-}
-
 static void phy_hard_reset_v1_hw(struct hisi_hba *hisi_hba, int phy_no)
 {
-	stop_phy_v1_hw(hisi_hba, phy_no);
+	hisi_sas_phy_enable(hisi_hba, phy_no, 0);
 	msleep(100);
-	start_phy_v1_hw(hisi_hba, phy_no);
+	hisi_sas_phy_enable(hisi_hba, phy_no, 1);
 }
 
 static void start_phys_v1_hw(struct timer_list *t)
@ -817,7 +812,7 @@ static void start_phys_v1_hw(struct timer_list *t)
|
|||
|
||||
for (i = 0; i < hisi_hba->n_phy; i++) {
|
||||
hisi_sas_phy_write32(hisi_hba, i, CHL_INT2_MSK, 0x12a);
|
||||
start_phy_v1_hw(hisi_hba, i);
|
||||
hisi_sas_phy_enable(hisi_hba, i, 1);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -1695,8 +1690,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
|
|||
for (j = 0; j < HISI_SAS_PHY_INT_NR; j++, idx++) {
|
||||
irq = platform_get_irq(pdev, idx);
|
||||
if (!irq) {
|
||||
dev_err(dev,
|
||||
"irq init: fail map phy interrupt %d\n",
|
||||
dev_err(dev, "irq init: fail map phy interrupt %d\n",
|
||||
idx);
|
||||
return -ENOENT;
|
||||
}
|
||||
|
@ -1704,8 +1698,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
|
|||
rc = devm_request_irq(dev, irq, phy_interrupts[j], 0,
|
||||
DRV_NAME " phy", phy);
|
||||
if (rc) {
|
||||
dev_err(dev, "irq init: could not request "
|
||||
"phy interrupt %d, rc=%d\n",
|
||||
dev_err(dev, "irq init: could not request phy interrupt %d, rc=%d\n",
|
||||
irq, rc);
|
||||
return -ENOENT;
|
||||
}
|
||||
|
@ -1742,8 +1735,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
|
|||
rc = devm_request_irq(dev, irq, fatal_interrupts[i], 0,
|
||||
DRV_NAME " fatal", hisi_hba);
|
||||
if (rc) {
|
||||
dev_err(dev,
|
||||
"irq init: could not request fatal interrupt %d, rc=%d\n",
|
||||
dev_err(dev, "irq init: could not request fatal interrupt %d, rc=%d\n",
|
||||
irq, rc);
|
||||
return -ENOENT;
|
||||
}
|
||||
|
@ -1823,6 +1815,7 @@ static struct scsi_host_template sht_v1_hw = {
|
|||
.target_destroy = sas_target_destroy,
|
||||
.ioctl = sas_ioctl,
|
||||
.shost_attrs = host_attrs_v1_hw,
|
||||
.host_reset = hisi_sas_host_reset,
|
||||
};
|
||||
|
||||
static const struct hisi_sas_hw hisi_sas_v1_hw = {
|
||||
|
|
|
@@ -1546,14 +1546,14 @@ static void phy_hard_reset_v2_hw(struct hisi_hba *hisi_hba, int phy_no)
 	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
 	u32 txid_auto;
 
-	disable_phy_v2_hw(hisi_hba, phy_no);
+	hisi_sas_phy_enable(hisi_hba, phy_no, 0);
 	if (phy->identify.device_type == SAS_END_DEVICE) {
 		txid_auto = hisi_sas_phy_read32(hisi_hba, phy_no, TXID_AUTO);
 		hisi_sas_phy_write32(hisi_hba, phy_no, TXID_AUTO,
 				     txid_auto | TX_HARDRST_MSK);
 	}
 	msleep(100);
-	start_phy_v2_hw(hisi_hba, phy_no);
+	hisi_sas_phy_enable(hisi_hba, phy_no, 1);
 }
 
 static void phy_get_events_v2_hw(struct hisi_hba *hisi_hba, int phy_no)
@@ -1586,7 +1586,7 @@ static void phys_init_v2_hw(struct hisi_hba *hisi_hba)
 		if (!sas_phy->phy->enabled)
 			continue;
 
-		start_phy_v2_hw(hisi_hba, i);
+		hisi_sas_phy_enable(hisi_hba, i, 1);
 	}
 }
 
@@ -2423,14 +2423,12 @@ slot_complete_v2_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot)
 		slot_err_v2_hw(hisi_hba, task, slot, 2);
 
 		if (ts->stat != SAS_DATA_UNDERRUN)
-			dev_info(dev, "erroneous completion iptt=%d task=%p dev id=%d "
-				"CQ hdr: 0x%x 0x%x 0x%x 0x%x "
-				"Error info: 0x%x 0x%x 0x%x 0x%x\n",
-				slot->idx, task, sas_dev->device_id,
-				complete_hdr->dw0, complete_hdr->dw1,
-				complete_hdr->act, complete_hdr->dw3,
-				error_info[0], error_info[1],
-				error_info[2], error_info[3]);
+			dev_info(dev, "erroneous completion iptt=%d task=%p dev id=%d CQ hdr: 0x%x 0x%x 0x%x 0x%x Error info: 0x%x 0x%x 0x%x 0x%x\n",
+				 slot->idx, task, sas_dev->device_id,
+				 complete_hdr->dw0, complete_hdr->dw1,
+				 complete_hdr->act, complete_hdr->dw3,
+				 error_info[0], error_info[1],
+				 error_info[2], error_info[3]);
 
 		if (unlikely(slot->abort))
 			return ts->stat;
@@ -2502,7 +2500,7 @@ slot_complete_v2_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot)
 	spin_lock_irqsave(&device->done_lock, flags);
 	if (test_bit(SAS_HA_FROZEN, &ha->state)) {
 		spin_unlock_irqrestore(&device->done_lock, flags);
-		dev_info(dev, "slot complete: task(%p) ignored\n ",
+		dev_info(dev, "slot complete: task(%p) ignored\n",
 			 task);
 		return sts;
 	}
@@ -2935,7 +2933,7 @@ static irqreturn_t int_chnl_int_v2_hw(int irq_no, void *p)
 
 		if (irq_value2 & BIT(CHL_INT2_SL_IDAF_TOUT_CONF_OFF)) {
 			dev_warn(dev, "phy%d identify timeout\n",
-					phy_no);
+				 phy_no);
 			hisi_sas_notify_phy_event(phy,
 					HISI_PHYE_LINK_RESET);
 		}
@@ -3036,7 +3034,7 @@ static const struct hisi_sas_hw_error axi_error[] = {
 	{ .msk = BIT(5), .msg = "SATA_AXI_R_ERR" },
 	{ .msk = BIT(6), .msg = "DQE_AXI_R_ERR" },
 	{ .msk = BIT(7), .msg = "CQE_AXI_W_ERR" },
-	{},
+	{}
 };
 
 static const struct hisi_sas_hw_error fifo_error[] = {
@@ -3045,7 +3043,7 @@ static const struct hisi_sas_hw_error fifo_error[] = {
 	{ .msk = BIT(10), .msg = "GETDQE_FIFO" },
 	{ .msk = BIT(11), .msg = "CMDP_FIFO" },
 	{ .msk = BIT(12), .msg = "AWTCTRL_FIFO" },
-	{},
+	{}
 };
 
 static const struct hisi_sas_hw_error fatal_axi_errors[] = {
@@ -3109,12 +3107,12 @@ static irqreturn_t fatal_axi_int_v2_hw(int irq_no, void *p)
 			if (!(err_value & sub->msk))
 				continue;
 			dev_err(dev, "%s (0x%x) found!\n",
-				sub->msg, irq_value);
+				 sub->msg, irq_value);
 			queue_work(hisi_hba->wq, &hisi_hba->rst_work);
 		}
 	} else {
 		dev_err(dev, "%s (0x%x) found!\n",
-			axi_error->msg, irq_value);
+			 axi_error->msg, irq_value);
 		queue_work(hisi_hba->wq, &hisi_hba->rst_work);
 	}
 }
@@ -3258,7 +3256,7 @@ static irqreturn_t sata_int_v2_hw(int irq_no, void *p)
 	/* check ERR bit of Status Register */
 	if (fis->status & ATA_ERR) {
 		dev_warn(dev, "sata int: phy%d FIS status: 0x%x\n", phy_no,
-				fis->status);
+			 fis->status);
 		hisi_sas_notify_phy_event(phy, HISI_PHYE_LINK_RESET);
 		res = IRQ_NONE;
 		goto end;
@@ -3349,8 +3347,7 @@ static int interrupt_init_v2_hw(struct hisi_hba *hisi_hba)
 		rc = devm_request_irq(dev, irq, phy_interrupts[i], 0,
 				      DRV_NAME " phy", hisi_hba);
 		if (rc) {
-			dev_err(dev, "irq init: could not request "
-				"phy interrupt %d, rc=%d\n",
+			dev_err(dev, "irq init: could not request phy interrupt %d, rc=%d\n",
 				irq, rc);
 			rc = -ENOENT;
 			goto free_phy_int_irqs;
@@ -3364,8 +3361,7 @@ static int interrupt_init_v2_hw(struct hisi_hba *hisi_hba)
 		rc = devm_request_irq(dev, irq, sata_int_v2_hw, 0,
 				      DRV_NAME " sata", phy);
 		if (rc) {
-			dev_err(dev, "irq init: could not request "
-				"sata interrupt %d, rc=%d\n",
+			dev_err(dev, "irq init: could not request sata interrupt %d, rc=%d\n",
 				irq, rc);
 			rc = -ENOENT;
 			goto free_sata_int_irqs;
@@ -3377,8 +3373,7 @@ static int interrupt_init_v2_hw(struct hisi_hba *hisi_hba)
 		rc = devm_request_irq(dev, irq, fatal_interrupts[fatal_no], 0,
 				      DRV_NAME " fatal", hisi_hba);
 		if (rc) {
-			dev_err(dev,
-				"irq init: could not request fatal interrupt %d, rc=%d\n",
+			dev_err(dev, "irq init: could not request fatal interrupt %d, rc=%d\n",
 				irq, rc);
 			rc = -ENOENT;
 			goto free_fatal_int_irqs;
@@ -3393,8 +3388,7 @@ static int interrupt_init_v2_hw(struct hisi_hba *hisi_hba)
 		rc = devm_request_irq(dev, irq, cq_interrupt_v2_hw, 0,
 				      DRV_NAME " cq", cq);
 		if (rc) {
-			dev_err(dev,
-				"irq init: could not request cq interrupt %d, rc=%d\n",
+			dev_err(dev, "irq init: could not request cq interrupt %d, rc=%d\n",
 				irq, rc);
 			rc = -ENOENT;
 			goto free_cq_int_irqs;
@@ -3546,7 +3540,7 @@ static int write_gpio_v2_hw(struct hisi_hba *hisi_hba, u8 reg_type,
 		break;
 	default:
 		dev_err(dev, "write gpio: unsupported or bad reg type %d\n",
-				reg_type);
+			reg_type);
 		return -EINVAL;
 	}
 
@@ -3599,6 +3593,7 @@ static struct scsi_host_template sht_v2_hw = {
 	.target_destroy = sas_target_destroy,
 	.ioctl = sas_ioctl,
 	.shost_attrs = host_attrs_v2_hw,
+	.host_reset = hisi_sas_host_reset,
 };
 
 static const struct hisi_sas_hw hisi_sas_v2_hw = {
 
@@ -52,7 +52,36 @@
 #define CFG_ABT_SET_IPTT_DONE	0xd8
 #define CFG_ABT_SET_IPTT_DONE_OFF	0
+#define HGC_IOMB_PROC1_STATUS	0x104
+#define HGC_LM_DFX_STATUS2	0x128
+#define HGC_LM_DFX_STATUS2_IOSTLIST_OFF	0
+#define HGC_LM_DFX_STATUS2_IOSTLIST_MSK	(0xfff << \
+					 HGC_LM_DFX_STATUS2_IOSTLIST_OFF)
+#define HGC_LM_DFX_STATUS2_ITCTLIST_OFF	12
+#define HGC_LM_DFX_STATUS2_ITCTLIST_MSK	(0x7ff << \
+					 HGC_LM_DFX_STATUS2_ITCTLIST_OFF)
+#define HGC_CQE_ECC_ADDR	0x13c
+#define HGC_CQE_ECC_1B_ADDR_OFF	0
+#define HGC_CQE_ECC_1B_ADDR_MSK	(0x3f << HGC_CQE_ECC_1B_ADDR_OFF)
+#define HGC_CQE_ECC_MB_ADDR_OFF	8
+#define HGC_CQE_ECC_MB_ADDR_MSK	(0x3f << HGC_CQE_ECC_MB_ADDR_OFF)
+#define HGC_IOST_ECC_ADDR	0x140
+#define HGC_IOST_ECC_1B_ADDR_OFF	0
+#define HGC_IOST_ECC_1B_ADDR_MSK	(0x3ff << HGC_IOST_ECC_1B_ADDR_OFF)
+#define HGC_IOST_ECC_MB_ADDR_OFF	16
+#define HGC_IOST_ECC_MB_ADDR_MSK	(0x3ff << HGC_IOST_ECC_MB_ADDR_OFF)
+#define HGC_DQE_ECC_ADDR	0x144
+#define HGC_DQE_ECC_1B_ADDR_OFF	0
+#define HGC_DQE_ECC_1B_ADDR_MSK	(0xfff << HGC_DQE_ECC_1B_ADDR_OFF)
+#define HGC_DQE_ECC_MB_ADDR_OFF	16
+#define HGC_DQE_ECC_MB_ADDR_MSK	(0xfff << HGC_DQE_ECC_MB_ADDR_OFF)
 #define CHNL_INT_STATUS	0x148
+#define HGC_ITCT_ECC_ADDR	0x150
+#define HGC_ITCT_ECC_1B_ADDR_OFF	0
+#define HGC_ITCT_ECC_1B_ADDR_MSK	(0x3ff << \
+					 HGC_ITCT_ECC_1B_ADDR_OFF)
+#define HGC_ITCT_ECC_MB_ADDR_OFF	16
+#define HGC_ITCT_ECC_MB_ADDR_MSK	(0x3ff << \
+					 HGC_ITCT_ECC_MB_ADDR_OFF)
 #define HGC_AXI_FIFO_ERR_INFO	0x154
 #define AXI_ERR_INFO_OFF	0
 #define AXI_ERR_INFO_MSK	(0xff << AXI_ERR_INFO_OFF)
 
@@ -81,6 +110,10 @@
 #define ENT_INT_SRC3_ITC_INT_OFF	15
 #define ENT_INT_SRC3_ITC_INT_MSK	(0x1 << ENT_INT_SRC3_ITC_INT_OFF)
 #define ENT_INT_SRC3_ABT_OFF	16
+#define ENT_INT_SRC3_DQE_POISON_OFF	18
+#define ENT_INT_SRC3_IOST_POISON_OFF	19
+#define ENT_INT_SRC3_ITCT_POISON_OFF	20
+#define ENT_INT_SRC3_ITCT_NCQ_POISON_OFF	21
 #define ENT_INT_SRC_MSK1	0x1c4
 #define ENT_INT_SRC_MSK2	0x1c8
 #define ENT_INT_SRC_MSK3	0x1cc
@@ -90,6 +123,28 @@
 #define HGC_COM_INT_MSK	0x1d8
 #define ENT_INT_SRC_MSK3_ENT95_MSK_MSK	(0x1 << ENT_INT_SRC_MSK3_ENT95_MSK_OFF)
 #define SAS_ECC_INTR	0x1e8
+#define SAS_ECC_INTR_DQE_ECC_1B_OFF	0
+#define SAS_ECC_INTR_DQE_ECC_MB_OFF	1
+#define SAS_ECC_INTR_IOST_ECC_1B_OFF	2
+#define SAS_ECC_INTR_IOST_ECC_MB_OFF	3
+#define SAS_ECC_INTR_ITCT_ECC_1B_OFF	4
+#define SAS_ECC_INTR_ITCT_ECC_MB_OFF	5
+#define SAS_ECC_INTR_ITCTLIST_ECC_1B_OFF	6
+#define SAS_ECC_INTR_ITCTLIST_ECC_MB_OFF	7
+#define SAS_ECC_INTR_IOSTLIST_ECC_1B_OFF	8
+#define SAS_ECC_INTR_IOSTLIST_ECC_MB_OFF	9
+#define SAS_ECC_INTR_CQE_ECC_1B_OFF	10
+#define SAS_ECC_INTR_CQE_ECC_MB_OFF	11
+#define SAS_ECC_INTR_NCQ_MEM0_ECC_1B_OFF	12
+#define SAS_ECC_INTR_NCQ_MEM0_ECC_MB_OFF	13
+#define SAS_ECC_INTR_NCQ_MEM1_ECC_1B_OFF	14
+#define SAS_ECC_INTR_NCQ_MEM1_ECC_MB_OFF	15
+#define SAS_ECC_INTR_NCQ_MEM2_ECC_1B_OFF	16
+#define SAS_ECC_INTR_NCQ_MEM2_ECC_MB_OFF	17
+#define SAS_ECC_INTR_NCQ_MEM3_ECC_1B_OFF	18
+#define SAS_ECC_INTR_NCQ_MEM3_ECC_MB_OFF	19
+#define SAS_ECC_INTR_OOO_RAM_ECC_1B_OFF	20
+#define SAS_ECC_INTR_OOO_RAM_ECC_MB_OFF	21
 #define SAS_ECC_INTR_MSK	0x1ec
 #define HGC_ERR_STAT_EN	0x238
 #define CQE_SEND_CNT	0x248
@@ -105,6 +160,20 @@
 #define COMPL_Q_0_DEPTH	0x4e8
 #define COMPL_Q_0_WR_PTR	0x4ec
 #define COMPL_Q_0_RD_PTR	0x4f0
+#define HGC_RXM_DFX_STATUS14	0xae8
+#define HGC_RXM_DFX_STATUS14_MEM0_OFF	0
+#define HGC_RXM_DFX_STATUS14_MEM0_MSK	(0x1ff << \
+					 HGC_RXM_DFX_STATUS14_MEM0_OFF)
+#define HGC_RXM_DFX_STATUS14_MEM1_OFF	9
+#define HGC_RXM_DFX_STATUS14_MEM1_MSK	(0x1ff << \
+					 HGC_RXM_DFX_STATUS14_MEM1_OFF)
+#define HGC_RXM_DFX_STATUS14_MEM2_OFF	18
+#define HGC_RXM_DFX_STATUS14_MEM2_MSK	(0x1ff << \
+					 HGC_RXM_DFX_STATUS14_MEM2_OFF)
+#define HGC_RXM_DFX_STATUS15	0xaec
+#define HGC_RXM_DFX_STATUS15_MEM3_OFF	0
+#define HGC_RXM_DFX_STATUS15_MEM3_MSK	(0x1ff << \
+					 HGC_RXM_DFX_STATUS15_MEM3_OFF)
 #define AWQOS_AWCACHE_CFG	0xc84
 #define ARQOS_ARCACHE_CFG	0xc88
 #define HILINK_ERR_DFX	0xe04
@@ -172,14 +241,18 @@
 #define CHL_INT0_PHY_RDY_OFF	5
 #define CHL_INT0_PHY_RDY_MSK	(0x1 << CHL_INT0_PHY_RDY_OFF)
 #define CHL_INT1	(PORT_BASE + 0x1b8)
-#define CHL_INT1_DMAC_TX_ECC_ERR_OFF	15
-#define CHL_INT1_DMAC_TX_ECC_ERR_MSK	(0x1 << CHL_INT1_DMAC_TX_ECC_ERR_OFF)
-#define CHL_INT1_DMAC_RX_ECC_ERR_OFF	17
-#define CHL_INT1_DMAC_RX_ECC_ERR_MSK	(0x1 << CHL_INT1_DMAC_RX_ECC_ERR_OFF)
+#define CHL_INT1_DMAC_TX_ECC_MB_ERR_OFF	15
+#define CHL_INT1_DMAC_TX_ECC_1B_ERR_OFF	16
+#define CHL_INT1_DMAC_RX_ECC_MB_ERR_OFF	17
+#define CHL_INT1_DMAC_RX_ECC_1B_ERR_OFF	18
 #define CHL_INT1_DMAC_TX_AXI_WR_ERR_OFF	19
 #define CHL_INT1_DMAC_TX_AXI_RD_ERR_OFF	20
 #define CHL_INT1_DMAC_RX_AXI_WR_ERR_OFF	21
 #define CHL_INT1_DMAC_RX_AXI_RD_ERR_OFF	22
+#define CHL_INT1_DMAC_TX_FIFO_ERR_OFF	23
+#define CHL_INT1_DMAC_RX_FIFO_ERR_OFF	24
+#define CHL_INT1_DMAC_TX_AXI_RUSER_ERR_OFF	26
+#define CHL_INT1_DMAC_RX_AXI_RUSER_ERR_OFF	27
 #define CHL_INT2	(PORT_BASE + 0x1bc)
 #define CHL_INT2_SL_IDAF_TOUT_CONF_OFF	0
 #define CHL_INT2_RX_DISP_ERR_OFF	28
@@ -227,10 +300,8 @@
 #define AM_CFG_SINGLE_PORT_MAX_TRANS	(0x5014)
 #define AXI_CFG	(0x5100)
 #define AM_ROB_ECC_ERR_ADDR	(0x510c)
-#define AM_ROB_ECC_ONEBIT_ERR_ADDR_OFF	0
-#define AM_ROB_ECC_ONEBIT_ERR_ADDR_MSK	(0xff << AM_ROB_ECC_ONEBIT_ERR_ADDR_OFF)
-#define AM_ROB_ECC_MULBIT_ERR_ADDR_OFF	8
-#define AM_ROB_ECC_MULBIT_ERR_ADDR_MSK	(0xff << AM_ROB_ECC_MULBIT_ERR_ADDR_OFF)
+#define AM_ROB_ECC_ERR_ADDR_OFF	0
+#define AM_ROB_ECC_ERR_ADDR_MSK	0xffffffff
 
 /* RAS registers need init */
 #define RAS_BASE	(0x6000)
@@ -408,6 +479,10 @@ struct hisi_sas_err_record_v3 {
 #define BASE_VECTORS_V3_HW	16
 #define MIN_AFFINE_VECTORS_V3_HW	(BASE_VECTORS_V3_HW + 1)
 
+enum {
+	DSM_FUNC_ERR_HANDLE_MSI = 0,
+};
+
 static bool hisi_sas_intr_conv;
 MODULE_PARM_DESC(intr_conv, "interrupt converge enable (0-1)");
 
@@ -474,7 +549,6 @@ static u32 hisi_sas_phy_read32(struct hisi_hba *hisi_hba,
 
 static void init_reg_v3_hw(struct hisi_hba *hisi_hba)
 {
-	struct pci_dev *pdev = hisi_hba->pci_dev;
 	int i;
 
 	/* Global registers init */
@@ -494,14 +568,11 @@ static void init_reg_v3_hw(struct hisi_hba *hisi_hba)
 	hisi_sas_write32(hisi_hba, ENT_INT_SRC3, 0xffffffff);
 	hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK1, 0xfefefefe);
 	hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK2, 0xfefefefe);
-	if (pdev->revision >= 0x21)
-		hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK3, 0xffff7aff);
-	else
-		hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK3, 0xfffe20ff);
+	hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK3, 0xffc220ff);
 	hisi_sas_write32(hisi_hba, CHNL_PHYUPDOWN_INT_MSK, 0x0);
 	hisi_sas_write32(hisi_hba, CHNL_ENT_INT_MSK, 0x0);
 	hisi_sas_write32(hisi_hba, HGC_COM_INT_MSK, 0x0);
-	hisi_sas_write32(hisi_hba, SAS_ECC_INTR_MSK, 0x0);
+	hisi_sas_write32(hisi_hba, SAS_ECC_INTR_MSK, 0x155555);
 	hisi_sas_write32(hisi_hba, AWQOS_AWCACHE_CFG, 0xf0f0);
 	hisi_sas_write32(hisi_hba, ARQOS_ARCACHE_CFG, 0xf0f0);
 	for (i = 0; i < hisi_hba->queue_count; i++)
@@ -532,12 +603,7 @@ static void init_reg_v3_hw(struct hisi_hba *hisi_hba)
 		hisi_sas_phy_write32(hisi_hba, i, CHL_INT1, 0xffffffff);
 		hisi_sas_phy_write32(hisi_hba, i, CHL_INT2, 0xffffffff);
 		hisi_sas_phy_write32(hisi_hba, i, RXOP_CHECK_CFG_H, 0x1000);
-		if (pdev->revision >= 0x21)
-			hisi_sas_phy_write32(hisi_hba, i, CHL_INT1_MSK,
-					0xffffffff);
-		else
-			hisi_sas_phy_write32(hisi_hba, i, CHL_INT1_MSK,
-					0xff87ffff);
+		hisi_sas_phy_write32(hisi_hba, i, CHL_INT1_MSK, 0xf2057fff);
 		hisi_sas_phy_write32(hisi_hba, i, CHL_INT2_MSK, 0xffffbfe);
 		hisi_sas_phy_write32(hisi_hba, i, PHY_CTRL_RDY_MSK, 0x0);
 		hisi_sas_phy_write32(hisi_hba, i, PHYCTRL_NOT_RDY_MSK, 0x0);
@@ -804,6 +870,8 @@ static int reset_hw_v3_hw(struct hisi_hba *hisi_hba)
 static int hw_init_v3_hw(struct hisi_hba *hisi_hba)
 {
 	struct device *dev = hisi_hba->dev;
+	union acpi_object *obj;
+	guid_t guid;
 	int rc;
 
 	rc = reset_hw_v3_hw(hisi_hba);
@@ -815,6 +883,19 @@ static int hw_init_v3_hw(struct hisi_hba *hisi_hba)
 	msleep(100);
 	init_reg_v3_hw(hisi_hba);
 
+	if (guid_parse("D5918B4B-37AE-4E10-A99F-E5E8A6EF4C1F", &guid)) {
+		dev_err(dev, "Parse GUID failed\n");
+		return -EINVAL;
+	}
+
+	/* Switch over to MSI handling , from PCI AER default */
+	obj = acpi_evaluate_dsm(ACPI_HANDLE(dev), &guid, 0,
+				DSM_FUNC_ERR_HANDLE_MSI, NULL);
+	if (!obj)
+		dev_warn(dev, "Switch over to MSI handling failed\n");
+	else
+		ACPI_FREE(obj);
+
 	return 0;
 }
 
@@ -856,14 +937,14 @@ static void phy_hard_reset_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
 	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
 	u32 txid_auto;
 
-	disable_phy_v3_hw(hisi_hba, phy_no);
+	hisi_sas_phy_enable(hisi_hba, phy_no, 0);
 	if (phy->identify.device_type == SAS_END_DEVICE) {
 		txid_auto = hisi_sas_phy_read32(hisi_hba, phy_no, TXID_AUTO);
 		hisi_sas_phy_write32(hisi_hba, phy_no, TXID_AUTO,
 				     txid_auto | TX_HARDRST_MSK);
 	}
 	msleep(100);
-	start_phy_v3_hw(hisi_hba, phy_no);
+	hisi_sas_phy_enable(hisi_hba, phy_no, 1);
 }
 
 static enum sas_linkrate phy_get_max_linkrate_v3_hw(void)
@@ -882,7 +963,7 @@ static void phys_init_v3_hw(struct hisi_hba *hisi_hba)
 		if (!sas_phy->phy->enabled)
 			continue;
 
-		start_phy_v3_hw(hisi_hba, i);
+		hisi_sas_phy_enable(hisi_hba, i, 1);
 	}
 }
 
@@ -929,7 +1010,7 @@ get_free_slot_v3_hw(struct hisi_hba *hisi_hba, struct hisi_sas_dq *dq)
 			  DLVRY_Q_0_RD_PTR + (queue * 0x14));
 	if (r == (w+1) % HISI_SAS_QUEUE_SLOTS) {
 		dev_warn(dev, "full queue=%d r=%d w=%d\n",
-				queue, r, w);
+			 queue, r, w);
 		return -EAGAIN;
 	}
 
@@ -1380,6 +1461,7 @@ static irqreturn_t phy_up_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
 		struct hisi_sas_initial_fis *initial_fis;
 		struct dev_to_host_fis *fis;
 		u8 attached_sas_addr[SAS_ADDR_SIZE] = {0};
+		struct Scsi_Host *shost = hisi_hba->shost;
 
 		dev_info(dev, "phyup: phy%d link_rate=%d(sata)\n", phy_no, link_rate);
 		initial_fis = &hisi_hba->initial_fis[phy_no];
@@ -1396,6 +1478,7 @@ static irqreturn_t phy_up_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
 
 		sas_phy->oob_mode = SATA_OOB_MODE;
 		attached_sas_addr[0] = 0x50;
+		attached_sas_addr[6] = shost->host_no;
 		attached_sas_addr[7] = phy_no;
 		memcpy(sas_phy->attached_sas_addr,
 		       attached_sas_addr,
@@ -1539,6 +1622,14 @@ static irqreturn_t int_phy_up_down_bcast_v3_hw(int irq_no, void *p)
 }
 
 static const struct hisi_sas_hw_error port_axi_error[] = {
+	{
+		.irq_msk = BIT(CHL_INT1_DMAC_TX_ECC_MB_ERR_OFF),
+		.msg = "dmac_tx_ecc_bad_err",
+	},
+	{
+		.irq_msk = BIT(CHL_INT1_DMAC_RX_ECC_MB_ERR_OFF),
+		.msg = "dmac_rx_ecc_bad_err",
+	},
 	{
 		.irq_msk = BIT(CHL_INT1_DMAC_TX_AXI_WR_ERR_OFF),
 		.msg = "dma_tx_axi_wr_err",
@@ -1555,6 +1646,22 @@ static const struct hisi_sas_hw_error port_axi_error[] = {
 		.irq_msk = BIT(CHL_INT1_DMAC_RX_AXI_RD_ERR_OFF),
 		.msg = "dma_rx_axi_rd_err",
 	},
+	{
+		.irq_msk = BIT(CHL_INT1_DMAC_TX_FIFO_ERR_OFF),
+		.msg = "dma_tx_fifo_err",
+	},
+	{
+		.irq_msk = BIT(CHL_INT1_DMAC_RX_FIFO_ERR_OFF),
+		.msg = "dma_rx_fifo_err",
+	},
+	{
+		.irq_msk = BIT(CHL_INT1_DMAC_TX_AXI_RUSER_ERR_OFF),
+		.msg = "dma_tx_axi_ruser_err",
+	},
+	{
+		.irq_msk = BIT(CHL_INT1_DMAC_RX_AXI_RUSER_ERR_OFF),
+		.msg = "dma_rx_axi_ruser_err",
+	},
 };
 
 static void handle_chl_int1_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
@@ -1719,6 +1826,121 @@ static irqreturn_t int_chnl_int_v3_hw(int irq_no, void *p)
 	return IRQ_HANDLED;
 }
 
+static const struct hisi_sas_hw_error multi_bit_ecc_errors[] = {
+	{
+		.irq_msk = BIT(SAS_ECC_INTR_DQE_ECC_MB_OFF),
+		.msk = HGC_DQE_ECC_MB_ADDR_MSK,
+		.shift = HGC_DQE_ECC_MB_ADDR_OFF,
+		.msg = "hgc_dqe_eccbad_intr found: ram addr is 0x%08X\n",
+		.reg = HGC_DQE_ECC_ADDR,
+	},
+	{
+		.irq_msk = BIT(SAS_ECC_INTR_IOST_ECC_MB_OFF),
+		.msk = HGC_IOST_ECC_MB_ADDR_MSK,
+		.shift = HGC_IOST_ECC_MB_ADDR_OFF,
+		.msg = "hgc_iost_eccbad_intr found: ram addr is 0x%08X\n",
+		.reg = HGC_IOST_ECC_ADDR,
+	},
+	{
+		.irq_msk = BIT(SAS_ECC_INTR_ITCT_ECC_MB_OFF),
+		.msk = HGC_ITCT_ECC_MB_ADDR_MSK,
+		.shift = HGC_ITCT_ECC_MB_ADDR_OFF,
+		.msg = "hgc_itct_eccbad_intr found: ram addr is 0x%08X\n",
+		.reg = HGC_ITCT_ECC_ADDR,
+	},
+	{
+		.irq_msk = BIT(SAS_ECC_INTR_IOSTLIST_ECC_MB_OFF),
+		.msk = HGC_LM_DFX_STATUS2_IOSTLIST_MSK,
+		.shift = HGC_LM_DFX_STATUS2_IOSTLIST_OFF,
+		.msg = "hgc_iostl_eccbad_intr found: mem addr is 0x%08X\n",
+		.reg = HGC_LM_DFX_STATUS2,
+	},
+	{
+		.irq_msk = BIT(SAS_ECC_INTR_ITCTLIST_ECC_MB_OFF),
+		.msk = HGC_LM_DFX_STATUS2_ITCTLIST_MSK,
+		.shift = HGC_LM_DFX_STATUS2_ITCTLIST_OFF,
+		.msg = "hgc_itctl_eccbad_intr found: mem addr is 0x%08X\n",
+		.reg = HGC_LM_DFX_STATUS2,
+	},
+	{
+		.irq_msk = BIT(SAS_ECC_INTR_CQE_ECC_MB_OFF),
+		.msk = HGC_CQE_ECC_MB_ADDR_MSK,
+		.shift = HGC_CQE_ECC_MB_ADDR_OFF,
+		.msg = "hgc_cqe_eccbad_intr found: ram address is 0x%08X\n",
+		.reg = HGC_CQE_ECC_ADDR,
+	},
+	{
+		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_MB_OFF),
+		.msk = HGC_RXM_DFX_STATUS14_MEM0_MSK,
+		.shift = HGC_RXM_DFX_STATUS14_MEM0_OFF,
+		.msg = "rxm_mem0_eccbad_intr found: mem addr is 0x%08X\n",
+		.reg = HGC_RXM_DFX_STATUS14,
+	},
+	{
+		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_MB_OFF),
+		.msk = HGC_RXM_DFX_STATUS14_MEM1_MSK,
+		.shift = HGC_RXM_DFX_STATUS14_MEM1_OFF,
+		.msg = "rxm_mem1_eccbad_intr found: mem addr is 0x%08X\n",
+		.reg = HGC_RXM_DFX_STATUS14,
+	},
+	{
+		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_MB_OFF),
+		.msk = HGC_RXM_DFX_STATUS14_MEM2_MSK,
+		.shift = HGC_RXM_DFX_STATUS14_MEM2_OFF,
+		.msg = "rxm_mem2_eccbad_intr found: mem addr is 0x%08X\n",
+		.reg = HGC_RXM_DFX_STATUS14,
+	},
+	{
+		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_MB_OFF),
+		.msk = HGC_RXM_DFX_STATUS15_MEM3_MSK,
+		.shift = HGC_RXM_DFX_STATUS15_MEM3_OFF,
+		.msg = "rxm_mem3_eccbad_intr found: mem addr is 0x%08X\n",
+		.reg = HGC_RXM_DFX_STATUS15,
+	},
+	{
+		.irq_msk = BIT(SAS_ECC_INTR_OOO_RAM_ECC_MB_OFF),
+		.msk = AM_ROB_ECC_ERR_ADDR_MSK,
+		.shift = AM_ROB_ECC_ERR_ADDR_OFF,
+		.msg = "ooo_ram_eccbad_intr found: ROB_ECC_ERR_ADDR=0x%08X\n",
+		.reg = AM_ROB_ECC_ERR_ADDR,
+	},
+};
+
+static void multi_bit_ecc_error_process_v3_hw(struct hisi_hba *hisi_hba,
+					      u32 irq_value)
+{
+	struct device *dev = hisi_hba->dev;
+	const struct hisi_sas_hw_error *ecc_error;
+	u32 val;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(multi_bit_ecc_errors); i++) {
+		ecc_error = &multi_bit_ecc_errors[i];
+		if (irq_value & ecc_error->irq_msk) {
+			val = hisi_sas_read32(hisi_hba, ecc_error->reg);
+			val &= ecc_error->msk;
+			val >>= ecc_error->shift;
+			dev_err(dev, ecc_error->msg, irq_value, val);
+			queue_work(hisi_hba->wq, &hisi_hba->rst_work);
+		}
+	}
+}
+
+static void fatal_ecc_int_v3_hw(struct hisi_hba *hisi_hba)
+{
+	u32 irq_value, irq_msk;
+
+	irq_msk = hisi_sas_read32(hisi_hba, SAS_ECC_INTR_MSK);
+	hisi_sas_write32(hisi_hba, SAS_ECC_INTR_MSK, irq_msk | 0xffffffff);
+
+	irq_value = hisi_sas_read32(hisi_hba, SAS_ECC_INTR);
+	if (irq_value)
+		multi_bit_ecc_error_process_v3_hw(hisi_hba, irq_value);
+
+	hisi_sas_write32(hisi_hba, SAS_ECC_INTR, irq_value);
+	hisi_sas_write32(hisi_hba, SAS_ECC_INTR_MSK, irq_msk);
+}
+
 static const struct hisi_sas_hw_error axi_error[] = {
 	{ .msk = BIT(0), .msg = "IOST_AXI_W_ERR" },
 	{ .msk = BIT(1), .msg = "IOST_AXI_R_ERR" },
@@ -1728,7 +1950,7 @@ static const struct hisi_sas_hw_error axi_error[] = {
 	{ .msk = BIT(5), .msg = "SATA_AXI_R_ERR" },
 	{ .msk = BIT(6), .msg = "DQE_AXI_R_ERR" },
 	{ .msk = BIT(7), .msg = "CQE_AXI_W_ERR" },
-	{},
+	{}
 };
 
 static const struct hisi_sas_hw_error fifo_error[] = {
@@ -1737,7 +1959,7 @@ static const struct hisi_sas_hw_error fifo_error[] = {
 	{ .msk = BIT(10), .msg = "GETDQE_FIFO" },
 	{ .msk = BIT(11), .msg = "CMDP_FIFO" },
 	{ .msk = BIT(12), .msg = "AWTCTRL_FIFO" },
-	{},
+	{}
 };
 
 static const struct hisi_sas_hw_error fatal_axi_error[] = {
@@ -1771,6 +1993,23 @@ static const struct hisi_sas_hw_error fatal_axi_error[] = {
 		.irq_msk = BIT(ENT_INT_SRC3_ABT_OFF),
 		.msg = "SAS_HGC_ABT fetch LM list",
 	},
+	{
+		.irq_msk = BIT(ENT_INT_SRC3_DQE_POISON_OFF),
+		.msg = "read dqe poison",
+	},
+	{
+		.irq_msk = BIT(ENT_INT_SRC3_IOST_POISON_OFF),
+		.msg = "read iost poison",
+	},
+	{
+		.irq_msk = BIT(ENT_INT_SRC3_ITCT_POISON_OFF),
+		.msg = "read itct poison",
+	},
+	{
+		.irq_msk = BIT(ENT_INT_SRC3_ITCT_NCQ_POISON_OFF),
+		.msg = "read itct ncq poison",
+	},
+
 };
 
 static irqreturn_t fatal_axi_int_v3_hw(int irq_no, void *p)
@@ -1823,6 +2062,8 @@ static irqreturn_t fatal_axi_int_v3_hw(int irq_no, void *p)
 		}
 	}
 
+	fatal_ecc_int_v3_hw(hisi_hba);
+
 	if (irq_value & BIT(ENT_INT_SRC3_ITC_INT_OFF)) {
 		u32 reg_val = hisi_sas_read32(hisi_hba, ITCT_CLR);
 		u32 dev_id = reg_val & ITCT_DEV_MSK;
@@ -1966,13 +2207,11 @@ slot_complete_v3_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot)
 
 	slot_err_v3_hw(hisi_hba, task, slot);
 	if (ts->stat != SAS_DATA_UNDERRUN)
-		dev_info(dev, "erroneous completion iptt=%d task=%p dev id=%d "
-			"CQ hdr: 0x%x 0x%x 0x%x 0x%x "
-			"Error info: 0x%x 0x%x 0x%x 0x%x\n",
-			slot->idx, task, sas_dev->device_id,
-			dw0, dw1, complete_hdr->act, dw3,
-			error_info[0], error_info[1],
-			error_info[2], error_info[3]);
+		dev_info(dev, "erroneous completion iptt=%d task=%p dev id=%d CQ hdr: 0x%x 0x%x 0x%x 0x%x Error info: 0x%x 0x%x 0x%x 0x%x\n",
+			 slot->idx, task, sas_dev->device_id,
+			 dw0, dw1, complete_hdr->act, dw3,
+			 error_info[0], error_info[1],
+			 error_info[2], error_info[3]);
 	if (unlikely(slot->abort))
 		return ts->stat;
 	goto out;
@@ -2205,8 +2444,7 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
 				      cq_interrupt_v3_hw, irqflags,
 				      DRV_NAME " cq", cq);
 		if (rc) {
-			dev_err(dev,
-				"could not request cq%d interrupt, rc=%d\n",
+			dev_err(dev, "could not request cq%d interrupt, rc=%d\n",
 				i, rc);
 			rc = -ENOENT;
 			goto free_cq_irqs;
@@ -2362,7 +2600,7 @@ static int write_gpio_v3_hw(struct hisi_hba *hisi_hba, u8 reg_type,
 		break;
 	default:
 		dev_err(dev, "write gpio: unsupported or bad reg type %d\n",
-				reg_type);
+			reg_type);
 		return -EINVAL;
 	}
 
@@ -2678,6 +2916,7 @@ static struct scsi_host_template sht_v3_hw = {
 	.ioctl = sas_ioctl,
 	.shost_attrs = host_attrs_v3_hw,
 	.tag_alloc_policy = BLK_TAG_ALLOC_RR,
+	.host_reset = hisi_sas_host_reset,
 };
 
 static const struct hisi_sas_hw hisi_sas_v3_hw = {
@@ -2800,7 +3039,7 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
 	hisi_hba->regs = pcim_iomap(pdev, 5, 0);
 	if (!hisi_hba->regs) {
-		dev_err(dev, "cannot map register.\n");
+		dev_err(dev, "cannot map register\n");
 		rc = -ENOMEM;
 		goto err_out_ha;
 	}
@@ -2921,161 +3160,6 @@ static void hisi_sas_v3_remove(struct pci_dev *pdev)
 	scsi_host_put(shost);
 }
 
-static const struct hisi_sas_hw_error sas_ras_intr0_nfe[] = {
-	{ .irq_msk = BIT(19), .msg = "HILINK_INT" },
-	{ .irq_msk = BIT(20), .msg = "HILINK_PLL0_OUT_OF_LOCK" },
-	{ .irq_msk = BIT(21), .msg = "HILINK_PLL1_OUT_OF_LOCK" },
-	{ .irq_msk = BIT(22), .msg = "HILINK_LOSS_OF_REFCLK0" },
-	{ .irq_msk = BIT(23), .msg = "HILINK_LOSS_OF_REFCLK1" },
-	{ .irq_msk = BIT(24), .msg = "DMAC0_TX_POISON" },
-	{ .irq_msk = BIT(25), .msg = "DMAC1_TX_POISON" },
-	{ .irq_msk = BIT(26), .msg = "DMAC2_TX_POISON" },
-	{ .irq_msk = BIT(27), .msg = "DMAC3_TX_POISON" },
-	{ .irq_msk = BIT(28), .msg = "DMAC4_TX_POISON" },
-	{ .irq_msk = BIT(29), .msg = "DMAC5_TX_POISON" },
-	{ .irq_msk = BIT(30), .msg = "DMAC6_TX_POISON" },
-	{ .irq_msk = BIT(31), .msg = "DMAC7_TX_POISON" },
-};
-
-static const struct hisi_sas_hw_error sas_ras_intr1_nfe[] = {
-	{ .irq_msk = BIT(0), .msg = "RXM_CFG_MEM3_ECC2B_INTR" },
-	{ .irq_msk = BIT(1), .msg = "RXM_CFG_MEM2_ECC2B_INTR" },
-	{ .irq_msk = BIT(2), .msg = "RXM_CFG_MEM1_ECC2B_INTR" },
-	{ .irq_msk = BIT(3), .msg = "RXM_CFG_MEM0_ECC2B_INTR" },
-	{ .irq_msk = BIT(4), .msg = "HGC_CQE_ECC2B_INTR" },
-	{ .irq_msk = BIT(5), .msg = "LM_CFG_IOSTL_ECC2B_INTR" },
-	{ .irq_msk = BIT(6), .msg = "LM_CFG_ITCTL_ECC2B_INTR" },
-	{ .irq_msk = BIT(7), .msg = "HGC_ITCT_ECC2B_INTR" },
-	{ .irq_msk = BIT(8), .msg = "HGC_IOST_ECC2B_INTR" },
-	{ .irq_msk = BIT(9), .msg = "HGC_DQE_ECC2B_INTR" },
-	{ .irq_msk = BIT(10), .msg = "DMAC0_RAM_ECC2B_INTR" },
-	{ .irq_msk = BIT(11), .msg = "DMAC1_RAM_ECC2B_INTR" },
-	{ .irq_msk = BIT(12), .msg = "DMAC2_RAM_ECC2B_INTR" },
-	{ .irq_msk = BIT(13), .msg = "DMAC3_RAM_ECC2B_INTR" },
-	{ .irq_msk = BIT(14), .msg = "DMAC4_RAM_ECC2B_INTR" },
-	{ .irq_msk = BIT(15), .msg = "DMAC5_RAM_ECC2B_INTR" },
-	{ .irq_msk = BIT(16), .msg = "DMAC6_RAM_ECC2B_INTR" },
-	{ .irq_msk = BIT(17), .msg = "DMAC7_RAM_ECC2B_INTR" },
-	{ .irq_msk = BIT(18), .msg = "OOO_RAM_ECC2B_INTR" },
-	{ .irq_msk = BIT(20), .msg = "HGC_DQE_POISON_INTR" },
-	{ .irq_msk = BIT(21), .msg = "HGC_IOST_POISON_INTR" },
-	{ .irq_msk = BIT(22), .msg = "HGC_ITCT_POISON_INTR" },
-	{ .irq_msk = BIT(23), .msg = "HGC_ITCT_NCQ_POISON_INTR" },
-	{ .irq_msk = BIT(24), .msg = "DMAC0_RX_POISON" },
-	{ .irq_msk = BIT(25), .msg = "DMAC1_RX_POISON" },
-	{ .irq_msk = BIT(26), .msg = "DMAC2_RX_POISON" },
-	{ .irq_msk = BIT(27), .msg = "DMAC3_RX_POISON" },
-	{ .irq_msk = BIT(28), .msg = "DMAC4_RX_POISON" },
-	{ .irq_msk = BIT(29), .msg = "DMAC5_RX_POISON" },
-	{ .irq_msk = BIT(30), .msg = "DMAC6_RX_POISON" },
-	{ .irq_msk = BIT(31), .msg = "DMAC7_RX_POISON" },
-};
-
-static const struct hisi_sas_hw_error sas_ras_intr2_nfe[] = {
-	{ .irq_msk = BIT(0), .msg = "DMAC0_AXI_BUS_ERR" },
-	{ .irq_msk = BIT(1), .msg = "DMAC1_AXI_BUS_ERR" },
-	{ .irq_msk = BIT(2), .msg = "DMAC2_AXI_BUS_ERR" },
-	{ .irq_msk = BIT(3), .msg = "DMAC3_AXI_BUS_ERR" },
-	{ .irq_msk = BIT(4), .msg = "DMAC4_AXI_BUS_ERR" },
-	{ .irq_msk = BIT(5), .msg = "DMAC5_AXI_BUS_ERR" },
-	{ .irq_msk = BIT(6), .msg = "DMAC6_AXI_BUS_ERR" },
-	{ .irq_msk = BIT(7), .msg = "DMAC7_AXI_BUS_ERR" },
-	{ .irq_msk = BIT(8), .msg = "DMAC0_FIFO_OMIT_ERR" },
-	{ .irq_msk = BIT(9), .msg = "DMAC1_FIFO_OMIT_ERR" },
-	{ .irq_msk = BIT(10), .msg = "DMAC2_FIFO_OMIT_ERR" },
-	{ .irq_msk = BIT(11), .msg = "DMAC3_FIFO_OMIT_ERR" },
-	{ .irq_msk = BIT(12), .msg = "DMAC4_FIFO_OMIT_ERR" },
-	{ .irq_msk = BIT(13), .msg = "DMAC5_FIFO_OMIT_ERR" },
-	{ .irq_msk = BIT(14), .msg = "DMAC6_FIFO_OMIT_ERR" },
-	{ .irq_msk = BIT(15), .msg = "DMAC7_FIFO_OMIT_ERR" },
-	{ .irq_msk = BIT(16), .msg = "HGC_RLSE_SLOT_UNMATCH" },
-	{ .irq_msk = BIT(17), .msg = "HGC_LM_ADD_FCH_LIST_ERR" },
-	{ .irq_msk = BIT(18), .msg = "HGC_AXI_BUS_ERR" },
-	{ .irq_msk = BIT(19), .msg = "HGC_FIFO_OMIT_ERR" },
|
||||
static bool process_non_fatal_error_v3_hw(struct hisi_hba *hisi_hba)
|
||||
{
|
||||
struct device *dev = hisi_hba->dev;
|
||||
const struct hisi_sas_hw_error *ras_error;
|
||||
bool need_reset = false;
|
||||
u32 irq_value;
|
||||
int i;
|
||||
|
||||
irq_value = hisi_sas_read32(hisi_hba, SAS_RAS_INTR0);
|
||||
for (i = 0; i < ARRAY_SIZE(sas_ras_intr0_nfe); i++) {
|
||||
ras_error = &sas_ras_intr0_nfe[i];
|
||||
if (ras_error->irq_msk & irq_value) {
|
||||
dev_warn(dev, "SAS_RAS_INTR0: %s(irq_value=0x%x) found.\n",
|
||||
ras_error->msg, irq_value);
|
||||
need_reset = true;
|
||||
}
|
||||
}
|
||||
hisi_sas_write32(hisi_hba, SAS_RAS_INTR0, irq_value);
|
||||
|
||||
irq_value = hisi_sas_read32(hisi_hba, SAS_RAS_INTR1);
|
||||
for (i = 0; i < ARRAY_SIZE(sas_ras_intr1_nfe); i++) {
|
||||
ras_error = &sas_ras_intr1_nfe[i];
|
||||
if (ras_error->irq_msk & irq_value) {
|
||||
dev_warn(dev, "SAS_RAS_INTR1: %s(irq_value=0x%x) found.\n",
|
||||
ras_error->msg, irq_value);
|
||||
need_reset = true;
|
||||
}
|
||||
}
|
||||
hisi_sas_write32(hisi_hba, SAS_RAS_INTR1, irq_value);
|
||||
|
||||
irq_value = hisi_sas_read32(hisi_hba, SAS_RAS_INTR2);
|
||||
for (i = 0; i < ARRAY_SIZE(sas_ras_intr2_nfe); i++) {
|
||||
ras_error = &sas_ras_intr2_nfe[i];
|
||||
if (ras_error->irq_msk & irq_value) {
|
||||
dev_warn(dev, "SAS_RAS_INTR2: %s(irq_value=0x%x) found.\n",
|
||||
ras_error->msg, irq_value);
|
||||
need_reset = true;
|
||||
}
|
||||
}
|
||||
hisi_sas_write32(hisi_hba, SAS_RAS_INTR2, irq_value);
|
||||
|
||||
return need_reset;
|
||||
}
|
||||
|
||||
static pci_ers_result_t hisi_sas_error_detected_v3_hw(struct pci_dev *pdev,
|
||||
pci_channel_state_t state)
|
||||
{
|
||||
struct sas_ha_struct *sha = pci_get_drvdata(pdev);
|
||||
struct hisi_hba *hisi_hba = sha->lldd_ha;
|
||||
struct device *dev = hisi_hba->dev;
|
||||
|
||||
dev_info(dev, "PCI error: detected callback, state(%d)!!\n", state);
|
||||
if (state == pci_channel_io_perm_failure)
|
||||
return PCI_ERS_RESULT_DISCONNECT;
|
||||
|
||||
if (process_non_fatal_error_v3_hw(hisi_hba))
|
||||
return PCI_ERS_RESULT_NEED_RESET;
|
||||
|
||||
return PCI_ERS_RESULT_CAN_RECOVER;
|
||||
}
|
||||
|
||||
static pci_ers_result_t hisi_sas_mmio_enabled_v3_hw(struct pci_dev *pdev)
|
||||
{
|
||||
return PCI_ERS_RESULT_RECOVERED;
|
||||
}
|
||||
|
||||
static pci_ers_result_t hisi_sas_slot_reset_v3_hw(struct pci_dev *pdev)
|
||||
{
|
||||
struct sas_ha_struct *sha = pci_get_drvdata(pdev);
|
||||
struct hisi_hba *hisi_hba = sha->lldd_ha;
|
||||
struct device *dev = hisi_hba->dev;
|
||||
HISI_SAS_DECLARE_RST_WORK_ON_STACK(r);
|
||||
|
||||
dev_info(dev, "PCI error: slot reset callback!!\n");
|
||||
queue_work(hisi_hba->wq, &r.work);
|
||||
wait_for_completion(r.completion);
|
||||
if (r.done)
|
||||
return PCI_ERS_RESULT_RECOVERED;
|
||||
|
||||
return PCI_ERS_RESULT_DISCONNECT;
|
||||
}
|
||||
|
||||
static void hisi_sas_reset_prepare_v3_hw(struct pci_dev *pdev)
|
||||
{
|
||||
struct sas_ha_struct *sha = pci_get_drvdata(pdev);
|
||||
|
@@ -3171,7 +3255,7 @@ static int hisi_sas_v3_resume(struct pci_dev *pdev)
 	pci_power_t device_state = pdev->current_state;
 
 	dev_warn(dev, "resuming from operating state [D%d]\n",
-		device_state);
+		device_state);
 	pci_set_power_state(pdev, PCI_D0);
 	pci_enable_wake(pdev, PCI_D0, 0);
 	pci_restore_state(pdev);

@@ -3199,9 +3283,6 @@ static const struct pci_device_id sas_v3_pci_table[] = {
 MODULE_DEVICE_TABLE(pci, sas_v3_pci_table);
 
 static const struct pci_error_handlers hisi_sas_err_handler = {
-	.error_detected = hisi_sas_error_detected_v3_hw,
-	.mmio_enabled = hisi_sas_mmio_enabled_v3_hw,
-	.slot_reset = hisi_sas_slot_reset_v3_hw,
 	.reset_prepare = hisi_sas_reset_prepare_v3_hw,
 	.reset_done = hisi_sas_reset_done_v3_hw,
 };

@@ -60,7 +60,7 @@
  * HPSA_DRIVER_VERSION must be 3 byte values (0-255) separated by '.'
  * with an optional trailing '-' followed by a byte value (0-255).
  */
-#define HPSA_DRIVER_VERSION "3.4.20-125"
+#define HPSA_DRIVER_VERSION "3.4.20-160"
 #define DRIVER_NAME "HP HPSA Driver (v " HPSA_DRIVER_VERSION ")"
 #define HPSA "hpsa"

@@ -2647,9 +2647,20 @@ static void complete_scsi_command(struct CommandList *cp)
 		decode_sense_data(ei->SenseInfo, sense_data_size,
 				  &sense_key, &asc, &ascq);
 		if (ei->ScsiStatus == SAM_STAT_CHECK_CONDITION) {
-			if (sense_key == ABORTED_COMMAND) {
+			switch (sense_key) {
+			case ABORTED_COMMAND:
 				cmd->result |= DID_SOFT_ERROR << 16;
+				break;
+			case UNIT_ATTENTION:
+				if (asc == 0x3F && ascq == 0x0E)
+					h->drv_req_rescan = 1;
+				break;
+			case ILLEGAL_REQUEST:
+				if (asc == 0x25 && ascq == 0x00) {
+					dev->removed = 1;
+					cmd->result = DID_NO_CONNECT << 16;
+				}
+				break;
+			}
 			break;
 		}

@@ -3956,14 +3967,18 @@ static int hpsa_update_device_info(struct ctlr_info *h,
 	memset(this_device->device_id, 0,
 	       sizeof(this_device->device_id));
 	if (hpsa_get_device_id(h, scsi3addr, this_device->device_id, 8,
-			       sizeof(this_device->device_id)) < 0)
+			       sizeof(this_device->device_id)) < 0) {
 		dev_err(&h->pdev->dev,
-			"hpsa%d: %s: can't get device id for host %d:C0:T%d:L%d\t%s\t%.16s\n",
+			"hpsa%d: %s: can't get device id for [%d:%d:%d:%d]\t%s\t%.16s\n",
 			h->ctlr, __func__,
 			h->scsi_host->host_no,
-			this_device->target, this_device->lun,
+			this_device->bus, this_device->target,
+			this_device->lun,
 			scsi_device_type(this_device->devtype),
 			this_device->model);
+		rc = HPSA_LV_FAILED;
+		goto bail_out;
+	}
 
 	if ((this_device->devtype == TYPE_DISK ||
 	     this_device->devtype == TYPE_ZBC) &&

@@ -5809,7 +5824,7 @@ static int hpsa_send_test_unit_ready(struct ctlr_info *h,
 	/* Send the Test Unit Ready, fill_cmd can't fail, no mapping */
 	(void) fill_cmd(c, TEST_UNIT_READY, h,
 			NULL, 0, 0, lunaddr, TYPE_CMD);
-	rc = hpsa_scsi_do_simple_cmd(h, c, reply_queue, DEFAULT_TIMEOUT);
+	rc = hpsa_scsi_do_simple_cmd(h, c, reply_queue, NO_TIMEOUT);
 	if (rc)
 		return rc;
 	/* no unmap needed here because no data xfer. */

@@ -281,7 +281,7 @@ int sas_get_ata_info(struct domain_device *dev, struct ex_phy *phy)
 	res = sas_get_report_phy_sata(dev->parent, phy->phy_id,
 				      &dev->sata_dev.rps_resp);
 	if (res) {
-		pr_debug("report phy sata to %016llx:0x%x returned 0x%x\n",
+		pr_debug("report phy sata to %016llx:%02d returned 0x%x\n",
 			 SAS_ADDR(dev->parent->sas_addr),
 			 phy->phy_id, res);
 		return res;

@@ -826,9 +826,14 @@ static struct domain_device *sas_ex_discover_end_dev(
 #ifdef CONFIG_SCSI_SAS_ATA
 	if ((phy->attached_tproto & SAS_PROTOCOL_STP) || phy->attached_sata_dev) {
 		if (child->linkrate > parent->min_linkrate) {
+			struct sas_phy *cphy = child->phy;
+			enum sas_linkrate min_prate = cphy->minimum_linkrate,
+				parent_min_lrate = parent->min_linkrate,
+				min_linkrate = (min_prate > parent_min_lrate) ?
+					       parent_min_lrate : 0;
 			struct sas_phy_linkrates rates = {
 				.maximum_linkrate = parent->min_linkrate,
-				.minimum_linkrate = parent->min_linkrate,
+				.minimum_linkrate = min_linkrate,
 			};
 			int ret;

@@ -865,7 +870,7 @@ static struct domain_device *sas_ex_discover_end_dev(
 
 	res = sas_discover_sata(child);
 	if (res) {
-		pr_notice("sas_discover_sata() for device %16llx at %016llx:0x%x returned 0x%x\n",
+		pr_notice("sas_discover_sata() for device %16llx at %016llx:%02d returned 0x%x\n",
 			  SAS_ADDR(child->sas_addr),
 			  SAS_ADDR(parent->sas_addr), phy_id, res);
 		goto out_list_del;

@@ -890,7 +895,7 @@ static struct domain_device *sas_ex_discover_end_dev(
 
 	res = sas_discover_end_dev(child);
 	if (res) {
-		pr_notice("sas_discover_end_dev() for device %16llx at %016llx:0x%x returned 0x%x\n",
+		pr_notice("sas_discover_end_dev() for device %16llx at %016llx:%02d returned 0x%x\n",
 			  SAS_ADDR(child->sas_addr),
 			  SAS_ADDR(parent->sas_addr), phy_id, res);
 		goto out_list_del;

@@ -955,7 +960,7 @@ static struct domain_device *sas_ex_discover_expander(
 	int res;
 
 	if (phy->routing_attr == DIRECT_ROUTING) {
-		pr_warn("ex %016llx:0x%x:D <--> ex %016llx:0x%x is not allowed\n",
+		pr_warn("ex %016llx:%02d:D <--> ex %016llx:0x%x is not allowed\n",
 			SAS_ADDR(parent->sas_addr), phy_id,
 			SAS_ADDR(phy->attached_sas_addr),
 			phy->attached_phy_id);

@@ -1065,7 +1070,7 @@ static int sas_ex_discover_dev(struct domain_device *dev, int phy_id)
 	    ex_phy->attached_dev_type != SAS_FANOUT_EXPANDER_DEVICE &&
 	    ex_phy->attached_dev_type != SAS_EDGE_EXPANDER_DEVICE &&
 	    ex_phy->attached_dev_type != SAS_SATA_PENDING) {
-		pr_warn("unknown device type(0x%x) attached to ex %016llx phy 0x%x\n",
+		pr_warn("unknown device type(0x%x) attached to ex %016llx phy%02d\n",
 			ex_phy->attached_dev_type,
 			SAS_ADDR(dev->sas_addr),
 			phy_id);

@@ -1081,7 +1086,7 @@ static int sas_ex_discover_dev(struct domain_device *dev, int phy_id)
 	}
 
 	if (sas_ex_join_wide_port(dev, phy_id)) {
-		pr_debug("Attaching ex phy%d to wide port %016llx\n",
+		pr_debug("Attaching ex phy%02d to wide port %016llx\n",
 			 phy_id, SAS_ADDR(ex_phy->attached_sas_addr));
 		return res;
 	}

@@ -1093,7 +1098,7 @@ static int sas_ex_discover_dev(struct domain_device *dev, int phy_id)
 		break;
 	case SAS_FANOUT_EXPANDER_DEVICE:
 		if (SAS_ADDR(dev->port->disc.fanout_sas_addr)) {
-			pr_debug("second fanout expander %016llx phy 0x%x attached to ex %016llx phy 0x%x\n",
+			pr_debug("second fanout expander %016llx phy%02d attached to ex %016llx phy%02d\n",
 				 SAS_ADDR(ex_phy->attached_sas_addr),
 				 ex_phy->attached_phy_id,
 				 SAS_ADDR(dev->sas_addr),

@@ -1126,7 +1131,7 @@ static int sas_ex_discover_dev(struct domain_device *dev, int phy_id)
 		    SAS_ADDR(child->sas_addr)) {
 			ex->ex_phy[i].phy_state= PHY_DEVICE_DISCOVERED;
 			if (sas_ex_join_wide_port(dev, i))
-				pr_debug("Attaching ex phy%d to wide port %016llx\n",
+				pr_debug("Attaching ex phy%02d to wide port %016llx\n",
 					 i, SAS_ADDR(ex->ex_phy[i].attached_sas_addr));
 		}
 	}

@@ -1151,7 +1156,7 @@ static int sas_find_sub_addr(struct domain_device *dev, u8 *sub_addr)
 	     phy->attached_dev_type == SAS_FANOUT_EXPANDER_DEVICE) &&
 	    phy->routing_attr == SUBTRACTIVE_ROUTING) {
 
-			memcpy(sub_addr, phy->attached_sas_addr,SAS_ADDR_SIZE);
+			memcpy(sub_addr, phy->attached_sas_addr, SAS_ADDR_SIZE);
 
 			return 1;
 		}

@@ -1163,7 +1168,7 @@ static int sas_check_level_subtractive_boundary(struct domain_device *dev)
 {
 	struct expander_device *ex = &dev->ex_dev;
 	struct domain_device *child;
-	u8 sub_addr[8] = {0, };
+	u8 sub_addr[SAS_ADDR_SIZE] = {0, };
 
 	list_for_each_entry(child, &ex->children, siblings) {
 		if (child->dev_type != SAS_EDGE_EXPANDER_DEVICE &&

@@ -1173,7 +1178,7 @@ static int sas_check_level_subtractive_boundary(struct domain_device *dev)
 			sas_find_sub_addr(child, sub_addr);
 			continue;
 		} else {
-			u8 s2[8];
+			u8 s2[SAS_ADDR_SIZE];
 
 			if (sas_find_sub_addr(child, s2) &&
 			    (SAS_ADDR(sub_addr) != SAS_ADDR(s2))) {

@@ -1261,7 +1266,7 @@ static int sas_check_ex_subtractive_boundary(struct domain_device *dev)
 		else if (SAS_ADDR(sub_sas_addr) !=
 			 SAS_ADDR(phy->attached_sas_addr)) {
 
-			pr_notice("ex %016llx phy 0x%x diverges(%016llx) on subtractive boundary(%016llx). Disabled\n",
+			pr_notice("ex %016llx phy%02d diverges(%016llx) on subtractive boundary(%016llx). Disabled\n",
 				  SAS_ADDR(dev->sas_addr), i,
 				  SAS_ADDR(phy->attached_sas_addr),
 				  SAS_ADDR(sub_sas_addr));

@@ -1282,7 +1287,7 @@ static void sas_print_parent_topology_bug(struct domain_device *child,
 	};
 	struct domain_device *parent = child->parent;
 
-	pr_notice("%s ex %016llx phy 0x%x <--> %s ex %016llx phy 0x%x has %c:%c routing link!\n",
+	pr_notice("%s ex %016llx phy%02d <--> %s ex %016llx phy%02d has %c:%c routing link!\n",
 		  ex_type[parent->dev_type],
 		  SAS_ADDR(parent->sas_addr),
 		  parent_phy->phy_id,

@@ -1304,7 +1309,7 @@ static int sas_check_eeds(struct domain_device *child,
 
 	if (SAS_ADDR(parent->port->disc.fanout_sas_addr) != 0) {
 		res = -ENODEV;
-		pr_warn("edge ex %016llx phy S:0x%x <--> edge ex %016llx phy S:0x%x, while there is a fanout ex %016llx\n",
+		pr_warn("edge ex %016llx phy S:%02d <--> edge ex %016llx phy S:%02d, while there is a fanout ex %016llx\n",
 			SAS_ADDR(parent->sas_addr),
 			parent_phy->phy_id,
 			SAS_ADDR(child->sas_addr),

@@ -1327,7 +1332,7 @@ static int sas_check_eeds(struct domain_device *child,
 		;
 	else {
 		res = -ENODEV;
-		pr_warn("edge ex %016llx phy 0x%x <--> edge ex %016llx phy 0x%x link forms a third EEDS!\n",
+		pr_warn("edge ex %016llx phy%02d <--> edge ex %016llx phy%02d link forms a third EEDS!\n",
 			SAS_ADDR(parent->sas_addr),
 			parent_phy->phy_id,
 			SAS_ADDR(child->sas_addr),

@@ -1445,11 +1450,11 @@ static int sas_configure_present(struct domain_device *dev, int phy_id,
 		goto out;
 	res = rri_resp[2];
 	if (res == SMP_RESP_NO_INDEX) {
-		pr_warn("overflow of indexes: dev %016llx phy 0x%x index 0x%x\n",
+		pr_warn("overflow of indexes: dev %016llx phy%02d index 0x%x\n",
 			SAS_ADDR(dev->sas_addr), phy_id, i);
 		goto out;
 	} else if (res != SMP_RESP_FUNC_ACC) {
-		pr_notice("%s: dev %016llx phy 0x%x index 0x%x result 0x%x\n",
+		pr_notice("%s: dev %016llx phy%02d index 0x%x result 0x%x\n",
 			  __func__, SAS_ADDR(dev->sas_addr), phy_id,
 			  i, res);
 		goto out;

@@ -1515,7 +1520,7 @@ static int sas_configure_set(struct domain_device *dev, int phy_id,
 		goto out;
 	res = cri_resp[2];
 	if (res == SMP_RESP_NO_INDEX) {
-		pr_warn("overflow of indexes: dev %016llx phy 0x%x index 0x%x\n",
+		pr_warn("overflow of indexes: dev %016llx phy%02d index 0x%x\n",
 			SAS_ADDR(dev->sas_addr), phy_id, index);
 	}
 out:

@@ -1760,10 +1765,11 @@ static int sas_get_phy_attached_dev(struct domain_device *dev, int phy_id,
 
 	res = sas_get_phy_discover(dev, phy_id, disc_resp);
 	if (res == 0) {
-		memcpy(sas_addr, disc_resp->disc.attached_sas_addr, 8);
+		memcpy(sas_addr, disc_resp->disc.attached_sas_addr,
+		       SAS_ADDR_SIZE);
 		*type = to_dev_type(dr);
 		if (*type == 0)
-			memset(sas_addr, 0, 8);
+			memset(sas_addr, 0, SAS_ADDR_SIZE);
 	}
 	kfree(disc_resp);
 	return res;

@@ -1870,10 +1876,12 @@ static int sas_find_bcast_dev(struct domain_device *dev,
 		if (phy_id != -1) {
 			*src_dev = dev;
 			ex->ex_change_count = ex_change_count;
-			pr_info("Expander phy change count has changed\n");
+			pr_info("ex %016llx phy%02d change count has changed\n",
+				SAS_ADDR(dev->sas_addr), phy_id);
 			return res;
 		} else
-			pr_info("Expander phys DID NOT change\n");
+			pr_info("ex %016llx phys DID NOT change\n",
+				SAS_ADDR(dev->sas_addr));
 	}
 	list_for_each_entry(ch, &ex->children, siblings) {
 		if (ch->dev_type == SAS_EDGE_EXPANDER_DEVICE || ch->dev_type == SAS_FANOUT_EXPANDER_DEVICE) {

@@ -1983,7 +1991,7 @@ static int sas_discover_new(struct domain_device *dev, int phy_id)
 	struct domain_device *child;
 	int res;
 
-	pr_debug("ex %016llx phy%d new device attached\n",
+	pr_debug("ex %016llx phy%02d new device attached\n",
 		 SAS_ADDR(dev->sas_addr), phy_id);
 	res = sas_ex_phy_discover(dev, phy_id);
 	if (res)

@@ -2022,15 +2030,23 @@ static bool dev_type_flutter(enum sas_device_type new, enum sas_device_type old)
 	return false;
 }
 
-static int sas_rediscover_dev(struct domain_device *dev, int phy_id, bool last)
+static int sas_rediscover_dev(struct domain_device *dev, int phy_id,
+			      bool last, int sibling)
 {
 	struct expander_device *ex = &dev->ex_dev;
 	struct ex_phy *phy = &ex->ex_phy[phy_id];
 	enum sas_device_type type = SAS_PHY_UNUSED;
-	u8 sas_addr[8];
+	u8 sas_addr[SAS_ADDR_SIZE];
+	char msg[80] = "";
 	int res;
 
-	memset(sas_addr, 0, 8);
+	if (!last)
+		sprintf(msg, ", part of a wide port with phy%02d", sibling);
+
+	pr_debug("ex %016llx rediscovering phy%02d%s\n",
+		 SAS_ADDR(dev->sas_addr), phy_id, msg);
+
+	memset(sas_addr, 0, SAS_ADDR_SIZE);
 	res = sas_get_phy_attached_dev(dev, phy_id, sas_addr, &type);
 	switch (res) {
 	case SMP_RESP_NO_PHY:

@@ -2052,6 +2068,11 @@ static int sas_rediscover_dev(struct domain_device *dev, int phy_id, bool last)
 	if ((SAS_ADDR(sas_addr) == 0) || (res == -ECOMM)) {
 		phy->phy_state = PHY_EMPTY;
 		sas_unregister_devs_sas_addr(dev, phy_id, last);
+		/*
+		 * Even though the PHY is empty, for convenience we discover
+		 * the PHY to update the PHY info, like negotiated linkrate.
+		 */
+		sas_ex_phy_discover(dev, phy_id);
 		return res;
 	} else if (SAS_ADDR(sas_addr) == SAS_ADDR(phy->attached_sas_addr) &&
 		   dev_type_flutter(type, phy->attached_dev_type)) {

@@ -2062,13 +2083,13 @@ static int sas_rediscover_dev(struct domain_device *dev, int phy_id, bool last)
 
 		if (ata_dev && phy->attached_dev_type == SAS_SATA_PENDING)
 			action = ", needs recovery";
-		pr_debug("ex %016llx phy 0x%x broadcast flutter%s\n",
+		pr_debug("ex %016llx phy%02d broadcast flutter%s\n",
 			 SAS_ADDR(dev->sas_addr), phy_id, action);
 		return res;
 	}
 
 	/* we always have to delete the old device when we went here */
-	pr_info("ex %016llx phy 0x%x replace %016llx\n",
+	pr_info("ex %016llx phy%02d replace %016llx\n",
 		SAS_ADDR(dev->sas_addr), phy_id,
 		SAS_ADDR(phy->attached_sas_addr));
 	sas_unregister_devs_sas_addr(dev, phy_id, last);

@@ -2098,7 +2119,7 @@ static int sas_rediscover(struct domain_device *dev, const int phy_id)
 	int i;
 	bool last = true;	/* is this the last phy of the port */
 
-	pr_debug("ex %016llx phy%d originated BROADCAST(CHANGE)\n",
+	pr_debug("ex %016llx phy%02d originated BROADCAST(CHANGE)\n",
 		 SAS_ADDR(dev->sas_addr), phy_id);
 
 	if (SAS_ADDR(changed_phy->attached_sas_addr) != 0) {

@@ -2109,13 +2130,11 @@ static int sas_rediscover(struct domain_device *dev, const int phy_id)
 				continue;
 			if (SAS_ADDR(phy->attached_sas_addr) ==
 			    SAS_ADDR(changed_phy->attached_sas_addr)) {
-				pr_debug("phy%d part of wide port with phy%d\n",
-					 phy_id, i);
 				last = false;
 				break;
 			}
 		}
-		res = sas_rediscover_dev(dev, phy_id, last);
+		res = sas_rediscover_dev(dev, phy_id, last, i);
 	} else
 		res = sas_discover_new(dev, phy_id);
 	return res;

@@ -87,25 +87,27 @@ EXPORT_SYMBOL_GPL(sas_free_task);
 /*------------ SAS addr hash -----------*/
 void sas_hash_addr(u8 *hashed, const u8 *sas_addr)
 {
-	const u32 poly = 0x00DB2777;
-	u32 r = 0;
-	int i;
-
-	for (i = 0; i < 8; i++) {
-		int b;
-		for (b = 7; b >= 0; b--) {
-			r <<= 1;
-			if ((1 << b) & sas_addr[i]) {
-				if (!(r & 0x01000000))
-					r ^= poly;
-			} else if (r & 0x01000000)
-				r ^= poly;
-		}
-	}
-
-	hashed[0] = (r >> 16) & 0xFF;
-	hashed[1] = (r >> 8) & 0xFF ;
-	hashed[2] = r & 0xFF;
+	const u32 poly = 0x00DB2777;
+	u32 r = 0;
+	int i;
+
+	for (i = 0; i < SAS_ADDR_SIZE; i++) {
+		int b;
+
+		for (b = (SAS_ADDR_SIZE - 1); b >= 0; b--) {
+			r <<= 1;
+			if ((1 << b) & sas_addr[i]) {
+				if (!(r & 0x01000000))
+					r ^= poly;
+			} else if (r & 0x01000000) {
+				r ^= poly;
+			}
+		}
+	}
+
+	hashed[0] = (r >> 16) & 0xFF;
+	hashed[1] = (r >> 8) & 0xFF;
+	hashed[2] = r & 0xFF;
 }
 
 int sas_register_ha(struct sas_ha_struct *sas_ha)

@@ -623,7 +625,7 @@ struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy)
 	if (atomic_read(&phy->event_nr) > phy->ha->event_thres) {
 		if (i->dft->lldd_control_phy) {
 			if (cmpxchg(&phy->in_shutdown, 0, 1) == 0) {
-				pr_notice("The phy%02d bursting events, shut it down.\n",
+				pr_notice("The phy%d bursting events, shut it down.\n",
 					  phy->id);
 				sas_notify_phy_event(phy, PHYE_SHUTDOWN);
 			}

@@ -122,11 +122,10 @@ static void sas_phye_shutdown(struct work_struct *work)
 		phy->enabled = 0;
 		ret = i->dft->lldd_control_phy(phy, PHY_FUNC_DISABLE, NULL);
 		if (ret)
-			pr_notice("lldd disable phy%02d returned %d\n",
-				  phy->id, ret);
+			pr_notice("lldd disable phy%d returned %d\n", phy->id,
+				  ret);
 	} else
-		pr_notice("phy%02d is not enabled, cannot shutdown\n",
-			  phy->id);
+		pr_notice("phy%d is not enabled, cannot shutdown\n", phy->id);
 }
 
 /* ---------- Phy class registration ---------- */

@@ -95,6 +95,7 @@ static void sas_form_port(struct asd_sas_phy *phy)
 	int i;
 	struct sas_ha_struct *sas_ha = phy->ha;
 	struct asd_sas_port *port = phy->port;
+	struct domain_device *port_dev;
 	struct sas_internal *si =
 		to_sas_internal(sas_ha->core.shost->transportt);
 	unsigned long flags;

@@ -153,8 +154,9 @@ static void sas_form_port(struct asd_sas_phy *phy)
 	}
 
 	/* add the phy to the port */
+	port_dev = port->port_dev;
 	list_add_tail(&phy->port_phy_el, &port->phy_list);
-	sas_phy_set_target(phy, port->port_dev);
+	sas_phy_set_target(phy, port_dev);
 	phy->port = port;
 	port->num_phys++;
 	port->phy_mask |= (1U << phy->id);

@@ -184,14 +186,21 @@ static void sas_form_port(struct asd_sas_phy *phy)
 			port->phy_mask,
 			SAS_ADDR(port->attached_sas_addr));
 
-	if (port->port_dev)
-		port->port_dev->pathways = port->num_phys;
+	if (port_dev)
+		port_dev->pathways = port->num_phys;
 
 	/* Tell the LLDD about this port formation. */
 	if (si->dft->lldd_port_formed)
 		si->dft->lldd_port_formed(phy);
 
 	sas_discover_event(phy->port, DISCE_DISCOVER_DOMAIN);
+	/* Only insert a revalidate event after initial discovery */
+	if (port_dev && sas_dev_type_is_expander(port_dev->dev_type)) {
+		struct expander_device *ex_dev = &port_dev->ex_dev;
+
+		ex_dev->ex_change_count = -1;
+		sas_discover_event(port, DISCE_REVALIDATE_DOMAIN);
+	}
+	flush_workqueue(sas_ha->disco_q);
 }

@@ -254,6 +263,15 @@ void sas_deform_port(struct asd_sas_phy *phy, int gone)
 	spin_unlock(&port->phy_list_lock);
 	spin_unlock_irqrestore(&sas_ha->phy_port_lock, flags);
 
+	/* Only insert revalidate event if the port still has members */
+	if (port->port && dev && sas_dev_type_is_expander(dev->dev_type)) {
+		struct expander_device *ex_dev = &dev->ex_dev;
+
+		ex_dev->ex_change_count = -1;
+		sas_discover_event(port, DISCE_REVALIDATE_DOMAIN);
+	}
+	flush_workqueue(sas_ha->disco_q);
+
 	return;
 }

@@ -942,6 +942,7 @@ struct lpfc_hba {
 	int brd_no;			/* FC board number */
 	char SerialNumber[32];		/* adapter Serial Number */
 	char OptionROMVersion[32];	/* adapter BIOS / Fcode version */
+	char BIOSVersion[16];		/* Boot BIOS version */
 	char ModelDesc[256];		/* Model Description */
 	char ModelName[80];		/* Model Name */
 	char ProgramType[256];		/* Program Type */

|
@ -71,6 +71,23 @@
|
|||
#define LPFC_REG_WRITE_KEY_SIZE 4
|
||||
#define LPFC_REG_WRITE_KEY "EMLX"
|
||||
|
||||
const char *const trunk_errmsg[] = { /* map errcode */
|
||||
"", /* There is no such error code at index 0*/
|
||||
"link negotiated speed does not match existing"
|
||||
" trunk - link was \"low\" speed",
|
||||
"link negotiated speed does not match"
|
||||
" existing trunk - link was \"middle\" speed",
|
||||
"link negotiated speed does not match existing"
|
||||
" trunk - link was \"high\" speed",
|
||||
"Attached to non-trunking port - F_Port",
|
||||
"Attached to non-trunking port - N_Port",
|
||||
"FLOGI response timeout",
|
||||
"non-FLOGI frame received",
|
||||
"Invalid FLOGI response",
|
||||
"Trunking initialization protocol",
|
||||
"Trunk peer device mismatch",
|
||||
};
|
||||
|
||||
/**
|
||||
* lpfc_jedec_to_ascii - Hex to ascii convertor according to JEDEC rules
|
||||
* @incr: integer to convert.
|
||||
|
@ -114,7 +131,7 @@ static ssize_t
|
|||
lpfc_drvr_version_show(struct device *dev, struct device_attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
return snprintf(buf, PAGE_SIZE, LPFC_MODULE_DESC "\n");
|
||||
return scnprintf(buf, PAGE_SIZE, LPFC_MODULE_DESC "\n");
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -134,9 +151,9 @@ lpfc_enable_fip_show(struct device *dev, struct device_attribute *attr,
|
|||
struct lpfc_hba *phba = vport->phba;
|
||||
|
||||
if (phba->hba_flag & HBA_FIP_SUPPORT)
|
||||
return snprintf(buf, PAGE_SIZE, "1\n");
|
||||
return scnprintf(buf, PAGE_SIZE, "1\n");
|
||||
else
|
||||
return snprintf(buf, PAGE_SIZE, "0\n");
|
||||
return scnprintf(buf, PAGE_SIZE, "0\n");
|
||||
}
|
||||
|
||||
static ssize_t
|
||||
|
@@ -564,14 +581,15 @@ lpfc_bg_info_show(struct device *dev, struct device_attribute *attr,
 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
 	struct lpfc_hba *phba = vport->phba;
 
-	if (phba->cfg_enable_bg)
+	if (phba->cfg_enable_bg) {
 		if (phba->sli3_options & LPFC_SLI3_BG_ENABLED)
-			return snprintf(buf, PAGE_SIZE, "BlockGuard Enabled\n");
+			return scnprintf(buf, PAGE_SIZE,
+					"BlockGuard Enabled\n");
 		else
-			return snprintf(buf, PAGE_SIZE,
+			return scnprintf(buf, PAGE_SIZE,
 					"BlockGuard Not Supported\n");
-	else
-		return snprintf(buf, PAGE_SIZE,
+	} else
+		return scnprintf(buf, PAGE_SIZE,
 				"BlockGuard Disabled\n");
 }

@@ -583,7 +601,7 @@ lpfc_bg_guard_err_show(struct device *dev, struct device_attribute *attr,
 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
 	struct lpfc_hba *phba = vport->phba;
 
-	return snprintf(buf, PAGE_SIZE, "%llu\n",
+	return scnprintf(buf, PAGE_SIZE, "%llu\n",
 			(unsigned long long)phba->bg_guard_err_cnt);
 }

@@ -595,7 +613,7 @@ lpfc_bg_apptag_err_show(struct device *dev, struct device_attribute *attr,
 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
 	struct lpfc_hba *phba = vport->phba;
 
-	return snprintf(buf, PAGE_SIZE, "%llu\n",
+	return scnprintf(buf, PAGE_SIZE, "%llu\n",
 			(unsigned long long)phba->bg_apptag_err_cnt);
 }

@@ -607,7 +625,7 @@ lpfc_bg_reftag_err_show(struct device *dev, struct device_attribute *attr,
 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
 	struct lpfc_hba *phba = vport->phba;
 
-	return snprintf(buf, PAGE_SIZE, "%llu\n",
+	return scnprintf(buf, PAGE_SIZE, "%llu\n",
 			(unsigned long long)phba->bg_reftag_err_cnt);
 }

@@ -625,7 +643,7 @@ lpfc_info_show(struct device *dev, struct device_attribute *attr,
 {
 	struct Scsi_Host *host = class_to_shost(dev);
 
-	return snprintf(buf, PAGE_SIZE, "%s\n",lpfc_info(host));
+	return scnprintf(buf, PAGE_SIZE, "%s\n", lpfc_info(host));
 }
 
 /**

|
@ -644,7 +662,7 @@ lpfc_serialnum_show(struct device *dev, struct device_attribute *attr,
|
|||
struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
|
||||
struct lpfc_hba *phba = vport->phba;
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%s\n",phba->SerialNumber);
|
||||
return scnprintf(buf, PAGE_SIZE, "%s\n", phba->SerialNumber);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -666,7 +684,7 @@ lpfc_temp_sensor_show(struct device *dev, struct device_attribute *attr,
|
|||
struct Scsi_Host *shost = class_to_shost(dev);
|
||||
struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
|
||||
struct lpfc_hba *phba = vport->phba;
|
||||
return snprintf(buf, PAGE_SIZE, "%d\n",phba->temp_sensor_support);
|
||||
return scnprintf(buf, PAGE_SIZE, "%d\n", phba->temp_sensor_support);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -685,7 +703,7 @@ lpfc_modeldesc_show(struct device *dev, struct device_attribute *attr,
|
|||
struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
|
||||
struct lpfc_hba *phba = vport->phba;
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%s\n",phba->ModelDesc);
|
||||
return scnprintf(buf, PAGE_SIZE, "%s\n", phba->ModelDesc);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -704,7 +722,7 @@ lpfc_modelname_show(struct device *dev, struct device_attribute *attr,
|
|||
struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
|
||||
struct lpfc_hba *phba = vport->phba;
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%s\n",phba->ModelName);
|
||||
return scnprintf(buf, PAGE_SIZE, "%s\n", phba->ModelName);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -723,7 +741,7 @@ lpfc_programtype_show(struct device *dev, struct device_attribute *attr,
|
|||
struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
|
||||
struct lpfc_hba *phba = vport->phba;
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%s\n",phba->ProgramType);
|
||||
return scnprintf(buf, PAGE_SIZE, "%s\n", phba->ProgramType);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -741,7 +759,7 @@ lpfc_mlomgmt_show(struct device *dev, struct device_attribute *attr, char *buf)
|
|||
struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
|
||||
struct lpfc_hba *phba = vport->phba;
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%d\n",
|
||||
return scnprintf(buf, PAGE_SIZE, "%d\n",
|
||||
(phba->sli.sli_flag & LPFC_MENLO_MAINT));
|
||||
}
|
||||
|
||||
|
@ -761,7 +779,7 @@ lpfc_vportnum_show(struct device *dev, struct device_attribute *attr,
|
|||
struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
|
||||
struct lpfc_hba *phba = vport->phba;
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%s\n",phba->Port);
|
||||
return scnprintf(buf, PAGE_SIZE, "%s\n", phba->Port);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -789,10 +807,10 @@ lpfc_fwrev_show(struct device *dev, struct device_attribute *attr,
|
|||
sli_family = phba->sli4_hba.pc_sli4_params.sli_family;
|
||||
|
||||
if (phba->sli_rev < LPFC_SLI_REV4)
|
||||
len = snprintf(buf, PAGE_SIZE, "%s, sli-%d\n",
|
||||
len = scnprintf(buf, PAGE_SIZE, "%s, sli-%d\n",
|
||||
fwrev, phba->sli_rev);
|
||||
else
|
||||
len = snprintf(buf, PAGE_SIZE, "%s, sli-%d:%d:%x\n",
|
||||
len = scnprintf(buf, PAGE_SIZE, "%s, sli-%d:%d:%x\n",
|
||||
fwrev, phba->sli_rev, if_type, sli_family);
|
||||
|
||||
return len;
|
||||
|
@ -816,7 +834,7 @@ lpfc_hdw_show(struct device *dev, struct device_attribute *attr, char *buf)
|
|||
lpfc_vpd_t *vp = &phba->vpd;
|
||||
|
||||
lpfc_jedec_to_ascii(vp->rev.biuRev, hdw);
|
||||
return snprintf(buf, PAGE_SIZE, "%s\n", hdw);
|
||||
return scnprintf(buf, PAGE_SIZE, "%s\n", hdw);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -837,10 +855,11 @@ lpfc_option_rom_version_show(struct device *dev, struct device_attribute *attr,
|
|||
char fwrev[FW_REV_STR_SIZE];
|
||||
|
||||
if (phba->sli_rev < LPFC_SLI_REV4)
|
||||
return snprintf(buf, PAGE_SIZE, "%s\n", phba->OptionROMVersion);
|
||||
return scnprintf(buf, PAGE_SIZE, "%s\n",
|
||||
phba->OptionROMVersion);
|
||||
|
||||
lpfc_decode_firmware_rev(phba, fwrev, 1);
|
||||
return snprintf(buf, PAGE_SIZE, "%s\n", fwrev);
|
||||
return scnprintf(buf, PAGE_SIZE, "%s\n", fwrev);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -871,20 +890,20 @@ lpfc_link_state_show(struct device *dev, struct device_attribute *attr,
|
|||
case LPFC_LINK_DOWN:
|
||||
case LPFC_HBA_ERROR:
|
||||
if (phba->hba_flag & LINK_DISABLED)
|
||||
len += snprintf(buf + len, PAGE_SIZE-len,
|
||||
len += scnprintf(buf + len, PAGE_SIZE-len,
|
||||
"Link Down - User disabled\n");
|
||||
else
|
||||
len += snprintf(buf + len, PAGE_SIZE-len,
|
||||
len += scnprintf(buf + len, PAGE_SIZE-len,
|
||||
"Link Down\n");
|
||||
break;
|
||||
case LPFC_LINK_UP:
|
||||
case LPFC_CLEAR_LA:
|
||||
case LPFC_HBA_READY:
|
||||
len += snprintf(buf + len, PAGE_SIZE-len, "Link Up - ");
|
||||
len += scnprintf(buf + len, PAGE_SIZE-len, "Link Up - ");
|
||||
|
||||
switch (vport->port_state) {
|
||||
case LPFC_LOCAL_CFG_LINK:
|
||||
len += snprintf(buf + len, PAGE_SIZE-len,
|
||||
len += scnprintf(buf + len, PAGE_SIZE-len,
|
||||
"Configuring Link\n");
|
||||
break;
|
||||
case LPFC_FDISC:
|
||||
|
@ -894,38 +913,40 @@ lpfc_link_state_show(struct device *dev, struct device_attribute *attr,
|
|||
case LPFC_NS_QRY:
|
||||
case LPFC_BUILD_DISC_LIST:
|
||||
case LPFC_DISC_AUTH:
|
||||
len += snprintf(buf + len, PAGE_SIZE - len,
|
||||
len += scnprintf(buf + len, PAGE_SIZE - len,
|
||||
"Discovery\n");
|
||||
break;
|
||||
case LPFC_VPORT_READY:
|
||||
len += snprintf(buf + len, PAGE_SIZE - len, "Ready\n");
|
||||
len += scnprintf(buf + len, PAGE_SIZE - len,
|
||||
"Ready\n");
|
||||
break;
|
||||
|
||||
case LPFC_VPORT_FAILED:
|
||||
len += snprintf(buf + len, PAGE_SIZE - len, "Failed\n");
|
||||
len += scnprintf(buf + len, PAGE_SIZE - len,
|
||||
"Failed\n");
|
||||
break;
|
||||
|
||||
case LPFC_VPORT_UNKNOWN:
|
||||
len += snprintf(buf + len, PAGE_SIZE - len,
|
||||
len += scnprintf(buf + len, PAGE_SIZE - len,
|
||||
"Unknown\n");
|
||||
break;
|
||||
}
|
||||
if (phba->sli.sli_flag & LPFC_MENLO_MAINT)
|
||||
len += snprintf(buf + len, PAGE_SIZE-len,
|
||||
len += scnprintf(buf + len, PAGE_SIZE-len,
|
||||
" Menlo Maint Mode\n");
|
||||
else if (phba->fc_topology == LPFC_TOPOLOGY_LOOP) {
|
||||
if (vport->fc_flag & FC_PUBLIC_LOOP)
|
||||
len += snprintf(buf + len, PAGE_SIZE-len,
|
||||
len += scnprintf(buf + len, PAGE_SIZE-len,
|
||||
" Public Loop\n");
|
||||
else
|
||||
len += snprintf(buf + len, PAGE_SIZE-len,
|
||||
len += scnprintf(buf + len, PAGE_SIZE-len,
|
||||
" Private Loop\n");
|
||||
} else {
|
||||
if (vport->fc_flag & FC_FABRIC)
|
||||
len += snprintf(buf + len, PAGE_SIZE-len,
|
||||
len += scnprintf(buf + len, PAGE_SIZE-len,
|
||||
" Fabric\n");
|
||||
else
|
||||
len += snprintf(buf + len, PAGE_SIZE-len,
|
||||
len += scnprintf(buf + len, PAGE_SIZE-len,
|
||||
" Point-2-Point\n");
|
||||
}
|
||||
}
|
||||
|
@@ -937,28 +958,28 @@ lpfc_link_state_show(struct device *dev, struct device_attribute *attr,
		struct lpfc_trunk_link link = phba->trunk_link;

		if (bf_get(lpfc_conf_trunk_port0, &phba->sli4_hba))
-			len += snprintf(buf + len, PAGE_SIZE - len,
+			len += scnprintf(buf + len, PAGE_SIZE - len,
				"Trunk port 0: Link %s %s\n",
				(link.link0.state == LPFC_LINK_UP) ?
				 "Up" : "Down. ",
				trunk_errmsg[link.link0.fault]);

		if (bf_get(lpfc_conf_trunk_port1, &phba->sli4_hba))
-			len += snprintf(buf + len, PAGE_SIZE - len,
+			len += scnprintf(buf + len, PAGE_SIZE - len,
				"Trunk port 1: Link %s %s\n",
				(link.link1.state == LPFC_LINK_UP) ?
				 "Up" : "Down. ",
				trunk_errmsg[link.link1.fault]);

		if (bf_get(lpfc_conf_trunk_port2, &phba->sli4_hba))
-			len += snprintf(buf + len, PAGE_SIZE - len,
+			len += scnprintf(buf + len, PAGE_SIZE - len,
				"Trunk port 2: Link %s %s\n",
				(link.link2.state == LPFC_LINK_UP) ?
				 "Up" : "Down. ",
				trunk_errmsg[link.link2.fault]);

		if (bf_get(lpfc_conf_trunk_port3, &phba->sli4_hba))
-			len += snprintf(buf + len, PAGE_SIZE - len,
+			len += scnprintf(buf + len, PAGE_SIZE - len,
				"Trunk port 3: Link %s %s\n",
				(link.link3.state == LPFC_LINK_UP) ?
				 "Up" : "Down. ",
@@ -986,15 +1007,15 @@ lpfc_sli4_protocol_show(struct device *dev, struct device_attribute *attr,
	struct lpfc_hba *phba = vport->phba;

	if (phba->sli_rev < LPFC_SLI_REV4)
-		return snprintf(buf, PAGE_SIZE, "fc\n");
+		return scnprintf(buf, PAGE_SIZE, "fc\n");

	if (phba->sli4_hba.lnk_info.lnk_dv == LPFC_LNK_DAT_VAL) {
		if (phba->sli4_hba.lnk_info.lnk_tp == LPFC_LNK_TYPE_GE)
-			return snprintf(buf, PAGE_SIZE, "fcoe\n");
+			return scnprintf(buf, PAGE_SIZE, "fcoe\n");
		if (phba->sli4_hba.lnk_info.lnk_tp == LPFC_LNK_TYPE_FC)
-			return snprintf(buf, PAGE_SIZE, "fc\n");
+			return scnprintf(buf, PAGE_SIZE, "fc\n");
	}
-	return snprintf(buf, PAGE_SIZE, "unknown\n");
+	return scnprintf(buf, PAGE_SIZE, "unknown\n");
 }

 /**
@@ -1014,7 +1035,7 @@ lpfc_oas_supported_show(struct device *dev, struct device_attribute *attr,
	struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
	struct lpfc_hba *phba = vport->phba;

-	return snprintf(buf, PAGE_SIZE, "%d\n",
+	return scnprintf(buf, PAGE_SIZE, "%d\n",
			phba->sli4_hba.pc_sli4_params.oas_supported);
 }

@@ -1072,7 +1093,7 @@ lpfc_num_discovered_ports_show(struct device *dev,
	struct Scsi_Host *shost = class_to_shost(dev);
	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;

-	return snprintf(buf, PAGE_SIZE, "%d\n",
+	return scnprintf(buf, PAGE_SIZE, "%d\n",
			vport->fc_map_cnt + vport->fc_unmap_cnt);
 }

@@ -1204,6 +1225,20 @@ lpfc_do_offline(struct lpfc_hba *phba, uint32_t type)

	psli = &phba->sli;

+	/*
+	 * If freeing the queues have already started, don't access them.
+	 * Otherwise set FREE_WAIT to indicate that queues are being used
+	 * to hold the freeing process until we finish.
+	 */
+	spin_lock_irq(&phba->hbalock);
+	if (!(psli->sli_flag & LPFC_QUEUE_FREE_INIT)) {
+		psli->sli_flag |= LPFC_QUEUE_FREE_WAIT;
+	} else {
+		spin_unlock_irq(&phba->hbalock);
+		goto skip_wait;
+	}
+	spin_unlock_irq(&phba->hbalock);
+
	/* Wait a little for things to settle down, but not
	 * long enough for dev loss timeout to expire.
	 */
@@ -1225,6 +1260,11 @@ lpfc_do_offline(struct lpfc_hba *phba, uint32_t type)
		}
	}
 out:
+	spin_lock_irq(&phba->hbalock);
+	psli->sli_flag &= ~LPFC_QUEUE_FREE_WAIT;
+	spin_unlock_irq(&phba->hbalock);
+
+skip_wait:
	init_completion(&online_compl);
	rc = lpfc_workq_post_event(phba, &status, &online_compl, type);
	if (rc == 0)
@@ -1258,7 +1298,7 @@ lpfc_do_offline(struct lpfc_hba *phba, uint32_t type)
  * -EBUSY, port is not in offline state
  * 0, successful
  */
-int
+static int
 lpfc_reset_pci_bus(struct lpfc_hba *phba)
 {
	struct pci_dev *pdev = phba->pcidev;
@@ -1586,10 +1626,10 @@ lpfc_nport_evt_cnt_show(struct device *dev, struct device_attribute *attr,
	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
	struct lpfc_hba *phba = vport->phba;

-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->nport_event_cnt);
+	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->nport_event_cnt);
 }

-int
+static int
 lpfc_set_trunking(struct lpfc_hba *phba, char *buff_out)
 {
	LPFC_MBOXQ_t *mbox = NULL;
@@ -1675,7 +1715,7 @@ lpfc_board_mode_show(struct device *dev, struct device_attribute *attr,
	else
		state = "online";

-	return snprintf(buf, PAGE_SIZE, "%s\n", state);
+	return scnprintf(buf, PAGE_SIZE, "%s\n", state);
 }

 /**
@@ -1901,8 +1941,8 @@ lpfc_max_rpi_show(struct device *dev, struct device_attribute *attr,
	uint32_t cnt;

	if (lpfc_get_hba_info(phba, NULL, NULL, &cnt, NULL, NULL, NULL))
-		return snprintf(buf, PAGE_SIZE, "%d\n", cnt);
-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
+		return scnprintf(buf, PAGE_SIZE, "%d\n", cnt);
+	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
 }

 /**
@@ -1929,8 +1969,8 @@ lpfc_used_rpi_show(struct device *dev, struct device_attribute *attr,
	uint32_t cnt, acnt;

	if (lpfc_get_hba_info(phba, NULL, NULL, &cnt, &acnt, NULL, NULL))
-		return snprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
+		return scnprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
+	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
 }

 /**
@@ -1957,8 +1997,8 @@ lpfc_max_xri_show(struct device *dev, struct device_attribute *attr,
	uint32_t cnt;

	if (lpfc_get_hba_info(phba, &cnt, NULL, NULL, NULL, NULL, NULL))
-		return snprintf(buf, PAGE_SIZE, "%d\n", cnt);
-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
+		return scnprintf(buf, PAGE_SIZE, "%d\n", cnt);
+	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
 }

 /**
@@ -1985,8 +2025,8 @@ lpfc_used_xri_show(struct device *dev, struct device_attribute *attr,
	uint32_t cnt, acnt;

	if (lpfc_get_hba_info(phba, &cnt, &acnt, NULL, NULL, NULL, NULL))
-		return snprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
+		return scnprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
+	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
 }

 /**
@@ -2013,8 +2053,8 @@ lpfc_max_vpi_show(struct device *dev, struct device_attribute *attr,
	uint32_t cnt;

	if (lpfc_get_hba_info(phba, NULL, NULL, NULL, NULL, &cnt, NULL))
-		return snprintf(buf, PAGE_SIZE, "%d\n", cnt);
-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
+		return scnprintf(buf, PAGE_SIZE, "%d\n", cnt);
+	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
 }

 /**
@@ -2041,8 +2081,8 @@ lpfc_used_vpi_show(struct device *dev, struct device_attribute *attr,
	uint32_t cnt, acnt;

	if (lpfc_get_hba_info(phba, NULL, NULL, NULL, NULL, &cnt, &acnt))
-		return snprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
+		return scnprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
+	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
 }

 /**
@@ -2067,10 +2107,10 @@ lpfc_npiv_info_show(struct device *dev, struct device_attribute *attr,
	struct lpfc_hba *phba = vport->phba;

	if (!(phba->max_vpi))
-		return snprintf(buf, PAGE_SIZE, "NPIV Not Supported\n");
+		return scnprintf(buf, PAGE_SIZE, "NPIV Not Supported\n");
	if (vport->port_type == LPFC_PHYSICAL_PORT)
-		return snprintf(buf, PAGE_SIZE, "NPIV Physical\n");
-	return snprintf(buf, PAGE_SIZE, "NPIV Virtual (VPI %d)\n", vport->vpi);
+		return scnprintf(buf, PAGE_SIZE, "NPIV Physical\n");
+	return scnprintf(buf, PAGE_SIZE, "NPIV Virtual (VPI %d)\n", vport->vpi);
 }

 /**
@@ -2092,7 +2132,7 @@ lpfc_poll_show(struct device *dev, struct device_attribute *attr,
	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
	struct lpfc_hba *phba = vport->phba;

-	return snprintf(buf, PAGE_SIZE, "%#x\n", phba->cfg_poll);
+	return scnprintf(buf, PAGE_SIZE, "%#x\n", phba->cfg_poll);
 }

 /**
@@ -2196,7 +2236,7 @@ lpfc_fips_level_show(struct device *dev, struct device_attribute *attr,
	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
	struct lpfc_hba *phba = vport->phba;

-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->fips_level);
+	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->fips_level);
 }

 /**
@@ -2215,7 +2255,7 @@ lpfc_fips_rev_show(struct device *dev, struct device_attribute *attr,
	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
	struct lpfc_hba *phba = vport->phba;

-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->fips_spec_rev);
+	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->fips_spec_rev);
 }

 /**
@@ -2234,7 +2274,7 @@ lpfc_dss_show(struct device *dev, struct device_attribute *attr,
	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
	struct lpfc_hba *phba = vport->phba;

-	return snprintf(buf, PAGE_SIZE, "%s - %sOperational\n",
+	return scnprintf(buf, PAGE_SIZE, "%s - %sOperational\n",
			(phba->cfg_enable_dss) ? "Enabled" : "Disabled",
			(phba->sli3_options & LPFC_SLI3_DSS_ENABLED) ?
				"" : "Not ");
@@ -2263,7 +2303,7 @@ lpfc_sriov_hw_max_virtfn_show(struct device *dev,
	uint16_t max_nr_virtfn;

	max_nr_virtfn = lpfc_sli_sriov_nr_virtfn_get(phba);
-	return snprintf(buf, PAGE_SIZE, "%d\n", max_nr_virtfn);
+	return scnprintf(buf, PAGE_SIZE, "%d\n", max_nr_virtfn);
 }

 static inline bool lpfc_rangecheck(uint val, uint min, uint max)
@@ -2323,7 +2363,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
	struct Scsi_Host *shost = class_to_shost(dev);\
	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;\
	struct lpfc_hba *phba = vport->phba;\
-	return snprintf(buf, PAGE_SIZE, "%d\n",\
+	return scnprintf(buf, PAGE_SIZE, "%d\n",\
			phba->cfg_##attr);\
 }

@@ -2351,7 +2391,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
	struct lpfc_hba *phba = vport->phba;\
	uint val = 0;\
	val = phba->cfg_##attr;\
-	return snprintf(buf, PAGE_SIZE, "%#x\n",\
+	return scnprintf(buf, PAGE_SIZE, "%#x\n",\
			phba->cfg_##attr);\
 }

@@ -2487,7 +2527,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
 { \
	struct Scsi_Host *shost = class_to_shost(dev);\
	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;\
-	return snprintf(buf, PAGE_SIZE, "%d\n", vport->cfg_##attr);\
+	return scnprintf(buf, PAGE_SIZE, "%d\n", vport->cfg_##attr);\
 }

 /**
@@ -2512,7 +2552,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
 { \
	struct Scsi_Host *shost = class_to_shost(dev);\
	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;\
-	return snprintf(buf, PAGE_SIZE, "%#x\n", vport->cfg_##attr);\
+	return scnprintf(buf, PAGE_SIZE, "%#x\n", vport->cfg_##attr);\
 }

 /**
@@ -2784,7 +2824,7 @@ lpfc_soft_wwpn_show(struct device *dev, struct device_attribute *attr,
	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
	struct lpfc_hba *phba = vport->phba;

-	return snprintf(buf, PAGE_SIZE, "0x%llx\n",
+	return scnprintf(buf, PAGE_SIZE, "0x%llx\n",
			(unsigned long long)phba->cfg_soft_wwpn);
 }

@@ -2881,7 +2921,7 @@ lpfc_soft_wwnn_show(struct device *dev, struct device_attribute *attr,
 {
	struct Scsi_Host *shost = class_to_shost(dev);
	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
-	return snprintf(buf, PAGE_SIZE, "0x%llx\n",
+	return scnprintf(buf, PAGE_SIZE, "0x%llx\n",
			(unsigned long long)phba->cfg_soft_wwnn);
 }

@@ -2947,7 +2987,7 @@ lpfc_oas_tgt_show(struct device *dev, struct device_attribute *attr,
	struct Scsi_Host *shost = class_to_shost(dev);
	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;

-	return snprintf(buf, PAGE_SIZE, "0x%llx\n",
+	return scnprintf(buf, PAGE_SIZE, "0x%llx\n",
			wwn_to_u64(phba->cfg_oas_tgt_wwpn));
 }

@@ -3015,7 +3055,7 @@ lpfc_oas_priority_show(struct device *dev, struct device_attribute *attr,
	struct Scsi_Host *shost = class_to_shost(dev);
	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;

-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_priority);
+	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_priority);
 }

 /**
@@ -3078,7 +3118,7 @@ lpfc_oas_vpt_show(struct device *dev, struct device_attribute *attr,
	struct Scsi_Host *shost = class_to_shost(dev);
	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;

-	return snprintf(buf, PAGE_SIZE, "0x%llx\n",
+	return scnprintf(buf, PAGE_SIZE, "0x%llx\n",
			wwn_to_u64(phba->cfg_oas_vpt_wwpn));
 }

@@ -3149,7 +3189,7 @@ lpfc_oas_lun_state_show(struct device *dev, struct device_attribute *attr,
	struct Scsi_Host *shost = class_to_shost(dev);
	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;

-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_lun_state);
+	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_lun_state);
 }

 /**
@@ -3213,7 +3253,7 @@ lpfc_oas_lun_status_show(struct device *dev, struct device_attribute *attr,
	if (!(phba->cfg_oas_flags & OAS_LUN_VALID))
		return -EFAULT;

-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_lun_status);
+	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_lun_status);
 }
 static DEVICE_ATTR(lpfc_xlane_lun_status, S_IRUGO,
		   lpfc_oas_lun_status_show, NULL);
@@ -3365,7 +3405,7 @@ lpfc_oas_lun_show(struct device *dev, struct device_attribute *attr,
	if (oas_lun != NOT_OAS_ENABLED_LUN)
		phba->cfg_oas_flags |= OAS_LUN_VALID;

-	len += snprintf(buf + len, PAGE_SIZE-len, "0x%llx", oas_lun);
+	len += scnprintf(buf + len, PAGE_SIZE-len, "0x%llx", oas_lun);

	return len;
 }
@@ -3499,7 +3539,7 @@ lpfc_iocb_hw_show(struct device *dev, struct device_attribute *attr, char *buf)
	struct Scsi_Host *shost = class_to_shost(dev);
	struct lpfc_hba *phba = ((struct lpfc_vport *) shost->hostdata)->phba;

-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->iocb_max);
+	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->iocb_max);
 }

 static DEVICE_ATTR(iocb_hw, S_IRUGO,
@@ -3511,7 +3551,7 @@ lpfc_txq_hw_show(struct device *dev, struct device_attribute *attr, char *buf)
	struct lpfc_hba *phba = ((struct lpfc_vport *) shost->hostdata)->phba;
	struct lpfc_sli_ring *pring = lpfc_phba_elsring(phba);

-	return snprintf(buf, PAGE_SIZE, "%d\n",
+	return scnprintf(buf, PAGE_SIZE, "%d\n",
			pring ? pring->txq_max : 0);
 }

@@ -3525,7 +3565,7 @@ lpfc_txcmplq_hw_show(struct device *dev, struct device_attribute *attr,
	struct lpfc_hba *phba = ((struct lpfc_vport *) shost->hostdata)->phba;
	struct lpfc_sli_ring *pring = lpfc_phba_elsring(phba);

-	return snprintf(buf, PAGE_SIZE, "%d\n",
+	return scnprintf(buf, PAGE_SIZE, "%d\n",
			pring ? pring->txcmplq_max : 0);
 }

@@ -3561,7 +3601,7 @@ lpfc_nodev_tmo_show(struct device *dev, struct device_attribute *attr,
	struct Scsi_Host *shost = class_to_shost(dev);
	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;

-	return snprintf(buf, PAGE_SIZE, "%d\n", vport->cfg_devloss_tmo);
+	return scnprintf(buf, PAGE_SIZE, "%d\n", vport->cfg_devloss_tmo);
 }

 /**
@@ -4050,9 +4090,9 @@ lpfc_topology_store(struct device *dev, struct device_attribute *attr,
	}
	if ((phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC ||
	     phba->pcidev->device == PCI_DEVICE_ID_LANCER_G7_FC) &&
-	    val == 4) {
+	    val != FLAGS_TOPOLOGY_MODE_PT_PT) {
		lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
-			"3114 Loop mode not supported\n");
+			"3114 Only non-FC-AL mode is supported\n");
		return -EINVAL;
	}
	phba->cfg_topology = val;
@@ -5169,12 +5209,12 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,

	switch (phba->cfg_fcp_cpu_map) {
	case 0:
-		len += snprintf(buf + len, PAGE_SIZE-len,
+		len += scnprintf(buf + len, PAGE_SIZE-len,
				"fcp_cpu_map: No mapping (%d)\n",
				phba->cfg_fcp_cpu_map);
		return len;
	case 1:
-		len += snprintf(buf + len, PAGE_SIZE-len,
+		len += scnprintf(buf + len, PAGE_SIZE-len,
				"fcp_cpu_map: HBA centric mapping (%d): "
				"%d of %d CPUs online from %d possible CPUs\n",
				phba->cfg_fcp_cpu_map, num_online_cpus(),
@@ -5188,12 +5228,12 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
		cpup = &phba->sli4_hba.cpu_map[phba->sli4_hba.curr_disp_cpu];

		if (!cpu_present(phba->sli4_hba.curr_disp_cpu))
-			len += snprintf(buf + len, PAGE_SIZE - len,
+			len += scnprintf(buf + len, PAGE_SIZE - len,
					"CPU %02d not present\n",
					phba->sli4_hba.curr_disp_cpu);
		else if (cpup->irq == LPFC_VECTOR_MAP_EMPTY) {
			if (cpup->hdwq == LPFC_VECTOR_MAP_EMPTY)
-				len += snprintf(
+				len += scnprintf(
					buf + len, PAGE_SIZE - len,
					"CPU %02d hdwq None "
					"physid %d coreid %d ht %d\n",
@@ -5201,7 +5241,7 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
					cpup->phys_id,
					cpup->core_id, cpup->hyper);
			else
-				len += snprintf(
+				len += scnprintf(
					buf + len, PAGE_SIZE - len,
					"CPU %02d EQ %04d hdwq %04d "
					"physid %d coreid %d ht %d\n",
@@ -5210,7 +5250,7 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
					cpup->core_id, cpup->hyper);
		} else {
			if (cpup->hdwq == LPFC_VECTOR_MAP_EMPTY)
-				len += snprintf(
+				len += scnprintf(
					buf + len, PAGE_SIZE - len,
					"CPU %02d hdwq None "
					"physid %d coreid %d ht %d IRQ %d\n",
@@ -5218,7 +5258,7 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
					cpup->phys_id,
					cpup->core_id, cpup->hyper, cpup->irq);
			else
-				len += snprintf(
+				len += scnprintf(
					buf + len, PAGE_SIZE - len,
					"CPU %02d EQ %04d hdwq %04d "
					"physid %d coreid %d ht %d IRQ %d\n",
@@ -5233,7 +5273,7 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
		if (phba->sli4_hba.curr_disp_cpu <
				phba->sli4_hba.num_possible_cpu &&
		    (len >= (PAGE_SIZE - 64))) {
-			len += snprintf(buf + len,
+			len += scnprintf(buf + len,
					PAGE_SIZE - len, "more...\n");
			break;
		}
@@ -5753,10 +5793,10 @@ lpfc_sg_seg_cnt_show(struct device *dev, struct device_attribute *attr,
	struct lpfc_hba *phba = vport->phba;
	int len;

-	len = snprintf(buf, PAGE_SIZE, "SGL sz: %d total SGEs: %d\n",
+	len = scnprintf(buf, PAGE_SIZE, "SGL sz: %d total SGEs: %d\n",
		       phba->cfg_sg_dma_buf_size, phba->cfg_total_seg_cnt);

-	len += snprintf(buf + len, PAGE_SIZE, "Cfg: %d SCSI: %d NVME: %d\n",
+	len += scnprintf(buf + len, PAGE_SIZE, "Cfg: %d SCSI: %d NVME: %d\n",
		       phba->cfg_sg_seg_cnt, phba->cfg_scsi_seg_cnt,
		       phba->cfg_nvme_seg_cnt);
	return len;
@@ -6755,7 +6795,7 @@ lpfc_show_rport_##field (struct device *dev, \
 { \
	struct fc_rport *rport = transport_class_to_rport(dev); \
	struct lpfc_rport_data *rdata = rport->hostdata; \
-	return snprintf(buf, sz, format_string, \
+	return scnprintf(buf, sz, format_string, \
		(rdata->target) ? cast rdata->target->field : 0); \
 }

@@ -7003,6 +7043,7 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
	if (phba->sli_rev != LPFC_SLI_REV4) {
		/* NVME only supported on SLI4 */
		phba->nvmet_support = 0;
+		phba->cfg_nvmet_mrq = 0;
		phba->cfg_enable_fc4_type = LPFC_ENABLE_FCP;
		phba->cfg_enable_bbcr = 0;
		phba->cfg_xri_rebalancing = 0;
@@ -7104,7 +7145,7 @@ lpfc_nvme_mod_param_dep(struct lpfc_hba *phba)
	} else {
		/* Not NVME Target mode. Turn off Target parameters. */
		phba->nvmet_support = 0;
-		phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_OFF;
+		phba->cfg_nvmet_mrq = 0;
		phba->cfg_nvmet_fb_size = 0;
	}
 }
@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2017-2018 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
  * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.     *
  * Copyright (C) 2009-2015 Emulex. All rights reserved.            *
  * EMULEX and SLI are trademarks of Emulex.                        *
@@ -1968,14 +1968,17 @@ lpfc_sli4_bsg_set_link_diag_state(struct lpfc_hba *phba, uint32_t diag)
 }

 /**
- * lpfc_sli4_bsg_set_internal_loopback - set sli4 internal loopback diagnostic
+ * lpfc_sli4_bsg_set_loopback_mode - set sli4 internal loopback diagnostic
  * @phba: Pointer to HBA context object.
+ * @mode: loopback mode to set
+ * @link_no: link number for loopback mode to set
  *
  * This function is responsible for issuing a sli4 mailbox command for setting
- * up internal loopback diagnostic.
+ * up loopback diagnostic for a link.
  */
 static int
-lpfc_sli4_bsg_set_internal_loopback(struct lpfc_hba *phba)
+lpfc_sli4_bsg_set_loopback_mode(struct lpfc_hba *phba, int mode,
+				uint32_t link_no)
 {
	LPFC_MBOXQ_t *pmboxq;
	uint32_t req_len, alloc_len;
@@ -1996,11 +1999,19 @@ lpfc_sli4_bsg_set_loopback_mode(struct lpfc_hba *phba, int mode,
	}
	link_diag_loopback = &pmboxq->u.mqe.un.link_diag_loopback;
	bf_set(lpfc_mbx_set_diag_state_link_num,
-	       &link_diag_loopback->u.req, phba->sli4_hba.lnk_info.lnk_no);
-	bf_set(lpfc_mbx_set_diag_state_link_type,
-	       &link_diag_loopback->u.req, phba->sli4_hba.lnk_info.lnk_tp);
+	       &link_diag_loopback->u.req, link_no);
+
+	if (phba->sli4_hba.conf_trunk & (1 << link_no)) {
+		bf_set(lpfc_mbx_set_diag_state_link_type,
+		       &link_diag_loopback->u.req, LPFC_LNK_FC_TRUNKED);
+	} else {
+		bf_set(lpfc_mbx_set_diag_state_link_type,
+		       &link_diag_loopback->u.req,
+		       phba->sli4_hba.lnk_info.lnk_tp);
+	}
+
	bf_set(lpfc_mbx_set_diag_lpbk_type, &link_diag_loopback->u.req,
-	       LPFC_DIAG_LOOPBACK_TYPE_INTERNAL);
+	       mode);

	mbxstatus = lpfc_sli_issue_mbox_wait(phba, pmboxq, LPFC_MBOX_TMO);
	if ((mbxstatus != MBX_SUCCESS) || (pmboxq->u.mb.mbxStatus)) {
@@ -2054,7 +2065,7 @@ lpfc_sli4_bsg_diag_loopback_mode(struct lpfc_hba *phba, struct bsg_job *job)
	struct fc_bsg_request *bsg_request = job->request;
	struct fc_bsg_reply *bsg_reply = job->reply;
	struct diag_mode_set *loopback_mode;
-	uint32_t link_flags, timeout;
+	uint32_t link_flags, timeout, link_no;
	int i, rc = 0;

	/* no data to return just the return code */
@@ -2069,12 +2080,39 @@ lpfc_sli4_bsg_diag_loopback_mode(struct lpfc_hba *phba, struct bsg_job *job)
				(int)(sizeof(struct fc_bsg_request) +
				sizeof(struct diag_mode_set)));
		rc = -EINVAL;
-		goto job_error;
+		goto job_done;
	}

+	loopback_mode = (struct diag_mode_set *)
+		bsg_request->rqst_data.h_vendor.vendor_cmd;
+	link_flags = loopback_mode->type;
+	timeout = loopback_mode->timeout * 100;
+
+	if (loopback_mode->physical_link == -1)
+		link_no = phba->sli4_hba.lnk_info.lnk_no;
+	else
+		link_no = loopback_mode->physical_link;
+
+	if (link_flags == DISABLE_LOOP_BACK) {
+		rc = lpfc_sli4_bsg_set_loopback_mode(phba,
+					LPFC_DIAG_LOOPBACK_TYPE_DISABLE,
+					link_no);
+		if (!rc) {
+			/* Unset the need disable bit */
+			phba->sli4_hba.conf_trunk &= ~((1 << link_no) << 4);
+		}
+		goto job_done;
+	} else {
+		/* Check if we need to disable the loopback state */
+		if (phba->sli4_hba.conf_trunk & ((1 << link_no) << 4)) {
+			rc = -EPERM;
+			goto job_done;
+		}
+	}
+
	rc = lpfc_bsg_diag_mode_enter(phba);
	if (rc)
-		goto job_error;
+		goto job_done;

	/* indicate we are in loobpack diagnostic mode */
	spin_lock_irq(&phba->hbalock);
@@ -2084,15 +2122,11 @@ lpfc_sli4_bsg_diag_loopback_mode(struct lpfc_hba *phba, struct bsg_job *job)
	/* reset port to start frome scratch */
	rc = lpfc_selective_reset(phba);
	if (rc)
-		goto job_error;
+		goto job_done;

	/* bring the link to diagnostic mode */
	lpfc_printf_log(phba, KERN_INFO, LOG_LIBDFC,
			"3129 Bring link to diagnostic state.\n");
-	loopback_mode = (struct diag_mode_set *)
-		bsg_request->rqst_data.h_vendor.vendor_cmd;
-	link_flags = loopback_mode->type;
-	timeout = loopback_mode->timeout * 100;

	rc = lpfc_sli4_bsg_set_link_diag_state(phba, 1);
	if (rc) {
@@ -2120,13 +2154,54 @@ lpfc_sli4_bsg_diag_loopback_mode(struct lpfc_hba *phba, struct bsg_job *job)
	lpfc_printf_log(phba, KERN_INFO, LOG_LIBDFC,
			"3132 Set up loopback mode:x%x\n", link_flags);

-	if (link_flags == INTERNAL_LOOP_BACK)
-		rc = lpfc_sli4_bsg_set_internal_loopback(phba);
-	else if (link_flags == EXTERNAL_LOOP_BACK)
-		rc = lpfc_hba_init_link_fc_topology(phba,
-						    FLAGS_TOPOLOGY_MODE_PT_PT,
-						    MBX_NOWAIT);
-	else {
+	switch (link_flags) {
+	case INTERNAL_LOOP_BACK:
+		if (phba->sli4_hba.conf_trunk & (1 << link_no)) {
+			rc = lpfc_sli4_bsg_set_loopback_mode(phba,
+					LPFC_DIAG_LOOPBACK_TYPE_INTERNAL,
+					link_no);
+		} else {
+			/* Trunk is configured, but link is not in this trunk */
+			if (phba->sli4_hba.conf_trunk) {
+				rc = -ELNRNG;
+				goto loopback_mode_exit;
+			}
+
+			rc = lpfc_sli4_bsg_set_loopback_mode(phba,
+					LPFC_DIAG_LOOPBACK_TYPE_INTERNAL,
+					link_no);
+		}
+
+		if (!rc) {
+			/* Set the need disable bit */
+			phba->sli4_hba.conf_trunk |= (1 << link_no) << 4;
+		}
+
+		break;
+	case EXTERNAL_LOOP_BACK:
+		if (phba->sli4_hba.conf_trunk & (1 << link_no)) {
+			rc = lpfc_sli4_bsg_set_loopback_mode(phba,
+				LPFC_DIAG_LOOPBACK_TYPE_EXTERNAL_TRUNKED,
+					link_no);
+		} else {
+			/* Trunk is configured, but link is not in this trunk */
+			if (phba->sli4_hba.conf_trunk) {
+				rc = -ELNRNG;
+				goto loopback_mode_exit;
+			}
+
+			rc = lpfc_sli4_bsg_set_loopback_mode(phba,
+						LPFC_DIAG_LOOPBACK_TYPE_SERDES,
+						link_no);
+		}
+
+		if (!rc) {
+			/* Set the need disable bit */
+			phba->sli4_hba.conf_trunk |= (1 << link_no) << 4;
+		}
+
+		break;
+	default:
+		rc = -EINVAL;
		lpfc_printf_log(phba, KERN_ERR, LOG_LIBDFC,
			"3141 Loopback mode:x%x not supported\n",
@@ -2185,7 +2260,7 @@ lpfc_sli4_bsg_diag_loopback_mode(struct lpfc_hba *phba, struct bsg_job *job)
|
|||
}
|
||||
lpfc_bsg_diag_mode_exit(phba);
|
||||
|
||||
job_error:
|
||||
job_done:
|
||||
/* make error code available to userspace */
|
||||
bsg_reply->result = rc;
|
||||
/* complete the job back to userspace if no error */
|
||||
|
|
|
@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2017-2018 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
  * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.     *
  * Copyright (C) 2010-2015 Emulex.  All rights reserved.           *
  * EMULEX and SLI are trademarks of Emulex.                        *
@@ -68,6 +68,7 @@ struct send_mgmt_resp {
 };
 
+#define DISABLE_LOOP_BACK  0x0 /* disables loop back */
 #define INTERNAL_LOOP_BACK 0x1 /* adapter short cuts the loop internally */
 #define EXTERNAL_LOOP_BACK 0x2 /* requires an external loopback plug */
 
@@ -75,6 +76,7 @@ struct diag_mode_set {
 	uint32_t command;
 	uint32_t type;
 	uint32_t timeout;
+	uint32_t physical_link;
 };
 
 struct sli4_link_diag {

@@ -886,7 +886,7 @@ lpfc_cmpl_ct_cmd_gid_pt(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 	}
 	if (lpfc_error_lost_link(irsp)) {
 		lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
-				 "4101 NS query failed due to link event\n");
+				 "4166 NS query failed due to link event\n");
 		if (vport->fc_flag & FC_RSCN_MODE)
 			lpfc_els_flush_rscn(vport);
 		goto out;
@@ -907,7 +907,7 @@ lpfc_cmpl_ct_cmd_gid_pt(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 		 * Re-issue the NS cmd
 		 */
 		lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
-				 "4102 Process Deferred RSCN Data: x%x x%x\n",
+				 "4167 Process Deferred RSCN Data: x%x x%x\n",
 				 vport->fc_flag, vport->fc_rscn_id_cnt);
 		lpfc_els_handle_rscn(vport);
 
@@ -1430,7 +1430,7 @@ lpfc_vport_symbolic_port_name(struct lpfc_vport *vport, char *symbol,
 	 * Name object.  NPIV is not in play so this integer
 	 * value is sufficient and unique per FC-ID.
 	 */
-	n = snprintf(symbol, size, "%d", vport->phba->brd_no);
+	n = scnprintf(symbol, size, "%d", vport->phba->brd_no);
 	return n;
 }
 
@@ -1444,26 +1444,26 @@ lpfc_vport_symbolic_node_name(struct lpfc_vport *vport, char *symbol,
 
 	lpfc_decode_firmware_rev(vport->phba, fwrev, 0);
 
-	n = snprintf(symbol, size, "Emulex %s", vport->phba->ModelName);
+	n = scnprintf(symbol, size, "Emulex %s", vport->phba->ModelName);
 	if (size < n)
 		return n;
 
-	n += snprintf(symbol + n, size - n, " FV%s", fwrev);
+	n += scnprintf(symbol + n, size - n, " FV%s", fwrev);
 	if (size < n)
 		return n;
 
-	n += snprintf(symbol + n, size - n, " DV%s.",
-		      lpfc_release_version);
+	n += scnprintf(symbol + n, size - n, " DV%s.",
+		       lpfc_release_version);
 	if (size < n)
 		return n;
 
-	n += snprintf(symbol + n, size - n, " HN:%s.",
-		      init_utsname()->nodename);
+	n += scnprintf(symbol + n, size - n, " HN:%s.",
+		       init_utsname()->nodename);
 	if (size < n)
 		return n;
 
 	/* Note :- OS name is "Linux" */
-	n += snprintf(symbol + n, size - n, " OS:%s\n",
-		      init_utsname()->sysname);
+	n += scnprintf(symbol + n, size - n, " OS:%s",
+		       init_utsname()->sysname);
 	return n;
 }
 
@@ -2005,8 +2005,11 @@ lpfc_fdmi_hba_attr_manufacturer(struct lpfc_vport *vport,
 	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
 	memset(ae, 0, 256);
 
+	/* This string MUST be consistent with other FC platforms
+	 * supported by Broadcom.
+	 */
 	strncpy(ae->un.AttrString,
-		"Broadcom Inc.",
+		"Emulex Corporation",
 		       sizeof(ae->un.AttrString));
 	len = strnlen(ae->un.AttrString,
 		      sizeof(ae->un.AttrString));
@@ -2301,7 +2304,8 @@ lpfc_fdmi_hba_attr_bios_ver(struct lpfc_vport *vport,
 	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
 	memset(ae, 0, 256);
 
-	lpfc_decode_firmware_rev(phba, ae->un.AttrString, 1);
+	strlcat(ae->un.AttrString, phba->BIOSVersion,
+		sizeof(ae->un.AttrString));
 	len = strnlen(ae->un.AttrString,
 		      sizeof(ae->un.AttrString));
 	len += (len & 3) ? (4 - (len & 3)) : 4;
@@ -2360,10 +2364,11 @@ lpfc_fdmi_port_attr_fc4type(struct lpfc_vport *vport,
 	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
 	memset(ae, 0, 32);
 
-	ae->un.AttrTypes[3] = 0x02;	/* Type 1 - ELS */
-	ae->un.AttrTypes[2] = 0x01;	/* Type 8 - FCP */
-	ae->un.AttrTypes[6] = 0x01;	/* Type 40 - NVME */
-	ae->un.AttrTypes[7] = 0x01;	/* Type 32 - CT */
+	ae->un.AttrTypes[3] = 0x02;	/* Type 0x1 - ELS */
+	ae->un.AttrTypes[2] = 0x01;	/* Type 0x8 - FCP */
+	if (vport->nvmei_support || vport->phba->nvmet_support)
+		ae->un.AttrTypes[6] = 0x01; /* Type 0x28 - NVME */
+	ae->un.AttrTypes[7] = 0x01;	/* Type 0x20 - CT */
 	size = FOURBYTES + 32;
 	ad->AttrLen = cpu_to_be16(size);
 	ad->AttrType = cpu_to_be16(RPRT_SUPPORTED_FC4_TYPES);
@@ -2673,9 +2678,11 @@ lpfc_fdmi_port_attr_active_fc4type(struct lpfc_vport *vport,
 	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
 	memset(ae, 0, 32);
 
-	ae->un.AttrTypes[3] = 0x02;	/* Type 1 - ELS */
-	ae->un.AttrTypes[2] = 0x01;	/* Type 8 - FCP */
-	ae->un.AttrTypes[7] = 0x01;	/* Type 32 - CT */
+	ae->un.AttrTypes[3] = 0x02;	/* Type 0x1 - ELS */
+	ae->un.AttrTypes[2] = 0x01;	/* Type 0x8 - FCP */
+	if (vport->phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME)
+		ae->un.AttrTypes[6] = 0x1;  /* Type 0x28 - NVME */
+	ae->un.AttrTypes[7] = 0x01;	/* Type 0x20 - CT */
 	size = FOURBYTES + 32;
 	ad->AttrLen = cpu_to_be16(size);
 	ad->AttrType = cpu_to_be16(RPRT_ACTIVE_FC4_TYPES);

(diff for one file suppressed by the viewer because it is too large)

@@ -345,10 +345,10 @@ lpfc_debug_dump_qe(struct lpfc_queue *q, uint32_t idx)
 
 	esize = q->entry_size;
 	qe_word_cnt = esize / sizeof(uint32_t);
-	pword = q->qe[idx].address;
+	pword = lpfc_sli4_qe(q, idx);
 
 	len = 0;
-	len += snprintf(line_buf+len, LPFC_LBUF_SZ-len, "QE[%04d]: ", idx);
+	len += scnprintf(line_buf+len, LPFC_LBUF_SZ-len, "QE[%04d]: ", idx);
 	if (qe_word_cnt > 8)
 		printk(KERN_ERR "%s\n", line_buf);
 
@@ -359,11 +359,11 @@ lpfc_debug_dump_qe(struct lpfc_queue *q, uint32_t idx)
 			if (qe_word_cnt > 8) {
 				len = 0;
 				memset(line_buf, 0, LPFC_LBUF_SZ);
-				len += snprintf(line_buf+len, LPFC_LBUF_SZ-len,
+				len += scnprintf(line_buf+len, LPFC_LBUF_SZ-len,
 						"%03d: ", i);
 			}
 		}
-		len += snprintf(line_buf+len, LPFC_LBUF_SZ-len, "%08x ",
+		len += scnprintf(line_buf+len, LPFC_LBUF_SZ-len, "%08x ",
 				((uint32_t)*pword) & 0xffffffff);
 		pword++;
 	}

@@ -1961,7 +1961,7 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 	IOCB_t *irsp;
 	struct lpfc_nodelist *ndlp;
 	struct lpfc_dmabuf *prsp;
-	int disc, rc;
+	int disc;
 
 	/* we pass cmdiocb to state machine which needs rspiocb as well */
 	cmdiocb->context_un.rsp_iocb = rspiocb;
@@ -1990,7 +1990,6 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 	disc = (ndlp->nlp_flag & NLP_NPR_2B_DISC);
 	ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
 	spin_unlock_irq(shost->host_lock);
-	rc = 0;
 
 	/* PLOGI completes to NPort <nlp_DID> */
 	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
@@ -2029,18 +2028,16 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 				 ndlp->nlp_DID, irsp->ulpStatus,
 				 irsp->un.ulpWord[4]);
 		/* Do not call DSM for lpfc_els_abort'ed ELS cmds */
-		if (lpfc_error_lost_link(irsp))
-			rc = NLP_STE_FREED_NODE;
-		else
-			rc = lpfc_disc_state_machine(vport, ndlp, cmdiocb,
-						     NLP_EVT_CMPL_PLOGI);
+		if (!lpfc_error_lost_link(irsp))
+			lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+						NLP_EVT_CMPL_PLOGI);
 	} else {
 		/* Good status, call state machine */
 		prsp = list_entry(((struct lpfc_dmabuf *)
 				   cmdiocb->context2)->list.next,
 				  struct lpfc_dmabuf, list);
 		ndlp = lpfc_plogi_confirm_nport(phba, prsp->virt, ndlp);
-		rc = lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+		lpfc_disc_state_machine(vport, ndlp, cmdiocb,
 					NLP_EVT_CMPL_PLOGI);
 	}
 
@@ -6744,12 +6741,11 @@ lpfc_els_rcv_rnid(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
 	uint32_t *lp;
 	RNID *rn;
 	struct ls_rjt stat;
-	uint32_t cmd;
 
 	pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
 	lp = (uint32_t *) pcmd->virt;
 
-	cmd = *lp++;
+	lp++;
 	rn = (RNID *) lp;
 
 	/* RNID received */
@@ -7508,14 +7504,14 @@ lpfc_els_rcv_farp(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
 	uint32_t *lp;
 	IOCB_t *icmd;
 	FARP *fp;
-	uint32_t cmd, cnt, did;
+	uint32_t cnt, did;
 
 	icmd = &cmdiocb->iocb;
 	did = icmd->un.elsreq64.remoteID;
 	pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
 	lp = (uint32_t *) pcmd->virt;
 
-	cmd = *lp++;
+	lp++;
 	fp = (FARP *) lp;
 	/* FARP-REQ received from DID <did> */
 	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
@@ -7580,14 +7576,14 @@ lpfc_els_rcv_farpr(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
 	struct lpfc_dmabuf *pcmd;
 	uint32_t *lp;
 	IOCB_t *icmd;
-	uint32_t cmd, did;
+	uint32_t did;
 
 	icmd = &cmdiocb->iocb;
 	did = icmd->un.elsreq64.remoteID;
 	pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
 	lp = (uint32_t *) pcmd->virt;
 
-	cmd = *lp++;
+	lp++;
 	/* FARP-RSP received from DID <did> */
 	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
 			 "0600 FARP-RSP received from DID x%x\n", did);
@@ -8454,6 +8450,14 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 		rjt_err = LSRJT_UNABLE_TPC;
 		rjt_exp = LSEXP_INVALID_OX_RX;
 		break;
+	case ELS_CMD_FPIN:
+		/*
+		 * Received FPIN from fabric - pass it to the
+		 * transport FPIN handler.
+		 */
+		fc_host_fpin_rcv(shost, elsiocb->iocb.unsli3.rcvsli3.acc_len,
+				(char *)payload);
+		break;
 	default:
 		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
 			"RCV ELS cmd: cmd:x%x did:x%x/ste:x%x",
@@ -8776,7 +8780,6 @@ lpfc_cmpl_reg_new_vport(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 			return;
 		}
-		/* fall through */
 
 	default:
 		/* Try to recover from this error */
 		if (phba->sli_rev == LPFC_SLI_REV4)

@@ -885,15 +885,9 @@ lpfc_linkdown(struct lpfc_hba *phba)
 	LPFC_MBOXQ_t *mb;
 	int i;
 
-	if (phba->link_state == LPFC_LINK_DOWN) {
-		if (phba->sli4_hba.conf_trunk) {
-			phba->trunk_link.link0.state = 0;
-			phba->trunk_link.link1.state = 0;
-			phba->trunk_link.link2.state = 0;
-			phba->trunk_link.link3.state = 0;
-		}
+	if (phba->link_state == LPFC_LINK_DOWN)
 		return 0;
-	}
 
 	/* Block all SCSI stack I/Os */
 	lpfc_scsi_dev_block(phba);
 
@@ -932,7 +926,11 @@ lpfc_linkdown(struct lpfc_hba *phba)
 		}
 	}
 	lpfc_destroy_vport_work_array(phba, vports);
-	/* Clean up any firmware default rpi's */
+
+	/* Clean up any SLI3 firmware default rpi's */
+	if (phba->sli_rev > LPFC_SLI_REV3)
+		goto skip_unreg_did;
+
 	mb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
 	if (mb) {
 		lpfc_unreg_did(phba, 0xffff, LPFC_UNREG_ALL_DFLT_RPIS, mb);
@@ -944,6 +942,7 @@ lpfc_linkdown(struct lpfc_hba *phba)
 		}
 	}
 
+skip_unreg_did:
 	/* Setup myDID for link up if we are in pt2pt mode */
 	if (phba->pport->fc_flag & FC_PT2PT) {
 		mb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
@@ -4147,9 +4146,15 @@ lpfc_register_remote_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 	rdata->pnode = lpfc_nlp_get(ndlp);
 
 	if (ndlp->nlp_type & NLP_FCP_TARGET)
-		rport_ids.roles |= FC_RPORT_ROLE_FCP_TARGET;
+		rport_ids.roles |= FC_PORT_ROLE_FCP_TARGET;
 	if (ndlp->nlp_type & NLP_FCP_INITIATOR)
-		rport_ids.roles |= FC_RPORT_ROLE_FCP_INITIATOR;
+		rport_ids.roles |= FC_PORT_ROLE_FCP_INITIATOR;
+	if (ndlp->nlp_type & NLP_NVME_INITIATOR)
+		rport_ids.roles |= FC_PORT_ROLE_NVME_INITIATOR;
+	if (ndlp->nlp_type & NLP_NVME_TARGET)
+		rport_ids.roles |= FC_PORT_ROLE_NVME_TARGET;
+	if (ndlp->nlp_type & NLP_NVME_DISCOVERY)
+		rport_ids.roles |= FC_PORT_ROLE_NVME_DISCOVERY;
 
 	if (rport_ids.roles != FC_RPORT_ROLE_UNKNOWN)
 		fc_remote_port_rolechg(rport, rport_ids.roles);
@@ -4675,6 +4680,7 @@ lpfc_check_sli_ndlp(struct lpfc_hba *phba,
 		case CMD_XMIT_ELS_RSP64_CX:
 			if (iocb->context1 == (uint8_t *) ndlp)
 				return 1;
+			/* fall through */
 		}
 	} else if (pring->ringno == LPFC_FCP_RING) {
 		/* Skip match check if waiting to relogin to FCP target */
@@ -4870,6 +4876,10 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 				 * accept PLOGIs after unreg_rpi_cmpl
 				 */
 				acc_plogi = 0;
+			} else if (vport->load_flag & FC_UNLOADING) {
+				mbox->ctx_ndlp = NULL;
+				mbox->mbox_cmpl =
+					lpfc_sli_def_mbox_cmpl;
 			} else {
 				mbox->ctx_ndlp = ndlp;
 				mbox->mbox_cmpl =
@@ -4981,6 +4991,10 @@ lpfc_unreg_default_rpis(struct lpfc_vport *vport)
 	LPFC_MBOXQ_t *mbox;
 	int rc;
 
+	/* Unreg DID is an SLI3 operation. */
+	if (phba->sli_rev > LPFC_SLI_REV3)
+		return;
+
 	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
 	if (mbox) {
 		lpfc_unreg_did(phba, vport->vpi, LPFC_UNREG_ALL_DFLT_RPIS,

@@ -560,6 +560,8 @@ struct fc_vft_header {
 #define fc_vft_hdr_hopct_WORD	word1
 };
 
+#include <uapi/scsi/fc/fc_els.h>
+
 /*
  * Extended Link Service LS_COMMAND codes (Payload Word 0)
  */
@@ -603,6 +605,7 @@ struct fc_vft_header {
 #define ELS_CMD_RNID	0x78000000
 #define ELS_CMD_LIRR	0x7A000000
 #define ELS_CMD_LCB	0x81000000
+#define ELS_CMD_FPIN	0x16000000
 #else	/* __LITTLE_ENDIAN_BITFIELD */
 #define ELS_CMD_MASK	0xffff
 #define ELS_RSP_MASK	0xff
@@ -643,6 +646,7 @@ struct fc_vft_header {
 #define ELS_CMD_RNID	0x78
 #define ELS_CMD_LIRR	0x7A
 #define ELS_CMD_LCB	0x81
+#define ELS_CMD_FPIN	ELS_FPIN
 #endif
 
 /*

@@ -1894,18 +1894,19 @@ struct lpfc_mbx_set_link_diag_loopback {
 	union {
 		struct {
 			uint32_t word0;
-#define lpfc_mbx_set_diag_lpbk_type_SHIFT	0
-#define lpfc_mbx_set_diag_lpbk_type_MASK	0x00000003
-#define lpfc_mbx_set_diag_lpbk_type_WORD	word0
-#define LPFC_DIAG_LOOPBACK_TYPE_DISABLE		0x0
-#define LPFC_DIAG_LOOPBACK_TYPE_INTERNAL	0x1
-#define LPFC_DIAG_LOOPBACK_TYPE_SERDES		0x2
-#define lpfc_mbx_set_diag_lpbk_link_num_SHIFT	16
-#define lpfc_mbx_set_diag_lpbk_link_num_MASK	0x0000003F
-#define lpfc_mbx_set_diag_lpbk_link_num_WORD	word0
-#define lpfc_mbx_set_diag_lpbk_link_type_SHIFT	22
-#define lpfc_mbx_set_diag_lpbk_link_type_MASK	0x00000003
-#define lpfc_mbx_set_diag_lpbk_link_type_WORD	word0
+#define lpfc_mbx_set_diag_lpbk_type_SHIFT		0
+#define lpfc_mbx_set_diag_lpbk_type_MASK		0x00000003
+#define lpfc_mbx_set_diag_lpbk_type_WORD		word0
+#define LPFC_DIAG_LOOPBACK_TYPE_DISABLE			0x0
+#define LPFC_DIAG_LOOPBACK_TYPE_INTERNAL		0x1
+#define LPFC_DIAG_LOOPBACK_TYPE_SERDES			0x2
+#define LPFC_DIAG_LOOPBACK_TYPE_EXTERNAL_TRUNKED	0x3
+#define lpfc_mbx_set_diag_lpbk_link_num_SHIFT		16
+#define lpfc_mbx_set_diag_lpbk_link_num_MASK		0x0000003F
+#define lpfc_mbx_set_diag_lpbk_link_num_WORD		word0
+#define lpfc_mbx_set_diag_lpbk_link_type_SHIFT		22
+#define lpfc_mbx_set_diag_lpbk_link_type_MASK		0x00000003
+#define lpfc_mbx_set_diag_lpbk_link_type_WORD		word0
 		} req;
 		struct {
 			uint32_t word0;
@@ -4083,22 +4084,7 @@ struct lpfc_acqe_grp5 {
 	uint32_t trailer;
 };
 
-static char *const trunk_errmsg[] = {	/* map errcode */
-	"",	/* There is no such error code at index 0*/
-	"link negotiated speed does not match existing"
-		" trunk - link was \"low\" speed",
-	"link negotiated speed does not match"
-		" existing trunk - link was \"middle\" speed",
-	"link negotiated speed does not match existing"
-		" trunk - link was \"high\" speed",
-	"Attached to non-trunking port - F_Port",
-	"Attached to non-trunking port - N_Port",
-	"FLOGI response timeout",
-	"non-FLOGI frame received",
-	"Invalid FLOGI response",
-	"Trunking initialization protocol",
-	"Trunk peer device mismatch",
-};
+extern const char *const trunk_errmsg[];
 
 struct lpfc_acqe_fc_la {
 	uint32_t word0;

@@ -1117,19 +1117,19 @@ lpfc_hba_down_post_s4(struct lpfc_hba *phba)
 
 		}
 	}
+	spin_unlock_irq(&phba->hbalock);
 
 	if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) {
-		spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
+		spin_lock_irq(&phba->sli4_hba.abts_nvmet_buf_list_lock);
 		list_splice_init(&phba->sli4_hba.lpfc_abts_nvmet_ctx_list,
 				 &nvmet_aborts);
-		spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock);
+		spin_unlock_irq(&phba->sli4_hba.abts_nvmet_buf_list_lock);
 		list_for_each_entry_safe(ctxp, ctxp_next, &nvmet_aborts, list) {
 			ctxp->flag &= ~(LPFC_NVMET_XBUSY | LPFC_NVMET_ABORT_OP);
 			lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf);
 		}
 	}
 
-	spin_unlock_irq(&phba->hbalock);
 	lpfc_sli4_free_sp_events(phba);
 	return cnt;
 }
@@ -1844,8 +1844,12 @@ lpfc_handle_eratt_s4(struct lpfc_hba *phba)
 	/* If the pci channel is offline, ignore possible errors, since
 	 * we cannot communicate with the pci card anyway.
 	 */
-	if (pci_channel_offline(phba->pcidev))
+	if (pci_channel_offline(phba->pcidev)) {
+		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+				"3166 pci channel is offline\n");
+		lpfc_sli4_offline_eratt(phba);
 		return;
+	}
 
 	memset(&portsmphr_reg, 0, sizeof(portsmphr_reg));
 	if_type = bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf);
@@ -1922,6 +1926,7 @@ lpfc_handle_eratt_s4(struct lpfc_hba *phba)
 			lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 				"3151 PCI bus read access failure: x%x\n",
 				readl(phba->sli4_hba.u.if_type2.STATUSregaddr));
+			lpfc_sli4_offline_eratt(phba);
 			return;
 		}
 		reg_err1 = readl(phba->sli4_hba.u.if_type2.ERR1regaddr);
@@ -3075,7 +3080,7 @@ lpfc_sli4_node_prep(struct lpfc_hba *phba)
  * This routine moves a batch of XRIs from lpfc_io_buf_list_put of HWQ 0
  * to expedite pool. Mark them as expedite.
  **/
-void lpfc_create_expedite_pool(struct lpfc_hba *phba)
+static void lpfc_create_expedite_pool(struct lpfc_hba *phba)
 {
 	struct lpfc_sli4_hdw_queue *qp;
 	struct lpfc_io_buf *lpfc_ncmd;
@@ -3110,7 +3115,7 @@ void lpfc_create_expedite_pool(struct lpfc_hba *phba)
 * This routine returns XRIs from expedite pool to lpfc_io_buf_list_put
 * of HWQ 0. Clear the mark.
 **/
-void lpfc_destroy_expedite_pool(struct lpfc_hba *phba)
+static void lpfc_destroy_expedite_pool(struct lpfc_hba *phba)
 {
 	struct lpfc_sli4_hdw_queue *qp;
 	struct lpfc_io_buf *lpfc_ncmd;

@@ -3230,7 +3235,7 @@ void lpfc_create_multixri_pools(struct lpfc_hba *phba)
  *
  * This routine returns XRIs from public/private to lpfc_io_buf_list_put.
  **/
-void lpfc_destroy_multixri_pools(struct lpfc_hba *phba)
+static void lpfc_destroy_multixri_pools(struct lpfc_hba *phba)
 {
 	u32 i;
 	u32 hwq_count;
@@ -3245,6 +3250,13 @@ void lpfc_destroy_multixri_pools(struct lpfc_hba *phba)
 	if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME)
 		lpfc_destroy_expedite_pool(phba);
 
+	if (!(phba->pport->load_flag & FC_UNLOADING)) {
+		lpfc_sli_flush_fcp_rings(phba);
+
+		if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME)
+			lpfc_sli_flush_nvme_rings(phba);
+	}
+
 	hwq_count = phba->cfg_hdw_queue;
 
 	for (i = 0; i < hwq_count; i++) {
@@ -3611,8 +3623,6 @@ lpfc_io_free(struct lpfc_hba *phba)
 	struct lpfc_sli4_hdw_queue *qp;
 	int idx;
 
-	spin_lock_irq(&phba->hbalock);
-
 	for (idx = 0; idx < phba->cfg_hdw_queue; idx++) {
 		qp = &phba->sli4_hba.hdwq[idx];
 		/* Release all the lpfc_nvme_bufs maintained by this host. */
@@ -3642,8 +3652,6 @@ lpfc_io_free(struct lpfc_hba *phba)
 		}
 		spin_unlock(&qp->io_buf_list_get_lock);
 	}
-
-	spin_unlock_irq(&phba->hbalock);
 }
 
 /**
@@ -4457,7 +4465,7 @@ int lpfc_scan_finished(struct Scsi_Host *shost, unsigned long time)
 	return stat;
 }
 
-void lpfc_host_supported_speeds_set(struct Scsi_Host *shost)
+static void lpfc_host_supported_speeds_set(struct Scsi_Host *shost)
 {
 	struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
 	struct lpfc_hba *phba = vport->phba;
@@ -8603,9 +8611,9 @@ lpfc_sli4_queue_verify(struct lpfc_hba *phba)
 	if (phba->nvmet_support) {
 		if (phba->cfg_irq_chann < phba->cfg_nvmet_mrq)
 			phba->cfg_nvmet_mrq = phba->cfg_irq_chann;
-		if (phba->cfg_nvmet_mrq > LPFC_NVMET_MRQ_MAX)
-			phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_MAX;
 	}
+	if (phba->cfg_nvmet_mrq > LPFC_NVMET_MRQ_MAX)
+		phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_MAX;
 
 	lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 			"2574 IO channels: hdwQ %d IRQ %d MRQ: %d\n",

@@ -8626,10 +8634,12 @@ static int
 lpfc_alloc_nvme_wq_cq(struct lpfc_hba *phba, int wqidx)
 {
 	struct lpfc_queue *qdesc;
+	int cpu;
 
+	cpu = lpfc_find_cpu_handle(phba, wqidx, LPFC_FIND_BY_HDWQ);
 	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_EXPANDED_PAGE_SIZE,
 				      phba->sli4_hba.cq_esize,
-				      LPFC_CQE_EXP_COUNT);
+				      LPFC_CQE_EXP_COUNT, cpu);
 	if (!qdesc) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 			"0508 Failed allocate fast-path NVME CQ (%d)\n",
@@ -8638,11 +8648,12 @@ lpfc_alloc_nvme_wq_cq(struct lpfc_hba *phba, int wqidx)
 	}
 	qdesc->qe_valid = 1;
 	qdesc->hdwq = wqidx;
-	qdesc->chann = lpfc_find_cpu_handle(phba, wqidx, LPFC_FIND_BY_HDWQ);
+	qdesc->chann = cpu;
 	phba->sli4_hba.hdwq[wqidx].nvme_cq = qdesc;
 
 	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_EXPANDED_PAGE_SIZE,
-				      LPFC_WQE128_SIZE, LPFC_WQE_EXP_COUNT);
+				      LPFC_WQE128_SIZE, LPFC_WQE_EXP_COUNT,
+				      cpu);
 	if (!qdesc) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 				"0509 Failed allocate fast-path NVME WQ (%d)\n",
@@ -8661,18 +8672,20 @@ lpfc_alloc_fcp_wq_cq(struct lpfc_hba *phba, int wqidx)
 {
 	struct lpfc_queue *qdesc;
 	uint32_t wqesize;
+	int cpu;
 
+	cpu = lpfc_find_cpu_handle(phba, wqidx, LPFC_FIND_BY_HDWQ);
 	/* Create Fast Path FCP CQs */
 	if (phba->enab_exp_wqcq_pages)
 		/* Increase the CQ size when WQEs contain an embedded cdb */
 		qdesc = lpfc_sli4_queue_alloc(phba, LPFC_EXPANDED_PAGE_SIZE,
 					      phba->sli4_hba.cq_esize,
-					      LPFC_CQE_EXP_COUNT);
+					      LPFC_CQE_EXP_COUNT, cpu);
 
 	else
 		qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
 					      phba->sli4_hba.cq_esize,
-					      phba->sli4_hba.cq_ecount);
+					      phba->sli4_hba.cq_ecount, cpu);
 	if (!qdesc) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 			"0499 Failed allocate fast-path FCP CQ (%d)\n", wqidx);
@@ -8680,7 +8693,7 @@ lpfc_alloc_fcp_wq_cq(struct lpfc_hba *phba, int wqidx)
 	}
 	qdesc->qe_valid = 1;
 	qdesc->hdwq = wqidx;
-	qdesc->chann = lpfc_find_cpu_handle(phba, wqidx, LPFC_FIND_BY_HDWQ);
+	qdesc->chann = cpu;
 	phba->sli4_hba.hdwq[wqidx].fcp_cq = qdesc;
 
 	/* Create Fast Path FCP WQs */
@@ -8690,11 +8703,11 @@ lpfc_alloc_fcp_wq_cq(struct lpfc_hba *phba, int wqidx)
 			LPFC_WQE128_SIZE : phba->sli4_hba.wq_esize;
 		qdesc = lpfc_sli4_queue_alloc(phba, LPFC_EXPANDED_PAGE_SIZE,
 					      wqesize,
-					      LPFC_WQE_EXP_COUNT);
+					      LPFC_WQE_EXP_COUNT, cpu);
 	} else
 		qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
 					      phba->sli4_hba.wq_esize,
-					      phba->sli4_hba.wq_ecount);
+					      phba->sli4_hba.wq_ecount, cpu);
 
 	if (!qdesc) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,

|
|||
lpfc_sli4_queue_create(struct lpfc_hba *phba)
|
||||
{
|
||||
struct lpfc_queue *qdesc;
|
||||
int idx, eqidx;
|
||||
int idx, eqidx, cpu;
|
||||
struct lpfc_sli4_hdw_queue *qp;
|
||||
struct lpfc_eq_intr_info *eqi;
|
||||
|
||||
|
@ -8814,13 +8827,15 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
|
|||
|
||||
/* Create HBA Event Queues (EQs) */
|
||||
for (idx = 0; idx < phba->cfg_hdw_queue; idx++) {
|
||||
/* determine EQ affinity */
|
||||
eqidx = lpfc_find_eq_handle(phba, idx);
|
||||
cpu = lpfc_find_cpu_handle(phba, eqidx, LPFC_FIND_BY_EQ);
|
||||
/*
|
||||
* If there are more Hardware Queues than available
|
||||
* CQs, multiple Hardware Queues may share a common EQ.
|
||||
* EQs, multiple Hardware Queues may share a common EQ.
|
||||
*/
|
||||
if (idx >= phba->cfg_irq_chann) {
|
||||
/* Share an existing EQ */
|
||||
eqidx = lpfc_find_eq_handle(phba, idx);
|
||||
phba->sli4_hba.hdwq[idx].hba_eq =
|
||||
phba->sli4_hba.hdwq[eqidx].hba_eq;
|
||||
continue;
|
||||
|
@ -8828,7 +8843,7 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
|
|||
/* Create an EQ */
|
||||
qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
|
||||
phba->sli4_hba.eq_esize,
|
||||
phba->sli4_hba.eq_ecount);
|
||||
phba->sli4_hba.eq_ecount, cpu);
|
||||
if (!qdesc) {
|
||||
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
|
||||
"0497 Failed allocate EQ (%d)\n", idx);
|
||||
|
@ -8838,9 +8853,7 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
|
|||
qdesc->hdwq = idx;
|
||||
|
||||
/* Save the CPU this EQ is affinitised to */
|
||||
eqidx = lpfc_find_eq_handle(phba, idx);
|
||||
qdesc->chann = lpfc_find_cpu_handle(phba, eqidx,
|
||||
LPFC_FIND_BY_EQ);
|
||||
qdesc->chann = cpu;
|
||||
phba->sli4_hba.hdwq[idx].hba_eq = qdesc;
|
||||
qdesc->last_cpu = qdesc->chann;
|
||||
eqi = per_cpu_ptr(phba->sli4_hba.eq_info, qdesc->last_cpu);
|
||||
|
@ -8863,11 +8876,14 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
|
|||
|
||||
if (phba->nvmet_support) {
|
||||
for (idx = 0; idx < phba->cfg_nvmet_mrq; idx++) {
|
||||
cpu = lpfc_find_cpu_handle(phba, idx,
|
||||
LPFC_FIND_BY_HDWQ);
|
||||
qdesc = lpfc_sli4_queue_alloc(
|
||||
phba,
|
||||
LPFC_DEFAULT_PAGE_SIZE,
|
||||
phba->sli4_hba.cq_esize,
|
||||
phba->sli4_hba.cq_ecount);
|
||||
phba->sli4_hba.cq_ecount,
|
||||
cpu);
|
||||
if (!qdesc) {
|
||||
lpfc_printf_log(
|
||||
phba, KERN_ERR, LOG_INIT,
|
||||
|
@ -8877,7 +8893,7 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
|
|||
}
|
||||
qdesc->qe_valid = 1;
|
||||
qdesc->hdwq = idx;
|
||||
qdesc->chann = idx;
|
||||
qdesc->chann = cpu;
|
||||
phba->sli4_hba.nvmet_cqset[idx] = qdesc;
|
||||
}
|
||||
}
|
||||
|
@ -8887,10 +8903,11 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
|
|||
* Create Slow Path Completion Queues (CQs)
|
||||
*/
|
||||
|
||||
cpu = lpfc_find_cpu_handle(phba, 0, LPFC_FIND_BY_EQ);
|
||||
/* Create slow-path Mailbox Command Complete Queue */
|
||||
qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
|
||||
phba->sli4_hba.cq_esize,
|
||||
phba->sli4_hba.cq_ecount);
|
||||
phba->sli4_hba.cq_ecount, cpu);
|
||||
if (!qdesc) {
|
||||
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
|
||||
"0500 Failed allocate slow-path mailbox CQ\n");
|
||||
|
@@ -8902,7 +8919,7 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
 	/* Create slow-path ELS Complete Queue */
 	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
 				      phba->sli4_hba.cq_esize,
-				      phba->sli4_hba.cq_ecount);
+				      phba->sli4_hba.cq_ecount, cpu);
 	if (!qdesc) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 				"0501 Failed allocate slow-path ELS CQ\n");
@@ -8921,7 +8938,7 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
 
 	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
 				      phba->sli4_hba.mq_esize,
-				      phba->sli4_hba.mq_ecount);
+				      phba->sli4_hba.mq_ecount, cpu);
 	if (!qdesc) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 				"0505 Failed allocate slow-path MQ\n");
@@ -8937,7 +8954,7 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
 	/* Create slow-path ELS Work Queue */
 	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
 				      phba->sli4_hba.wq_esize,
-				      phba->sli4_hba.wq_ecount);
+				      phba->sli4_hba.wq_ecount, cpu);
 	if (!qdesc) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 				"0504 Failed allocate slow-path ELS WQ\n");
@@ -8951,7 +8968,7 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
 	/* Create NVME LS Complete Queue */
 	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
 				      phba->sli4_hba.cq_esize,
-				      phba->sli4_hba.cq_ecount);
+				      phba->sli4_hba.cq_ecount, cpu);
 	if (!qdesc) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 				"6079 Failed allocate NVME LS CQ\n");
@@ -8964,7 +8981,7 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
 	/* Create NVME LS Work Queue */
 	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
 				      phba->sli4_hba.wq_esize,
-				      phba->sli4_hba.wq_ecount);
+				      phba->sli4_hba.wq_ecount, cpu);
 	if (!qdesc) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 				"6080 Failed allocate NVME LS WQ\n");
@@ -8982,7 +8999,7 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
 	/* Create Receive Queue for header */
 	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
 				      phba->sli4_hba.rq_esize,
-				      phba->sli4_hba.rq_ecount);
+				      phba->sli4_hba.rq_ecount, cpu);
 	if (!qdesc) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 				"0506 Failed allocate receive HRQ\n");
@@ -8993,7 +9010,7 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
 	/* Create Receive Queue for data */
 	qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE,
 				      phba->sli4_hba.rq_esize,
-				      phba->sli4_hba.rq_ecount);
+				      phba->sli4_hba.rq_ecount, cpu);
 	if (!qdesc) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 				"0507 Failed allocate receive DRQ\n");
@@ -9004,11 +9021,14 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
 	if ((phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) &&
 	    phba->nvmet_support) {
 		for (idx = 0; idx < phba->cfg_nvmet_mrq; idx++) {
+			cpu = lpfc_find_cpu_handle(phba, idx,
+						   LPFC_FIND_BY_HDWQ);
 			/* Create NVMET Receive Queue for header */
 			qdesc = lpfc_sli4_queue_alloc(phba,
 						      LPFC_DEFAULT_PAGE_SIZE,
 						      phba->sli4_hba.rq_esize,
-						      LPFC_NVMET_RQE_DEF_COUNT);
+						      LPFC_NVMET_RQE_DEF_COUNT,
+						      cpu);
 			if (!qdesc) {
 				lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 						"3146 Failed allocate "
@@ -9019,8 +9039,9 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
 			phba->sli4_hba.nvmet_mrq_hdr[idx] = qdesc;
 
 			/* Only needed for header of RQ pair */
-			qdesc->rqbp = kzalloc(sizeof(struct lpfc_rqb),
-					      GFP_KERNEL);
+			qdesc->rqbp = kzalloc_node(sizeof(*qdesc->rqbp),
+						   GFP_KERNEL,
+						   cpu_to_node(cpu));
 			if (qdesc->rqbp == NULL) {
 				lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 						"6131 Failed allocate "
@@ -9035,7 +9056,8 @@ lpfc_sli4_queue_create(struct lpfc_hba *phba)
 			qdesc = lpfc_sli4_queue_alloc(phba,
 						      LPFC_DEFAULT_PAGE_SIZE,
 						      phba->sli4_hba.rq_esize,
-						      LPFC_NVMET_RQE_DEF_COUNT);
+						      LPFC_NVMET_RQE_DEF_COUNT,
+						      cpu);
 			if (!qdesc) {
 				lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 						"3156 Failed allocate "
@@ -9134,6 +9156,20 @@ lpfc_sli4_release_hdwq(struct lpfc_hba *phba)
 void
 lpfc_sli4_queue_destroy(struct lpfc_hba *phba)
 {
+	/*
+	 * Set FREE_INIT before beginning to free the queues.
+	 * Wait until the users of queues to acknowledge to
+	 * release queues by clearing FREE_WAIT.
+	 */
+	spin_lock_irq(&phba->hbalock);
+	phba->sli.sli_flag |= LPFC_QUEUE_FREE_INIT;
+	while (phba->sli.sli_flag & LPFC_QUEUE_FREE_WAIT) {
+		spin_unlock_irq(&phba->hbalock);
+		msleep(20);
+		spin_lock_irq(&phba->hbalock);
+	}
+	spin_unlock_irq(&phba->hbalock);
+
 	/* Release HBA eqs */
 	if (phba->sli4_hba.hdwq)
 		lpfc_sli4_release_hdwq(phba);
@@ -9172,6 +9208,11 @@ lpfc_sli4_queue_destroy(struct lpfc_hba *phba)
 
 	/* Everything on this list has been freed */
 	INIT_LIST_HEAD(&phba->sli4_hba.lpfc_wq_list);
+
+	/* Done with freeing the queues */
+	spin_lock_irq(&phba->hbalock);
+	phba->sli.sli_flag &= ~LPFC_QUEUE_FREE_INIT;
+	spin_unlock_irq(&phba->hbalock);
 }
 
 int
@@ -9231,7 +9272,7 @@ lpfc_create_wq_cq(struct lpfc_hba *phba, struct lpfc_queue *eq,
 		rc = lpfc_wq_create(phba, wq, cq, qtype);
 		if (rc) {
 			lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-				"6123 Fail setup fastpath WQ (%d), rc = 0x%x\n",
+				"4618 Fail setup fastpath WQ (%d), rc = 0x%x\n",
 				qidx, (uint32_t)rc);
 			/* no need to tear down cq - caller will do so */
 			return rc;
@@ -9271,7 +9312,7 @@ lpfc_create_wq_cq(struct lpfc_hba *phba, struct lpfc_queue *eq,
  * This routine will populate the cq_lookup table by all
  * available CQ queue_id's.
  **/
-void
+static void
 lpfc_setup_cq_lookup(struct lpfc_hba *phba)
 {
 	struct lpfc_queue *eq, *childq;
@@ -10740,7 +10781,7 @@ lpfc_sli4_enable_msix(struct lpfc_hba *phba)
 				phba->cfg_irq_chann, vectors);
 		if (phba->cfg_irq_chann > vectors)
 			phba->cfg_irq_chann = vectors;
-		if (phba->cfg_nvmet_mrq > vectors)
+		if (phba->nvmet_support && (phba->cfg_nvmet_mrq > vectors))
 			phba->cfg_nvmet_mrq = vectors;
 	}
 
@@ -11297,7 +11338,7 @@ lpfc_get_sli4_parameters(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
 	    !phba->nvme_support) {
 		phba->nvme_support = 0;
 		phba->nvmet_support = 0;
-		phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_OFF;
+		phba->cfg_nvmet_mrq = 0;
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT | LOG_NVME,
 				"6101 Disabling NVME support: "
 				"Not supported by firmware: %d %d\n",
@@ -13046,7 +13087,7 @@ lpfc_io_resume(struct pci_dev *pdev)
  * is destroyed.
  *
  **/
-void
+static void
 lpfc_sli4_oas_verify(struct lpfc_hba *phba)
 {
 
@@ -871,7 +871,7 @@ lpfc_disc_set_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
  * This function will send a unreg_login mailbox command to the firmware
  * to release a rpi.
  **/
-void
+static void
 lpfc_release_rpi(struct lpfc_hba *phba, struct lpfc_vport *vport,
 		 struct lpfc_nodelist *ndlp, uint16_t rpi)
 {
@@ -1733,7 +1733,6 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport,
 	LPFC_MBOXQ_t *pmb = (LPFC_MBOXQ_t *) arg;
 	MAILBOX_t *mb = &pmb->u.mb;
 	uint32_t did = mb->un.varWords[1];
-	int rc = 0;
 
 	if (mb->mbxStatus) {
 		/* RegLogin failed */
@@ -1806,8 +1805,8 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport,
 		 * GFT_ID to determine if remote port supports NVME.
 		 */
 		if (vport->cfg_enable_fc4_type != LPFC_ENABLE_FCP) {
-			rc = lpfc_ns_cmd(vport, SLI_CTNS_GFT_ID,
-					 0, ndlp->nlp_DID);
+			lpfc_ns_cmd(vport, SLI_CTNS_GFT_ID, 0,
+				    ndlp->nlp_DID);
 			return ndlp->nlp_state;
 		}
 		ndlp->nlp_fc4_type = NLP_FC4_FCP;
@@ -229,7 +229,7 @@ lpfc_nvme_create_queue(struct nvme_fc_local_port *pnvme_lport,
 	if (qhandle == NULL)
 		return -ENOMEM;
 
-	qhandle->cpu_id = smp_processor_id();
+	qhandle->cpu_id = raw_smp_processor_id();
 	qhandle->qidx = qidx;
 	/*
 	 * NVME qidx == 0 is the admin queue, so both admin queue
@@ -312,7 +312,7 @@ lpfc_nvme_localport_delete(struct nvme_fc_local_port *localport)
  * Return value :
  * None
  */
-void
+static void
 lpfc_nvme_remoteport_delete(struct nvme_fc_remote_port *remoteport)
 {
 	struct lpfc_nvme_rport *rport = remoteport->private;
@@ -1111,9 +1111,11 @@ lpfc_nvme_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
 out_err:
 		lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_IOERR,
 				 "6072 NVME Completion Error: xri %x "
-				 "status x%x result x%x placed x%x\n",
+				 "status x%x result x%x [x%x] "
+				 "placed x%x\n",
 				 lpfc_ncmd->cur_iocbq.sli4_xritag,
 				 lpfc_ncmd->status, lpfc_ncmd->result,
+				 wcqe->parameter,
 				 wcqe->total_data_placed);
 		nCmd->transferred_length = 0;
 		nCmd->rcv_rsplen = 0;
@@ -1141,7 +1143,7 @@ lpfc_nvme_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
 	if (phba->cpucheck_on & LPFC_CHECK_NVME_IO) {
 		uint32_t cpu;
 		idx = lpfc_ncmd->cur_iocbq.hba_wqidx;
-		cpu = smp_processor_id();
+		cpu = raw_smp_processor_id();
 		if (cpu < LPFC_CHECK_CPU_CNT) {
 			if (lpfc_ncmd->cpu != cpu)
 				lpfc_printf_vlog(vport,
@@ -1559,7 +1561,7 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
 	if (phba->cfg_fcp_io_sched == LPFC_FCP_SCHED_BY_HDWQ) {
 		idx = lpfc_queue_info->index;
 	} else {
-		cpu = smp_processor_id();
+		cpu = raw_smp_processor_id();
 		idx = phba->sli4_hba.cpu_map[cpu].hdwq;
 	}
 
@@ -1639,7 +1641,7 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
 		lpfc_ncmd->ts_cmd_wqput = ktime_get_ns();
 
 	if (phba->cpucheck_on & LPFC_CHECK_NVME_IO) {
-		cpu = smp_processor_id();
+		cpu = raw_smp_processor_id();
 		if (cpu < LPFC_CHECK_CPU_CNT) {
 			lpfc_ncmd->cpu = cpu;
 			if (idx != cpu)
@@ -2081,15 +2083,15 @@ lpfc_nvme_create_localport(struct lpfc_vport *vport)
 		lpfc_nvme_template.max_hw_queues =
 			phba->sli4_hba.num_present_cpu;
 
+	if (!IS_ENABLED(CONFIG_NVME_FC))
+		return ret;
+
 	/* localport is allocated from the stack, but the registration
 	 * call allocates heap memory as well as the private area.
 	 */
-#if (IS_ENABLED(CONFIG_NVME_FC))
+
 	ret = nvme_fc_register_localport(&nfcp_info, &lpfc_nvme_template,
 					 &vport->phba->pcidev->dev, &localport);
-#else
-	ret = -ENOMEM;
-#endif
 	if (!ret) {
 		lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME | LOG_NVME_DISC,
 				 "6005 Successfully registered local "
@@ -2124,6 +2126,7 @@ lpfc_nvme_create_localport(struct lpfc_vport *vport)
 	return ret;
 }
 
+#if (IS_ENABLED(CONFIG_NVME_FC))
 /* lpfc_nvme_lport_unreg_wait - Wait for the host to complete an lport unreg.
  *
  * The driver has to wait for the host nvme transport to callback
@@ -2134,12 +2137,11 @@ lpfc_nvme_create_localport(struct lpfc_vport *vport)
  * An uninterruptible wait is used because of the risk of transport-to-
  * driver state mismatch.
 */
-void
+static void
 lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport,
 			   struct lpfc_nvme_lport *lport,
 			   struct completion *lport_unreg_cmp)
 {
-#if (IS_ENABLED(CONFIG_NVME_FC))
 	u32 wait_tmo;
 	int ret;
 
@@ -2162,8 +2164,8 @@ lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport,
 	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_IOERR,
 			 "6177 Lport %p Localport %p Complete Success\n",
 			 lport, vport->localport);
-#endif
 }
+#endif
 
 /**
  * lpfc_nvme_destroy_localport - Destroy lpfc_nvme bound to nvme transport.
@@ -220,7 +220,7 @@ lpfc_nvmet_cmd_template(void)
 	/* Word 12, 13, 14, 15 - is zero */
 }
 
-void
+static void
 lpfc_nvmet_defer_release(struct lpfc_hba *phba, struct lpfc_nvmet_rcv_ctx *ctxp)
 {
 	lockdep_assert_held(&ctxp->ctxlock);
@@ -325,7 +325,6 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
 	struct fc_frame_header *fc_hdr;
 	struct rqb_dmabuf *nvmebuf;
 	struct lpfc_nvmet_ctx_info *infop;
-	uint32_t *payload;
 	uint32_t size, oxid, sid;
 	int cpu;
 	unsigned long iflag;
@@ -370,7 +369,6 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
 		fc_hdr = (struct fc_frame_header *)(nvmebuf->hbuf.virt);
 		oxid = be16_to_cpu(fc_hdr->fh_ox_id);
 		tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
-		payload = (uint32_t *)(nvmebuf->dbuf.virt);
 		size = nvmebuf->bytes_recv;
 		sid = sli4_sid_from_fc_hdr(fc_hdr);
 
@@ -435,7 +433,7 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
 	 * Use the CPU context list, from the MRQ the IO was received on
 	 * (ctxp->idx), to save context structure.
 	 */
-	cpu = smp_processor_id();
+	cpu = raw_smp_processor_id();
 	infop = lpfc_get_ctx_list(phba, cpu, ctxp->idx);
 	spin_lock_irqsave(&infop->nvmet_ctx_list_lock, iflag);
 	list_add_tail(&ctx_buf->list, &infop->nvmet_ctx_list);
@@ -765,7 +763,7 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	}
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
 	if (phba->cpucheck_on & LPFC_CHECK_NVMET_IO) {
-		id = smp_processor_id();
+		id = raw_smp_processor_id();
 		if (id < LPFC_CHECK_CPU_CNT) {
 			if (ctxp->cpu != id)
 				lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
@@ -906,7 +904,7 @@ lpfc_nvmet_xmt_fcp_op(struct nvmet_fc_target_port *tgtport,
 	ctxp->hdwq = &phba->sli4_hba.hdwq[rsp->hwqid];
 
 	if (phba->cpucheck_on & LPFC_CHECK_NVMET_IO) {
-		int id = smp_processor_id();
+		int id = raw_smp_processor_id();
 		if (id < LPFC_CHECK_CPU_CNT) {
 			if (rsp->hwqid != id)
 				lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
@@ -1120,7 +1118,7 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
 
 
 	lpfc_nvmeio_data(phba, "NVMET DEFERRCV: xri x%x sz %d CPU %02x\n",
-			 ctxp->oxid, ctxp->size, smp_processor_id());
+			 ctxp->oxid, ctxp->size, raw_smp_processor_id());
 
 	if (!nvmebuf) {
 		lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
@@ -1596,7 +1594,7 @@ lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport,
 
 		lpfc_nvmeio_data(phba,
 				 "NVMET ABTS RCV: xri x%x CPU %02x rjt %d\n",
-				 xri, smp_processor_id(), 0);
+				 xri, raw_smp_processor_id(), 0);
 
 		lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
 				"6319 NVMET Rcv ABTS:acc xri x%x\n", xri);
@@ -1612,7 +1610,7 @@ lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport,
 	spin_unlock_irqrestore(&phba->hbalock, iflag);
 
 	lpfc_nvmeio_data(phba, "NVMET ABTS RCV: xri x%x CPU %02x rjt %d\n",
-			 xri, smp_processor_id(), 1);
+			 xri, raw_smp_processor_id(), 1);
 
 	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS,
 			"6320 NVMET Rcv ABTS:rjt xri x%x\n", xri);
@@ -1725,7 +1723,11 @@ lpfc_nvmet_destroy_targetport(struct lpfc_hba *phba)
 		}
 		tgtp->tport_unreg_cmp = &tport_unreg_cmp;
 		nvmet_fc_unregister_targetport(phba->targetport);
-		wait_for_completion_timeout(&tport_unreg_cmp, 5);
+		if (!wait_for_completion_timeout(tgtp->tport_unreg_cmp,
+					msecs_to_jiffies(LPFC_NVMET_WAIT_TMO)))
+			lpfc_printf_log(phba, KERN_ERR, LOG_NVME,
+					"6179 Unreg targetport %p timeout "
+					"reached.\n", phba->targetport);
 		lpfc_nvmet_cleanup_io_context(phba);
 	}
 	phba->targetport = NULL;
@@ -1843,7 +1845,7 @@ lpfc_nvmet_process_rcv_fcp_req(struct lpfc_nvmet_ctxbuf *ctx_buf)
 	struct lpfc_hba *phba = ctxp->phba;
 	struct rqb_dmabuf *nvmebuf = ctxp->rqb_buffer;
 	struct lpfc_nvmet_tgtport *tgtp;
-	uint32_t *payload;
+	uint32_t *payload, qno;
 	uint32_t rc;
 	unsigned long iflags;
 
@@ -1876,6 +1878,15 @@ lpfc_nvmet_process_rcv_fcp_req(struct lpfc_nvmet_ctxbuf *ctx_buf)
 	/* Process FCP command */
 	if (rc == 0) {
 		atomic_inc(&tgtp->rcv_fcp_cmd_out);
+		spin_lock_irqsave(&ctxp->ctxlock, iflags);
+		if ((ctxp->flag & LPFC_NVMET_CTX_REUSE_WQ) ||
+		    (nvmebuf != ctxp->rqb_buffer)) {
+			spin_unlock_irqrestore(&ctxp->ctxlock, iflags);
+			return;
+		}
+		ctxp->rqb_buffer = NULL;
+		spin_unlock_irqrestore(&ctxp->ctxlock, iflags);
 		lpfc_rq_buf_free(phba, &nvmebuf->hbuf); /* repost */
 		return;
 	}
@@ -1886,6 +1897,20 @@ lpfc_nvmet_process_rcv_fcp_req(struct lpfc_nvmet_ctxbuf *ctx_buf)
 			 ctxp->oxid, ctxp->size, ctxp->sid);
 		atomic_inc(&tgtp->rcv_fcp_cmd_out);
 		atomic_inc(&tgtp->defer_fod);
+		spin_lock_irqsave(&ctxp->ctxlock, iflags);
+		if (ctxp->flag & LPFC_NVMET_CTX_REUSE_WQ) {
+			spin_unlock_irqrestore(&ctxp->ctxlock, iflags);
+			return;
+		}
+		spin_unlock_irqrestore(&ctxp->ctxlock, iflags);
+		/*
+		 * Post a replacement DMA buffer to RQ and defer
+		 * freeing rcv buffer till .defer_rcv callback
+		 */
+		qno = nvmebuf->idx;
+		lpfc_post_rq_buffer(
+			phba, phba->sli4_hba.nvmet_mrq_hdr[qno],
+			phba->sli4_hba.nvmet_mrq_data[qno], 1, qno);
 		return;
 	}
 	atomic_inc(&tgtp->rcv_fcp_cmd_drop);
@@ -1996,7 +2021,6 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 	struct fc_frame_header *fc_hdr;
 	struct lpfc_nvmet_ctxbuf *ctx_buf;
 	struct lpfc_nvmet_ctx_info *current_infop;
-	uint32_t *payload;
 	uint32_t size, oxid, sid, qno;
 	unsigned long iflag;
 	int current_cpu;
@@ -2020,7 +2044,7 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 	 * be empty, thus it would need to be replenished with the
 	 * context list from another CPU for this MRQ.
 	 */
-	current_cpu = smp_processor_id();
+	current_cpu = raw_smp_processor_id();
 	current_infop = lpfc_get_ctx_list(phba, current_cpu, idx);
 	spin_lock_irqsave(&current_infop->nvmet_ctx_list_lock, iflag);
 	if (current_infop->nvmet_ctx_list_cnt) {
@@ -2050,7 +2074,7 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 #endif
 
 	lpfc_nvmeio_data(phba, "NVMET FCP RCV: xri x%x sz %d CPU %02x\n",
-			 oxid, size, smp_processor_id());
+			 oxid, size, raw_smp_processor_id());
 
 	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
 
@@ -2074,7 +2098,6 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 		return;
 	}
 
-	payload = (uint32_t *)(nvmebuf->dbuf.virt);
 	sid = sli4_sid_from_fc_hdr(fc_hdr);
 
 	ctxp = (struct lpfc_nvmet_rcv_ctx *)ctx_buf->context;
@@ -2690,12 +2713,11 @@ lpfc_nvmet_sol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 {
 	struct lpfc_nvmet_rcv_ctx *ctxp;
 	struct lpfc_nvmet_tgtport *tgtp;
-	uint32_t status, result;
+	uint32_t result;
 	unsigned long flags;
 	bool released = false;
 
 	ctxp = cmdwqe->context2;
-	status = bf_get(lpfc_wcqe_c_status, wcqe);
 	result = wcqe->parameter;
 
 	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
@@ -2761,11 +2783,10 @@ lpfc_nvmet_unsol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	struct lpfc_nvmet_rcv_ctx *ctxp;
 	struct lpfc_nvmet_tgtport *tgtp;
 	unsigned long flags;
-	uint32_t status, result;
+	uint32_t result;
 	bool released = false;
 
 	ctxp = cmdwqe->context2;
-	status = bf_get(lpfc_wcqe_c_status, wcqe);
 	result = wcqe->parameter;
 
 	if (!ctxp) {
@@ -2842,10 +2863,9 @@ lpfc_nvmet_xmt_ls_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 {
 	struct lpfc_nvmet_rcv_ctx *ctxp;
 	struct lpfc_nvmet_tgtport *tgtp;
-	uint32_t status, result;
+	uint32_t result;
 
 	ctxp = cmdwqe->context2;
-	status = bf_get(lpfc_wcqe_c_status, wcqe);
 	result = wcqe->parameter;
 
 	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
@@ -3200,7 +3220,6 @@ lpfc_nvmet_unsol_ls_issue_abort(struct lpfc_hba *phba,
 {
 	struct lpfc_nvmet_tgtport *tgtp;
 	struct lpfc_iocbq *abts_wqeq;
-	union lpfc_wqe128 *wqe_abts;
 	unsigned long flags;
 	int rc;
 
@@ -3230,7 +3249,6 @@ lpfc_nvmet_unsol_ls_issue_abort(struct lpfc_hba *phba,
 		}
 	}
 	abts_wqeq = ctxp->wqeq;
-	wqe_abts = &abts_wqeq->wqe;
 
 	if (lpfc_nvmet_unsol_issue_abort(phba, ctxp, sid, xri) == 0) {
 		rc = WQE_BUSY;
@@ -27,10 +27,11 @@
 #define LPFC_NVMET_RQE_DEF_COUNT	2048
 #define LPFC_NVMET_SUCCESS_LEN		12
 
-#define LPFC_NVMET_MRQ_OFF		0xffff
 #define LPFC_NVMET_MRQ_AUTO		0
 #define LPFC_NVMET_MRQ_MAX		16
 
+#define LPFC_NVMET_WAIT_TMO		(5 * MSEC_PER_SEC)
+
 /* Used for NVME Target */
 struct lpfc_nvmet_tgtport {
 	struct lpfc_hba *phba;
@@ -688,7 +688,7 @@ lpfc_get_scsi_buf_s4(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
 	uint32_t sgl_size, cpu, idx;
 	int tag;
 
-	cpu = smp_processor_id();
+	cpu = raw_smp_processor_id();
 	if (cmnd && phba->cfg_fcp_io_sched == LPFC_FCP_SCHED_BY_HDWQ) {
 		tag = blk_mq_unique_tag(cmnd->request);
 		idx = blk_mq_unique_tag_to_hwq(tag);
@@ -3669,8 +3669,8 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
 
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
 	if (phba->cpucheck_on & LPFC_CHECK_SCSI_IO) {
-		cpu = smp_processor_id();
-		if (cpu < LPFC_CHECK_CPU_CNT)
+		cpu = raw_smp_processor_id();
+		if (cpu < LPFC_CHECK_CPU_CNT && phba->sli4_hba.hdwq)
 			phba->sli4_hba.hdwq[idx].cpucheck_cmpl_io[cpu]++;
 	}
 #endif
@@ -4463,7 +4463,7 @@ lpfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
 
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
 	if (phba->cpucheck_on & LPFC_CHECK_SCSI_IO) {
-		cpu = smp_processor_id();
+		cpu = raw_smp_processor_id();
 		if (cpu < LPFC_CHECK_CPU_CNT) {
 			struct lpfc_sli4_hdw_queue *hdwq =
 					&phba->sli4_hba.hdwq[lpfc_cmd->hdwq_no];
@@ -5048,7 +5048,7 @@ lpfc_device_reset_handler(struct scsi_cmnd *cmnd)
 	rdata = lpfc_rport_data_from_scsi_device(cmnd->device);
 	if (!rdata || !rdata->pnode) {
 		lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP,
-			"0798 Device Reset rport failure: rdata x%p\n",
+			"0798 Device Reset rdata failure: rdata x%p\n",
 			rdata);
 		return FAILED;
 	}
@@ -5117,9 +5117,10 @@ lpfc_target_reset_handler(struct scsi_cmnd *cmnd)
 	int status;
 
 	rdata = lpfc_rport_data_from_scsi_device(cmnd->device);
-	if (!rdata) {
+	if (!rdata || !rdata->pnode) {
 		lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP,
-			"0799 Target Reset rport failure: rdata x%p\n", rdata);
+			"0799 Target Reset rdata failure: rdata x%p\n",
+			rdata);
 		return FAILED;
 	}
 	pnode = rdata->pnode;
@@ -87,9 +87,6 @@ static void lpfc_sli4_hba_handle_eqe(struct lpfc_hba *phba,
 				     struct lpfc_eqe *eqe);
 static bool lpfc_sli4_mbox_completions_pending(struct lpfc_hba *phba);
 static bool lpfc_sli4_process_missed_mbox_completions(struct lpfc_hba *phba);
-static int lpfc_sli4_abort_nvme_io(struct lpfc_hba *phba,
-				   struct lpfc_sli_ring *pring,
-				   struct lpfc_iocbq *cmdiocb);
 
 static IOCB_t *
 lpfc_get_iocb_from_iocbq(struct lpfc_iocbq *iocbq)
@@ -151,7 +148,7 @@ lpfc_sli4_wq_put(struct lpfc_queue *q, union lpfc_wqe128 *wqe)
 	/* sanity check on queue memory */
 	if (unlikely(!q))
 		return -ENOMEM;
-	temp_wqe = q->qe[q->host_index].wqe;
+	temp_wqe = lpfc_sli4_qe(q, q->host_index);
 
 	/* If the host has not yet processed the next entry then we are done */
 	idx = ((q->host_index + 1) % q->entry_count);
@@ -271,7 +268,7 @@ lpfc_sli4_mq_put(struct lpfc_queue *q, struct lpfc_mqe *mqe)
 	/* sanity check on queue memory */
 	if (unlikely(!q))
 		return -ENOMEM;
-	temp_mqe = q->qe[q->host_index].mqe;
+	temp_mqe = lpfc_sli4_qe(q, q->host_index);
 
 	/* If the host has not yet processed the next entry then we are done */
 	if (((q->host_index + 1) % q->entry_count) == q->hba_index)
@@ -331,7 +328,7 @@ lpfc_sli4_eq_get(struct lpfc_queue *q)
 	/* sanity check on queue memory */
 	if (unlikely(!q))
 		return NULL;
-	eqe = q->qe[q->host_index].eqe;
+	eqe = lpfc_sli4_qe(q, q->host_index);
 
 	/* If the next EQE is not valid then we are done */
 	if (bf_get_le32(lpfc_eqe_valid, eqe) != q->qe_valid)
@@ -355,7 +352,7 @@ lpfc_sli4_eq_get(struct lpfc_queue *q)
 * @q: The Event Queue to disable interrupts
 *
 **/
-inline void
+void
 lpfc_sli4_eq_clr_intr(struct lpfc_queue *q)
 {
 	struct lpfc_register doorbell;
@@ -374,7 +371,7 @@ lpfc_sli4_eq_clr_intr(struct lpfc_queue *q)
 * @q: The Event Queue to disable interrupts
 *
 **/
-inline void
+void
 lpfc_sli4_if6_eq_clr_intr(struct lpfc_queue *q)
 {
 	struct lpfc_register doorbell;
@@ -545,7 +542,7 @@ lpfc_sli4_cq_get(struct lpfc_queue *q)
 	/* sanity check on queue memory */
 	if (unlikely(!q))
 		return NULL;
-	cqe = q->qe[q->host_index].cqe;
+	cqe = lpfc_sli4_qe(q, q->host_index);
 
 	/* If the next CQE is not valid then we are done */
 	if (bf_get_le32(lpfc_cqe_valid, cqe) != q->qe_valid)
@@ -667,8 +664,8 @@ lpfc_sli4_rq_put(struct lpfc_queue *hq, struct lpfc_queue *dq,
 		return -ENOMEM;
 	hq_put_index = hq->host_index;
 	dq_put_index = dq->host_index;
-	temp_hrqe = hq->qe[hq_put_index].rqe;
-	temp_drqe = dq->qe[dq_put_index].rqe;
+	temp_hrqe = lpfc_sli4_qe(hq, hq_put_index);
+	temp_drqe = lpfc_sli4_qe(dq, dq_put_index);
 
 	if (hq->type != LPFC_HRQ || dq->type != LPFC_DRQ)
 		return -EINVAL;
@@ -907,10 +904,10 @@ lpfc_handle_rrq_active(struct lpfc_hba *phba)
 	mod_timer(&phba->rrq_tmr, next_time);
 	list_for_each_entry_safe(rrq, nextrrq, &send_rrq, list) {
 		list_del(&rrq->list);
-		if (!rrq->send_rrq)
+		if (!rrq->send_rrq) {
 			/* this call will free the rrq */
-			lpfc_clr_rrq_active(phba, rrq->xritag, rrq);
-		else if (lpfc_send_rrq(phba, rrq)) {
+			lpfc_clr_rrq_active(phba, rrq->xritag, rrq);
+		} else if (lpfc_send_rrq(phba, rrq)) {
 			/* if we send the rrq then the completion handler
 			 *  will clear the bit in the xribitmap.
 			 */
@@ -2502,8 +2499,8 @@ lpfc_sli_def_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 			} else {
 				ndlp->nlp_flag &= ~NLP_UNREG_INP;
 			}
-			pmb->ctx_ndlp = NULL;
 		}
+		pmb->ctx_ndlp = NULL;
 	}
 
 	/* Check security permission status on INIT_LINK mailbox command */
@@ -3921,33 +3918,6 @@ lpfc_sli_abort_iocb_ring(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
 					      IOERR_SLI_ABORTED);
 }
 
-/**
- * lpfc_sli_abort_wqe_ring - Abort all iocbs in the ring
- * @phba: Pointer to HBA context object.
- * @pring: Pointer to driver SLI ring object.
- *
- * This function aborts all iocbs in the given ring and frees all the iocb
- * objects in txq. This function issues an abort iocb for all the iocb commands
- * in txcmplq. The iocbs in the txcmplq is not guaranteed to complete before
- * the return of this function. The caller is not required to hold any locks.
- **/
-void
-lpfc_sli_abort_wqe_ring(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
-{
-	LIST_HEAD(completions);
-	struct lpfc_iocbq *iocb, *next_iocb;
-
-	if (pring->ringno == LPFC_ELS_RING)
-		lpfc_fabric_abort_hba(phba);
-
-	spin_lock_irq(&phba->hbalock);
-	/* Next issue ABTS for everything on the txcmplq */
-	list_for_each_entry_safe(iocb, next_iocb, &pring->txcmplq, list)
-		lpfc_sli4_abort_nvme_io(phba, pring, iocb);
-	spin_unlock_irq(&phba->hbalock);
-}
-
-
 /**
  * lpfc_sli_abort_fcp_rings - Abort all iocbs in all FCP rings
  * @phba: Pointer to HBA context object.
@@ -3977,33 +3947,6 @@ lpfc_sli_abort_fcp_rings(struct lpfc_hba *phba)
 	}
 }
 
-/**
- * lpfc_sli_abort_nvme_rings - Abort all wqes in all NVME rings
- * @phba: Pointer to HBA context object.
- *
- * This function aborts all wqes in NVME rings. This function issues an
- * abort wqe for all the outstanding IO commands in txcmplq. The iocbs in
- * the txcmplq is not guaranteed to complete before the return of this
- * function. The caller is not required to hold any locks.
- **/
-void
-lpfc_sli_abort_nvme_rings(struct lpfc_hba *phba)
-{
-	struct lpfc_sli_ring  *pring;
-	uint32_t i;
-
-	if ((phba->sli_rev < LPFC_SLI_REV4) ||
-	    !(phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME))
-		return;
-
-	/* Abort all IO on each NVME ring. */
-	for (i = 0; i < phba->cfg_hdw_queue; i++) {
-		pring = phba->sli4_hba.hdwq[i].nvme_wq->pring;
-		lpfc_sli_abort_wqe_ring(phba, pring);
-	}
-}
-
-
 /**
  * lpfc_sli_flush_fcp_rings - flush all iocbs in the fcp ring
  * @phba: Pointer to HBA context object.
@@ -4487,7 +4430,9 @@ lpfc_sli_brdreset(struct lpfc_hba *phba)
 	}
 
 	/* Turn off parity checking and serr during the physical reset */
-	pci_read_config_word(phba->pcidev, PCI_COMMAND, &cfg_value);
+	if (pci_read_config_word(phba->pcidev, PCI_COMMAND, &cfg_value))
+		return -EIO;
+
 	pci_write_config_word(phba->pcidev, PCI_COMMAND,
 			      (cfg_value &
 			       ~(PCI_COMMAND_PARITY | PCI_COMMAND_SERR)));
@@ -4564,7 +4509,12 @@ lpfc_sli4_brdreset(struct lpfc_hba *phba)
 			"0389 Performing PCI function reset!\n");
 
 	/* Turn off parity checking and serr during the physical reset */
-	pci_read_config_word(phba->pcidev, PCI_COMMAND, &cfg_value);
+	if (pci_read_config_word(phba->pcidev, PCI_COMMAND, &cfg_value)) {
+		lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
+				"3205 PCI read Config failed\n");
+		return -EIO;
+	}
+
 	pci_write_config_word(phba->pcidev, PCI_COMMAND, (cfg_value &
 			      ~(PCI_COMMAND_PARITY | PCI_COMMAND_SERR)));
 
@@ -5395,7 +5345,7 @@ lpfc_sli4_read_rev(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq,
 }
 
 /**
- * lpfc_sli4_retrieve_pport_name - Retrieve SLI4 device physical port name
+ * lpfc_sli4_get_ctl_attr - Retrieve SLI4 device controller attributes
  * @phba: pointer to lpfc hba data structure.
  *
  * This routine retrieves SLI4 device physical port name this PCI function
@@ -5403,40 +5353,30 @@ lpfc_sli4_read_rev(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq,
  *
  * Return codes
  *      0 - successful
- *      otherwise - failed to retrieve physical port name
+ *      otherwise - failed to retrieve controller attributes
 **/
 static int
-lpfc_sli4_retrieve_pport_name(struct lpfc_hba *phba)
+lpfc_sli4_get_ctl_attr(struct lpfc_hba *phba)
 {
 	LPFC_MBOXQ_t *mboxq;
 	struct lpfc_mbx_get_cntl_attributes *mbx_cntl_attr;
 	struct lpfc_controller_attribute *cntl_attr;
-	struct lpfc_mbx_get_port_name *get_port_name;
 	void *virtaddr = NULL;
 	uint32_t alloclen, reqlen;
 	uint32_t shdr_status, shdr_add_status;
 	union lpfc_sli4_cfg_shdr *shdr;
-	char cport_name = 0;
 	int rc;
 
-	/* We assume nothing at this point */
-	phba->sli4_hba.lnk_info.lnk_dv = LPFC_LNK_DAT_INVAL;
-	phba->sli4_hba.pport_name_sta = LPFC_SLI4_PPNAME_NON;
-
 	mboxq = (LPFC_MBOXQ_t *)mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
 	if (!mboxq)
 		return -ENOMEM;
-	/* obtain link type and link number via READ_CONFIG */
-	phba->sli4_hba.lnk_info.lnk_dv = LPFC_LNK_DAT_INVAL;
-	lpfc_sli4_read_config(phba);
-	if (phba->sli4_hba.lnk_info.lnk_dv == LPFC_LNK_DAT_VAL)
-		goto retrieve_ppname;
 
-	/* obtain link type and link number via COMMON_GET_CNTL_ATTRIBUTES */
+	/* Send COMMON_GET_CNTL_ATTRIBUTES mbox cmd */
 	reqlen = sizeof(struct lpfc_mbx_get_cntl_attributes);
 	alloclen = lpfc_sli4_config(phba, mboxq, LPFC_MBOX_SUBSYSTEM_COMMON,
 			LPFC_MBOX_OPCODE_GET_CNTL_ATTRIBUTES, reqlen,
 			LPFC_SLI4_MBX_NEMBED);
+
 	if (alloclen < reqlen) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
 				"3084 Allocated DMA memory size (%d) is "
@@ -5462,16 +5402,71 @@ lpfc_sli4_retrieve_pport_name(struct lpfc_hba *phba)
 		rc = -ENXIO;
 		goto out_free_mboxq;
 	}
 
 	cntl_attr = &mbx_cntl_attr->cntl_attr;
 	phba->sli4_hba.lnk_info.lnk_dv = LPFC_LNK_DAT_VAL;
 	phba->sli4_hba.lnk_info.lnk_tp =
 		bf_get(lpfc_cntl_attr_lnk_type, cntl_attr);
 	phba->sli4_hba.lnk_info.lnk_no =
 		bf_get(lpfc_cntl_attr_lnk_numb, cntl_attr);
 
+	memset(phba->BIOSVersion, 0, sizeof(phba->BIOSVersion));
+	strlcat(phba->BIOSVersion, (char *)cntl_attr->bios_ver_str,
+		sizeof(phba->BIOSVersion));
+
 	lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
-			"3086 lnk_type:%d, lnk_numb:%d\n",
+			"3086 lnk_type:%d, lnk_numb:%d, bios_ver:%s\n",
 			phba->sli4_hba.lnk_info.lnk_tp,
-			phba->sli4_hba.lnk_info.lnk_no);
+			phba->sli4_hba.lnk_info.lnk_no,
+			phba->BIOSVersion);
+out_free_mboxq:
+	if (rc != MBX_TIMEOUT) {
+		if (bf_get(lpfc_mqe_command, &mboxq->u.mqe) == MBX_SLI4_CONFIG)
+			lpfc_sli4_mbox_cmd_free(phba, mboxq);
+		else
+			mempool_free(mboxq, phba->mbox_mem_pool);
+	}
+	return rc;
+}
+
+/**
+ * lpfc_sli4_retrieve_pport_name - Retrieve SLI4 device physical port name
+ * @phba: pointer to lpfc hba data structure.
+ *
+ * This routine retrieves SLI4 device physical port name this PCI function
+ * is attached to.
+ *
+ * Return codes
+ *      0 - successful
+ *      otherwise - failed to retrieve physical port name
+ **/
+static int
+lpfc_sli4_retrieve_pport_name(struct lpfc_hba *phba)
+{
+	LPFC_MBOXQ_t *mboxq;
+	struct lpfc_mbx_get_port_name *get_port_name;
+	uint32_t shdr_status, shdr_add_status;
+	union lpfc_sli4_cfg_shdr *shdr;
+	char cport_name = 0;
+	int rc;
+
+	/* We assume nothing at this point */
+	phba->sli4_hba.lnk_info.lnk_dv = LPFC_LNK_DAT_INVAL;
+	phba->sli4_hba.pport_name_sta = LPFC_SLI4_PPNAME_NON;
+
+	mboxq = (LPFC_MBOXQ_t *)mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+	if (!mboxq)
+		return -ENOMEM;
+	/* obtain link type and link number via READ_CONFIG */
+	phba->sli4_hba.lnk_info.lnk_dv = LPFC_LNK_DAT_INVAL;
+	lpfc_sli4_read_config(phba);
+	if (phba->sli4_hba.lnk_info.lnk_dv == LPFC_LNK_DAT_VAL)
+		goto retrieve_ppname;
+
+	/* obtain link type and link number via COMMON_GET_CNTL_ATTRIBUTES */
+	rc = lpfc_sli4_get_ctl_attr(phba);
+	if (rc)
+		goto out_free_mboxq;
 
 retrieve_ppname:
 	lpfc_sli4_config(phba, mboxq, LPFC_MBOX_SUBSYSTEM_COMMON,
@@ -7047,7 +7042,7 @@ lpfc_sli4_repost_sgl_list(struct lpfc_hba *phba,
  *
  * Returns: 0 = success, non-zero failure.
  **/
-int
+static int
 lpfc_sli4_repost_io_sgl_list(struct lpfc_hba *phba)
 {
 	LIST_HEAD(post_nblist);
@@ -7067,7 +7062,7 @@ lpfc_sli4_repost_io_sgl_list(struct lpfc_hba *phba)
 	return rc;
 }
 
-void
+static void
 lpfc_set_host_data(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox)
 {
 	uint32_t len;
@@ -7250,6 +7245,12 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
 				"3080 Successful retrieving SLI4 device "
 				"physical port name: %s.\n", phba->Port);
 
+	rc = lpfc_sli4_get_ctl_attr(phba);
+	if (!rc)
+		lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI,
+				"8351 Successful retrieving SLI4 device "
+				"CTL ATTR\n");
+
 	/*
 	 * Evaluate the read rev and vpd data. Populate the driver
 	 * state with the results. If this routine fails, the failure
@@ -7652,12 +7653,6 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
 		phba->cfg_xri_rebalancing = 0;
 	}
 
-	/* Arm the CQs and then EQs on device */
-	lpfc_sli4_arm_cqeq_intr(phba);
-
-	/* Indicate device interrupt mode */
-	phba->sli4_hba.intr_enable = 1;
-
 	/* Allow asynchronous mailbox command to go through */
 	spin_lock_irq(&phba->hbalock);
 	phba->sli.sli_flag &= ~LPFC_SLI_ASYNC_MBX_BLK;
@@ -7726,6 +7721,12 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
 		phba->trunk_link.link3.state = LPFC_LINK_DOWN;
 	spin_unlock_irq(&phba->hbalock);
 
+	/* Arm the CQs and then EQs on device */
+	lpfc_sli4_arm_cqeq_intr(phba);
+
+	/* Indicate device interrupt mode */
+	phba->sli4_hba.intr_enable = 1;
+
 	if (!(phba->hba_flag & HBA_FCOE_MODE) &&
 	    (phba->hba_flag & LINK_DISABLED)) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT | LOG_SLI,
@@ -7820,8 +7821,9 @@ lpfc_sli4_mbox_completions_pending(struct lpfc_hba *phba)
 	mcq = phba->sli4_hba.mbx_cq;
 	idx = mcq->hba_index;
 	qe_valid = mcq->qe_valid;
-	while (bf_get_le32(lpfc_cqe_valid, mcq->qe[idx].cqe) == qe_valid) {
-		mcqe = (struct lpfc_mcqe *)mcq->qe[idx].cqe;
+	while (bf_get_le32(lpfc_cqe_valid,
+	       (struct lpfc_cqe *)lpfc_sli4_qe(mcq, idx)) == qe_valid) {
+		mcqe = (struct lpfc_mcqe *)(lpfc_sli4_qe(mcq, idx));
 		if (bf_get_le32(lpfc_trailer_completed, mcqe) &&
 		    (!bf_get_le32(lpfc_trailer_async, mcqe))) {
 			pending_completions = true;
@@ -8500,7 +8502,7 @@ lpfc_sli4_wait_bmbx_ready(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
 		bmbx_reg.word0 = readl(phba->sli4_hba.BMBXregaddr);
 		db_ready = bf_get(lpfc_bmbx_rdy, &bmbx_reg);
 		if (!db_ready)
-			msleep(2);
+			mdelay(2);
 
 		if (time_after(jiffies, timeout))
 			return MBXERR_ERROR;
@@ -11263,102 +11265,6 @@ lpfc_sli_issue_abort_iotag(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 	return retval;
 }
 
-/**
- * lpfc_sli4_abort_nvme_io - Issue abort for a command iocb
- * @phba: Pointer to HBA context object.
- * @pring: Pointer to driver SLI ring object.
- * @cmdiocb: Pointer to driver command iocb object.
- *
- * This function issues an abort iocb for the provided command iocb down to
- * the port. Other than the case the outstanding command iocb is an abort
- * request, this function issues abort out unconditionally. This function is
- * called with hbalock held. The function returns 0 when it fails due to
- * memory allocation failure or when the command iocb is an abort request.
- **/
-static int
-lpfc_sli4_abort_nvme_io(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
-			struct lpfc_iocbq *cmdiocb)
-{
-	struct lpfc_vport *vport = cmdiocb->vport;
-	struct lpfc_iocbq *abtsiocbp;
-	union lpfc_wqe128 *abts_wqe;
-	int retval;
-	int idx = cmdiocb->hba_wqidx;
-
-	/*
-	 * There are certain command types we don't want to abort. And we
-	 * don't want to abort commands that are already in the process of
-	 * being aborted.
-	 */
-	if (cmdiocb->iocb.ulpCommand == CMD_ABORT_XRI_CN ||
-	    cmdiocb->iocb.ulpCommand == CMD_CLOSE_XRI_CN ||
-	    (cmdiocb->iocb_flag & LPFC_DRIVER_ABORTED) != 0)
-		return 0;
-
-	/* issue ABTS for this io based on iotag */
-	abtsiocbp = __lpfc_sli_get_iocbq(phba);
-	if (abtsiocbp == NULL)
-		return 0;
-
-	/* This signals the response to set the correct status
-	 * before calling the completion handler
-	 */
-	cmdiocb->iocb_flag |= LPFC_DRIVER_ABORTED;
-
-	/* Complete prepping the abort wqe and issue to the FW. */
-	abts_wqe = &abtsiocbp->wqe;
-
-	/* Clear any stale WQE contents */
-	memset(abts_wqe, 0, sizeof(union lpfc_wqe));
-	bf_set(abort_cmd_criteria, &abts_wqe->abort_cmd, T_XRI_TAG);
-
-	/* word 7 */
-	bf_set(wqe_cmnd, &abts_wqe->abort_cmd.wqe_com, CMD_ABORT_XRI_CX);
-	bf_set(wqe_class, &abts_wqe->abort_cmd.wqe_com,
-	       cmdiocb->iocb.ulpClass);
-
-	/* word 8 - tell the FW to abort the IO associated with this
-	 * outstanding exchange ID.
-	 */
-	abts_wqe->abort_cmd.wqe_com.abort_tag = cmdiocb->sli4_xritag;
-
-	/* word 9 - this is the iotag for the abts_wqe completion. */
-	bf_set(wqe_reqtag, &abts_wqe->abort_cmd.wqe_com,
-	       abtsiocbp->iotag);
-
-	/* word 10 */
-	bf_set(wqe_qosd, &abts_wqe->abort_cmd.wqe_com, 1);
-	bf_set(wqe_lenloc, &abts_wqe->abort_cmd.wqe_com, LPFC_WQE_LENLOC_NONE);
-
-	/* word 11 */
-	bf_set(wqe_cmd_type, &abts_wqe->abort_cmd.wqe_com, OTHER_COMMAND);
-	bf_set(wqe_wqec, &abts_wqe->abort_cmd.wqe_com, 1);
-	bf_set(wqe_cqid, &abts_wqe->abort_cmd.wqe_com, LPFC_WQE_CQ_ID_DEFAULT);
-
-	/* ABTS WQE must go to the same WQ as the WQE to be aborted */
-	abtsiocbp->iocb_flag |= LPFC_IO_NVME;
-	abtsiocbp->vport = vport;
-	abtsiocbp->wqe_cmpl = lpfc_nvme_abort_fcreq_cmpl;
-	retval = lpfc_sli4_issue_wqe(phba, &phba->sli4_hba.hdwq[idx],
-				     abtsiocbp);
-	if (retval) {
-		lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME,
-				 "6147 Failed abts issue_wqe with status x%x "
-				 "for oxid x%x\n",
-				 retval, cmdiocb->sli4_xritag);
-		lpfc_sli_release_iocbq(phba, abtsiocbp);
-		return retval;
-	}
-
-	lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME,
-			 "6148 Drv Abort NVME Request Issued for "
-			 "ox_id x%x on reqtag x%x\n",
-			 cmdiocb->sli4_xritag,
-			 abtsiocbp->iotag);
-
-	return retval;
-}
-
 /**
  * lpfc_sli_hba_iocb_abort - Abort all iocbs to an hba.
  * @phba: pointer to lpfc HBA data structure.
@@ -13636,7 +13542,7 @@ lpfc_sli4_sp_handle_eqe(struct lpfc_hba *phba, struct lpfc_eqe *eqe,
 		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
 				"0390 Cannot schedule soft IRQ "
 				"for CQ eqcqid=%d, cqid=%d on CPU %d\n",
-				cqid, cq->queue_id, smp_processor_id());
+				cqid, cq->queue_id, raw_smp_processor_id());
 }
 
 /**
@@ -14019,7 +13925,7 @@ lpfc_sli4_nvmet_handle_rcqe(struct lpfc_hba *phba, struct lpfc_queue *cq,
 			return false;
 		}
 drop:
-		lpfc_in_buf_free(phba, &dma_buf->dbuf);
+		lpfc_rq_buf_free(phba, &dma_buf->hbuf);
 		break;
 	case FC_STATUS_INSUFF_BUF_FRM_DISC:
 		if (phba->nvmet_support) {
@@ -14185,7 +14091,7 @@ lpfc_sli4_hba_handle_eqe(struct lpfc_hba *phba, struct lpfc_queue *eq,
 		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
 				"0363 Cannot schedule soft IRQ "
 				"for CQ eqcqid=%d, cqid=%d on CPU %d\n",
-				cqid, cq->queue_id, smp_processor_id());
+				cqid, cq->queue_id, raw_smp_processor_id());
 }
 
 /**
@@ -14324,7 +14230,7 @@ lpfc_sli4_hba_intr_handler(int irq, void *dev_id)
 
 	eqi = phba->sli4_hba.eq_info;
 	icnt = this_cpu_inc_return(eqi->icnt);
-	fpeq->last_cpu = smp_processor_id();
+	fpeq->last_cpu = raw_smp_processor_id();
 
 	if (icnt > LPFC_EQD_ISR_TRIGGER &&
 	    phba->cfg_irq_chann == 1 &&
@@ -14410,6 +14316,9 @@ lpfc_sli4_queue_free(struct lpfc_queue *queue)
 	if (!queue)
 		return;
 
+	if (!list_empty(&queue->wq_list))
+		list_del(&queue->wq_list);
+
 	while (!list_empty(&queue->page_list)) {
 		list_remove_head(&queue->page_list, dmabuf, struct lpfc_dmabuf,
 				 list);
@@ -14425,9 +14334,6 @@ lpfc_sli4_queue_free(struct lpfc_queue *queue)
 	if (!list_empty(&queue->cpu_list))
 		list_del(&queue->cpu_list);
 
-	if (!list_empty(&queue->wq_list))
-		list_del(&queue->wq_list);
-
 	kfree(queue);
 	return;
 }
@@ -14438,6 +14344,7 @@ lpfc_sli4_queue_free(struct lpfc_queue *queue)
  * @page_size: The size of a queue page
  * @entry_size: The size of each queue entry for this queue.
  * @entry count: The number of entries that this queue will handle.
+ * @cpu: The cpu that will primarily utilize this queue.
  *
  * This function allocates a queue structure and the DMAable memory used for
  * the host resident queue. This function must be called before creating the
@@ -14445,28 +14352,26 @@ lpfc_sli4_queue_free(struct lpfc_queue *queue)
 **/
 struct lpfc_queue *
 lpfc_sli4_queue_alloc(struct lpfc_hba *phba, uint32_t page_size,
-		      uint32_t entry_size, uint32_t entry_count)
+		      uint32_t entry_size, uint32_t entry_count, int cpu)
 {
 	struct lpfc_queue *queue;
 	struct lpfc_dmabuf *dmabuf;
-	int x, total_qe_count;
-	void *dma_pointer;
 	uint32_t hw_page_size = phba->sli4_hba.pc_sli4_params.if_page_sz;
+	uint16_t x, pgcnt;
 
 	if (!phba->sli4_hba.pc_sli4_params.supported)
 		hw_page_size = page_size;
 
-	queue = kzalloc(sizeof(struct lpfc_queue) +
-			(sizeof(union sli4_qe) * entry_count), GFP_KERNEL);
-	if (!queue)
-		return NULL;
-	queue->page_count = (ALIGN(entry_size * entry_count,
-			hw_page_size))/hw_page_size;
+	pgcnt = ALIGN(entry_size * entry_count, hw_page_size) / hw_page_size;
 
 	/* If needed, Adjust page count to match the max the adapter supports */
-	if (phba->sli4_hba.pc_sli4_params.wqpcnt &&
-	    (queue->page_count > phba->sli4_hba.pc_sli4_params.wqpcnt))
-		queue->page_count = phba->sli4_hba.pc_sli4_params.wqpcnt;
+	if (pgcnt > phba->sli4_hba.pc_sli4_params.wqpcnt)
+		pgcnt = phba->sli4_hba.pc_sli4_params.wqpcnt;
+
+	queue = kzalloc_node(sizeof(*queue) + (sizeof(void *) * pgcnt),
+			     GFP_KERNEL, cpu_to_node(cpu));
+	if (!queue)
+		return NULL;
 
 	INIT_LIST_HEAD(&queue->list);
 	INIT_LIST_HEAD(&queue->wq_list);
@@ -14478,13 +14383,17 @@ lpfc_sli4_queue_alloc(struct lpfc_hba *phba, uint32_t page_size,
 	/* Set queue parameters now. If the system cannot provide memory
 	 * resources, the free routine needs to know what was allocated.
 	 */
+	queue->page_count = pgcnt;
+	queue->q_pgs = (void **)&queue[1];
+	queue->entry_cnt_per_pg = hw_page_size / entry_size;
 	queue->entry_size = entry_size;
 	queue->entry_count = entry_count;
 	queue->page_size = hw_page_size;
 	queue->phba = phba;
 
-	for (x = 0, total_qe_count = 0; x < queue->page_count; x++) {
-		dmabuf = kzalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL);
+	for (x = 0; x < queue->page_count; x++) {
+		dmabuf = kzalloc_node(sizeof(*dmabuf), GFP_KERNEL,
+				      dev_to_node(&phba->pcidev->dev));
 		if (!dmabuf)
 			goto out_fail;
 		dmabuf->virt = dma_alloc_coherent(&phba->pcidev->dev,
@@ -14496,13 +14405,8 @@ lpfc_sli4_queue_alloc(struct lpfc_hba *phba, uint32_t page_size,
 		}
 		dmabuf->buffer_tag = x;
 		list_add_tail(&dmabuf->list, &queue->page_list);
-		/* initialize queue's entry array */
-		dma_pointer = dmabuf->virt;
-		for (; total_qe_count < entry_count &&
-		     dma_pointer < (hw_page_size + dmabuf->virt);
-		     total_qe_count++, dma_pointer += entry_size) {
-			queue->qe[total_qe_count].address = dma_pointer;
-		}
+		/* use lpfc_sli4_qe to index a paritcular entry in this page */
+		queue->q_pgs[x] = dmabuf->virt;
 	}
 	INIT_WORK(&queue->irqwork, lpfc_sli4_hba_process_cq);
 	INIT_WORK(&queue->spwork, lpfc_sli4_sp_process_cq);
@@ -327,6 +327,10 @@ struct lpfc_sli {
 #define LPFC_SLI_ASYNC_MBX_BLK    0x2000 /* Async mailbox is blocked */
 #define LPFC_SLI_SUPPRESS_RSP     0x4000 /* Suppress RSP feature is supported */
 #define LPFC_SLI_USE_EQDR         0x8000 /* EQ Delay Register is supported */
+#define LPFC_QUEUE_FREE_INIT	0x10000 /* Queue freeing is in progress */
+#define LPFC_QUEUE_FREE_WAIT	0x20000 /* Hold Queue free as it is being
+					 * used outside worker thread
+					 */
 
 	struct lpfc_sli_ring *sli3_ring;
@@ -427,14 +431,13 @@ struct lpfc_io_buf {
 		struct {
 			struct nvmefc_fcp_req *nvmeCmd;
 			uint16_t qidx;
-
-#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
-			uint64_t ts_cmd_start;
-			uint64_t ts_last_cmd;
-			uint64_t ts_cmd_wqput;
-			uint64_t ts_isr_cmpl;
-			uint64_t ts_data_nvme;
-#endif
 		};
 	};
+#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+	uint64_t ts_cmd_start;
+	uint64_t ts_last_cmd;
+	uint64_t ts_cmd_wqput;
+	uint64_t ts_isr_cmpl;
+	uint64_t ts_data_nvme;
+#endif
 };
@@ -117,21 +117,6 @@ enum lpfc_sli4_queue_subtype {
 	LPFC_USOL
 };
 
-union sli4_qe {
-	void *address;
-	struct lpfc_eqe *eqe;
-	struct lpfc_cqe *cqe;
-	struct lpfc_mcqe *mcqe;
-	struct lpfc_wcqe_complete *wcqe_complete;
-	struct lpfc_wcqe_release *wcqe_release;
-	struct sli4_wcqe_xri_aborted *wcqe_xri_aborted;
-	struct lpfc_rcqe_complete *rcqe_complete;
-	struct lpfc_mqe *mqe;
-	union lpfc_wqe *wqe;
-	union lpfc_wqe128 *wqe128;
-	struct lpfc_rqe *rqe;
-};
-
 /* RQ buffer list */
 struct lpfc_rqb {
 	uint16_t entry_count;	/* Current number of RQ slots */
@@ -157,6 +142,7 @@ struct lpfc_queue {
 	struct list_head cpu_list;
 	uint32_t entry_count;	/* Number of entries to support on the queue */
 	uint32_t entry_size;	/* Size of each queue entry. */
+	uint32_t entry_cnt_per_pg;
 	uint32_t notify_interval; /* Queue Notification Interval
 				   * For chip->host queues (EQ, CQ, RQ):
 				   * specifies the interval (number of
@@ -254,17 +240,17 @@ struct lpfc_queue {
 	uint16_t last_cpu;	/* most recent cpu */
 	uint8_t qe_valid;
 	struct lpfc_queue *assoc_qp;
-	union sli4_qe qe[1];	/* array to index entries (must be last) */
+	void **q_pgs;	/* array to index entries per page */
 };
 
 struct lpfc_sli4_link {
-	uint16_t speed;
+	uint32_t speed;
 	uint8_t duplex;
 	uint8_t status;
 	uint8_t type;
 	uint8_t number;
 	uint8_t fault;
-	uint16_t logical_speed;
+	uint32_t logical_speed;
 	uint16_t topology;
 };
@@ -543,8 +529,9 @@ struct lpfc_sli4_lnk_info {
 #define LPFC_LNK_DAT_INVAL	0
 #define LPFC_LNK_DAT_VAL	1
 	uint8_t lnk_tp;
-#define LPFC_LNK_GE	0x0 /* FCoE */
-#define LPFC_LNK_FC	0x1 /* FC */
+#define LPFC_LNK_GE		0x0 /* FCoE */
+#define LPFC_LNK_FC		0x1 /* FC */
+#define LPFC_LNK_FC_TRUNKED	0x2 /* FC_Trunked */
 	uint8_t lnk_no;
 	uint8_t optic_state;
 };
@@ -907,6 +894,18 @@ struct lpfc_sli4_hba {
 #define lpfc_conf_trunk_port3_WORD	conf_trunk
 #define lpfc_conf_trunk_port3_SHIFT	3
 #define lpfc_conf_trunk_port3_MASK	0x1
+#define lpfc_conf_trunk_port0_nd_WORD	conf_trunk
+#define lpfc_conf_trunk_port0_nd_SHIFT	4
+#define lpfc_conf_trunk_port0_nd_MASK	0x1
+#define lpfc_conf_trunk_port1_nd_WORD	conf_trunk
+#define lpfc_conf_trunk_port1_nd_SHIFT	5
+#define lpfc_conf_trunk_port1_nd_MASK	0x1
+#define lpfc_conf_trunk_port2_nd_WORD	conf_trunk
+#define lpfc_conf_trunk_port2_nd_SHIFT	6
+#define lpfc_conf_trunk_port2_nd_MASK	0x1
+#define lpfc_conf_trunk_port3_nd_WORD	conf_trunk
+#define lpfc_conf_trunk_port3_nd_SHIFT	7
+#define lpfc_conf_trunk_port3_nd_MASK	0x1
 };
 
 enum lpfc_sge_type {
@@ -990,8 +989,10 @@ int lpfc_sli4_mbx_read_fcf_rec(struct lpfc_hba *, struct lpfcMboxq *,
 			       uint16_t);
 
 void lpfc_sli4_hba_reset(struct lpfc_hba *);
-struct lpfc_queue *lpfc_sli4_queue_alloc(struct lpfc_hba *, uint32_t,
-					 uint32_t, uint32_t);
+struct lpfc_queue *lpfc_sli4_queue_alloc(struct lpfc_hba *phba,
+					 uint32_t page_size,
+					 uint32_t entry_size,
+					 uint32_t entry_count, int cpu);
 void lpfc_sli4_queue_free(struct lpfc_queue *);
 int lpfc_eq_create(struct lpfc_hba *, struct lpfc_queue *, uint32_t);
 void lpfc_modify_hba_eq_delay(struct lpfc_hba *phba, uint32_t startq,
@@ -1057,12 +1058,12 @@ void lpfc_sli_remove_dflt_fcf(struct lpfc_hba *);
 int lpfc_sli4_get_els_iocb_cnt(struct lpfc_hba *);
 int lpfc_sli4_get_iocb_cnt(struct lpfc_hba *phba);
 int lpfc_sli4_init_vpi(struct lpfc_vport *);
-inline void lpfc_sli4_eq_clr_intr(struct lpfc_queue *);
+void lpfc_sli4_eq_clr_intr(struct lpfc_queue *);
 void lpfc_sli4_write_cq_db(struct lpfc_hba *phba, struct lpfc_queue *q,
			   uint32_t count, bool arm);
 void lpfc_sli4_write_eq_db(struct lpfc_hba *phba, struct lpfc_queue *q,
			   uint32_t count, bool arm);
-inline void lpfc_sli4_if6_eq_clr_intr(struct lpfc_queue *q);
+void lpfc_sli4_if6_eq_clr_intr(struct lpfc_queue *q);
 void lpfc_sli4_if6_write_cq_db(struct lpfc_hba *phba, struct lpfc_queue *q,
			       uint32_t count, bool arm);
 void lpfc_sli4_if6_write_eq_db(struct lpfc_hba *phba, struct lpfc_queue *q,
@@ -1079,3 +1080,8 @@ int lpfc_sli4_post_status_check(struct lpfc_hba *);
 uint8_t lpfc_sli_config_mbox_subsys_get(struct lpfc_hba *, LPFC_MBOXQ_t *);
 uint8_t lpfc_sli_config_mbox_opcode_get(struct lpfc_hba *, LPFC_MBOXQ_t *);
 void lpfc_sli4_ras_dma_free(struct lpfc_hba *phba);
+static inline void *lpfc_sli4_qe(struct lpfc_queue *q, uint16_t idx)
+{
+	return q->q_pgs[idx / q->entry_cnt_per_pg] +
+		(q->entry_size * (idx % q->entry_cnt_per_pg));
+}
@@ -20,7 +20,7 @@
  * included with this package.                                     *
  *******************************************************************/
 
-#define LPFC_DRIVER_VERSION "12.2.0.0"
+#define LPFC_DRIVER_VERSION "12.2.0.1"
 #define LPFC_DRIVER_NAME		"lpfc"
 
 /* Used for SLI 2/3 */
@@ -32,6 +32,6 @@
 
 #define LPFC_MODULE_DESC "Emulex LightPulse Fibre Channel SCSI driver " \
 		LPFC_DRIVER_VERSION
-#define LPFC_COPYRIGHT "Copyright (C) 2017-2018 Broadcom. All Rights " \
+#define LPFC_COPYRIGHT "Copyright (C) 2017-2019 Broadcom. All Rights " \
 		"Reserved. The term \"Broadcom\" refers to Broadcom Inc. " \
 		"and/or its subsidiaries."
@@ -2724,7 +2724,7 @@ static int megasas_wait_for_outstanding(struct megasas_instance *instance)
 	do {
 		if ((fw_state == MFI_STATE_FAULT) || atomic_read(&instance->fw_outstanding)) {
 			dev_info(&instance->pdev->dev,
-				"%s:%d waiting_for_outstanding: before issue OCR. FW state = 0x%x, oustanding 0x%x\n",
+				"%s:%d waiting_for_outstanding: before issue OCR. FW state = 0x%x, outstanding 0x%x\n",
 				__func__, __LINE__, fw_state, atomic_read(&instance->fw_outstanding));
 			if (i == 3)
 				goto kill_hba_and_failed;
@@ -4647,7 +4647,7 @@ megasas_ld_list_query(struct megasas_instance *instance, u8 query_type)
  * Return:			0 if DCMD succeeded
  *				non-zero if failed
  */
-int
+static int
 megasas_host_device_list_query(struct megasas_instance *instance,
			       bool is_probe)
 {
@@ -4418,7 +4418,7 @@ int megasas_task_abort_fusion(struct scsi_cmnd *scmd)
 	if (!smid) {
 		ret = SUCCESS;
 		scmd_printk(KERN_NOTICE, scmd, "Command for which abort is"
-			" issued is not found in oustanding commands\n");
+			" issued is not found in outstanding commands\n");
 		mutex_unlock(&instance->reset_mutex);
 		goto out;
 	}
@@ -45,6 +45,7 @@ config SCSI_MPT3SAS
 	depends on PCI && SCSI
 	select SCSI_SAS_ATTRS
 	select RAID_ATTRS
+	select IRQ_POLL
 	---help---
 	This driver supports PCI-Express SAS 12Gb/s Host Adapters.
|
@ -94,6 +94,11 @@ module_param(max_msix_vectors, int, 0);
|
|||
MODULE_PARM_DESC(max_msix_vectors,
|
||||
" max msix vectors");
|
||||
|
||||
static int irqpoll_weight = -1;
|
||||
module_param(irqpoll_weight, int, 0);
|
||||
MODULE_PARM_DESC(irqpoll_weight,
|
||||
"irq poll weight (default= one fourth of HBA queue depth)");
|
||||
|
||||
static int mpt3sas_fwfault_debug;
|
||||
MODULE_PARM_DESC(mpt3sas_fwfault_debug,
|
||||
" enable detection of firmware fault and halt firmware - (default=0)");
|
||||
|
@@ -1382,20 +1387,30 @@ union reply_descriptor {
 	} u;
 };
 
-/**
- * _base_interrupt - MPT adapter (IOC) specific interrupt handler.
- * @irq: irq number (not used)
- * @bus_id: bus identifier cookie == pointer to MPT_ADAPTER structure
- *
- * Return: IRQ_HANDLED if processed, else IRQ_NONE.
- */
-static irqreturn_t
-_base_interrupt(int irq, void *bus_id)
+static u32 base_mod64(u64 dividend, u32 divisor)
+{
+	u32 remainder;
+
+	if (!divisor)
+		pr_err("mpt3sas: DIVISOR is zero, in div fn\n");
+	remainder = do_div(dividend, divisor);
+	return remainder;
+}
+
+/**
+ * _base_process_reply_queue - Process reply descriptors from reply
+ *		descriptor post queue.
+ * @reply_q: per IRQ's reply queue object.
+ *
+ * Return: number of reply descriptors processed from reply
+ *		descriptor queue.
+ */
+static int
+_base_process_reply_queue(struct adapter_reply_queue *reply_q)
 {
-	struct adapter_reply_queue *reply_q = bus_id;
 	union reply_descriptor rd;
-	u32 completed_cmds;
-	u8 request_desript_type;
+	u64 completed_cmds;
+	u8 request_descript_type;
 	u16 smid;
 	u8 cb_idx;
 	u32 reply;
@@ -1404,21 +1419,18 @@ _base_interrupt(int irq, void *bus_id)
 	Mpi2ReplyDescriptorsUnion_t *rpf;
 	u8 rc;
 
-	if (ioc->mask_interrupts)
-		return IRQ_NONE;
-
+	completed_cmds = 0;
 	if (!atomic_add_unless(&reply_q->busy, 1, 1))
-		return IRQ_NONE;
+		return completed_cmds;
 
 	rpf = &reply_q->reply_post_free[reply_q->reply_post_host_index];
-	request_desript_type = rpf->Default.ReplyFlags
+	request_descript_type = rpf->Default.ReplyFlags
	     & MPI2_RPY_DESCRIPT_FLAGS_TYPE_MASK;
-	if (request_desript_type == MPI2_RPY_DESCRIPT_FLAGS_UNUSED) {
+	if (request_descript_type == MPI2_RPY_DESCRIPT_FLAGS_UNUSED) {
 		atomic_dec(&reply_q->busy);
-		return IRQ_NONE;
+		return completed_cmds;
 	}
 
-	completed_cmds = 0;
 	cb_idx = 0xFF;
 	do {
 		rd.word = le64_to_cpu(rpf->Words);
@@ -1426,11 +1438,11 @@ _base_interrupt(int irq, void *bus_id)
 			goto out;
 		reply = 0;
 		smid = le16_to_cpu(rpf->Default.DescriptorTypeDependent1);
-		if (request_desript_type ==
+		if (request_descript_type ==
		    MPI25_RPY_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO_SUCCESS ||
-		    request_desript_type ==
+		    request_descript_type ==
		    MPI2_RPY_DESCRIPT_FLAGS_SCSI_IO_SUCCESS ||
-		    request_desript_type ==
+		    request_descript_type ==
		    MPI26_RPY_DESCRIPT_FLAGS_PCIE_ENCAPSULATED_SUCCESS) {
 			cb_idx = _base_get_cb_idx(ioc, smid);
 			if ((likely(cb_idx < MPT_MAX_CALLBACKS)) &&
@@ -1440,7 +1452,7 @@ _base_interrupt(int irq, void *bus_id)
 				if (rc)
 					mpt3sas_base_free_smid(ioc, smid);
 			}
-		} else if (request_desript_type ==
+		} else if (request_descript_type ==
		    MPI2_RPY_DESCRIPT_FLAGS_ADDRESS_REPLY) {
 			reply = le32_to_cpu(
 				rpf->AddressReply.ReplyFrameAddress);
@@ -1486,7 +1498,7 @@ _base_interrupt(int irq, void *bus_id)
		    (reply_q->reply_post_host_index ==
		    (ioc->reply_post_queue_depth - 1)) ? 0 :
		    reply_q->reply_post_host_index + 1;
-		request_desript_type =
+		request_descript_type =
		    reply_q->reply_post_free[reply_q->reply_post_host_index].
		    Default.ReplyFlags & MPI2_RPY_DESCRIPT_FLAGS_TYPE_MASK;
 		completed_cmds++;
@@ -1495,7 +1507,7 @@ _base_interrupt(int irq, void *bus_id)
		 * So that FW can find enough entries to post the Reply
		 * Descriptors in the reply descriptor post queue.
		 */
-		if (completed_cmds > ioc->hba_queue_depth/3) {
+		if (!base_mod64(completed_cmds, ioc->thresh_hold)) {
 			if (ioc->combined_reply_queue) {
 				writel(reply_q->reply_post_host_index |
				    ((msix_index & 7) <<
@@ -1507,9 +1519,14 @@ _base_interrupt(int irq, void *bus_id)
				    MPI2_RPHI_MSIX_INDEX_SHIFT),
				    &ioc->chip->ReplyPostHostIndex);
 			}
-			completed_cmds = 1;
+			if (!reply_q->irq_poll_scheduled) {
+				reply_q->irq_poll_scheduled = true;
+				irq_poll_sched(&reply_q->irqpoll);
+			}
+			atomic_dec(&reply_q->busy);
+			return completed_cmds;
 		}
-		if (request_desript_type == MPI2_RPY_DESCRIPT_FLAGS_UNUSED)
+		if (request_descript_type == MPI2_RPY_DESCRIPT_FLAGS_UNUSED)
 			goto out;
 		if (!reply_q->reply_post_host_index)
 			rpf = reply_q->reply_post_free;
@@ -1521,14 +1538,14 @@ _base_interrupt(int irq, void *bus_id)
 
 	if (!completed_cmds) {
 		atomic_dec(&reply_q->busy);
-		return IRQ_NONE;
+		return completed_cmds;
 	}
 
 	if (ioc->is_warpdrive) {
 		writel(reply_q->reply_post_host_index,
		    ioc->reply_post_host_index[msix_index]);
 		atomic_dec(&reply_q->busy);
-		return IRQ_HANDLED;
+		return completed_cmds;
 	}
 
 	/* Update Reply Post Host Index.
@@ -1555,7 +1572,82 @@ _base_interrupt(int irq, void *bus_id)
 		    MPI2_RPHI_MSIX_INDEX_SHIFT),
 		    &ioc->chip->ReplyPostHostIndex);
 	atomic_dec(&reply_q->busy);
-	return IRQ_HANDLED;
+	return completed_cmds;
 }

+/**
+ * _base_interrupt - MPT adapter (IOC) specific interrupt handler.
+ * @irq: irq number (not used)
+ * @bus_id: bus identifier cookie == pointer to MPT_ADAPTER structure
+ *
+ * Return: IRQ_HANDLED if processed, else IRQ_NONE.
+ */
+static irqreturn_t
+_base_interrupt(int irq, void *bus_id)
+{
+	struct adapter_reply_queue *reply_q = bus_id;
+	struct MPT3SAS_ADAPTER *ioc = reply_q->ioc;
+
+	if (ioc->mask_interrupts)
+		return IRQ_NONE;
+	if (reply_q->irq_poll_scheduled)
+		return IRQ_HANDLED;
+	return ((_base_process_reply_queue(reply_q) > 0) ?
+			IRQ_HANDLED : IRQ_NONE);
+}
+
+/**
+ * _base_irqpoll - IRQ poll callback handler
+ * @irqpoll - irq_poll object
+ * @budget - irq poll weight
+ *
+ * returns number of reply descriptors processed
+ */
+static int
+_base_irqpoll(struct irq_poll *irqpoll, int budget)
+{
+	struct adapter_reply_queue *reply_q;
+	int num_entries = 0;
+
+	reply_q = container_of(irqpoll, struct adapter_reply_queue,
+			irqpoll);
+	if (reply_q->irq_line_enable) {
+		disable_irq(reply_q->os_irq);
+		reply_q->irq_line_enable = false;
+	}
+	num_entries = _base_process_reply_queue(reply_q);
+	if (num_entries < budget) {
+		irq_poll_complete(irqpoll);
+		reply_q->irq_poll_scheduled = false;
+		reply_q->irq_line_enable = true;
+		enable_irq(reply_q->os_irq);
+	}
+
+	return num_entries;
+}
+
+/**
+ * _base_init_irqpolls - initliaze IRQ polls
+ * @ioc: per adapter object
+ *
+ * returns nothing
+ */
+static void
+_base_init_irqpolls(struct MPT3SAS_ADAPTER *ioc)
+{
+	struct adapter_reply_queue *reply_q, *next;
+
+	if (list_empty(&ioc->reply_queue_list))
+		return;
+
+	list_for_each_entry_safe(reply_q, next, &ioc->reply_queue_list, list) {
+		irq_poll_init(&reply_q->irqpoll,
+			ioc->hba_queue_depth/4, _base_irqpoll);
+		reply_q->irq_poll_scheduled = false;
+		reply_q->irq_line_enable = true;
+		reply_q->os_irq = pci_irq_vector(ioc->pdev,
+		    reply_q->msix_index);
+	}
+}
+
 /**
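The handler/poll split above follows the standard irq_poll budget pattern: once polling is scheduled, the hard IRQ handler backs off, and the poll callback re-enables the interrupt line only when it drains fewer entries than its budget (meaning the queue is empty). A minimal userspace sketch of that decision logic, with a hypothetical `fake_queue` standing in for the firmware reply queue (not part of the patch):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for a reply queue with 'pending' descriptors. */
struct fake_queue {
	int pending;
	bool irq_line_enable;
	bool poll_scheduled;
};

/* Mirrors _base_irqpoll(): process up to 'budget' entries; if fewer than
 * budget were drained, the queue is empty, so complete polling and
 * re-enable the interrupt line. Returns the number of entries processed. */
static int fake_irqpoll(struct fake_queue *q, int budget)
{
	int done = 0;

	if (q->irq_line_enable)
		q->irq_line_enable = false;	/* disable_irq() in the driver */

	while (done < budget && q->pending > 0) {
		q->pending--;			/* one reply descriptor handled */
		done++;
	}
	if (done < budget) {
		q->poll_scheduled = false;	/* irq_poll_complete() */
		q->irq_line_enable = true;	/* enable_irq() */
	}
	return done;
}
```

The key property is that a full budget's worth of work keeps the IRQ line masked, so a flood of replies is processed from the softirq path instead of hard-lockup-prone interrupt context.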
@@ -1596,6 +1688,17 @@ mpt3sas_base_sync_reply_irqs(struct MPT3SAS_ADAPTER *ioc)
 		/* TMs are on msix_index == 0 */
 		if (reply_q->msix_index == 0)
 			continue;
+		if (reply_q->irq_poll_scheduled) {
+			/* Calling irq_poll_disable will wait for any pending
+			 * callbacks to have completed.
+			 */
+			irq_poll_disable(&reply_q->irqpoll);
+			irq_poll_enable(&reply_q->irqpoll);
+			reply_q->irq_poll_scheduled = false;
+			reply_q->irq_line_enable = true;
+			enable_irq(reply_q->os_irq);
+			continue;
+		}
 		synchronize_irq(pci_irq_vector(ioc->pdev, reply_q->msix_index));
 	}
 }
@@ -2757,6 +2860,11 @@ _base_assign_reply_queues(struct MPT3SAS_ADAPTER *ioc)

 	if (!_base_is_controller_msix_enabled(ioc))
 		return;
+	ioc->msix_load_balance = false;
+	if (ioc->reply_queue_count < num_online_cpus()) {
+		ioc->msix_load_balance = true;
+		return;
+	}

 	memset(ioc->cpu_msix_table, 0, ioc->cpu_msix_table_sz);

@@ -3015,6 +3123,8 @@ mpt3sas_base_map_resources(struct MPT3SAS_ADAPTER *ioc)
 	if (r)
 		goto out_fail;

+	if (!ioc->is_driver_loading)
+		_base_init_irqpolls(ioc);
 	/* Use the Combined reply queue feature only for SAS3 C0 & higher
 	 * revision HBAs and also only when reply queue count is greater than 8
 	 */
@@ -3158,6 +3268,12 @@ mpt3sas_base_get_reply_virt_addr(struct MPT3SAS_ADAPTER *ioc, u32 phys_addr)
 static inline u8
 _base_get_msix_index(struct MPT3SAS_ADAPTER *ioc)
 {
+	/* Enables reply_queue load balancing */
+	if (ioc->msix_load_balance)
+		return ioc->reply_queue_count ?
+		    base_mod64(atomic64_add_return(1,
+		    &ioc->total_io_cnt), ioc->reply_queue_count) : 0;
+
 	return ioc->cpu_msix_table[raw_smp_processor_id()];
 }
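The load-balancing branch above is a simple round-robin: each I/O atomically bumps a shared 64-bit counter and the reply queue index is that counter modulo the reply queue count (what `base_mod64()` computes in the driver). A userspace sketch of the same selection, using C11 atomics (names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Shared I/O counter, analogous to ioc->total_io_cnt (zero-initialized). */
static atomic_uint_least64_t total_io_cnt;

/* Round-robin MSI-X index: mirrors
 * base_mod64(atomic64_add_return(1, &total_io_cnt), reply_queue_count),
 * falling back to queue 0 when no reply queues exist. */
static uint8_t pick_msix_index(unsigned int reply_queue_count)
{
	uint64_t n;

	if (reply_queue_count == 0)
		return 0;
	/* atomic_fetch_add returns the old value; +1 matches add_return */
	n = atomic_fetch_add(&total_io_cnt, 1) + 1;
	return (uint8_t)(n % reply_queue_count);
}
```

The atomic increment makes the distribution fair across CPUs without a lock; this mode is only used when there are fewer reply queues than online CPUs, where the per-CPU `cpu_msix_table` mapping would leave some queues idle.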
@@ -6506,6 +6622,12 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
 	if (r)
 		goto out_free_resources;

+	if (irqpoll_weight > 0)
+		ioc->thresh_hold = irqpoll_weight;
+	else
+		ioc->thresh_hold = ioc->hba_queue_depth/4;
+
+	_base_init_irqpolls(ioc);
 	init_waitqueue_head(&ioc->reset_wq);

 	/* allocate memory pd handle bitmask list */

@@ -67,6 +67,7 @@
 #include <scsi/scsi_eh.h>
 #include <linux/pci.h>
 #include <linux/poll.h>
+#include <linux/irq_poll.h>

 #include "mpt3sas_debug.h"
 #include "mpt3sas_trigger_diag.h"
@@ -75,9 +76,9 @@
 #define MPT3SAS_DRIVER_NAME		"mpt3sas"
 #define MPT3SAS_AUTHOR "Avago Technologies <MPT-FusionLinux.pdl@avagotech.com>"
 #define MPT3SAS_DESCRIPTION	"LSI MPT Fusion SAS 3.0 Device Driver"
-#define MPT3SAS_DRIVER_VERSION		"27.102.00.00"
-#define MPT3SAS_MAJOR_VERSION		27
-#define MPT3SAS_MINOR_VERSION		102
+#define MPT3SAS_DRIVER_VERSION		"28.100.00.00"
+#define MPT3SAS_MAJOR_VERSION		28
+#define MPT3SAS_MINOR_VERSION		100
 #define MPT3SAS_BUILD_VERSION		0
 #define MPT3SAS_RELEASE_VERSION	00

@@ -882,6 +883,9 @@ struct _event_ack_list {
  * @reply_post_free: reply post base virt address
  * @name: the name registered to request_irq()
  * @busy: isr is actively processing replies on another cpu
+ * @os_irq: irq number
+ * @irqpoll: irq_poll object
+ * @irq_poll_scheduled: Tells whether irq poll is scheduled or not
  * @list: this list
  */
 struct adapter_reply_queue {
@@ -891,6 +895,10 @@ struct adapter_reply_queue {
 	Mpi2ReplyDescriptorsUnion_t *reply_post_free;
 	char			name[MPT_NAME_LENGTH];
 	atomic_t		busy;
+	u32			os_irq;
+	struct irq_poll		irqpoll;
+	bool			irq_poll_scheduled;
+	bool			irq_line_enable;
 	struct list_head	list;
 };

@@ -1016,7 +1024,12 @@ typedef void (*MPT3SAS_FLUSH_RUNNING_CMDS)(struct MPT3SAS_ADAPTER *ioc);
 * @msix_vector_count: number msix vectors
 * @cpu_msix_table: table for mapping cpus to msix index
 * @cpu_msix_table_sz: table size
+ * @total_io_cnt: Gives total IO count, used to load balance the interrupts
+ * @msix_load_balance: Enables load balancing of interrupts across
+ *				the multiple MSIXs
 * @schedule_dead_ioc_flush_running_cmds: callback to flush pending commands
+ * @thresh_hold: Max number of reply descriptors processed
+ *				before updating Host Index
 * @scsi_io_cb_idx: shost generated commands
 * @tm_cb_idx: task management commands
 * @scsih_cb_idx: scsih internal commands
@@ -1192,6 +1205,9 @@ struct MPT3SAS_ADAPTER {
 	u32		ioc_reset_count;
 	MPT3SAS_FLUSH_RUNNING_CMDS schedule_dead_ioc_flush_running_cmds;
 	u32		non_operational_loop;
+	atomic64_t	total_io_cnt;
+	bool		msix_load_balance;
+	u16		thresh_hold;

 	/* internal commands, callback index */
 	u8		scsi_io_cb_idx;

@@ -678,7 +678,8 @@ static u32 mvs_64xx_spi_read_data(struct mvs_info *mvi)
 static void mvs_64xx_spi_write_data(struct mvs_info *mvi, u32 data)
 {
 	void __iomem *regs = mvi->regs_ex;
-	 iow32(SPI_DATA_REG_64XX, data);
+
+	iow32(SPI_DATA_REG_64XX, data);
 }

@@ -946,7 +946,8 @@ static u32 mvs_94xx_spi_read_data(struct mvs_info *mvi)
 static void mvs_94xx_spi_write_data(struct mvs_info *mvi, u32 data)
 {
 	void __iomem *regs = mvi->regs_ex - 0x10200;
-	 mw32(SPI_RD_DATA_REG_94XX, data);
+
+	mw32(SPI_RD_DATA_REG_94XX, data);
 }

@@ -1422,7 +1422,7 @@ int mvs_I_T_nexus_reset(struct domain_device *dev)
 {
 	unsigned long flags;
 	int rc = TMF_RESP_FUNC_FAILED;
-	struct mvs_device * mvi_dev = (struct mvs_device *)dev->lldd_dev;
+	struct mvs_device *mvi_dev = (struct mvs_device *)dev->lldd_dev;
 	struct mvs_info *mvi = mvi_dev->mvi_info;

 	if (mvi_dev->dev_status != MVS_DEV_EH)

@@ -752,7 +752,7 @@ static int mvumi_issue_blocked_cmd(struct mvumi_hba *mhba,
 		spin_lock_irqsave(mhba->shost->host_lock, flags);
 		atomic_dec(&cmd->sync_cmd);
 		if (mhba->tag_cmd[cmd->frame->tag]) {
-			mhba->tag_cmd[cmd->frame->tag] = 0;
+			mhba->tag_cmd[cmd->frame->tag] = NULL;
 			dev_warn(&mhba->pdev->dev, "TIMEOUT:release tag [%d]\n",
 				cmd->frame->tag);
 			tag_release_one(mhba, &mhba->tag_pool, cmd->frame->tag);
@@ -1794,7 +1794,7 @@ static void mvumi_handle_clob(struct mvumi_hba *mhba)
 		cmd = mhba->tag_cmd[ob_frame->tag];

 		atomic_dec(&mhba->fw_outstanding);
-		mhba->tag_cmd[ob_frame->tag] = 0;
+		mhba->tag_cmd[ob_frame->tag] = NULL;
 		tag_release_one(mhba, &mhba->tag_pool, ob_frame->tag);
 		if (cmd->scmd)
 			mvumi_complete_cmd(mhba, cmd, ob_frame);
@@ -2139,7 +2139,7 @@ static enum blk_eh_timer_return mvumi_timed_out(struct scsi_cmnd *scmd)
 	spin_lock_irqsave(mhba->shost->host_lock, flags);

 	if (mhba->tag_cmd[cmd->frame->tag]) {
-		mhba->tag_cmd[cmd->frame->tag] = 0;
+		mhba->tag_cmd[cmd->frame->tag] = NULL;
 		tag_release_one(mhba, &mhba->tag_pool, cmd->frame->tag);
 	}
 	if (!list_empty(&cmd->queue_pointer))

@@ -960,9 +960,9 @@ pm8001_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
 		return -1;
 	}
 	regVal = pm8001_cr32(pm8001_ha, 2, GPIO_GPIO_0_0UTPUT_CTL_OFFSET);
-		PM8001_INIT_DBG(pm8001_ha,
-				pm8001_printk("GPIO Output Control Register:"
-				" = 0x%x\n", regVal));
+	PM8001_INIT_DBG(pm8001_ha,
+			pm8001_printk("GPIO Output Control Register:"
+			" = 0x%x\n", regVal));
 	/* set GPIO-0 output control to tri-state */
 	regVal &= 0xFFFFFFFC;
 	pm8001_cw32(pm8001_ha, 2, GPIO_GPIO_0_0UTPUT_CTL_OFFSET, regVal);
@@ -1204,6 +1204,7 @@ void pm8001_chip_iounmap(struct pm8001_hba_info *pm8001_ha)
 	}
 }

+#ifndef PM8001_USE_MSIX
 /**
  * pm8001_chip_interrupt_enable - enable PM8001 chip interrupt
  * @pm8001_ha: our hba card information
@@ -1225,6 +1226,8 @@ pm8001_chip_intx_interrupt_disable(struct pm8001_hba_info *pm8001_ha)
 	pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_MASK_ALL);
 }

+#else
+
 /**
  * pm8001_chip_msix_interrupt_enable - enable PM8001 chip interrupt
  * @pm8001_ha: our hba card information
@@ -1256,6 +1259,7 @@ pm8001_chip_msix_interrupt_disable(struct pm8001_hba_info *pm8001_ha,
 	msi_index += MSIX_TABLE_BASE;
 	pm8001_cw32(pm8001_ha, 0, msi_index, MSIX_INTERRUPT_DISABLE);
 }
+#endif

 /**
  * pm8001_chip_interrupt_enable - enable PM8001 chip interrupt
@@ -1266,10 +1270,9 @@ pm8001_chip_interrupt_enable(struct pm8001_hba_info *pm8001_ha, u8 vec)
 {
 #ifdef PM8001_USE_MSIX
 	pm8001_chip_msix_interrupt_enable(pm8001_ha, 0);
-	return;
-#endif
+#else
 	pm8001_chip_intx_interrupt_enable(pm8001_ha);

 #endif
 }

 /**
@@ -1281,10 +1284,9 @@ pm8001_chip_interrupt_disable(struct pm8001_hba_info *pm8001_ha, u8 vec)
 {
 #ifdef PM8001_USE_MSIX
 	pm8001_chip_msix_interrupt_disable(pm8001_ha, 0);
-	return;
-#endif
+#else
 	pm8001_chip_intx_interrupt_disable(pm8001_ha);

 #endif
 }

 /**
@@ -2898,7 +2900,6 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha , void *piomb)
 static void
 mpi_smp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
 {
-	u32 param;
 	struct sas_task *t;
 	struct pm8001_ccb_info *ccb;
 	unsigned long flags;
@@ -2913,7 +2914,6 @@ mpi_smp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
 	tag = le32_to_cpu(psmpPayload->tag);

 	ccb = &pm8001_ha->ccb_info[tag];
-	param = le32_to_cpu(psmpPayload->param);
 	t = ccb->task;
 	ts = &t->task_status;
 	pm8001_dev = ccb->device;
@@ -2928,7 +2928,7 @@ mpi_smp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
 		PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_SUCCESS\n"));
 		ts->resp = SAS_TASK_COMPLETE;
 		ts->stat = SAM_STAT_GOOD;
-	if (pm8001_dev)
+		if (pm8001_dev)
 			pm8001_dev->running_req--;
 		break;
 	case IO_ABORTED:

@@ -3244,11 +3244,9 @@ void pm8001_bytes_dmaed(struct pm8001_hba_info *pm8001_ha, int i)
 {
 	struct pm8001_phy *phy = &pm8001_ha->phy[i];
 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
-	struct sas_ha_struct *sas_ha;
 	if (!phy->phy_attached)
 		return;

-	sas_ha = pm8001_ha->sas;
 	if (sas_phy->phy) {
 		struct sas_phy *sphy = sas_phy->phy;
 		sphy->negotiated_linkrate = sas_phy->linkrate;

@@ -4627,17 +4625,18 @@ static int pm8001_chip_phy_ctl_req(struct pm8001_hba_info *pm8001_ha,
 	return ret;
 }

-static u32 pm8001_chip_is_our_interupt(struct pm8001_hba_info *pm8001_ha)
+static u32 pm8001_chip_is_our_interrupt(struct pm8001_hba_info *pm8001_ha)
 {
-	u32 value;
 #ifdef PM8001_USE_MSIX
 	return 1;
-#endif
+#else
+	u32 value;
+
 	value = pm8001_cr32(pm8001_ha, 0, MSGU_ODR);
 	if (value)
 		return 1;
 	return 0;
+
 #endif
 }

 /**
@@ -5123,7 +5122,7 @@ const struct pm8001_dispatch pm8001_8001_dispatch = {
 	.chip_rst		= pm8001_hw_chip_rst,
 	.chip_iounmap		= pm8001_chip_iounmap,
 	.isr			= pm8001_chip_isr,
-	.is_our_interupt	= pm8001_chip_is_our_interupt,
+	.is_our_interrupt	= pm8001_chip_is_our_interrupt,
 	.isr_process_oq		= process_oq,
 	.interrupt_enable	= pm8001_chip_interrupt_enable,
 	.interrupt_disable	= pm8001_chip_interrupt_disable,

@@ -201,7 +201,7 @@ static irqreturn_t pm8001_interrupt_handler_msix(int irq, void *opaque)

 	if (unlikely(!pm8001_ha))
 		return IRQ_NONE;
-	if (!PM8001_CHIP_DISP->is_our_interupt(pm8001_ha))
+	if (!PM8001_CHIP_DISP->is_our_interrupt(pm8001_ha))
 		return IRQ_NONE;
 #ifdef PM8001_USE_TASKLET
 	tasklet_schedule(&pm8001_ha->tasklet[irq_vector->irq_id]);
@@ -224,7 +224,7 @@ static irqreturn_t pm8001_interrupt_handler_intx(int irq, void *dev_id)
 	pm8001_ha = sha->lldd_ha;
 	if (unlikely(!pm8001_ha))
 		return IRQ_NONE;
-	if (!PM8001_CHIP_DISP->is_our_interupt(pm8001_ha))
+	if (!PM8001_CHIP_DISP->is_our_interrupt(pm8001_ha))
 		return IRQ_NONE;

 #ifdef PM8001_USE_TASKLET

@@ -740,8 +740,8 @@ static int pm8001_exec_internal_tmf_task(struct domain_device *dev,
 		wait_for_completion(&task->slow_task->completion);
 		if (pm8001_ha->chip_id != chip_8001) {
 			pm8001_dev->setds_completion = &completion_setstate;
-				PM8001_CHIP_DISP->set_dev_state_req(pm8001_ha,
-					pm8001_dev, 0x01);
+			PM8001_CHIP_DISP->set_dev_state_req(pm8001_ha,
+				pm8001_dev, 0x01);
 			wait_for_completion(&completion_setstate);
 		}
 		res = -TMF_RESP_FUNC_FAILED;

@@ -197,7 +197,7 @@ struct pm8001_dispatch {
 	int (*chip_ioremap)(struct pm8001_hba_info *pm8001_ha);
 	void (*chip_iounmap)(struct pm8001_hba_info *pm8001_ha);
 	irqreturn_t (*isr)(struct pm8001_hba_info *pm8001_ha, u8 vec);
-	u32 (*is_our_interupt)(struct pm8001_hba_info *pm8001_ha);
+	u32 (*is_our_interrupt)(struct pm8001_hba_info *pm8001_ha);
 	int (*isr_process_oq)(struct pm8001_hba_info *pm8001_ha, u8 vec);
 	void (*interrupt_enable)(struct pm8001_hba_info *pm8001_ha, u8 vec);
 	void (*interrupt_disable)(struct pm8001_hba_info *pm8001_ha, u8 vec);

@@ -1316,7 +1316,7 @@ pm80xx_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)

 static void pm80xx_hw_chip_rst(struct pm8001_hba_info *pm8001_ha)
 {
-	 u32 i;
+	u32 i;

 	PM8001_INIT_DBG(pm8001_ha,
 		pm8001_printk("chip reset start\n"));

@@ -4381,27 +4381,27 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
 			sata_cmd.len = cpu_to_le32(task->total_xfer_len);
 			sata_cmd.esgl = 0;
 		}
-	/* scsi cdb */
-	sata_cmd.atapi_scsi_cdb[0] =
-		cpu_to_le32(((task->ata_task.atapi_packet[0]) |
-		(task->ata_task.atapi_packet[1] << 8) |
-		(task->ata_task.atapi_packet[2] << 16) |
-		(task->ata_task.atapi_packet[3] << 24)));
-	sata_cmd.atapi_scsi_cdb[1] =
-		cpu_to_le32(((task->ata_task.atapi_packet[4]) |
-		(task->ata_task.atapi_packet[5] << 8) |
-		(task->ata_task.atapi_packet[6] << 16) |
-		(task->ata_task.atapi_packet[7] << 24)));
-	sata_cmd.atapi_scsi_cdb[2] =
-		cpu_to_le32(((task->ata_task.atapi_packet[8]) |
-		(task->ata_task.atapi_packet[9] << 8) |
-		(task->ata_task.atapi_packet[10] << 16) |
-		(task->ata_task.atapi_packet[11] << 24)));
-	sata_cmd.atapi_scsi_cdb[3] =
-		cpu_to_le32(((task->ata_task.atapi_packet[12]) |
-		(task->ata_task.atapi_packet[13] << 8) |
-		(task->ata_task.atapi_packet[14] << 16) |
-		(task->ata_task.atapi_packet[15] << 24)));
+		/* scsi cdb */
+		sata_cmd.atapi_scsi_cdb[0] =
+			cpu_to_le32(((task->ata_task.atapi_packet[0]) |
+			(task->ata_task.atapi_packet[1] << 8) |
+			(task->ata_task.atapi_packet[2] << 16) |
+			(task->ata_task.atapi_packet[3] << 24)));
+		sata_cmd.atapi_scsi_cdb[1] =
+			cpu_to_le32(((task->ata_task.atapi_packet[4]) |
+			(task->ata_task.atapi_packet[5] << 8) |
+			(task->ata_task.atapi_packet[6] << 16) |
+			(task->ata_task.atapi_packet[7] << 24)));
+		sata_cmd.atapi_scsi_cdb[2] =
+			cpu_to_le32(((task->ata_task.atapi_packet[8]) |
+			(task->ata_task.atapi_packet[9] << 8) |
+			(task->ata_task.atapi_packet[10] << 16) |
+			(task->ata_task.atapi_packet[11] << 24)));
+		sata_cmd.atapi_scsi_cdb[3] =
+			cpu_to_le32(((task->ata_task.atapi_packet[12]) |
+			(task->ata_task.atapi_packet[13] << 8) |
+			(task->ata_task.atapi_packet[14] << 16) |
+			(task->ata_task.atapi_packet[15] << 24)));
+	}

 	/* Check for read log for failed drive and return */
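The hunk above builds each 32-bit CDB word from four ATAPI packet bytes, least-significant byte first, before the `cpu_to_le32()` conversion. A userspace sketch of the same byte packing (helper names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Pack four bytes into a 32-bit value, least-significant byte first,
 * mirroring the (p[0] | p[1] << 8 | p[2] << 16 | p[3] << 24) expressions. */
static uint32_t pack_le32(const uint8_t *p)
{
	return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Pack a 16-byte ATAPI packet into four 32-bit CDB words, as the
 * sata_cmd.atapi_scsi_cdb[0..3] assignments do. */
static void pack_cdb(const uint8_t packet[16], uint32_t cdb[4])
{
	for (int i = 0; i < 4; i++)
		cdb[i] = pack_le32(&packet[i * 4]);
}
```

Because the bytes are placed LSB-first before the little-endian conversion, byte 0 of the packet always lands in the lowest-addressed byte of the word the firmware sees, regardless of host endianness.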
@@ -4617,17 +4617,18 @@ static int pm80xx_chip_phy_ctl_req(struct pm8001_hba_info *pm8001_ha,
 	return pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0);
 }

-static u32 pm80xx_chip_is_our_interupt(struct pm8001_hba_info *pm8001_ha)
+static u32 pm80xx_chip_is_our_interrupt(struct pm8001_hba_info *pm8001_ha)
 {
-	u32 value;
 #ifdef PM8001_USE_MSIX
 	return 1;
-#endif
+#else
+	u32 value;
+
 	value = pm8001_cr32(pm8001_ha, 0, MSGU_ODR);
 	if (value)
 		return 1;
 	return 0;
+
 #endif
 }

 /**
@@ -4724,7 +4725,7 @@ const struct pm8001_dispatch pm8001_80xx_dispatch = {
 	.chip_rst		= pm80xx_hw_chip_rst,
 	.chip_iounmap		= pm8001_chip_iounmap,
 	.isr			= pm80xx_chip_isr,
-	.is_our_interupt	= pm80xx_chip_is_our_interupt,
+	.is_our_interrupt	= pm80xx_chip_is_our_interrupt,
 	.isr_process_oq		= process_oq,
 	.interrupt_enable	= pm80xx_chip_interrupt_enable,
 	.interrupt_disable	= pm80xx_chip_interrupt_disable,

@@ -35,9 +35,6 @@
 #define QEDF_DESCR "QLogic FCoE Offload Driver"
 #define QEDF_MODULE_NAME "qedf"

-#define QEDF_MIN_XID		0
-#define QEDF_MAX_SCSI_XID	(NUM_TASKS_PER_CONNECTION - 1)
-#define QEDF_MAX_ELS_XID	4095
 #define QEDF_FLOGI_RETRY_CNT	3
 #define QEDF_RPORT_RETRY_CNT	255
 #define QEDF_MAX_SESSIONS	1024
@@ -52,8 +49,8 @@
 	 sizeof(struct fc_frame_header))
 #define QEDF_MAX_NPIV		64
 #define QEDF_TM_TIMEOUT		10
-#define QEDF_ABORT_TIMEOUT	10
-#define QEDF_CLEANUP_TIMEOUT	10
+#define QEDF_ABORT_TIMEOUT	(10 * 1000)
+#define QEDF_CLEANUP_TIMEOUT	1
 #define QEDF_MAX_CDB_LEN	16

 #define UPSTREAM_REMOVE		1

@@ -85,6 +82,7 @@ struct qedf_els_cb_arg {
 };

 enum qedf_ioreq_event {
+	QEDF_IOREQ_EV_NONE,
 	QEDF_IOREQ_EV_ABORT_SUCCESS,
 	QEDF_IOREQ_EV_ABORT_FAILED,
 	QEDF_IOREQ_EV_SEND_RRQ,
@@ -105,7 +103,6 @@ struct qedf_ioreq {
 	struct list_head link;
 	uint16_t xid;
 	struct scsi_cmnd *sc_cmd;
-	bool use_slowpath; /* Use slow SGL for this I/O */
 #define QEDF_SCSI_CMD		1
 #define QEDF_TASK_MGMT_CMD	2
 #define QEDF_ABTS		3

@@ -117,22 +114,43 @@ struct qedf_ioreq {
 #define QEDF_CMD_IN_ABORT		0x1
 #define QEDF_CMD_IN_CLEANUP		0x2
 #define QEDF_CMD_SRR_SENT		0x3
+#define QEDF_CMD_DIRTY			0x4
+#define QEDF_CMD_ERR_SCSI_DONE		0x5
 	u8 io_req_flags;
 	uint8_t tm_flags;
 	struct qedf_rport *fcport;
+#define	QEDF_CMD_ST_INACTIVE		0
+#define	QEDFC_CMD_ST_IO_ACTIVE		1
+#define	QEDFC_CMD_ST_ABORT_ACTIVE	2
+#define	QEDFC_CMD_ST_ABORT_ACTIVE_EH	3
+#define	QEDFC_CMD_ST_CLEANUP_ACTIVE	4
+#define	QEDFC_CMD_ST_CLEANUP_ACTIVE_EH	5
+#define	QEDFC_CMD_ST_RRQ_ACTIVE		6
+#define	QEDFC_CMD_ST_RRQ_WAIT		7
+#define	QEDFC_CMD_ST_OXID_RETIRE_WAIT	8
+#define	QEDFC_CMD_ST_TMF_ACTIVE		9
+#define	QEDFC_CMD_ST_DRAIN_ACTIVE	10
+#define	QEDFC_CMD_ST_CLEANED		11
+#define	QEDFC_CMD_ST_ELS_ACTIVE		12
+	atomic_t state;
 	unsigned long flags;
 	enum qedf_ioreq_event event;
 	size_t data_xfer_len;
+	/* ID: 001: Alloc cmd (qedf_alloc_cmd) */
+	/* ID: 002: Initiate ABTS (qedf_initiate_abts) */
+	/* ID: 003: For RRQ (qedf_process_abts_compl) */
 	struct kref refcount;
 	struct qedf_cmd_mgr *cmd_mgr;
 	struct io_bdt *bd_tbl;
 	struct delayed_work timeout_work;
 	struct completion tm_done;
+	struct completion abts_done;
+	struct completion cleanup_done;
 	struct e4_fcoe_task_context *task;
 	struct fcoe_task_params *task_params;
 	struct scsi_sgl_task_params *sgl_task_params;
 	int idx;
+	int lun;
 	/*
 	 * Need to allocate enough room for both sense data and FCP response data
 	 * which has a max length of 8 bytes according to spec.

@@ -155,9 +173,9 @@ struct qedf_ioreq {
 	int fp_idx;
-	unsigned int cpu;
+	unsigned int int_cpu;
-#define QEDF_IOREQ_SLOW_SGE	0
-#define QEDF_IOREQ_SINGLE_SGE	1
-#define QEDF_IOREQ_FAST_SGE	2
+#define QEDF_IOREQ_UNKNOWN_SGE	1
+#define QEDF_IOREQ_SLOW_SGE	2
+#define QEDF_IOREQ_FAST_SGE	3
 	u8 sge_type;
 	struct delayed_work rrq_work;

@@ -172,6 +190,8 @@ struct qedf_ioreq {
 	 * during some form of error processing.
 	 */
 	bool return_scsi_cmd_on_abts;
+
+	unsigned int alloc;
 };

 extern struct workqueue_struct *qedf_io_wq;

@@ -181,7 +201,10 @@ struct qedf_rport {
 #define QEDF_RPORT_SESSION_READY		1
 #define QEDF_RPORT_UPLOADING_CONNECTION		2
 #define QEDF_RPORT_IN_RESET			3
+#define QEDF_RPORT_IN_LUN_RESET			4
+#define QEDF_RPORT_IN_TARGET_RESET		5
 	unsigned long flags;
+	int lun_reset_lun;
 	unsigned long retry_delay_timestamp;
 	struct fc_rport *rport;
 	struct fc_rport_priv *rdata;

@@ -191,6 +214,7 @@ struct qedf_rport {
 	void __iomem *p_doorbell;
 	/* Send queue management */
 	atomic_t free_sqes;
+	atomic_t ios_to_queue;
 	atomic_t num_active_ios;
 	struct fcoe_wqe *sq;
 	dma_addr_t sq_dma;

@@ -295,8 +319,6 @@ struct qedf_ctx {
 #define QEDF_DCBX_PENDING	0
 #define QEDF_DCBX_DONE		1
 	atomic_t dcbx;
-	uint16_t max_scsi_xid;
-	uint16_t max_els_xid;
 #define QEDF_NULL_VLAN_ID	-1
 #define QEDF_FALLBACK_VLAN	1002
 #define QEDF_DEFAULT_PRIO	3

@@ -371,7 +393,6 @@ struct qedf_ctx {

 	u32 slow_sge_ios;
 	u32 fast_sge_ios;
-	u32 single_sge_ios;

 	uint8_t *grcdump;
 	uint32_t grcdump_size;

@@ -396,6 +417,8 @@ struct qedf_ctx {
 	u8 target_resets;
 	u8 task_set_fulls;
 	u8 busy;
+	/* Used for flush routine */
+	struct mutex flush_mutex;
 };

 struct io_bdt {

@@ -435,6 +458,12 @@ static inline void qedf_stop_all_io(struct qedf_ctx *qedf)
 /*
  * Externs
  */
+
+/*
+ * (QEDF_LOG_NPIV | QEDF_LOG_SESS | QEDF_LOG_LPORT | QEDF_LOG_ELS | QEDF_LOG_MQ
+ * | QEDF_LOG_IO | QEDF_LOG_UNSOL | QEDF_LOG_SCSI_TM | QEDF_LOG_MP_REQ |
+ * QEDF_LOG_EVT | QEDF_LOG_CONN | QEDF_LOG_DISC | QEDF_LOG_INFO)
+ */
+#define QEDF_DEFAULT_LOG_MASK	0x3CFB6
 extern const struct qed_fcoe_ops *qed_ops;
 extern uint qedf_dump_frames;

@@ -494,7 +523,7 @@ extern void qedf_set_vlan_id(struct qedf_ctx *qedf, int vlan_id);
 extern void qedf_create_sysfs_ctx_attr(struct qedf_ctx *qedf);
 extern void qedf_remove_sysfs_ctx_attr(struct qedf_ctx *qedf);
 extern void qedf_capture_grc_dump(struct qedf_ctx *qedf);
-extern void qedf_wait_for_upload(struct qedf_ctx *qedf);
+bool qedf_wait_for_upload(struct qedf_ctx *qedf);
 extern void qedf_process_unsol_compl(struct qedf_ctx *qedf, uint16_t que_idx,
 	struct fcoe_cqe *cqe);
 extern void qedf_restart_rport(struct qedf_rport *fcport);

@@ -508,6 +537,8 @@ extern void qedf_get_protocol_tlv_data(void *dev, void *data);
 extern void qedf_fp_io_handler(struct work_struct *work);
 extern void qedf_get_generic_tlv_data(void *dev, struct qed_generic_tlvs *data);
 extern void qedf_wq_grcdump(struct work_struct *work);
+void qedf_stag_change_work(struct work_struct *work);
+void qedf_ctx_soft_reset(struct fc_lport *lport);

 #define FCOE_WORD_TO_BYTE	4
 #define QEDF_MAX_TASK_NUM	0xFFFF

@@ -15,10 +15,6 @@ qedf_dbg_err(struct qedf_dbg_ctx *qedf, const char *func, u32 line,
 {
 	va_list va;
 	struct va_format vaf;
-	char nfunc[32];
-
-	memset(nfunc, 0, sizeof(nfunc));
-	memcpy(nfunc, func, sizeof(nfunc) - 1);

 	va_start(va, fmt);

@@ -27,9 +23,9 @@ qedf_dbg_err(struct qedf_dbg_ctx *qedf, const char *func, u32 line,

 	if (likely(qedf) && likely(qedf->pdev))
 		pr_err("[%s]:[%s:%d]:%d: %pV", dev_name(&(qedf->pdev->dev)),
-			nfunc, line, qedf->host_no, &vaf);
+			func, line, qedf->host_no, &vaf);
 	else
-		pr_err("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+		pr_err("[0000:00:00.0]:[%s:%d]: %pV", func, line, &vaf);

 	va_end(va);
 }
@@ -40,10 +36,6 @@ qedf_dbg_warn(struct qedf_dbg_ctx *qedf, const char *func, u32 line,
 {
 	va_list va;
 	struct va_format vaf;
-	char nfunc[32];
-
-	memset(nfunc, 0, sizeof(nfunc));
-	memcpy(nfunc, func, sizeof(nfunc) - 1);

 	va_start(va, fmt);

@@ -55,9 +47,9 @@ qedf_dbg_warn(struct qedf_dbg_ctx *qedf, const char *func, u32 line,

 	if (likely(qedf) && likely(qedf->pdev))
 		pr_warn("[%s]:[%s:%d]:%d: %pV", dev_name(&(qedf->pdev->dev)),
-			nfunc, line, qedf->host_no, &vaf);
+			func, line, qedf->host_no, &vaf);
 	else
-		pr_warn("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+		pr_warn("[0000:00:00.0]:[%s:%d]: %pV", func, line, &vaf);

 ret:
 	va_end(va);
@@ -69,10 +61,6 @@ qedf_dbg_notice(struct qedf_dbg_ctx *qedf, const char *func, u32 line,
 {
 	va_list va;
 	struct va_format vaf;
-	char nfunc[32];
-
-	memset(nfunc, 0, sizeof(nfunc));
-	memcpy(nfunc, func, sizeof(nfunc) - 1);

 	va_start(va, fmt);

@@ -84,10 +72,10 @@ qedf_dbg_notice(struct qedf_dbg_ctx *qedf, const char *func, u32 line,

 	if (likely(qedf) && likely(qedf->pdev))
 		pr_notice("[%s]:[%s:%d]:%d: %pV",
-			  dev_name(&(qedf->pdev->dev)), nfunc, line,
+			  dev_name(&(qedf->pdev->dev)), func, line,
 			  qedf->host_no, &vaf);
 	else
-		pr_notice("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+		pr_notice("[0000:00:00.0]:[%s:%d]: %pV", func, line, &vaf);

 ret:
 	va_end(va);
@@ -99,10 +87,6 @@ qedf_dbg_info(struct qedf_dbg_ctx *qedf, const char *func, u32 line,
 {
 	va_list va;
 	struct va_format vaf;
-	char nfunc[32];
-
-	memset(nfunc, 0, sizeof(nfunc));
-	memcpy(nfunc, func, sizeof(nfunc) - 1);

 	va_start(va, fmt);

@@ -114,9 +98,9 @@ qedf_dbg_info(struct qedf_dbg_ctx *qedf, const char *func, u32 line,

 	if (likely(qedf) && likely(qedf->pdev))
 		pr_info("[%s]:[%s:%d]:%d: %pV", dev_name(&(qedf->pdev->dev)),
-			nfunc, line, qedf->host_no, &vaf);
+			func, line, qedf->host_no, &vaf);
 	else
-		pr_info("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+		pr_info("[0000:00:00.0]:[%s:%d]: %pV", func, line, &vaf);

 ret:
 	va_end(va);

@@ -293,6 +293,33 @@ qedf_dbg_io_trace_open(struct inode *inode, struct file *file)
 	return single_open(file, qedf_io_trace_show, qedf);
 }

+/* Based on fip_state enum from libfcoe.h */
+static char *fip_state_names[] = {
+	"FIP_ST_DISABLED",
+	"FIP_ST_LINK_WAIT",
+	"FIP_ST_AUTO",
+	"FIP_ST_NON_FIP",
+	"FIP_ST_ENABLED",
+	"FIP_ST_VNMP_START",
+	"FIP_ST_VNMP_PROBE1",
+	"FIP_ST_VNMP_PROBE2",
+	"FIP_ST_VNMP_CLAIM",
+	"FIP_ST_VNMP_UP",
+};
+
+/* Based on fc_rport_state enum from libfc.h */
+static char *fc_rport_state_names[] = {
+	"RPORT_ST_INIT",
+	"RPORT_ST_FLOGI",
+	"RPORT_ST_PLOGI_WAIT",
+	"RPORT_ST_PLOGI",
+	"RPORT_ST_PRLI",
+	"RPORT_ST_RTV",
+	"RPORT_ST_READY",
+	"RPORT_ST_ADISC",
+	"RPORT_ST_DELETE",
+};
+
 static int
 qedf_driver_stats_show(struct seq_file *s, void *unused)
 {
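The debugfs code that follows indexes these name tables directly with the current state value. A small userspace sketch of a bounds-checked lookup (the `fip_state_name()` helper is hypothetical, added here for illustration; the patch indexes the array directly):

```c
#include <assert.h>
#include <string.h>

/* Same table as the patch, based on the fip_state enum from libfcoe.h. */
static const char *const fip_state_names[] = {
	"FIP_ST_DISABLED", "FIP_ST_LINK_WAIT", "FIP_ST_AUTO", "FIP_ST_NON_FIP",
	"FIP_ST_ENABLED", "FIP_ST_VNMP_START", "FIP_ST_VNMP_PROBE1",
	"FIP_ST_VNMP_PROBE2", "FIP_ST_VNMP_CLAIM", "FIP_ST_VNMP_UP",
};

/* Hypothetical defensive lookup: returns "UNKNOWN" for any state value
 * newer than the table, instead of reading past the end of the array. */
static const char *fip_state_name(unsigned int state)
{
	if (state >= sizeof(fip_state_names) / sizeof(fip_state_names[0]))
		return "UNKNOWN";
	return fip_state_names[state];
}
```

Keeping such tables in sync with the enum they shadow is the main maintenance hazard; the bounds check turns a stale table into a harmless "UNKNOWN" rather than an out-of-bounds read.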
@@ -300,10 +327,28 @@ qedf_driver_stats_show(struct seq_file *s, void *unused)
 	struct qedf_rport *fcport;
 	struct fc_rport_priv *rdata;

+	seq_printf(s, "Host WWNN/WWPN: %016llx/%016llx\n",
+		   qedf->wwnn, qedf->wwpn);
+	seq_printf(s, "Host NPortID: %06x\n", qedf->lport->port_id);
+	seq_printf(s, "Link State: %s\n", atomic_read(&qedf->link_state) ?
+	    "Up" : "Down");
+	seq_printf(s, "Logical Link State: %s\n", qedf->lport->link_up ?
+	    "Up" : "Down");
+	seq_printf(s, "FIP state: %s\n", fip_state_names[qedf->ctlr.state]);
+	seq_printf(s, "FIP VLAN ID: %d\n", qedf->vlan_id & 0xfff);
+	seq_printf(s, "FIP 802.1Q Priority: %d\n", qedf->prio);
+	if (qedf->ctlr.sel_fcf) {
+		seq_printf(s, "FCF WWPN: %016llx\n",
+			   qedf->ctlr.sel_fcf->switch_name);
+		seq_printf(s, "FCF MAC: %pM\n", qedf->ctlr.sel_fcf->fcf_mac);
+	} else {
+		seq_puts(s, "FCF not selected\n");
+	}
+
 	seq_puts(s, "\nSGE stats:\n\n");
 	seq_printf(s, "cmg_mgr free io_reqs: %d\n",
 	    atomic_read(&qedf->cmd_mgr->free_list_cnt));
 	seq_printf(s, "slow SGEs: %d\n", qedf->slow_sge_ios);
 	seq_printf(s, "single SGEs: %d\n", qedf->single_sge_ios);
 	seq_printf(s, "fast SGEs: %d\n\n", qedf->fast_sge_ios);

 	seq_puts(s, "Offloaded ports:\n\n");
@@ -313,9 +358,12 @@ qedf_driver_stats_show(struct seq_file *s, void *unused)
 		rdata = fcport->rdata;
 		if (rdata == NULL)
 			continue;
-		seq_printf(s, "%06x: free_sqes: %d, num_active_ios: %d\n",
-		    rdata->ids.port_id, atomic_read(&fcport->free_sqes),
-		    atomic_read(&fcport->num_active_ios));
+		seq_printf(s, "%016llx/%016llx/%06x: state=%s, free_sqes=%d, num_active_ios=%d\n",
+		    rdata->rport->node_name, rdata->rport->port_name,
+		    rdata->ids.port_id,
+		    fc_rport_state_names[rdata->rp_state],
+		    atomic_read(&fcport->free_sqes),
+		    atomic_read(&fcport->num_active_ios));
 	}
 	rcu_read_unlock();

@@ -361,7 +409,6 @@ qedf_dbg_clear_stats_cmd_write(struct file *filp,

 	/* Clear stat counters exposed by 'stats' node */
 	qedf->slow_sge_ios = 0;
-	qedf->single_sge_ios = 0;
 	qedf->fast_sge_ios = 0;

 	return count;

@@ -23,8 +23,6 @@ static int qedf_initiate_els(struct qedf_rport *fcport, unsigned int op,
 	int rc = 0;
 	uint32_t did, sid;
 	uint16_t xid;
-	uint32_t start_time = jiffies / HZ;
-	uint32_t current_time;
 	struct fcoe_wqe *sqe;
 	unsigned long flags;
 	u16 sqe_idx;

@@ -59,18 +57,12 @@ static int qedf_initiate_els(struct qedf_rport *fcport, unsigned int op,
 		goto els_err;
 	}

-retry_els:
 	els_req = qedf_alloc_cmd(fcport, QEDF_ELS);
 	if (!els_req) {
-		current_time = jiffies / HZ;
-		if ((current_time - start_time) > 10) {
-			QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS,
-				   "els: Failed els 0x%x\n", op);
-			rc = -ENOMEM;
-			goto els_err;
-		}
-		mdelay(20 * USEC_PER_MSEC);
-		goto retry_els;
+		QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_ELS,
+			  "Failed to alloc ELS request 0x%x\n", op);
+		rc = -ENOMEM;
+		goto els_err;
 	}

 	QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS, "initiate_els els_req = "

@ -143,6 +135,8 @@ static int qedf_initiate_els(struct qedf_rport *fcport, unsigned int op,
|
|||
QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS, "Ringing doorbell for ELS "
|
||||
"req\n");
|
||||
qedf_ring_doorbell(fcport);
|
||||
set_bit(QEDF_CMD_OUTSTANDING, &els_req->flags);
|
||||
|
||||
spin_unlock_irqrestore(&fcport->rport_lock, flags);
|
||||
els_err:
|
||||
return rc;
|
||||
|
@ -151,21 +145,16 @@ static int qedf_initiate_els(struct qedf_rport *fcport, unsigned int op,
|
|||
void qedf_process_els_compl(struct qedf_ctx *qedf, struct fcoe_cqe *cqe,
|
||||
struct qedf_ioreq *els_req)
|
||||
{
|
||||
struct fcoe_task_context *task_ctx;
|
||||
struct scsi_cmnd *sc_cmd;
|
||||
uint16_t xid;
|
||||
struct fcoe_cqe_midpath_info *mp_info;
|
||||
|
||||
QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS, "Entered with xid = 0x%x"
|
||||
" cmd_type = %d.\n", els_req->xid, els_req->cmd_type);
|
||||
|
||||
clear_bit(QEDF_CMD_OUTSTANDING, &els_req->flags);
|
||||
|
||||
/* Kill the ELS timer */
|
||||
cancel_delayed_work(&els_req->timeout_work);
|
||||
|
||||
xid = els_req->xid;
|
||||
task_ctx = qedf_get_task_mem(&qedf->tasks, xid);
|
||||
sc_cmd = els_req->sc_cmd;
|
||||
|
||||
/* Get ELS response length from CQE */
|
||||
mp_info = &cqe->cqe_info.midpath_info;
|
||||
els_req->mp_req.resp_len = mp_info->data_placement_size;
|
||||
|
@ -205,8 +194,12 @@ static void qedf_rrq_compl(struct qedf_els_cb_arg *cb_arg)
|
|||
" orig xid = 0x%x, rrq_xid = 0x%x, refcount=%d\n",
|
||||
orig_io_req, orig_io_req->xid, rrq_req->xid, refcount);
|
||||
|
||||
/* This should return the aborted io_req to the command pool */
|
||||
if (orig_io_req)
|
||||
/*
|
||||
* This should return the aborted io_req to the command pool. Note that
|
||||
* we need to check the refcound in case the original request was
|
||||
* flushed but we get a completion on this xid.
|
||||
*/
|
||||
if (orig_io_req && refcount > 0)
|
||||
kref_put(&orig_io_req->refcount, qedf_release_cmd);
|
||||
|
||||
out_free:
|
||||
|
@ -233,6 +226,7 @@ int qedf_send_rrq(struct qedf_ioreq *aborted_io_req)
|
|||
uint32_t sid;
|
||||
uint32_t r_a_tov;
|
||||
int rc;
|
||||
int refcount;
|
||||
|
||||
if (!aborted_io_req) {
|
||||
QEDF_ERR(NULL, "abort_io_req is NULL.\n");
|
||||
|
@ -241,6 +235,15 @@ int qedf_send_rrq(struct qedf_ioreq *aborted_io_req)
|
|||
|
||||
fcport = aborted_io_req->fcport;
|
||||
|
||||
if (!fcport) {
|
||||
refcount = kref_read(&aborted_io_req->refcount);
|
||||
QEDF_ERR(NULL,
|
||||
"RRQ work was queued prior to a flush xid=0x%x, refcount=%d.\n",
|
||||
aborted_io_req->xid, refcount);
|
||||
kref_put(&aborted_io_req->refcount, qedf_release_cmd);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* Check that fcport is still offloaded */
|
||||
if (!test_bit(QEDF_RPORT_SESSION_READY, &fcport->flags)) {
|
||||
QEDF_ERR(NULL, "fcport is no longer offloaded.\n");
|
||||
|
@ -253,6 +256,19 @@ int qedf_send_rrq(struct qedf_ioreq *aborted_io_req)
|
|||
}
|
||||
|
||||
qedf = fcport->qedf;
|
||||
|
||||
/*
|
||||
* Sanity check that we can send a RRQ to make sure that refcount isn't
|
||||
* 0
|
||||
*/
|
||||
refcount = kref_read(&aborted_io_req->refcount);
|
||||
if (refcount != 1) {
|
||||
QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_ELS,
|
||||
"refcount for xid=%x io_req=%p refcount=%d is not 1.\n",
|
||||
aborted_io_req->xid, aborted_io_req, refcount);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
lport = qedf->lport;
|
||||
sid = fcport->sid;
|
||||
r_a_tov = lport->r_a_tov;
|
||||
|
@ -335,32 +351,49 @@ void qedf_restart_rport(struct qedf_rport *fcport)
|
|||
struct fc_lport *lport;
|
||||
struct fc_rport_priv *rdata;
|
||||
u32 port_id;
|
||||
unsigned long flags;
|
||||
|
||||
if (!fcport)
|
||||
return;
|
||||
|
||||
spin_lock_irqsave(&fcport->rport_lock, flags);
|
||||
if (test_bit(QEDF_RPORT_IN_RESET, &fcport->flags) ||
|
||||
!test_bit(QEDF_RPORT_SESSION_READY, &fcport->flags) ||
|
||||
test_bit(QEDF_RPORT_UPLOADING_CONNECTION, &fcport->flags)) {
|
||||
QEDF_ERR(&(fcport->qedf->dbg_ctx), "fcport %p already in reset or not offloaded.\n",
|
||||
fcport);
|
||||
spin_unlock_irqrestore(&fcport->rport_lock, flags);
|
||||
return;
|
||||
}
|
||||
|
||||
/* Set that we are now in reset */
|
||||
set_bit(QEDF_RPORT_IN_RESET, &fcport->flags);
|
||||
spin_unlock_irqrestore(&fcport->rport_lock, flags);
|
||||
|
||||
rdata = fcport->rdata;
|
||||
if (rdata) {
|
||||
if (rdata && !kref_get_unless_zero(&rdata->kref)) {
|
||||
fcport->rdata = NULL;
|
||||
rdata = NULL;
|
||||
}
|
||||
|
||||
if (rdata && rdata->rp_state == RPORT_ST_READY) {
|
||||
lport = fcport->qedf->lport;
|
||||
port_id = rdata->ids.port_id;
|
||||
QEDF_ERR(&(fcport->qedf->dbg_ctx),
|
||||
"LOGO port_id=%x.\n", port_id);
|
||||
fc_rport_logoff(rdata);
|
||||
kref_put(&rdata->kref, fc_rport_destroy);
|
||||
mutex_lock(&lport->disc.disc_mutex);
|
||||
/* Recreate the rport and log back in */
|
||||
rdata = fc_rport_create(lport, port_id);
|
||||
if (rdata)
|
||||
if (rdata) {
|
||||
mutex_unlock(&lport->disc.disc_mutex);
|
||||
fc_rport_login(rdata);
|
||||
fcport->rdata = rdata;
|
||||
} else {
|
||||
mutex_unlock(&lport->disc.disc_mutex);
|
||||
fcport->rdata = NULL;
|
||||
}
|
||||
}
|
||||
clear_bit(QEDF_RPORT_IN_RESET, &fcport->flags);
|
||||
}
|
||||
|
@ -569,7 +602,7 @@ static int qedf_send_srr(struct qedf_ioreq *orig_io_req, u32 offset, u8 r_ctl)
|
|||
struct qedf_rport *fcport;
|
||||
struct fc_lport *lport;
|
||||
struct qedf_els_cb_arg *cb_arg = NULL;
|
||||
u32 sid, r_a_tov;
|
||||
u32 r_a_tov;
|
||||
int rc;
|
||||
|
||||
if (!orig_io_req) {
|
||||
|
@ -595,7 +628,6 @@ static int qedf_send_srr(struct qedf_ioreq *orig_io_req, u32 offset, u8 r_ctl)
|
|||
|
||||
qedf = fcport->qedf;
|
||||
lport = qedf->lport;
|
||||
sid = fcport->sid;
|
||||
r_a_tov = lport->r_a_tov;
|
||||
|
||||
QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS, "Sending SRR orig_io=%p, "
|
||||
|
|
|
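The RRQ hunks above gate sending an RRQ on the aborted request's reference count being exactly 1 (the work item's own reference), since any other value means the request was either already freed or is still owned elsewhere. A minimal sketch of that policy, with a hypothetical helper name (not driver code):

```c
#include <stdbool.h>

/* Sketch of the qedf_send_rrq() sanity check above: an RRQ may only be
 * sent when the caller holds the sole remaining reference on the
 * aborted io_req. */
static bool rrq_send_allowed(int refcount)
{
	/* refcount == 0: request already released (use-after-free risk);
	 * refcount > 1: another path (e.g. a flush) still owns it. */
	return refcount == 1;
}
```

The same idea drives the `refcount > 0` test added in `qedf_rrq_compl()`: a completion for an already-flushed xid must not drop a reference that no longer exists.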
@@ -19,17 +19,16 @@ void qedf_fcoe_send_vlan_req(struct qedf_ctx *qedf)
 {
 	struct sk_buff *skb;
 	char *eth_fr;
-	int fr_len;
 	struct fip_vlan *vlan;
 #define MY_FIP_ALL_FCF_MACS        ((__u8[6]) { 1, 0x10, 0x18, 1, 0, 2 })
 	static u8 my_fcoe_all_fcfs[ETH_ALEN] = MY_FIP_ALL_FCF_MACS;
 	unsigned long flags = 0;
+	int rc = -1;
 
 	skb = dev_alloc_skb(sizeof(struct fip_vlan));
 	if (!skb)
 		return;
 
-	fr_len = sizeof(*vlan);
 	eth_fr = (char *)skb->data;
 	vlan = (struct fip_vlan *)eth_fr;
@@ -68,7 +67,13 @@ void qedf_fcoe_send_vlan_req(struct qedf_ctx *qedf)
 	}
 
 	set_bit(QED_LL2_XMIT_FLAGS_FIP_DISCOVERY, &flags);
-	qed_ops->ll2->start_xmit(qedf->cdev, skb, flags);
+	rc = qed_ops->ll2->start_xmit(qedf->cdev, skb, flags);
+	if (rc) {
+		QEDF_ERR(&qedf->dbg_ctx, "start_xmit failed rc = %d.\n", rc);
+		kfree_skb(skb);
+		return;
+	}
+
 }
 
 static void qedf_fcoe_process_vlan_resp(struct qedf_ctx *qedf,
@@ -95,6 +100,12 @@ static void qedf_fcoe_process_vlan_resp(struct qedf_ctx *qedf,
 		rlen -= dlen;
 	}
 
+	if (atomic_read(&qedf->link_state) == QEDF_LINK_DOWN) {
+		QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC,
+			  "Dropping VLAN response as link is down.\n");
+		return;
+	}
+
 	QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_DISC, "VLAN response, "
 		  "vid=0x%x.\n", vid);
 
@@ -114,6 +125,7 @@ void qedf_fip_send(struct fcoe_ctlr *fip, struct sk_buff *skb)
 	struct fip_header *fiph;
 	u16 op, vlan_tci = 0;
 	u8 sub;
+	int rc = -1;
 
 	if (!test_bit(QEDF_LL2_STARTED, &qedf->flags)) {
 		QEDF_WARN(&(qedf->dbg_ctx), "LL2 not started\n");
@@ -142,9 +154,16 @@ void qedf_fip_send(struct fcoe_ctlr *fip, struct sk_buff *skb)
 		print_hex_dump(KERN_WARNING, "fip ", DUMP_PREFIX_OFFSET, 16, 1,
 		    skb->data, skb->len, false);
 
-	qed_ops->ll2->start_xmit(qedf->cdev, skb, 0);
+	rc = qed_ops->ll2->start_xmit(qedf->cdev, skb, 0);
+	if (rc) {
+		QEDF_ERR(&qedf->dbg_ctx, "start_xmit failed rc = %d.\n", rc);
+		kfree_skb(skb);
+		return;
+	}
 }
 
+static u8 fcoe_all_enode[ETH_ALEN] = FIP_ALL_ENODE_MACS;
+
 /* Process incoming FIP frames. */
 void qedf_fip_recv(struct qedf_ctx *qedf, struct sk_buff *skb)
 {
@@ -157,20 +176,37 @@ void qedf_fip_recv(struct qedf_ctx *qedf, struct sk_buff *skb)
 	size_t rlen, dlen;
 	u16 op;
 	u8 sub;
-	bool do_reset = false;
+	bool fcf_valid = false;
+	/* Default is to handle CVL regardless of fabric id descriptor */
+	bool fabric_id_valid = true;
+	bool fc_wwpn_valid = false;
+	u64 switch_name;
+	u16 vlan = 0;
 
 	eth_hdr = (struct ethhdr *)skb_mac_header(skb);
 	fiph = (struct fip_header *) ((void *)skb->data + 2 * ETH_ALEN + 2);
 	op = ntohs(fiph->fip_op);
 	sub = fiph->fip_subcode;
 
-	QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_LL2, "FIP frame received: "
-	    "skb=%p fiph=%p source=%pM op=%x sub=%x", skb, fiph,
-	    eth_hdr->h_source, op, sub);
+	QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_LL2,
+		  "FIP frame received: skb=%p fiph=%p source=%pM destn=%pM op=%x sub=%x vlan=%04x",
+		  skb, fiph, eth_hdr->h_source, eth_hdr->h_dest, op,
+		  sub, vlan);
 	if (qedf_dump_frames)
 		print_hex_dump(KERN_WARNING, "fip ", DUMP_PREFIX_OFFSET, 16, 1,
 		    skb->data, skb->len, false);
 
+	if (!ether_addr_equal(eth_hdr->h_dest, qedf->mac) &&
+	    !ether_addr_equal(eth_hdr->h_dest, fcoe_all_enode) &&
+	    !ether_addr_equal(eth_hdr->h_dest, qedf->data_src_addr)) {
+		QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_LL2,
+			  "Dropping FIP type 0x%x pkt due to destination MAC mismatch dest_mac=%pM ctlr.dest_addr=%pM data_src_addr=%pM.\n",
+			  op, eth_hdr->h_dest, qedf->mac,
+			  qedf->data_src_addr);
+		kfree_skb(skb);
+		return;
+	}
+
 	/* Handle FIP VLAN resp in the driver */
 	if (op == FIP_OP_VLAN && sub == FIP_SC_VL_NOTE) {
 		qedf_fcoe_process_vlan_resp(qedf, skb);
@@ -199,25 +235,36 @@ void qedf_fip_recv(struct qedf_ctx *qedf, struct sk_buff *skb)
 		switch (desc->fip_dtype) {
 		case FIP_DT_MAC:
 			mp = (struct fip_mac_desc *)desc;
-			QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_LL2,
-			    "fd_mac=%pM\n", mp->fd_mac);
+			QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC,
+				  "Switch fd_mac=%pM.\n", mp->fd_mac);
 			if (ether_addr_equal(mp->fd_mac,
 			    qedf->ctlr.sel_fcf->fcf_mac))
-				do_reset = true;
+				fcf_valid = true;
 			break;
 		case FIP_DT_NAME:
 			wp = (struct fip_wwn_desc *)desc;
-			QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_LL2,
-			    "fc_wwpn=%016llx.\n",
-			    get_unaligned_be64(&wp->fd_wwn));
+			switch_name = get_unaligned_be64(&wp->fd_wwn);
+			QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC,
+				  "Switch fd_wwn=%016llx fcf_switch_name=%016llx.\n",
+				  switch_name,
+				  qedf->ctlr.sel_fcf->switch_name);
+			if (switch_name ==
+			    qedf->ctlr.sel_fcf->switch_name)
+				fc_wwpn_valid = true;
 			break;
 		case FIP_DT_VN_ID:
 			vp = (struct fip_vn_desc *)desc;
-			QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_LL2,
-			    "fd_fc_id=%x.\n", ntoh24(vp->fd_fc_id));
-			if (ntoh24(vp->fd_fc_id) ==
+			QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC,
+				  "vx_port fd_fc_id=%x fd_mac=%pM.\n",
+				  ntoh24(vp->fd_fc_id), vp->fd_mac);
+			/* Check vx_port fabric ID */
+			if (ntoh24(vp->fd_fc_id) !=
 			    qedf->lport->port_id)
-				do_reset = true;
+				fabric_id_valid = false;
+			/* Check vx_port MAC */
+			if (!ether_addr_equal(vp->fd_mac,
+			    qedf->data_src_addr))
+				fabric_id_valid = false;
 			break;
 		default:
 			/* Ignore anything else */
@@ -227,13 +274,11 @@ void qedf_fip_recv(struct qedf_ctx *qedf, struct sk_buff *skb)
 			rlen -= dlen;
 		}
 
-		QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_LL2,
-		    "do_reset=%d.\n", do_reset);
-		if (do_reset) {
-			fcoe_ctlr_link_down(&qedf->ctlr);
-			qedf_wait_for_upload(qedf);
-			fcoe_ctlr_link_up(&qedf->ctlr);
-		}
+		QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC,
+			  "fcf_valid=%d fabric_id_valid=%d fc_wwpn_valid=%d.\n",
+			  fcf_valid, fabric_id_valid, fc_wwpn_valid);
+		if (fcf_valid && fabric_id_valid && fc_wwpn_valid)
+			qedf_ctx_soft_reset(qedf->lport);
 		kfree_skb(skb);
 	} else {
 		/* Everything else is handled by libfcoe */
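The FIP clear-virtual-link handling above replaces a single `do_reset` flag with three independent validity checks that must all hold before a soft reset is triggered. A minimal sketch of that decision (illustrative helper name, not driver code):

```c
#include <stdbool.h>

/* Sketch of the CVL policy in qedf_fip_recv() above: reset only when the
 * FCF MAC descriptor matched the selected FCF (fcf_valid), the switch WWN
 * matched (fc_wwpn_valid), and no vx_port descriptor contradicted our
 * fabric id or MAC (fabric_id_valid defaults to true when the descriptor
 * is absent). */
static bool cvl_should_reset(bool fcf_valid, bool fabric_id_valid,
			     bool fc_wwpn_valid)
{
	return fcf_valid && fabric_id_valid && fc_wwpn_valid;
}
```

This is stricter than the old behavior, where a matching FCF MAC alone could force a link bounce.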
[File diff suppressed because it is too large]
@@ -124,21 +124,24 @@ static bool qedf_initiate_fipvlan_req(struct qedf_ctx *qedf)
 {
 	int rc;
 
-	if (atomic_read(&qedf->link_state) != QEDF_LINK_UP) {
-		QEDF_ERR(&(qedf->dbg_ctx), "Link not up.\n");
-		return false;
-	}
-
 	while (qedf->fipvlan_retries--) {
+		/* This is to catch if link goes down during fipvlan retries */
+		if (atomic_read(&qedf->link_state) == QEDF_LINK_DOWN) {
+			QEDF_ERR(&qedf->dbg_ctx, "Link not up.\n");
+			return false;
+		}
+
 		if (qedf->vlan_id > 0)
 			return true;
+
 		QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_DISC,
 			  "Retry %d.\n", qedf->fipvlan_retries);
 		init_completion(&qedf->fipvlan_compl);
 		qedf_fcoe_send_vlan_req(qedf);
 		rc = wait_for_completion_timeout(&qedf->fipvlan_compl,
 		    1 * HZ);
-		if (rc > 0) {
+		if (rc > 0 &&
+		    (atomic_read(&qedf->link_state) == QEDF_LINK_UP)) {
 			fcoe_ctlr_link_up(&qedf->ctlr);
 			return true;
 		}
@@ -153,12 +156,19 @@ static void qedf_handle_link_update(struct work_struct *work)
 	    container_of(work, struct qedf_ctx, link_update.work);
 	int rc;
 
-	QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_DISC, "Entered.\n");
+	QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC, "Entered. link_state=%d.\n",
+		  atomic_read(&qedf->link_state));
 
 	if (atomic_read(&qedf->link_state) == QEDF_LINK_UP) {
 		rc = qedf_initiate_fipvlan_req(qedf);
 		if (rc)
 			return;
+
+		if (atomic_read(&qedf->link_state) != QEDF_LINK_UP) {
+			qedf->vlan_id = 0;
+			return;
+		}
+
 		/*
 		 * If we get here then we never received a response to our
 		 * fip vlan request so set the vlan_id to the default and
@@ -185,7 +195,9 @@ static void qedf_handle_link_update(struct work_struct *work)
 		QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_DISC,
 		    "Calling fcoe_ctlr_link_down().\n");
 		fcoe_ctlr_link_down(&qedf->ctlr);
-		qedf_wait_for_upload(qedf);
+		if (qedf_wait_for_upload(qedf) == false)
+			QEDF_ERR(&qedf->dbg_ctx,
+				 "Could not upload all sessions.\n");
 		/* Reset the number of FIP VLAN retries */
 		qedf->fipvlan_retries = qedf_fipvlan_retries;
 	}
@@ -615,50 +627,113 @@ static struct scsi_transport_template *qedf_fc_vport_transport_template;
 static int qedf_eh_abort(struct scsi_cmnd *sc_cmd)
 {
 	struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device));
-	struct fc_rport_libfc_priv *rp = rport->dd_data;
-	struct qedf_rport *fcport;
 	struct fc_lport *lport;
 	struct qedf_ctx *qedf;
 	struct qedf_ioreq *io_req;
+	struct fc_rport_libfc_priv *rp = rport->dd_data;
+	struct fc_rport_priv *rdata;
+	struct qedf_rport *fcport = NULL;
 	int rc = FAILED;
+	int wait_count = 100;
+	int refcount = 0;
 	int rval;
-
-	if (fc_remote_port_chkready(rport)) {
-		QEDF_ERR(NULL, "rport not ready\n");
-		goto out;
-	}
+	int got_ref = 0;
 
 	lport = shost_priv(sc_cmd->device->host);
 	qedf = (struct qedf_ctx *)lport_priv(lport);
 
-	if ((lport->state != LPORT_ST_READY) || !(lport->link_up)) {
-		QEDF_ERR(&(qedf->dbg_ctx), "link not ready.\n");
+	/* rport and tgt are allocated together, so tgt should be non-NULL */
+	fcport = (struct qedf_rport *)&rp[1];
+	rdata = fcport->rdata;
+	if (!rdata || !kref_get_unless_zero(&rdata->kref)) {
+		QEDF_ERR(&qedf->dbg_ctx, "stale rport, sc_cmd=%p\n", sc_cmd);
+		rc = 1;
 		goto out;
 	}
 
-	fcport = (struct qedf_rport *)&rp[1];
-
 	io_req = (struct qedf_ioreq *)sc_cmd->SCp.ptr;
 	if (!io_req) {
-		QEDF_ERR(&(qedf->dbg_ctx), "io_req is NULL.\n");
+		QEDF_ERR(&qedf->dbg_ctx,
+			 "sc_cmd not queued with lld, sc_cmd=%p op=0x%02x, port_id=%06x\n",
+			 sc_cmd, sc_cmd->cmnd[0],
+			 rdata->ids.port_id);
 		rc = SUCCESS;
-		goto out;
+		goto drop_rdata_kref;
 	}
 
-	QEDF_ERR(&(qedf->dbg_ctx), "Aborting io_req sc_cmd=%p xid=0x%x "
-		  "fp_idx=%d.\n", sc_cmd, io_req->xid, io_req->fp_idx);
+	rval = kref_get_unless_zero(&io_req->refcount);	/* ID: 005 */
+	if (rval)
+		got_ref = 1;
+
+	/* If we got a valid io_req, confirm it belongs to this sc_cmd. */
+	if (!rval || io_req->sc_cmd != sc_cmd) {
+		QEDF_ERR(&qedf->dbg_ctx,
			 "Freed/Incorrect io_req, io_req->sc_cmd=%p, sc_cmd=%p, port_id=%06x, bailing out.\n",
+			 io_req->sc_cmd, sc_cmd, rdata->ids.port_id);
+
+		goto drop_rdata_kref;
+	}
+
+	if (fc_remote_port_chkready(rport)) {
+		refcount = kref_read(&io_req->refcount);
+		QEDF_ERR(&qedf->dbg_ctx,
+			 "rport not ready, io_req=%p, xid=0x%x sc_cmd=%p op=0x%02x, refcount=%d, port_id=%06x\n",
+			 io_req, io_req->xid, sc_cmd, sc_cmd->cmnd[0],
+			 refcount, rdata->ids.port_id);
+
+		goto drop_rdata_kref;
+	}
+
+	rc = fc_block_scsi_eh(sc_cmd);
+	if (rc)
+		goto drop_rdata_kref;
+
+	if (test_bit(QEDF_RPORT_UPLOADING_CONNECTION, &fcport->flags)) {
+		QEDF_ERR(&qedf->dbg_ctx,
+			 "Connection uploading, xid=0x%x., port_id=%06x\n",
+			 io_req->xid, rdata->ids.port_id);
+		while (io_req->sc_cmd && (wait_count != 0)) {
+			msleep(100);
+			wait_count--;
+		}
+		if (wait_count) {
+			QEDF_ERR(&qedf->dbg_ctx, "ABTS succeeded\n");
+			rc = SUCCESS;
+		} else {
+			QEDF_ERR(&qedf->dbg_ctx, "ABTS failed\n");
+			rc = FAILED;
+		}
+		goto drop_rdata_kref;
+	}
+
+	if (lport->state != LPORT_ST_READY || !(lport->link_up)) {
+		QEDF_ERR(&qedf->dbg_ctx, "link not ready.\n");
+		goto drop_rdata_kref;
+	}
+
+	QEDF_ERR(&qedf->dbg_ctx,
		 "Aborting io_req=%p sc_cmd=%p xid=0x%x fp_idx=%d, port_id=%06x.\n",
+		 io_req, sc_cmd, io_req->xid, io_req->fp_idx,
+		 rdata->ids.port_id);
 
 	if (qedf->stop_io_on_error) {
 		qedf_stop_all_io(qedf);
 		rc = SUCCESS;
-		goto out;
+		goto drop_rdata_kref;
 	}
 
 	init_completion(&io_req->abts_done);
 	rval = qedf_initiate_abts(io_req, true);
 	if (rval) {
 		QEDF_ERR(&(qedf->dbg_ctx), "Failed to queue ABTS.\n");
-		goto out;
+		/*
+		 * If we fail to queue the ABTS then return this command to
+		 * the SCSI layer as it will own and free the xid
+		 */
+		rc = SUCCESS;
+		qedf_scsi_done(qedf, io_req, DID_ERROR);
+		goto drop_rdata_kref;
 	}
 
 	wait_for_completion(&io_req->abts_done);
@@ -684,38 +759,68 @@ static int qedf_eh_abort(struct scsi_cmnd *sc_cmd)
 		QEDF_ERR(&(qedf->dbg_ctx), "ABTS failed, xid=0x%x.\n",
 		    io_req->xid);
 
+drop_rdata_kref:
+	kref_put(&rdata->kref, fc_rport_destroy);
 out:
+	if (got_ref)
+		kref_put(&io_req->refcount, qedf_release_cmd);
 	return rc;
 }
 
 static int qedf_eh_target_reset(struct scsi_cmnd *sc_cmd)
 {
-	QEDF_ERR(NULL, "TARGET RESET Issued...");
+	QEDF_ERR(NULL, "%d:0:%d:%lld: TARGET RESET Issued...",
+		 sc_cmd->device->host->host_no, sc_cmd->device->id,
+		 sc_cmd->device->lun);
 	return qedf_initiate_tmf(sc_cmd, FCP_TMF_TGT_RESET);
 }
 
 static int qedf_eh_device_reset(struct scsi_cmnd *sc_cmd)
 {
-	QEDF_ERR(NULL, "LUN RESET Issued...\n");
+	QEDF_ERR(NULL, "%d:0:%d:%lld: LUN RESET Issued... ",
		 sc_cmd->device->host->host_no, sc_cmd->device->id,
+		 sc_cmd->device->lun);
 	return qedf_initiate_tmf(sc_cmd, FCP_TMF_LUN_RESET);
 }
 
-void qedf_wait_for_upload(struct qedf_ctx *qedf)
+bool qedf_wait_for_upload(struct qedf_ctx *qedf)
 {
-	while (1) {
+	struct qedf_rport *fcport = NULL;
+	int wait_cnt = 120;
+
+	while (wait_cnt--) {
 		if (atomic_read(&qedf->num_offloads))
-			QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_DISC,
-			    "Waiting for all uploads to complete.\n");
+			QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC,
+				  "Waiting for all uploads to complete num_offloads = 0x%x.\n",
+				  atomic_read(&qedf->num_offloads));
 		else
-			break;
+			return true;
 		msleep(500);
 	}
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(fcport, &qedf->fcports, peers) {
+		if (fcport && test_bit(QEDF_RPORT_SESSION_READY,
+				       &fcport->flags)) {
+			if (fcport->rdata)
+				QEDF_ERR(&qedf->dbg_ctx,
+					 "Waiting for fcport %p portid=%06x.\n",
+					 fcport, fcport->rdata->ids.port_id);
+		} else {
+			QEDF_ERR(&qedf->dbg_ctx,
+				 "Waiting for fcport %p.\n", fcport);
+		}
+	}
+	rcu_read_unlock();
+	return false;
+
+}
 
 /* Performs soft reset of qedf_ctx by simulating a link down/up */
-static void qedf_ctx_soft_reset(struct fc_lport *lport)
+void qedf_ctx_soft_reset(struct fc_lport *lport)
 {
 	struct qedf_ctx *qedf;
+	struct qed_link_output if_link;
 
 	if (lport->vport) {
 		QEDF_ERR(NULL, "Cannot issue host reset on NPIV port.\n");
@@ -726,11 +831,32 @@ static void qedf_ctx_soft_reset(struct fc_lport *lport)
 
 	/* For host reset, essentially do a soft link up/down */
 	atomic_set(&qedf->link_state, QEDF_LINK_DOWN);
+	QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC,
+		  "Queuing link down work.\n");
 	queue_delayed_work(qedf->link_update_wq, &qedf->link_update,
 	    0);
-	qedf_wait_for_upload(qedf);
+
+	if (qedf_wait_for_upload(qedf) == false) {
+		QEDF_ERR(&qedf->dbg_ctx, "Could not upload all sessions.\n");
+		WARN_ON(atomic_read(&qedf->num_offloads));
+	}
+
+	/* Before setting link up query physical link state */
+	qed_ops->common->get_link(qedf->cdev, &if_link);
+	/* Bail if the physical link is not up */
+	if (!if_link.link_up) {
+		QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC,
+			  "Physical link is not up.\n");
+		return;
+	}
+	/* Flush and wait to make sure link down is processed */
+	flush_delayed_work(&qedf->link_update);
+	msleep(500);
+
 	atomic_set(&qedf->link_state, QEDF_LINK_UP);
 	qedf->vlan_id = 0;
+	QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC,
+		  "Queue link up work.\n");
 	queue_delayed_work(qedf->link_update_wq, &qedf->link_update,
 	    0);
 }
@@ -740,22 +866,6 @@ static int qedf_eh_host_reset(struct scsi_cmnd *sc_cmd)
 {
 	struct fc_lport *lport;
 	struct qedf_ctx *qedf;
-	struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device));
-	struct fc_rport_libfc_priv *rp = rport->dd_data;
-	struct qedf_rport *fcport = (struct qedf_rport *)&rp[1];
-	int rval;
-
-	rval = fc_remote_port_chkready(rport);
-
-	if (rval) {
-		QEDF_ERR(NULL, "device_reset rport not ready\n");
-		return FAILED;
-	}
-
-	if (fcport == NULL) {
-		QEDF_ERR(NULL, "device_reset: rport is NULL\n");
-		return FAILED;
-	}
 
 	lport = shost_priv(sc_cmd->device->host);
 	qedf = lport_priv(lport);
@@ -907,8 +1017,10 @@ static int qedf_xmit(struct fc_lport *lport, struct fc_frame *fp)
 		    "Dropping FCoE frame to %06x.\n", ntoh24(fh->fh_d_id));
 		kfree_skb(skb);
 		rdata = fc_rport_lookup(lport, ntoh24(fh->fh_d_id));
-		if (rdata)
+		if (rdata) {
 			rdata->retries = lport->max_rport_retry_count;
+			kref_put(&rdata->kref, fc_rport_destroy);
+		}
 		return -EINVAL;
 	}
 	/* End NPIV filtering */
@@ -1031,7 +1143,12 @@ static int qedf_xmit(struct fc_lport *lport, struct fc_frame *fp)
 	if (qedf_dump_frames)
 		print_hex_dump(KERN_WARNING, "fcoe: ", DUMP_PREFIX_OFFSET, 16,
 		    1, skb->data, skb->len, false);
-	qed_ops->ll2->start_xmit(qedf->cdev, skb, 0);
+	rc = qed_ops->ll2->start_xmit(qedf->cdev, skb, 0);
+	if (rc) {
+		QEDF_ERR(&qedf->dbg_ctx, "start_xmit failed rc = %d.\n", rc);
+		kfree_skb(skb);
+		return rc;
+	}
 
 	return 0;
 }
@@ -1224,6 +1341,8 @@ static void qedf_upload_connection(struct qedf_ctx *qedf,
 static void qedf_cleanup_fcport(struct qedf_ctx *qedf,
 	struct qedf_rport *fcport)
 {
+	struct fc_rport_priv *rdata = fcport->rdata;
+
 	QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_CONN, "Cleaning up portid=%06x.\n",
 	    fcport->rdata->ids.port_id);
 
@@ -1235,6 +1354,7 @@ static void qedf_cleanup_fcport(struct qedf_ctx *qedf,
 	qedf_free_sq(qedf, fcport);
 	fcport->rdata = NULL;
 	fcport->qedf = NULL;
+	kref_put(&rdata->kref, fc_rport_destroy);
 }
 
 /**
@@ -1310,6 +1430,8 @@ static void qedf_rport_event_handler(struct fc_lport *lport,
 			break;
 		}
 
+		/* Initial reference held on entry, so this can't fail */
+		kref_get(&rdata->kref);
 		fcport->rdata = rdata;
 		fcport->rport = rport;
 
@@ -1369,11 +1491,15 @@ static void qedf_rport_event_handler(struct fc_lport *lport,
 		 */
 		fcport = (struct qedf_rport *)&rp[1];
 
+		spin_lock_irqsave(&fcport->rport_lock, flags);
 		/* Only free this fcport if it is offloaded already */
-		if (test_bit(QEDF_RPORT_SESSION_READY, &fcport->flags)) {
-			set_bit(QEDF_RPORT_UPLOADING_CONNECTION, &fcport->flags);
+		if (test_bit(QEDF_RPORT_SESSION_READY, &fcport->flags) &&
+		    !test_bit(QEDF_RPORT_UPLOADING_CONNECTION,
+			      &fcport->flags)) {
+			set_bit(QEDF_RPORT_UPLOADING_CONNECTION,
+				&fcport->flags);
+			spin_unlock_irqrestore(&fcport->rport_lock, flags);
 			qedf_cleanup_fcport(qedf, fcport);
 
 			/*
			 * Remove fcport from the qedf_ctx list of offloaded
			 * ports
@@ -1385,8 +1511,9 @@ static void qedf_rport_event_handler(struct fc_lport *lport,
 			clear_bit(QEDF_RPORT_UPLOADING_CONNECTION,
 			    &fcport->flags);
 			atomic_dec(&qedf->num_offloads);
+		} else {
+			spin_unlock_irqrestore(&fcport->rport_lock, flags);
 		}
-
 		break;
 
 	case RPORT_EV_NONE:
@@ -1498,11 +1625,15 @@ static int qedf_lport_setup(struct qedf_ctx *qedf)
 	fc_set_wwnn(lport, qedf->wwnn);
 	fc_set_wwpn(lport, qedf->wwpn);
 
-	fcoe_libfc_config(lport, &qedf->ctlr, &qedf_lport_template, 0);
+	if (fcoe_libfc_config(lport, &qedf->ctlr, &qedf_lport_template, 0)) {
+		QEDF_ERR(&qedf->dbg_ctx,
+			 "fcoe_libfc_config failed.\n");
+		return -ENOMEM;
+	}
 
 	/* Allocate the exchange manager */
-	fc_exch_mgr_alloc(lport, FC_CLASS_3, qedf->max_scsi_xid + 1,
-			  qedf->max_els_xid, NULL);
+	fc_exch_mgr_alloc(lport, FC_CLASS_3, FCOE_PARAMS_NUM_TASKS,
+			  0xfffe, NULL);
 
 	if (fc_lport_init_stats(lport))
 		return -ENOMEM;
@@ -1625,14 +1756,15 @@ static int qedf_vport_create(struct fc_vport *vport, bool disabled)
 	vport_qedf->wwpn = vn_port->wwpn;
 
 	vn_port->host->transportt = qedf_fc_vport_transport_template;
-	vn_port->host->can_queue = QEDF_MAX_ELS_XID;
+	vn_port->host->can_queue = FCOE_PARAMS_NUM_TASKS;
 	vn_port->host->max_lun = qedf_max_lun;
 	vn_port->host->sg_tablesize = QEDF_MAX_BDS_PER_CMD;
 	vn_port->host->max_cmd_len = QEDF_MAX_CDB_LEN;
 
 	rc = scsi_add_host(vn_port->host, &vport->dev);
 	if (rc) {
-		QEDF_WARN(&(base_qedf->dbg_ctx), "Error adding Scsi_Host.\n");
+		QEDF_WARN(&base_qedf->dbg_ctx,
+			  "Error adding Scsi_Host rc=0x%x.\n", rc);
 		goto err2;
 	}
 
@@ -2155,7 +2287,8 @@ static int qedf_setup_int(struct qedf_ctx *qedf)
 	    QEDF_SIMD_HANDLER_NUM, qedf_simd_int_handler);
 	qedf->int_info.used_cnt = 1;
 
-	QEDF_ERR(&qedf->dbg_ctx, "Only MSI-X supported. Failing probe.\n");
+	QEDF_ERR(&qedf->dbg_ctx,
+		 "Cannot load driver due to a lack of MSI-X vectors.\n");
 	return -EINVAL;
 }
 
@@ -2356,6 +2489,13 @@ static int qedf_ll2_rx(void *cookie, struct sk_buff *skb,
 	struct qedf_ctx *qedf = (struct qedf_ctx *)cookie;
 	struct qedf_skb_work *skb_work;
 
+	if (atomic_read(&qedf->link_state) == QEDF_LINK_DOWN) {
+		QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_LL2,
+			  "Dropping frame as link state is down.\n");
+		kfree_skb(skb);
+		return 0;
+	}
+
 	skb_work = kzalloc(sizeof(struct qedf_skb_work), GFP_ATOMIC);
 	if (!skb_work) {
 		QEDF_WARN(&(qedf->dbg_ctx), "Could not allocate skb_work so "
@@ -2990,6 +3130,8 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
 		goto err0;
 	}
 
+	fc_disc_init(lport);
+
 	/* Initialize qedf_ctx */
 	qedf = lport_priv(lport);
 	qedf->lport = lport;
@@ -3005,6 +3147,7 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
 	pci_set_drvdata(pdev, qedf);
 	init_completion(&qedf->fipvlan_compl);
 	mutex_init(&qedf->stats_mutex);
+	mutex_init(&qedf->flush_mutex);
 
 	QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_INFO,
 		  "QLogic FastLinQ FCoE Module qedf %s, "
@@ -3181,11 +3324,6 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
 	sprintf(host_buf, "host_%d", host->host_no);
 	qed_ops->common->set_name(qedf->cdev, host_buf);
 
-
-	/* Set xid max values */
-	qedf->max_scsi_xid = QEDF_MAX_SCSI_XID;
-	qedf->max_els_xid = QEDF_MAX_ELS_XID;
-
 	/* Allocate cmd mgr */
 	qedf->cmd_mgr = qedf_cmd_mgr_alloc(qedf);
 	if (!qedf->cmd_mgr) {
@@ -3196,12 +3334,15 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
 
 	if (mode != QEDF_MODE_RECOVERY) {
 		host->transportt = qedf_fc_transport_template;
-		host->can_queue = QEDF_MAX_ELS_XID;
 		host->max_lun = qedf_max_lun;
 		host->max_cmd_len = QEDF_MAX_CDB_LEN;
+		host->can_queue = FCOE_PARAMS_NUM_TASKS;
 		rc = scsi_add_host(host, &pdev->dev);
-		if (rc)
+		if (rc) {
+			QEDF_WARN(&qedf->dbg_ctx,
+				  "Error adding Scsi_Host rc=0x%x.\n", rc);
 			goto err6;
+		}
 	}
 
 	memset(&params, 0, sizeof(params));
@@ -3377,7 +3518,9 @@ static void __qedf_remove(struct pci_dev *pdev, int mode)
 		fcoe_ctlr_link_down(&qedf->ctlr);
 	else
 		fc_fabric_logoff(qedf->lport);
-	qedf_wait_for_upload(qedf);
+
+	if (qedf_wait_for_upload(qedf) == false)
+		QEDF_ERR(&qedf->dbg_ctx, "Could not upload all sessions.\n");
 
 #ifdef CONFIG_DEBUG_FS
 	qedf_dbg_host_exit(&(qedf->dbg_ctx));
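The `qedf_wait_for_upload()` change above converts an unbounded `while (1)` poll into a bounded wait (120 polls of 500 ms, about a minute) that returns a success/failure result so callers can log and `WARN_ON` instead of hanging. A toy model of that shape, with hypothetical names standing in for reading `qedf->num_offloads`:

```c
#include <stdbool.h>

/* Bounded-drain sketch: poll until the count reaches zero or the poll
 * budget is exhausted. poll_fn models reading qedf->num_offloads after
 * each 500 ms sleep (omitted here). */
static bool wait_for_drain(int (*poll_fn)(void *), void *ctx, int max_polls)
{
	while (max_polls--) {
		if (poll_fn(ctx) == 0)
			return true;	/* all sessions uploaded */
		/* real code sleeps 500 ms between polls */
	}
	return false;	/* caller logs "Could not upload all sessions" */
}

/* Toy poll function: decrements a counter toward zero. */
static int countdown_poll(void *ctx)
{
	int *n = ctx;
	if (*n > 0)
		(*n)--;
	return *n;
}
```

Returning `bool` instead of `void` is what lets `qedf_ctx_soft_reset()` and `__qedf_remove()` detect a stuck upload rather than blocking forever.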
@@ -7,9 +7,9 @@
  * this source tree.
  */
 
-#define QEDF_VERSION		"8.33.16.20"
+#define QEDF_VERSION		"8.37.25.20"
 #define QEDF_DRIVER_MAJOR_VER		8
-#define QEDF_DRIVER_MINOR_VER		33
-#define QEDF_DRIVER_REV_VER		16
+#define QEDF_DRIVER_MINOR_VER		37
+#define QEDF_DRIVER_REV_VER		25
 #define QEDF_DRIVER_ENG_VER		20
 
@@ -155,12 +155,10 @@ static void qedi_tmf_resp_work(struct work_struct *work)
 	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
 	struct iscsi_session *session = conn->session;
 	struct iscsi_tm_rsp *resp_hdr_ptr;
-	struct iscsi_cls_session *cls_sess;
 	int rval = 0;

 	set_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
 	resp_hdr_ptr = (struct iscsi_tm_rsp *)qedi_cmd->tmf_resp_buf;
-	cls_sess = iscsi_conn_to_session(qedi_conn->cls_conn);

 	iscsi_block_session(session->cls_session);
 	rval = qedi_cleanup_all_io(qedi, qedi_conn, qedi_cmd->task, true);
@@ -1366,7 +1364,6 @@ static void qedi_tmf_work(struct work_struct *work)
 	struct qedi_conn *qedi_conn = qedi_cmd->conn;
 	struct qedi_ctx *qedi = qedi_conn->qedi;
 	struct iscsi_conn *conn = qedi_conn->cls_conn->dd_data;
-	struct iscsi_cls_session *cls_sess;
 	struct qedi_work_map *list_work = NULL;
 	struct iscsi_task *mtask;
 	struct qedi_cmd *cmd;
@@ -1377,7 +1374,6 @@ static void qedi_tmf_work(struct work_struct *work)

 	mtask = qedi_cmd->task;
 	tmf_hdr = (struct iscsi_tm *)mtask->hdr;
-	cls_sess = iscsi_conn_to_session(qedi_conn->cls_conn);
 	set_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);

 	ctask = iscsi_itt_to_task(conn, tmf_hdr->rtt);
@@ -579,7 +579,7 @@ static int qedi_conn_start(struct iscsi_cls_conn *cls_conn)
 	rval = qedi_iscsi_update_conn(qedi, qedi_conn);
 	if (rval) {
 		iscsi_conn_printk(KERN_ALERT, conn,
-				  "conn_start: FW oflload conn failed.\n");
+				  "conn_start: FW offload conn failed.\n");
 		rval = -EINVAL;
 		goto start_err;
 	}
@@ -590,7 +590,7 @@ static int qedi_conn_start(struct iscsi_cls_conn *cls_conn)
 	rval = iscsi_conn_start(cls_conn);
 	if (rval) {
 		iscsi_conn_printk(KERN_ALERT, conn,
-				  "iscsi_conn_start: FW oflload conn failed!!\n");
+				  "iscsi_conn_start: FW offload conn failed!!\n");
 	}

 start_err:
@@ -993,13 +993,17 @@ static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
 	struct iscsi_conn *conn = NULL;
 	struct qedi_ctx *qedi;
 	int ret = 0;
-	int wait_delay = 20 * HZ;
+	int wait_delay;
 	int abrt_conn = 0;
 	int count = 10;

+	wait_delay = 60 * HZ + DEF_MAX_RT_TIME;
 	qedi_ep = ep->dd_data;
 	qedi = qedi_ep->qedi;

 	if (qedi_ep->state == EP_STATE_OFLDCONN_START)
 		goto ep_exit_recover;

+	flush_work(&qedi_ep->offload_work);
+
 	if (qedi_ep->conn) {
@@ -1163,7 +1167,7 @@ static void qedi_offload_work(struct work_struct *work)
 	struct qedi_endpoint *qedi_ep =
 		container_of(work, struct qedi_endpoint, offload_work);
 	struct qedi_ctx *qedi;
-	int wait_delay = 20 * HZ;
+	int wait_delay = 5 * HZ;
 	int ret;

 	qedi = qedi_ep->qedi;
@@ -29,24 +29,27 @@ qla2x00_sysfs_read_fw_dump(struct file *filp, struct kobject *kobj,
 	if (!(ha->fw_dump_reading || ha->mctp_dump_reading))
 		return 0;

+	mutex_lock(&ha->optrom_mutex);
 	if (IS_P3P_TYPE(ha)) {
 		if (off < ha->md_template_size) {
 			rval = memory_read_from_buffer(buf, count,
 			    &off, ha->md_tmplt_hdr, ha->md_template_size);
-			return rval;
+		} else {
+			off -= ha->md_template_size;
+			rval = memory_read_from_buffer(buf, count,
+			    &off, ha->md_dump, ha->md_dump_size);
 		}
-		off -= ha->md_template_size;
-		rval = memory_read_from_buffer(buf, count,
-		    &off, ha->md_dump, ha->md_dump_size);
-		return rval;
-	} else if (ha->mctp_dumped && ha->mctp_dump_reading)
-		return memory_read_from_buffer(buf, count, &off, ha->mctp_dump,
+	} else if (ha->mctp_dumped && ha->mctp_dump_reading) {
+		rval = memory_read_from_buffer(buf, count, &off, ha->mctp_dump,
 		    MCTP_DUMP_SIZE);
-	else if (ha->fw_dump_reading)
-		return memory_read_from_buffer(buf, count, &off, ha->fw_dump,
+	} else if (ha->fw_dump_reading) {
+		rval = memory_read_from_buffer(buf, count, &off, ha->fw_dump,
 		    ha->fw_dump_len);
-	else
-		return 0;
+	} else {
+		rval = 0;
+	}
+	mutex_unlock(&ha->optrom_mutex);
+	return rval;
 }

 static ssize_t
@@ -154,6 +157,8 @@ qla2x00_sysfs_read_nvram(struct file *filp, struct kobject *kobj,
 	struct scsi_qla_host *vha = shost_priv(dev_to_shost(container_of(kobj,
 	    struct device, kobj)));
 	struct qla_hw_data *ha = vha->hw;
+	uint32_t faddr;
+	struct active_regions active_regions = { };

 	if (!capable(CAP_SYS_ADMIN))
 		return 0;
@@ -164,11 +169,21 @@ qla2x00_sysfs_read_nvram(struct file *filp, struct kobject *kobj,
 		return -EAGAIN;
 	}

-	if (IS_NOCACHE_VPD_TYPE(ha))
-		ha->isp_ops->read_optrom(vha, ha->nvram, ha->flt_region_nvram << 2,
-		    ha->nvram_size);
+	if (!IS_NOCACHE_VPD_TYPE(ha)) {
+		mutex_unlock(&ha->optrom_mutex);
+		goto skip;
+	}
+
+	faddr = ha->flt_region_nvram;
+	if (IS_QLA28XX(ha)) {
+		if (active_regions.aux.vpd_nvram == QLA27XX_SECONDARY_IMAGE)
+			faddr = ha->flt_region_nvram_sec;
+	}
+	ha->isp_ops->read_optrom(vha, ha->nvram, faddr << 2, ha->nvram_size);
+
 	mutex_unlock(&ha->optrom_mutex);
+
+skip:
 	return memory_read_from_buffer(buf, count, &off, ha->nvram,
 	    ha->nvram_size);
 }
@@ -223,9 +238,9 @@ qla2x00_sysfs_write_nvram(struct file *filp, struct kobject *kobj,
 	}

 	/* Write NVRAM. */
-	ha->isp_ops->write_nvram(vha, (uint8_t *)buf, ha->nvram_base, count);
-	ha->isp_ops->read_nvram(vha, (uint8_t *)ha->nvram, ha->nvram_base,
-	    count);
+	ha->isp_ops->write_nvram(vha, buf, ha->nvram_base, count);
+	ha->isp_ops->read_nvram(vha, ha->nvram, ha->nvram_base,
+	    count);
 	mutex_unlock(&ha->optrom_mutex);

 	ql_dbg(ql_dbg_user, vha, 0x7060,
@@ -364,7 +379,7 @@ qla2x00_sysfs_write_optrom_ctl(struct file *filp, struct kobject *kobj,
 	}

 	ha->optrom_region_start = start;
-	ha->optrom_region_size = start + size;
+	ha->optrom_region_size = size;

 	ha->optrom_state = QLA_SREADING;
 	ha->optrom_buffer = vmalloc(ha->optrom_region_size);
@@ -418,6 +433,10 @@ qla2x00_sysfs_write_optrom_ctl(struct file *filp, struct kobject *kobj,
 	 * 0x000000 -> 0x07ffff -- Boot code.
 	 * 0x080000 -> 0x0fffff -- Firmware.
 	 * 0x120000 -> 0x12ffff -- VPD and HBA parameters.
+	 *
+	 * > ISP25xx type boards:
+	 *
+	 *      None -- should go through BSG.
 	 */
 	valid = 0;
 	if (ha->optrom_size == OPTROM_SIZE_2300 && start == 0)
@@ -425,9 +444,7 @@ qla2x00_sysfs_write_optrom_ctl(struct file *filp, struct kobject *kobj,
 	else if (start == (ha->flt_region_boot * 4) ||
 	    start == (ha->flt_region_fw * 4))
 		valid = 1;
-	else if (IS_QLA24XX_TYPE(ha) || IS_QLA25XX(ha)
-	    || IS_CNA_CAPABLE(ha) || IS_QLA2031(ha)
-	    || IS_QLA27XX(ha))
+	else if (IS_QLA24XX_TYPE(ha) || IS_QLA25XX(ha))
 		valid = 1;
 	if (!valid) {
 		ql_log(ql_log_warn, vha, 0x7065,
@@ -437,7 +454,7 @@ qla2x00_sysfs_write_optrom_ctl(struct file *filp, struct kobject *kobj,
 	}

 	ha->optrom_region_start = start;
-	ha->optrom_region_size = start + size;
+	ha->optrom_region_size = size;

 	ha->optrom_state = QLA_SWRITING;
 	ha->optrom_buffer = vmalloc(ha->optrom_region_size);
@@ -504,6 +521,7 @@ qla2x00_sysfs_read_vpd(struct file *filp, struct kobject *kobj,
 	    struct device, kobj)));
 	struct qla_hw_data *ha = vha->hw;
 	uint32_t faddr;
+	struct active_regions active_regions = { };

 	if (unlikely(pci_channel_offline(ha->pdev)))
 		return -EAGAIN;
@@ -511,22 +529,33 @@ qla2x00_sysfs_read_vpd(struct file *filp, struct kobject *kobj,
 	if (!capable(CAP_SYS_ADMIN))
 		return -EINVAL;

-	if (IS_NOCACHE_VPD_TYPE(ha)) {
-		faddr = ha->flt_region_vpd << 2;
+	if (!IS_NOCACHE_VPD_TYPE(ha))
+		goto skip;

-		if (IS_QLA27XX(ha) &&
-		    qla27xx_find_valid_image(vha) == QLA27XX_SECONDARY_IMAGE)
-			faddr = ha->flt_region_vpd_sec << 2;
+	faddr = ha->flt_region_vpd << 2;

-		mutex_lock(&ha->optrom_mutex);
-		if (qla2x00_chip_is_down(vha)) {
-			mutex_unlock(&ha->optrom_mutex);
-			return -EAGAIN;
-		}
-		ha->isp_ops->read_optrom(vha, ha->vpd, faddr,
-		    ha->vpd_size);
-		mutex_unlock(&ha->optrom_mutex);
+	if (IS_QLA28XX(ha)) {
+		qla28xx_get_aux_images(vha, &active_regions);
+		if (active_regions.aux.vpd_nvram == QLA27XX_SECONDARY_IMAGE)
+			faddr = ha->flt_region_vpd_sec << 2;
+
+		ql_dbg(ql_dbg_init, vha, 0x7070,
+		    "Loading %s nvram image.\n",
+		    active_regions.aux.vpd_nvram == QLA27XX_PRIMARY_IMAGE ?
+		    "primary" : "secondary");
 	}

+	mutex_lock(&ha->optrom_mutex);
+	if (qla2x00_chip_is_down(vha)) {
+		mutex_unlock(&ha->optrom_mutex);
+		return -EAGAIN;
+	}
+
+	ha->isp_ops->read_optrom(vha, ha->vpd, faddr, ha->vpd_size);
+	mutex_unlock(&ha->optrom_mutex);
+
+skip:
 	return memory_read_from_buffer(buf, count, &off, ha->vpd, ha->vpd_size);
 }
@@ -563,8 +592,8 @@ qla2x00_sysfs_write_vpd(struct file *filp, struct kobject *kobj,
 	}

 	/* Write NVRAM. */
-	ha->isp_ops->write_nvram(vha, (uint8_t *)buf, ha->vpd_base, count);
-	ha->isp_ops->read_nvram(vha, (uint8_t *)ha->vpd, ha->vpd_base, count);
+	ha->isp_ops->write_nvram(vha, buf, ha->vpd_base, count);
+	ha->isp_ops->read_nvram(vha, ha->vpd, ha->vpd_base, count);

 	/* Update flash version information for 4Gb & above. */
 	if (!IS_FWI2_CAPABLE(ha)) {
@@ -645,6 +674,7 @@ qla2x00_sysfs_write_reset(struct file *filp, struct kobject *kobj,
 	int type;
 	uint32_t idc_control;
+	uint8_t *tmp_data = NULL;

 	if (off != 0)
 		return -EINVAL;
@@ -682,7 +712,7 @@ qla2x00_sysfs_write_reset(struct file *filp, struct kobject *kobj,
 		ql_log(ql_log_info, vha, 0x706f,
 		    "Issuing MPI reset.\n");

-		if (IS_QLA83XX(ha) || IS_QLA27XX(ha)) {
+		if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
 			uint32_t idc_control;

 			qla83xx_idc_lock(vha, 0);
@@ -858,7 +888,7 @@ qla2x00_sysfs_read_xgmac_stats(struct file *filp, struct kobject *kobj,
 		count = 0;
 	}

-	count = actual_size > count ? count: actual_size;
+	count = actual_size > count ? count : actual_size;
 	memcpy(buf, ha->xgmac_data, count);

 	return count;
@@ -934,7 +964,7 @@ static struct bin_attribute sysfs_dcbx_tlv_attr = {
 static struct sysfs_entry {
 	char *name;
 	struct bin_attribute *attr;
-	int is4GBp_only;
+	int type;
 } bin_file_entries[] = {
 	{ "fw_dump", &sysfs_fw_dump_attr, },
 	{ "nvram", &sysfs_nvram_attr, },
@@ -957,11 +987,11 @@ qla2x00_alloc_sysfs_attr(scsi_qla_host_t *vha)
 	int ret;

 	for (iter = bin_file_entries; iter->name; iter++) {
-		if (iter->is4GBp_only && !IS_FWI2_CAPABLE(vha->hw))
+		if (iter->type && !IS_FWI2_CAPABLE(vha->hw))
 			continue;
-		if (iter->is4GBp_only == 2 && !IS_QLA25XX(vha->hw))
+		if (iter->type == 2 && !IS_QLA25XX(vha->hw))
 			continue;
-		if (iter->is4GBp_only == 3 && !(IS_CNA_CAPABLE(vha->hw)))
+		if (iter->type == 3 && !(IS_CNA_CAPABLE(vha->hw)))
 			continue;

 		ret = sysfs_create_bin_file(&host->shost_gendev.kobj,
@@ -985,13 +1015,14 @@ qla2x00_free_sysfs_attr(scsi_qla_host_t *vha, bool stop_beacon)
 	struct qla_hw_data *ha = vha->hw;

 	for (iter = bin_file_entries; iter->name; iter++) {
-		if (iter->is4GBp_only && !IS_FWI2_CAPABLE(ha))
+		if (iter->type && !IS_FWI2_CAPABLE(ha))
 			continue;
-		if (iter->is4GBp_only == 2 && !IS_QLA25XX(ha))
+		if (iter->type == 2 && !IS_QLA25XX(ha))
 			continue;
-		if (iter->is4GBp_only == 3 && !(IS_CNA_CAPABLE(vha->hw)))
+		if (iter->type == 3 && !(IS_CNA_CAPABLE(ha)))
 			continue;
-		if (iter->is4GBp_only == 0x27 && !IS_QLA27XX(vha->hw))
+		if (iter->type == 0x27 &&
+		    (!IS_QLA27XX(ha) || !IS_QLA28XX(ha)))
 			continue;

 		sysfs_remove_bin_file(&host->shost_gendev.kobj,
@@ -1049,6 +1080,7 @@ qla2x00_isp_name_show(struct device *dev, struct device_attribute *attr,
     char *buf)
 {
 	scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
+
 	return scnprintf(buf, PAGE_SIZE, "ISP%04X\n", vha->hw->pdev->device);
 }
@@ -1082,6 +1114,7 @@ qla2x00_model_desc_show(struct device *dev, struct device_attribute *attr,
     char *buf)
 {
 	scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
+
 	return scnprintf(buf, PAGE_SIZE, "%s\n", vha->hw->model_desc);
 }
@@ -1294,6 +1327,7 @@ qla2x00_optrom_bios_version_show(struct device *dev,
 {
 	scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
 	struct qla_hw_data *ha = vha->hw;
+
 	return scnprintf(buf, PAGE_SIZE, "%d.%02d\n", ha->bios_revision[1],
 	    ha->bios_revision[0]);
 }
@@ -1304,6 +1338,7 @@ qla2x00_optrom_efi_version_show(struct device *dev,
 {
 	scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
 	struct qla_hw_data *ha = vha->hw;
+
 	return scnprintf(buf, PAGE_SIZE, "%d.%02d\n", ha->efi_revision[1],
 	    ha->efi_revision[0]);
 }
@@ -1314,6 +1349,7 @@ qla2x00_optrom_fcode_version_show(struct device *dev,
 {
 	scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
 	struct qla_hw_data *ha = vha->hw;
+
 	return scnprintf(buf, PAGE_SIZE, "%d.%02d\n", ha->fcode_revision[1],
 	    ha->fcode_revision[0]);
 }
@@ -1324,6 +1360,7 @@ qla2x00_optrom_fw_version_show(struct device *dev,
 {
 	scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
 	struct qla_hw_data *ha = vha->hw;
+
 	return scnprintf(buf, PAGE_SIZE, "%d.%02d.%02d %d\n",
 	    ha->fw_revision[0], ha->fw_revision[1], ha->fw_revision[2],
 	    ha->fw_revision[3]);
@@ -1336,7 +1373,8 @@ qla2x00_optrom_gold_fw_version_show(struct device *dev,
 	scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
 	struct qla_hw_data *ha = vha->hw;

-	if (!IS_QLA81XX(ha) && !IS_QLA83XX(ha) && !IS_QLA27XX(ha))
+	if (!IS_QLA81XX(ha) && !IS_QLA83XX(ha) &&
+	    !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
 		return scnprintf(buf, PAGE_SIZE, "\n");

 	return scnprintf(buf, PAGE_SIZE, "%d.%02d.%02d (%d)\n",
@@ -1349,6 +1387,7 @@ qla2x00_total_isp_aborts_show(struct device *dev,
     struct device_attribute *attr, char *buf)
 {
 	scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
+
 	return scnprintf(buf, PAGE_SIZE, "%d\n",
 	    vha->qla_stats.total_isp_aborts);
 }
@@ -1358,23 +1397,39 @@ qla24xx_84xx_fw_version_show(struct device *dev,
     struct device_attribute *attr, char *buf)
 {
 	int rval = QLA_SUCCESS;
-	uint16_t status[2] = {0, 0};
+	uint16_t status[2] = { 0 };
 	scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
 	struct qla_hw_data *ha = vha->hw;

 	if (!IS_QLA84XX(ha))
 		return scnprintf(buf, PAGE_SIZE, "\n");

-	if (ha->cs84xx->op_fw_version == 0)
+	if (!ha->cs84xx->op_fw_version) {
 		rval = qla84xx_verify_chip(vha, status);
-
-	if ((rval == QLA_SUCCESS) && (status[0] == 0))
-		return scnprintf(buf, PAGE_SIZE, "%u\n",
-		    (uint32_t)ha->cs84xx->op_fw_version);
+		if (!rval && !status[0])
+			return scnprintf(buf, PAGE_SIZE, "%u\n",
+			    (uint32_t)ha->cs84xx->op_fw_version);
+	}

 	return scnprintf(buf, PAGE_SIZE, "\n");
 }

+static ssize_t
+qla2x00_serdes_version_show(struct device *dev, struct device_attribute *attr,
+    char *buf)
+{
+	scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
+	struct qla_hw_data *ha = vha->hw;
+
+	if (!IS_QLA27XX(ha) && !IS_QLA28XX(ha))
+		return scnprintf(buf, PAGE_SIZE, "\n");
+
+	return scnprintf(buf, PAGE_SIZE, "%d.%02d.%02d\n",
+	    ha->serdes_version[0], ha->serdes_version[1],
+	    ha->serdes_version[2]);
+}
+
 static ssize_t
 qla2x00_mpi_version_show(struct device *dev, struct device_attribute *attr,
     char *buf)
@@ -1383,7 +1438,7 @@ qla2x00_mpi_version_show(struct device *dev, struct device_attribute *attr,
 	struct qla_hw_data *ha = vha->hw;

 	if (!IS_QLA81XX(ha) && !IS_QLA8031(ha) && !IS_QLA8044(ha) &&
-	    !IS_QLA27XX(ha))
+	    !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
 		return scnprintf(buf, PAGE_SIZE, "\n");

 	return scnprintf(buf, PAGE_SIZE, "%d.%02d.%02d (%x)\n",
@@ -1596,7 +1651,7 @@ qla2x00_pep_version_show(struct device *dev, struct device_attribute *attr,
 	scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
 	struct qla_hw_data *ha = vha->hw;

-	if (!IS_QLA27XX(ha))
+	if (!IS_QLA27XX(ha) && !IS_QLA28XX(ha))
 		return scnprintf(buf, PAGE_SIZE, "\n");

 	return scnprintf(buf, PAGE_SIZE, "%d.%02d.%02d\n",
@@ -1604,35 +1659,38 @@ qla2x00_pep_version_show(struct device *dev, struct device_attribute *attr,
 }

 static ssize_t
-qla2x00_min_link_speed_show(struct device *dev, struct device_attribute *attr,
-    char *buf)
+qla2x00_min_supported_speed_show(struct device *dev,
+    struct device_attribute *attr, char *buf)
 {
 	scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
 	struct qla_hw_data *ha = vha->hw;

-	if (!IS_QLA27XX(ha))
+	if (!IS_QLA27XX(ha) && !IS_QLA28XX(ha))
 		return scnprintf(buf, PAGE_SIZE, "\n");

 	return scnprintf(buf, PAGE_SIZE, "%s\n",
-	    ha->min_link_speed == 5 ? "32Gps" :
-	    ha->min_link_speed == 4 ? "16Gps" :
-	    ha->min_link_speed == 3 ? "8Gps" :
-	    ha->min_link_speed == 2 ? "4Gps" :
-	    ha->min_link_speed != 0 ? "unknown" : "");
+	    ha->min_supported_speed == 6 ? "64Gps" :
+	    ha->min_supported_speed == 5 ? "32Gps" :
+	    ha->min_supported_speed == 4 ? "16Gps" :
+	    ha->min_supported_speed == 3 ? "8Gps" :
+	    ha->min_supported_speed == 2 ? "4Gps" :
+	    ha->min_supported_speed != 0 ? "unknown" : "");
 }

 static ssize_t
-qla2x00_max_speed_sup_show(struct device *dev, struct device_attribute *attr,
-    char *buf)
+qla2x00_max_supported_speed_show(struct device *dev,
+    struct device_attribute *attr, char *buf)
 {
 	scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
 	struct qla_hw_data *ha = vha->hw;

-	if (!IS_QLA27XX(ha))
+	if (!IS_QLA27XX(ha) && !IS_QLA28XX(ha))
 		return scnprintf(buf, PAGE_SIZE, "\n");

 	return scnprintf(buf, PAGE_SIZE, "%s\n",
-	    ha->max_speed_sup ? "32Gps" : "16Gps");
+	    ha->max_supported_speed == 2 ? "64Gps" :
+	    ha->max_supported_speed == 1 ? "32Gps" :
+	    ha->max_supported_speed == 0 ? "16Gps" : "unknown");
 }

 static ssize_t
@@ -1645,7 +1703,7 @@ qla2x00_port_speed_store(struct device *dev, struct device_attribute *attr,
 	int mode = QLA_SET_DATA_RATE_LR;
 	struct qla_hw_data *ha = vha->hw;

-	if (!IS_QLA27XX(vha->hw)) {
+	if (!IS_QLA27XX(ha) && !IS_QLA28XX(ha)) {
 		ql_log(ql_log_warn, vha, 0x70d8,
 		    "Speed setting not supported \n");
 		return -EINVAL;
@@ -2164,6 +2222,32 @@ qla2x00_dif_bundle_statistics_show(struct device *dev,
 	    ha->dif_bundle_dma_allocs, ha->pool.unusable.count);
 }

+static ssize_t
+qla2x00_fw_attr_show(struct device *dev,
+    struct device_attribute *attr, char *buf)
+{
+	scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
+	struct qla_hw_data *ha = vha->hw;
+
+	if (!IS_QLA27XX(ha) && !IS_QLA28XX(ha))
+		return scnprintf(buf, PAGE_SIZE, "\n");
+
+	return scnprintf(buf, PAGE_SIZE, "%llx\n",
+	    (uint64_t)ha->fw_attributes_ext[1] << 48 |
+	    (uint64_t)ha->fw_attributes_ext[0] << 32 |
+	    (uint64_t)ha->fw_attributes_h << 16 |
+	    (uint64_t)ha->fw_attributes);
+}
+
+static ssize_t
+qla2x00_port_no_show(struct device *dev, struct device_attribute *attr,
+    char *buf)
+{
+	scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
+
+	return scnprintf(buf, PAGE_SIZE, "%u\n", vha->hw->port_no);
+}
+
 static DEVICE_ATTR(driver_version, S_IRUGO, qla2x00_driver_version_show, NULL);
 static DEVICE_ATTR(fw_version, S_IRUGO, qla2x00_fw_version_show, NULL);
 static DEVICE_ATTR(serial_num, S_IRUGO, qla2x00_serial_num_show, NULL);
@@ -2192,6 +2276,7 @@ static DEVICE_ATTR(84xx_fw_version, S_IRUGO, qla24xx_84xx_fw_version_show,
 		   NULL);
 static DEVICE_ATTR(total_isp_aborts, S_IRUGO, qla2x00_total_isp_aborts_show,
 		   NULL);
+static DEVICE_ATTR(serdes_version, 0444, qla2x00_serdes_version_show, NULL);
 static DEVICE_ATTR(mpi_version, S_IRUGO, qla2x00_mpi_version_show, NULL);
 static DEVICE_ATTR(phy_version, S_IRUGO, qla2x00_phy_version_show, NULL);
 static DEVICE_ATTR(flash_block_size, S_IRUGO, qla2x00_flash_block_size_show,
@@ -2209,8 +2294,10 @@ static DEVICE_ATTR(allow_cna_fw_dump, S_IRUGO | S_IWUSR,
 		   qla2x00_allow_cna_fw_dump_show,
 		   qla2x00_allow_cna_fw_dump_store);
 static DEVICE_ATTR(pep_version, S_IRUGO, qla2x00_pep_version_show, NULL);
-static DEVICE_ATTR(min_link_speed, S_IRUGO, qla2x00_min_link_speed_show, NULL);
-static DEVICE_ATTR(max_speed_sup, S_IRUGO, qla2x00_max_speed_sup_show, NULL);
+static DEVICE_ATTR(min_supported_speed, 0444,
+		   qla2x00_min_supported_speed_show, NULL);
+static DEVICE_ATTR(max_supported_speed, 0444,
+		   qla2x00_max_supported_speed_show, NULL);
 static DEVICE_ATTR(zio_threshold, 0644,
     qla_zio_threshold_show,
     qla_zio_threshold_store);
@@ -2221,6 +2308,8 @@ static DEVICE_ATTR(dif_bundle_statistics, 0444,
     qla2x00_dif_bundle_statistics_show, NULL);
 static DEVICE_ATTR(port_speed, 0644, qla2x00_port_speed_show,
     qla2x00_port_speed_store);
+static DEVICE_ATTR(port_no, 0444, qla2x00_port_no_show, NULL);
+static DEVICE_ATTR(fw_attr, 0444, qla2x00_fw_attr_show, NULL);


 struct device_attribute *qla2x00_host_attrs[] = {
@@ -2242,6 +2331,7 @@ struct device_attribute *qla2x00_host_attrs[] = {
 	&dev_attr_optrom_fw_version,
 	&dev_attr_84xx_fw_version,
 	&dev_attr_total_isp_aborts,
+	&dev_attr_serdes_version,
 	&dev_attr_mpi_version,
 	&dev_attr_phy_version,
 	&dev_attr_flash_block_size,
@@ -2256,11 +2346,13 @@ struct device_attribute *qla2x00_host_attrs[] = {
 	&dev_attr_fw_dump_size,
 	&dev_attr_allow_cna_fw_dump,
 	&dev_attr_pep_version,
-	&dev_attr_min_link_speed,
-	&dev_attr_max_speed_sup,
+	&dev_attr_min_supported_speed,
+	&dev_attr_max_supported_speed,
 	&dev_attr_zio_threshold,
 	&dev_attr_dif_bundle_statistics,
 	&dev_attr_port_speed,
+	&dev_attr_port_no,
+	&dev_attr_fw_attr,
 	NULL, /* reserve for qlini_mode */
 	NULL, /* reserve for ql2xiniexchg */
 	NULL, /* reserve for ql2xexchoffld */
@@ -2296,16 +2388,15 @@ qla2x00_get_host_port_id(struct Scsi_Host *shost)
 static void
 qla2x00_get_host_speed(struct Scsi_Host *shost)
 {
-	struct qla_hw_data *ha = ((struct scsi_qla_host *)
-	    (shost_priv(shost)))->hw;
-	u32 speed = FC_PORTSPEED_UNKNOWN;
+	scsi_qla_host_t *vha = shost_priv(shost);
+	u32 speed;

-	if (IS_QLAFX00(ha)) {
+	if (IS_QLAFX00(vha->hw)) {
 		qlafx00_get_host_speed(shost);
 		return;
 	}

-	switch (ha->link_data_rate) {
+	switch (vha->hw->link_data_rate) {
 	case PORT_SPEED_1GB:
 		speed = FC_PORTSPEED_1GBIT;
 		break;
@@ -2327,7 +2418,14 @@ qla2x00_get_host_speed(struct Scsi_Host *shost)
 	case PORT_SPEED_32GB:
 		speed = FC_PORTSPEED_32GBIT;
 		break;
+	case PORT_SPEED_64GB:
+		speed = FC_PORTSPEED_64GBIT;
+		break;
+	default:
+		speed = FC_PORTSPEED_UNKNOWN;
+		break;
 	}
+
 	fc_host_speed(shost) = speed;
 }
@@ -2335,7 +2433,7 @@ static void
 qla2x00_get_host_port_type(struct Scsi_Host *shost)
 {
 	scsi_qla_host_t *vha = shost_priv(shost);
-	uint32_t port_type = FC_PORTTYPE_UNKNOWN;
+	uint32_t port_type;

 	if (vha->vp_idx) {
 		fc_host_port_type(shost) = FC_PORTTYPE_NPIV;
@@ -2354,7 +2452,11 @@ qla2x00_get_host_port_type(struct Scsi_Host *shost)
 	case ISP_CFG_F:
 		port_type = FC_PORTTYPE_NPORT;
 		break;
+	default:
+		port_type = FC_PORTTYPE_UNKNOWN;
+		break;
 	}
+
 	fc_host_port_type(shost) = port_type;
 }
@@ -2416,13 +2518,10 @@ qla2x00_get_starget_port_id(struct scsi_target *starget)
 	fc_starget_port_id(starget) = port_id;
 }

-static void
+static inline void
 qla2x00_set_rport_loss_tmo(struct fc_rport *rport, uint32_t timeout)
 {
-	if (timeout)
-		rport->dev_loss_tmo = timeout;
-	else
-		rport->dev_loss_tmo = 1;
+	rport->dev_loss_tmo = timeout ? timeout : 1;
 }

 static void
@@ -2632,8 +2731,9 @@ static void
 qla2x00_get_host_fabric_name(struct Scsi_Host *shost)
 {
 	scsi_qla_host_t *vha = shost_priv(shost);
-	uint8_t node_name[WWN_SIZE] = { 0xFF, 0xFF, 0xFF, 0xFF, \
-		0xFF, 0xFF, 0xFF, 0xFF};
+	static const uint8_t node_name[WWN_SIZE] = {
+		0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF
+	};
 	u64 fabric_name = wwn_to_u64(node_name);

 	if (vha->device_flags & SWITCH_FOUND)
@@ -2711,8 +2811,8 @@ qla24xx_vport_create(struct fc_vport *fc_vport, bool disable)

 	/* initialized vport states */
 	atomic_set(&vha->loop_state, LOOP_DOWN);
-	vha->vp_err_state= VP_ERR_PORTDWN;
-	vha->vp_prev_err_state= VP_ERR_UNKWN;
+	vha->vp_err_state = VP_ERR_PORTDWN;
+	vha->vp_prev_err_state = VP_ERR_UNKWN;
 	/* Check if physical ha port is Up */
 	if (atomic_read(&base_vha->loop_state) == LOOP_DOWN ||
 	    atomic_read(&base_vha->loop_state) == LOOP_DEAD) {
@@ -2727,6 +2827,7 @@ qla24xx_vport_create(struct fc_vport *fc_vport, bool disable)
 	if (IS_T10_PI_CAPABLE(ha) && ql2xenabledif) {
 		if (ha->fw_attributes & BIT_4) {
 			int prot = 0, guard;
+
 			vha->flags.difdix_supported = 1;
 			ql_dbg(ql_dbg_user, vha, 0x7082,
 			    "Registered for DIF/DIX type 1 and 3 protection.\n");
@@ -2977,7 +3078,7 @@ void
 qla2x00_init_host_attr(scsi_qla_host_t *vha)
 {
 	struct qla_hw_data *ha = vha->hw;
-	u32 speed = FC_PORTSPEED_UNKNOWN;
+	u32 speeds = FC_PORTSPEED_UNKNOWN;

 	fc_host_dev_loss_tmo(vha->host) = ha->port_down_retry_count;
 	fc_host_node_name(vha->host) = wwn_to_u64(vha->node_name);
@@ -2988,25 +3089,45 @@ qla2x00_init_host_attr(scsi_qla_host_t *vha)
 	fc_host_npiv_vports_inuse(vha->host) = ha->cur_vport_count;

 	if (IS_CNA_CAPABLE(ha))
-		speed = FC_PORTSPEED_10GBIT;
-	else if (IS_QLA2031(ha))
-		speed = FC_PORTSPEED_16GBIT | FC_PORTSPEED_8GBIT |
-		    FC_PORTSPEED_4GBIT;
-	else if (IS_QLA25XX(ha))
-		speed = FC_PORTSPEED_8GBIT | FC_PORTSPEED_4GBIT |
-		    FC_PORTSPEED_2GBIT | FC_PORTSPEED_1GBIT;
+		speeds = FC_PORTSPEED_10GBIT;
+	else if (IS_QLA28XX(ha) || IS_QLA27XX(ha)) {
+		if (ha->max_supported_speed == 2) {
+			if (ha->min_supported_speed <= 6)
+				speeds |= FC_PORTSPEED_64GBIT;
+		}
+		if (ha->max_supported_speed == 2 ||
+		    ha->max_supported_speed == 1) {
+			if (ha->min_supported_speed <= 5)
+				speeds |= FC_PORTSPEED_32GBIT;
+		}
+		if (ha->max_supported_speed == 2 ||
+		    ha->max_supported_speed == 1 ||
+		    ha->max_supported_speed == 0) {
+			if (ha->min_supported_speed <= 4)
+				speeds |= FC_PORTSPEED_16GBIT;
+		}
+		if (ha->max_supported_speed == 1 ||
+		    ha->max_supported_speed == 0) {
+			if (ha->min_supported_speed <= 3)
+				speeds |= FC_PORTSPEED_8GBIT;
+		}
+		if (ha->max_supported_speed == 0) {
+			if (ha->min_supported_speed <= 2)
+				speeds |= FC_PORTSPEED_4GBIT;
+		}
+	} else if (IS_QLA2031(ha))
+		speeds = FC_PORTSPEED_16GBIT|FC_PORTSPEED_8GBIT|
+			FC_PORTSPEED_4GBIT;
+	else if (IS_QLA25XX(ha) || IS_QLAFX00(ha))
+		speeds = FC_PORTSPEED_8GBIT|FC_PORTSPEED_4GBIT|
+			FC_PORTSPEED_2GBIT|FC_PORTSPEED_1GBIT;
 	else if (IS_QLA24XX_TYPE(ha))
-		speed = FC_PORTSPEED_4GBIT | FC_PORTSPEED_2GBIT |
-		    FC_PORTSPEED_1GBIT;
+		speeds = FC_PORTSPEED_4GBIT|FC_PORTSPEED_2GBIT|
+			FC_PORTSPEED_1GBIT;
 	else if (IS_QLA23XX(ha))
-		speed = FC_PORTSPEED_2GBIT | FC_PORTSPEED_1GBIT;
-	else if (IS_QLAFX00(ha))
-		speed = FC_PORTSPEED_8GBIT | FC_PORTSPEED_4GBIT |
-		    FC_PORTSPEED_2GBIT | FC_PORTSPEED_1GBIT;
-	else if (IS_QLA27XX(ha))
-		speed = FC_PORTSPEED_32GBIT | FC_PORTSPEED_16GBIT |
-		    FC_PORTSPEED_8GBIT;
+		speeds = FC_PORTSPEED_2GBIT|FC_PORTSPEED_1GBIT;
 	else
-		speed = FC_PORTSPEED_1GBIT;
-	fc_host_supported_speeds(vha->host) = speed;
+		speeds = FC_PORTSPEED_1GBIT;
+
+	fc_host_supported_speeds(vha->host) = speeds;
 }
@@ -1,4 +1,4 @@
-/*
+/*
  * QLogic Fibre Channel HBA Driver
  * Copyright (c) 2003-2014 QLogic Corporation
  *
@@ -84,8 +84,7 @@ qla24xx_fcp_prio_cfg_valid(scsi_qla_host_t *vha,
 		return 0;
 	}

-	if (bcode[0] != 'H' || bcode[1] != 'Q' || bcode[2] != 'O' ||
-	    bcode[3] != 'S') {
+	if (memcmp(bcode, "HQOS", 4)) {
 		/* Invalid FCP priority data header*/
 		ql_dbg(ql_dbg_user, vha, 0x7052,
 		    "Invalid FCP Priority data header. bcode=0x%x.\n",
@@ -1044,7 +1043,7 @@ qla84xx_updatefw(struct bsg_job *bsg_job)
 	}

 	flag = bsg_request->rqst_data.h_vendor.vendor_cmd[1];
-	fw_ver = le32_to_cpu(*((uint32_t *)((uint32_t *)fw_buf + 2)));
+	fw_ver = get_unaligned_le32((uint32_t *)fw_buf + 2);

 	mn->entry_type = VERIFY_CHIP_IOCB_TYPE;
 	mn->entry_count = 1;
@@ -1057,9 +1056,8 @@ qla84xx_updatefw(struct bsg_job *bsg_job)
 	mn->fw_ver = cpu_to_le32(fw_ver);
 	mn->fw_size = cpu_to_le32(data_len);
 	mn->fw_seq_size = cpu_to_le32(data_len);
-	mn->dseg_address[0] = cpu_to_le32(LSD(fw_dma));
-	mn->dseg_address[1] = cpu_to_le32(MSD(fw_dma));
-	mn->dseg_length = cpu_to_le32(data_len);
+	put_unaligned_le64(fw_dma, &mn->dsd.address);
+	mn->dsd.length = cpu_to_le32(data_len);
 	mn->data_seg_cnt = cpu_to_le16(1);

 	rval = qla2x00_issue_iocb_timeout(vha, mn, mn_dma, 0, 120);
@@ -1238,9 +1236,8 @@ qla84xx_mgmt_cmd(struct bsg_job *bsg_job)
 	if (ql84_mgmt->mgmt.cmd != QLA84_MGMT_CHNG_CONFIG) {
 		mn->total_byte_cnt = cpu_to_le32(ql84_mgmt->mgmt.len);
 		mn->dseg_count = cpu_to_le16(1);
-		mn->dseg_address[0] = cpu_to_le32(LSD(mgmt_dma));
-		mn->dseg_address[1] = cpu_to_le32(MSD(mgmt_dma));
-		mn->dseg_length = cpu_to_le32(ql84_mgmt->mgmt.len);
+		put_unaligned_le64(mgmt_dma, &mn->dsd.address);
+		mn->dsd.length = cpu_to_le32(ql84_mgmt->mgmt.len);
 	}

 	rval = qla2x00_issue_iocb(vha, mn, mn_dma, 0);
@@ -1354,7 +1351,7 @@ qla24xx_iidma(struct bsg_job *bsg_job)

 	if (rval) {
 		ql_log(ql_log_warn, vha, 0x704c,
-		    "iIDMA cmd failed for %8phN -- "
+		    "iiDMA cmd failed for %8phN -- "
 		    "%04x %x %04x %04x.\n", fcport->port_name,
 		    rval, fcport->fp_speed, mb[0], mb[1]);
 		rval = (DID_ERROR << 16);
@@ -1412,7 +1409,8 @@ qla2x00_optrom_setup(struct bsg_job *bsg_job, scsi_qla_host_t *vha,
 	    start == (ha->flt_region_fw * 4))
 		valid = 1;
 	else if (IS_QLA24XX_TYPE(ha) || IS_QLA25XX(ha) ||
-	    IS_CNA_CAPABLE(ha) || IS_QLA2031(ha) || IS_QLA27XX(ha))
+	    IS_CNA_CAPABLE(ha) || IS_QLA2031(ha) || IS_QLA27XX(ha) ||
+	    IS_QLA28XX(ha))
 		valid = 1;
 	if (!valid) {
 		ql_log(ql_log_warn, vha, 0x7058,
@@ -1534,6 +1532,7 @@ qla2x00_update_fru_versions(struct bsg_job *bsg_job)
 	uint32_t count;
 	dma_addr_t sfp_dma;
 	void *sfp = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &sfp_dma);
+
 	if (!sfp) {
 		bsg_reply->reply_data.vendor_reply.vendor_rsp[0] =
 		    EXT_STATUS_NO_MEMORY;
@@ -1584,6 +1583,7 @@ qla2x00_read_fru_status(struct bsg_job *bsg_job)
 	struct qla_status_reg *sr = (void *)bsg;
 	dma_addr_t sfp_dma;
 	uint8_t *sfp = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &sfp_dma);
+
 	if (!sfp) {
 		bsg_reply->reply_data.vendor_reply.vendor_rsp[0] =
 		    EXT_STATUS_NO_MEMORY;
@@ -1634,6 +1634,7 @@ qla2x00_write_fru_status(struct bsg_job *bsg_job)
@ -1634,6 +1634,7 @@ qla2x00_write_fru_status(struct bsg_job *bsg_job)
|
|||
struct qla_status_reg *sr = (void *)bsg;
|
||||
dma_addr_t sfp_dma;
|
||||
uint8_t *sfp = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &sfp_dma);
|
||||
|
||||
if (!sfp) {
|
||||
bsg_reply->reply_data.vendor_reply.vendor_rsp[0] =
|
||||
EXT_STATUS_NO_MEMORY;
|
||||
|
@ -1680,6 +1681,7 @@ qla2x00_write_i2c(struct bsg_job *bsg_job)
|
|||
struct qla_i2c_access *i2c = (void *)bsg;
|
||||
dma_addr_t sfp_dma;
|
||||
uint8_t *sfp = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &sfp_dma);
|
||||
|
||||
if (!sfp) {
|
||||
bsg_reply->reply_data.vendor_reply.vendor_rsp[0] =
|
||||
EXT_STATUS_NO_MEMORY;
|
||||
|
@ -1725,6 +1727,7 @@ qla2x00_read_i2c(struct bsg_job *bsg_job)
|
|||
struct qla_i2c_access *i2c = (void *)bsg;
|
||||
dma_addr_t sfp_dma;
|
||||
uint8_t *sfp = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &sfp_dma);
|
||||
|
||||
if (!sfp) {
|
||||
bsg_reply->reply_data.vendor_reply.vendor_rsp[0] =
|
||||
EXT_STATUS_NO_MEMORY;
|
||||
|
@ -1961,7 +1964,7 @@ qlafx00_mgmt_cmd(struct bsg_job *bsg_job)
|
|||
|
||||
/* Dump the vendor information */
|
||||
ql_dump_buffer(ql_dbg_user + ql_dbg_verbose , vha, 0x70cf,
|
||||
(uint8_t *)piocb_rqst, sizeof(struct qla_mt_iocb_rqst_fx00));
|
||||
piocb_rqst, sizeof(*piocb_rqst));
|
||||
|
||||
if (!vha->flags.online) {
|
||||
ql_log(ql_log_warn, vha, 0x70d0,
|
||||
|
@ -2157,7 +2160,7 @@ qla27xx_get_flash_upd_cap(struct bsg_job *bsg_job)
|
|||
struct qla_hw_data *ha = vha->hw;
|
||||
struct qla_flash_update_caps cap;
|
||||
|
||||
if (!(IS_QLA27XX(ha)))
|
||||
if (!(IS_QLA27XX(ha)) && !IS_QLA28XX(ha))
|
||||
return -EPERM;
|
||||
|
||||
memset(&cap, 0, sizeof(cap));
|
||||
|
@ -2190,7 +2193,7 @@ qla27xx_set_flash_upd_cap(struct bsg_job *bsg_job)
|
|||
uint64_t online_fw_attr = 0;
|
||||
struct qla_flash_update_caps cap;
|
||||
|
||||
if (!(IS_QLA27XX(ha)))
|
||||
if (!IS_QLA27XX(ha) && !IS_QLA28XX(ha))
|
||||
return -EPERM;
|
||||
|
||||
memset(&cap, 0, sizeof(cap));
|
||||
|
@ -2238,7 +2241,7 @@ qla27xx_get_bbcr_data(struct bsg_job *bsg_job)
|
|||
uint8_t domain, area, al_pa, state;
|
||||
int rval;
|
||||
|
||||
if (!(IS_QLA27XX(ha)))
|
||||
if (!IS_QLA27XX(ha) && !IS_QLA28XX(ha))
|
||||
return -EPERM;
|
||||
|
||||
memset(&bbcr, 0, sizeof(bbcr));
|
||||
|
@ -2323,8 +2326,8 @@ qla2x00_get_priv_stats(struct bsg_job *bsg_job)
|
|||
rval = qla24xx_get_isp_stats(base_vha, stats, stats_dma, options);
|
||||
|
||||
if (rval == QLA_SUCCESS) {
|
||||
ql_dump_buffer(ql_dbg_user + ql_dbg_verbose, vha, 0x70e3,
|
||||
(uint8_t *)stats, sizeof(*stats));
|
||||
ql_dump_buffer(ql_dbg_user + ql_dbg_verbose, vha, 0x70e5,
|
||||
stats, sizeof(*stats));
|
||||
sg_copy_from_buffer(bsg_job->reply_payload.sg_list,
|
||||
bsg_job->reply_payload.sg_cnt, stats, sizeof(*stats));
|
||||
}
|
||||
|
@ -2353,7 +2356,8 @@ qla2x00_do_dport_diagnostics(struct bsg_job *bsg_job)
|
|||
int rval;
|
||||
struct qla_dport_diag *dd;
|
||||
|
||||
if (!IS_QLA83XX(vha->hw) && !IS_QLA27XX(vha->hw))
|
||||
if (!IS_QLA83XX(vha->hw) && !IS_QLA27XX(vha->hw) &&
|
||||
!IS_QLA28XX(vha->hw))
|
||||
return -EPERM;
|
||||
|
||||
dd = kmalloc(sizeof(*dd), GFP_KERNEL);
|
||||
|
@ -2387,6 +2391,45 @@ qla2x00_do_dport_diagnostics(struct bsg_job *bsg_job)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int
|
||||
qla2x00_get_flash_image_status(struct bsg_job *bsg_job)
|
||||
{
|
||||
scsi_qla_host_t *vha = shost_priv(fc_bsg_to_shost(bsg_job));
|
||||
struct fc_bsg_reply *bsg_reply = bsg_job->reply;
|
||||
struct qla_hw_data *ha = vha->hw;
|
||||
struct qla_active_regions regions = { };
|
||||
struct active_regions active_regions = { };
|
||||
|
||||
qla28xx_get_aux_images(vha, &active_regions);
|
||||
regions.global_image = active_regions.global;
|
||||
|
||||
if (IS_QLA28XX(ha)) {
|
||||
qla27xx_get_active_image(vha, &active_regions);
|
||||
regions.board_config = active_regions.aux.board_config;
|
||||
regions.vpd_nvram = active_regions.aux.vpd_nvram;
|
||||
regions.npiv_config_0_1 = active_regions.aux.npiv_config_0_1;
|
||||
regions.npiv_config_2_3 = active_regions.aux.npiv_config_2_3;
|
||||
}
|
||||
|
||||
ql_dbg(ql_dbg_user, vha, 0x70e1,
|
||||
"%s(%lu): FW=%u BCFG=%u VPDNVR=%u NPIV01=%u NPIV02=%u\n",
|
||||
__func__, vha->host_no, regions.global_image,
|
||||
regions.board_config, regions.vpd_nvram,
|
||||
regions.npiv_config_0_1, regions.npiv_config_2_3);
|
||||
|
||||
sg_copy_from_buffer(bsg_job->reply_payload.sg_list,
|
||||
bsg_job->reply_payload.sg_cnt, ®ions, sizeof(regions));
|
||||
|
||||
bsg_reply->reply_data.vendor_reply.vendor_rsp[0] = EXT_STATUS_OK;
|
||||
bsg_reply->reply_payload_rcv_len = sizeof(regions);
|
||||
bsg_reply->result = DID_OK << 16;
|
||||
bsg_job->reply_len = sizeof(struct fc_bsg_reply);
|
||||
bsg_job_done(bsg_job, bsg_reply->result,
|
||||
bsg_reply->reply_payload_rcv_len);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int
|
||||
qla2x00_process_vendor_specific(struct bsg_job *bsg_job)
|
||||
{
|
||||
|
@ -2460,6 +2503,9 @@ qla2x00_process_vendor_specific(struct bsg_job *bsg_job)
|
|||
case QL_VND_DPORT_DIAGNOSTICS:
|
||||
return qla2x00_do_dport_diagnostics(bsg_job);
|
||||
|
||||
case QL_VND_SS_GET_FLASH_IMAGE_STATUS:
|
||||
return qla2x00_get_flash_image_status(bsg_job);
|
||||
|
||||
default:
|
||||
return -ENOSYS;
|
||||
}
|
||||
@@ -31,6 +31,7 @@
#define QL_VND_GET_PRIV_STATS	0x18
#define QL_VND_DPORT_DIAGNOSTICS	0x19
#define QL_VND_GET_PRIV_STATS_EX	0x1A
#define QL_VND_SS_GET_FLASH_IMAGE_STATUS	0x1E

/* BSG Vendor specific subcode returns */
#define EXT_STATUS_OK	0

@@ -279,4 +280,14 @@ struct qla_dport_diag {
#define QLA_DPORT_RESULT	0x0
#define QLA_DPORT_START	0x2

/* active images in flash */
struct qla_active_regions {
	uint8_t global_image;
	uint8_t board_config;
	uint8_t vpd_nvram;
	uint8_t npiv_config_0_1;
	uint8_t npiv_config_2_3;
	uint8_t reserved[32];
} __packed;

#endif
@@ -111,30 +111,25 @@ int
qla27xx_dump_mpi_ram(struct qla_hw_data *ha, uint32_t addr, uint32_t *ram,
    uint32_t ram_dwords, void **nxt)
{
	int rval;
	uint32_t cnt, stat, timer, dwords, idx;
	uint16_t mb0;
	struct device_reg_24xx __iomem *reg = &ha->iobase->isp24;
	dma_addr_t dump_dma = ha->gid_list_dma;
	uint32_t *dump = (uint32_t *)ha->gid_list;
	uint32_t *chunk = (void *)ha->gid_list;
	uint32_t dwords = qla2x00_gid_list_size(ha) / 4;
	uint32_t stat;
	ulong i, j, timer = 6000000;
	int rval = QLA_FUNCTION_FAILED;

	rval = QLA_SUCCESS;
	mb0 = 0;

	WRT_REG_WORD(&reg->mailbox0, MBC_LOAD_DUMP_MPI_RAM);
	clear_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags);
	for (i = 0; i < ram_dwords; i += dwords, addr += dwords) {
		if (i + dwords > ram_dwords)
			dwords = ram_dwords - i;

	dwords = qla2x00_gid_list_size(ha) / 4;
	for (cnt = 0; cnt < ram_dwords && rval == QLA_SUCCESS;
	    cnt += dwords, addr += dwords) {
		if (cnt + dwords > ram_dwords)
			dwords = ram_dwords - cnt;

		WRT_REG_WORD(&reg->mailbox0, MBC_LOAD_DUMP_MPI_RAM);
		WRT_REG_WORD(&reg->mailbox1, LSW(addr));
		WRT_REG_WORD(&reg->mailbox8, MSW(addr));

		WRT_REG_WORD(&reg->mailbox2, MSW(dump_dma));
		WRT_REG_WORD(&reg->mailbox3, LSW(dump_dma));
		WRT_REG_WORD(&reg->mailbox2, MSW(LSD(dump_dma)));
		WRT_REG_WORD(&reg->mailbox3, LSW(LSD(dump_dma)));
		WRT_REG_WORD(&reg->mailbox6, MSW(MSD(dump_dma)));
		WRT_REG_WORD(&reg->mailbox7, LSW(MSD(dump_dma)));

@@ -145,76 +140,76 @@ qla27xx_dump_mpi_ram(struct qla_hw_data *ha, uint32_t addr, uint32_t *ram,
		WRT_REG_DWORD(&reg->hccr, HCCRX_SET_HOST_INT);

		ha->flags.mbox_int = 0;
		for (timer = 6000000; timer; timer--) {
			/* Check for pending interrupts. */
		while (timer--) {
			udelay(5);

			stat = RD_REG_DWORD(&reg->host_status);
			if (stat & HSRX_RISC_INT) {
				stat &= 0xff;
			/* Check for pending interrupts. */
			if (!(stat & HSRX_RISC_INT))
				continue;

				if (stat == 0x1 || stat == 0x2 ||
				    stat == 0x10 || stat == 0x11) {
					set_bit(MBX_INTERRUPT,
					    &ha->mbx_cmd_flags);

					mb0 = RD_REG_WORD(&reg->mailbox0);
					RD_REG_WORD(&reg->mailbox1);

					WRT_REG_DWORD(&reg->hccr,
					    HCCRX_CLR_RISC_INT);
					RD_REG_DWORD(&reg->hccr);
					break;
				}
			stat &= 0xff;
			if (stat != 0x1 && stat != 0x2 &&
			    stat != 0x10 && stat != 0x11) {

				/* Clear this intr; it wasn't a mailbox intr */
				WRT_REG_DWORD(&reg->hccr, HCCRX_CLR_RISC_INT);
				RD_REG_DWORD(&reg->hccr);
				continue;
			}
			udelay(5);

			set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags);
			rval = RD_REG_WORD(&reg->mailbox0) & MBS_MASK;
			WRT_REG_DWORD(&reg->hccr, HCCRX_CLR_RISC_INT);
			RD_REG_DWORD(&reg->hccr);
			break;
		}
		ha->flags.mbox_int = 1;
		*nxt = ram + i;

		if (test_and_clear_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags)) {
			rval = mb0 & MBS_MASK;
			for (idx = 0; idx < dwords; idx++)
				ram[cnt + idx] = IS_QLA27XX(ha) ?
				    le32_to_cpu(dump[idx]) : swab32(dump[idx]);
		} else {
			rval = QLA_FUNCTION_FAILED;
		if (!test_and_clear_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags)) {
			/* no interrupt, timed out */
			return rval;
		}
		if (rval) {
			/* error completion status */
			return rval;
		}
		for (j = 0; j < dwords; j++) {
			ram[i + j] =
			    (IS_QLA27XX(ha) || IS_QLA28XX(ha)) ?
			    chunk[j] : swab32(chunk[j]);
		}
	}

	*nxt = rval == QLA_SUCCESS ? &ram[cnt] : NULL;
	return rval;
	*nxt = ram + i;
	return QLA_SUCCESS;
}

int
qla24xx_dump_ram(struct qla_hw_data *ha, uint32_t addr, uint32_t *ram,
    uint32_t ram_dwords, void **nxt)
{
	int rval;
	uint32_t cnt, stat, timer, dwords, idx;
	uint16_t mb0;
	int rval = QLA_FUNCTION_FAILED;
	struct device_reg_24xx __iomem *reg = &ha->iobase->isp24;
	dma_addr_t dump_dma = ha->gid_list_dma;
	uint32_t *dump = (uint32_t *)ha->gid_list;
	uint32_t *chunk = (void *)ha->gid_list;
	uint32_t dwords = qla2x00_gid_list_size(ha) / 4;
	uint32_t stat;
	ulong i, j, timer = 6000000;

	rval = QLA_SUCCESS;
	mb0 = 0;

	WRT_REG_WORD(&reg->mailbox0, MBC_DUMP_RISC_RAM_EXTENDED);
	clear_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags);

	dwords = qla2x00_gid_list_size(ha) / 4;
	for (cnt = 0; cnt < ram_dwords && rval == QLA_SUCCESS;
	    cnt += dwords, addr += dwords) {
		if (cnt + dwords > ram_dwords)
			dwords = ram_dwords - cnt;
	for (i = 0; i < ram_dwords; i += dwords, addr += dwords) {
		if (i + dwords > ram_dwords)
			dwords = ram_dwords - i;

		WRT_REG_WORD(&reg->mailbox0, MBC_DUMP_RISC_RAM_EXTENDED);
		WRT_REG_WORD(&reg->mailbox1, LSW(addr));
		WRT_REG_WORD(&reg->mailbox8, MSW(addr));

		WRT_REG_WORD(&reg->mailbox2, MSW(dump_dma));
		WRT_REG_WORD(&reg->mailbox3, LSW(dump_dma));
		WRT_REG_WORD(&reg->mailbox2, MSW(LSD(dump_dma)));
		WRT_REG_WORD(&reg->mailbox3, LSW(LSD(dump_dma)));
		WRT_REG_WORD(&reg->mailbox6, MSW(MSD(dump_dma)));
		WRT_REG_WORD(&reg->mailbox7, LSW(MSD(dump_dma)));

@@ -223,45 +218,48 @@ qla24xx_dump_ram(struct qla_hw_data *ha, uint32_t addr, uint32_t *ram,
		WRT_REG_DWORD(&reg->hccr, HCCRX_SET_HOST_INT);

		ha->flags.mbox_int = 0;
		for (timer = 6000000; timer; timer--) {
			/* Check for pending interrupts. */
		while (timer--) {
			udelay(5);
			stat = RD_REG_DWORD(&reg->host_status);
			if (stat & HSRX_RISC_INT) {
				stat &= 0xff;

				if (stat == 0x1 || stat == 0x2 ||
				    stat == 0x10 || stat == 0x11) {
					set_bit(MBX_INTERRUPT,
					    &ha->mbx_cmd_flags);
			/* Check for pending interrupts. */
			if (!(stat & HSRX_RISC_INT))
				continue;

					mb0 = RD_REG_WORD(&reg->mailbox0);

					WRT_REG_DWORD(&reg->hccr,
					    HCCRX_CLR_RISC_INT);
					RD_REG_DWORD(&reg->hccr);
					break;
				}

			/* Clear this intr; it wasn't a mailbox intr */
			stat &= 0xff;
			if (stat != 0x1 && stat != 0x2 &&
			    stat != 0x10 && stat != 0x11) {
				WRT_REG_DWORD(&reg->hccr, HCCRX_CLR_RISC_INT);
				RD_REG_DWORD(&reg->hccr);
				continue;
			}
			udelay(5);

			set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags);
			rval = RD_REG_WORD(&reg->mailbox0) & MBS_MASK;
			WRT_REG_DWORD(&reg->hccr, HCCRX_CLR_RISC_INT);
			RD_REG_DWORD(&reg->hccr);
			break;
		}
		ha->flags.mbox_int = 1;
		*nxt = ram + i;

		if (test_and_clear_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags)) {
			rval = mb0 & MBS_MASK;
			for (idx = 0; idx < dwords; idx++)
				ram[cnt + idx] = IS_QLA27XX(ha) ?
				    le32_to_cpu(dump[idx]) : swab32(dump[idx]);
		} else {
			rval = QLA_FUNCTION_FAILED;
		if (!test_and_clear_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags)) {
			/* no interrupt, timed out */
			return rval;
		}
		if (rval) {
			/* error completion status */
			return rval;
		}
		for (j = 0; j < dwords; j++) {
			ram[i + j] =
			    (IS_QLA27XX(ha) || IS_QLA28XX(ha)) ?
			    chunk[j] : swab32(chunk[j]);
		}
	}

	*nxt = rval == QLA_SUCCESS ? &ram[cnt] : NULL;
	return rval;
	*nxt = ram + i;
	return QLA_SUCCESS;
}

static int

@@ -447,7 +445,7 @@ qla2xxx_dump_ram(struct qla_hw_data *ha, uint32_t addr, uint16_t *ram,
		}
	}

	*nxt = rval == QLA_SUCCESS ? &ram[cnt]: NULL;
	*nxt = rval == QLA_SUCCESS ? &ram[cnt] : NULL;
	return rval;
}

@@ -669,7 +667,8 @@ qla25xx_copy_mq(struct qla_hw_data *ha, void *ptr, uint32_t **last_chain)
	struct qla2xxx_mq_chain *mq = ptr;
	device_reg_t *reg;

	if (!ha->mqenable || IS_QLA83XX(ha) || IS_QLA27XX(ha))
	if (!ha->mqenable || IS_QLA83XX(ha) || IS_QLA27XX(ha) ||
	    IS_QLA28XX(ha))
		return ptr;

	mq = ptr;

@@ -2521,7 +2520,7 @@ qla83xx_fw_dump(scsi_qla_host_t *vha, int hardware_locked)
/****************************************************************************/

static inline int
ql_mask_match(uint32_t level)
ql_mask_match(uint level)
{
	return (level & ql2xextended_error_logging) == level;
}

@@ -2540,7 +2539,7 @@ ql_mask_match(uint32_t level)
 * msg:   The message to be displayed.
 */
void
ql_dbg(uint32_t level, scsi_qla_host_t *vha, int32_t id, const char *fmt, ...)
ql_dbg(uint level, scsi_qla_host_t *vha, uint id, const char *fmt, ...)
{
	va_list va;
	struct va_format vaf;

@@ -2583,8 +2582,7 @@ ql_dbg(uint32_t level, scsi_qla_host_t *vha, int32_t id, const char *fmt, ...)
 * msg:   The message to be displayed.
 */
void
ql_dbg_pci(uint32_t level, struct pci_dev *pdev, int32_t id,
    const char *fmt, ...)
ql_dbg_pci(uint level, struct pci_dev *pdev, uint id, const char *fmt, ...)
{
	va_list va;
	struct va_format vaf;

@@ -2620,7 +2618,7 @@ ql_dbg_pci(uint32_t level, struct pci_dev *pdev, int32_t id,
 * msg:   The message to be displayed.
 */
void
ql_log(uint32_t level, scsi_qla_host_t *vha, int32_t id, const char *fmt, ...)
ql_log(uint level, scsi_qla_host_t *vha, uint id, const char *fmt, ...)
{
	va_list va;
	struct va_format vaf;

@@ -2678,8 +2676,7 @@ ql_log(uint32_t level, scsi_qla_host_t *vha, int32_t id, const char *fmt, ...)
 * msg:   The message to be displayed.
 */
void
ql_log_pci(uint32_t level, struct pci_dev *pdev, int32_t id,
    const char *fmt, ...)
ql_log_pci(uint level, struct pci_dev *pdev, uint id, const char *fmt, ...)
{
	va_list va;
	struct va_format vaf;

@@ -2719,7 +2716,7 @@ ql_log_pci(uint32_t level, struct pci_dev *pdev, int32_t id,
}

void
ql_dump_regs(uint32_t level, scsi_qla_host_t *vha, int32_t id)
ql_dump_regs(uint level, scsi_qla_host_t *vha, uint id)
{
	int i;
	struct qla_hw_data *ha = vha->hw;

@@ -2741,13 +2738,12 @@ ql_dump_regs(uint32_t level, scsi_qla_host_t *vha, int32_t id)
	ql_dbg(level, vha, id, "Mailbox registers:\n");
	for (i = 0; i < 6; i++, mbx_reg++)
		ql_dbg(level, vha, id,
		    "mbox[%d] 0x%04x\n", i, RD_REG_WORD(mbx_reg));
		    "mbox[%d] %#04x\n", i, RD_REG_WORD(mbx_reg));
}

void
ql_dump_buffer(uint32_t level, scsi_qla_host_t *vha, int32_t id,
    uint8_t *buf, uint size)
ql_dump_buffer(uint level, scsi_qla_host_t *vha, uint id, void *buf, uint size)
{
	uint cnt;
@@ -318,20 +318,20 @@ struct qla2xxx_fw_dump {
 * as compared to other log levels.
 */

extern int ql_errlev;
extern uint ql_errlev;

void __attribute__((format (printf, 4, 5)))
ql_dbg(uint32_t, scsi_qla_host_t *vha, int32_t, const char *fmt, ...);
ql_dbg(uint, scsi_qla_host_t *vha, uint, const char *fmt, ...);
void __attribute__((format (printf, 4, 5)))
ql_dbg_pci(uint32_t, struct pci_dev *pdev, int32_t, const char *fmt, ...);
ql_dbg_pci(uint, struct pci_dev *pdev, uint, const char *fmt, ...);
void __attribute__((format (printf, 4, 5)))
ql_dbg_qp(uint32_t, struct qla_qpair *, int32_t, const char *fmt, ...);

void __attribute__((format (printf, 4, 5)))
ql_log(uint32_t, scsi_qla_host_t *vha, int32_t, const char *fmt, ...);
ql_log(uint, scsi_qla_host_t *vha, uint, const char *fmt, ...);
void __attribute__((format (printf, 4, 5)))
ql_log_pci(uint32_t, struct pci_dev *pdev, int32_t, const char *fmt, ...);
ql_log_pci(uint, struct pci_dev *pdev, uint, const char *fmt, ...);

void __attribute__((format (printf, 4, 5)))
ql_log_qp(uint32_t, struct qla_qpair *, int32_t, const char *fmt, ...);
@@ -35,6 +35,7 @@
#include <scsi/scsi_bsg_fc.h>

#include "qla_bsg.h"
#include "qla_dsd.h"
#include "qla_nx.h"
#include "qla_nx2.h"
#include "qla_nvme.h"

@@ -545,7 +546,7 @@ typedef struct srb {
	u32 gen2;	/* scratch */
	int rc;
	int retry_count;
	struct completion comp;
	struct completion *comp;
	union {
		struct srb_iocb iocb_cmd;
		struct bsg_job *bsg_job;

@@ -1033,6 +1034,7 @@ struct mbx_cmd_32 {
#define MBC_GET_FIRMWARE_VERSION	8	/* Get firmware revision. */
#define MBC_LOAD_RISC_RAM	9	/* Load RAM command. */
#define MBC_DUMP_RISC_RAM	0xa	/* Dump RAM command. */
#define MBC_SECURE_FLASH_UPDATE	0xa	/* Secure Flash Update(28xx) */
#define MBC_LOAD_RISC_RAM_EXTENDED	0xb	/* Load RAM extended. */
#define MBC_DUMP_RISC_RAM_EXTENDED	0xc	/* Dump RAM extended. */
#define MBC_WRITE_RAM_WORD_EXTENDED	0xd	/* Write RAM word extended */

@@ -1203,6 +1205,10 @@ struct mbx_cmd_32 {
#define QLA27XX_IMG_STATUS_VER_MAJOR	0x01
#define QLA27XX_IMG_STATUS_VER_MINOR	0x00
#define QLA27XX_IMG_STATUS_SIGN	0xFACEFADE
#define QLA28XX_IMG_STATUS_SIGN	0xFACEFADF
#define QLA28XX_AUX_IMG_STATUS_SIGN	0xFACEFAED
#define QLA27XX_DEFAULT_IMAGE	0
#define QLA27XX_PRIMARY_IMAGE	1
#define QLA27XX_SECONDARY_IMAGE	2

@@ -1323,8 +1329,8 @@ typedef struct {
	uint16_t response_q_inpointer;
	uint16_t request_q_length;
	uint16_t response_q_length;
	uint32_t request_q_address[2];
	uint32_t response_q_address[2];
	__le64 request_q_address __packed;
	__le64 response_q_address __packed;

	uint16_t lun_enables;
	uint8_t command_resource_count;

@@ -1749,12 +1755,10 @@ typedef struct {
	uint16_t dseg_count;		/* Data segment count. */
	uint8_t scsi_cdb[MAX_CMDSZ];	/* SCSI command words. */
	uint32_t byte_count;		/* Total byte count. */
	uint32_t dseg_0_address;	/* Data segment 0 address. */
	uint32_t dseg_0_length;		/* Data segment 0 length. */
	uint32_t dseg_1_address;	/* Data segment 1 address. */
	uint32_t dseg_1_length;		/* Data segment 1 length. */
	uint32_t dseg_2_address;	/* Data segment 2 address. */
	uint32_t dseg_2_length;		/* Data segment 2 length. */
	union {
		struct dsd32 dsd32[3];
		struct dsd64 dsd64[2];
	};
} cmd_entry_t;

/*

@@ -1775,10 +1779,7 @@ typedef struct {
	uint16_t dseg_count;		/* Data segment count. */
	uint8_t scsi_cdb[MAX_CMDSZ];	/* SCSI command words. */
	uint32_t byte_count;		/* Total byte count. */
	uint32_t dseg_0_address[2];	/* Data segment 0 address. */
	uint32_t dseg_0_length;		/* Data segment 0 length. */
	uint32_t dseg_1_address[2];	/* Data segment 1 address. */
	uint32_t dseg_1_length;		/* Data segment 1 length. */
	struct dsd64 dsd[2];
} cmd_a64_entry_t, request_t;

/*

@@ -1791,20 +1792,7 @@ typedef struct {
	uint8_t sys_define;		/* System defined. */
	uint8_t entry_status;		/* Entry Status. */
	uint32_t reserved;
	uint32_t dseg_0_address;	/* Data segment 0 address. */
	uint32_t dseg_0_length;		/* Data segment 0 length. */
	uint32_t dseg_1_address;	/* Data segment 1 address. */
	uint32_t dseg_1_length;		/* Data segment 1 length. */
	uint32_t dseg_2_address;	/* Data segment 2 address. */
	uint32_t dseg_2_length;		/* Data segment 2 length. */
	uint32_t dseg_3_address;	/* Data segment 3 address. */
	uint32_t dseg_3_length;		/* Data segment 3 length. */
	uint32_t dseg_4_address;	/* Data segment 4 address. */
	uint32_t dseg_4_length;		/* Data segment 4 length. */
	uint32_t dseg_5_address;	/* Data segment 5 address. */
	uint32_t dseg_5_length;		/* Data segment 5 length. */
	uint32_t dseg_6_address;	/* Data segment 6 address. */
	uint32_t dseg_6_length;		/* Data segment 6 length. */
	struct dsd32 dsd[7];
} cont_entry_t;

/*

@@ -1816,16 +1804,7 @@ typedef struct {
	uint8_t entry_count;		/* Entry count. */
	uint8_t sys_define;		/* System defined. */
	uint8_t entry_status;		/* Entry Status. */
	uint32_t dseg_0_address[2];	/* Data segment 0 address. */
	uint32_t dseg_0_length;		/* Data segment 0 length. */
	uint32_t dseg_1_address[2];	/* Data segment 1 address. */
	uint32_t dseg_1_length;		/* Data segment 1 length. */
	uint32_t dseg_2_address [2];	/* Data segment 2 address. */
	uint32_t dseg_2_length;		/* Data segment 2 length. */
	uint32_t dseg_3_address[2];	/* Data segment 3 address. */
	uint32_t dseg_3_length;		/* Data segment 3 length. */
	uint32_t dseg_4_address[2];	/* Data segment 4 address. */
	uint32_t dseg_4_length;		/* Data segment 4 length. */
	struct dsd64 dsd[5];
} cont_a64_entry_t;

#define PO_MODE_DIF_INSERT	0

@@ -1869,8 +1848,7 @@ struct crc_context {
		uint16_t reserved_2;
		uint16_t reserved_3;
		uint32_t reserved_4;
		uint32_t data_address[2];
		uint32_t data_length;
		struct dsd64 data_dsd;
		uint32_t reserved_5[2];
		uint32_t reserved_6;
	} nobundling;

@@ -1880,11 +1858,8 @@ struct crc_context {
		uint16_t reserved_1;
		__le16 dseg_count;	/* Data segment count */
		uint32_t reserved_2;
		uint32_t data_address[2];
		uint32_t data_length;
		uint32_t dif_address[2];
		uint32_t dif_length;	/* Data segment 0
					 * length */
		struct dsd64 data_dsd;
		struct dsd64 dif_dsd;
	} bundling;
} u;

@@ -2083,10 +2058,8 @@ typedef struct {
	uint32_t handle2;
	uint32_t rsp_bytecount;
	uint32_t req_bytecount;
	uint32_t dseg_req_address[2];	/* Data segment 0 address. */
	uint32_t dseg_req_length;	/* Data segment 0 length. */
	uint32_t dseg_rsp_address[2];	/* Data segment 1 address. */
	uint32_t dseg_rsp_length;	/* Data segment 1 length. */
	struct dsd64 req_dsd;
	struct dsd64 rsp_dsd;
} ms_iocb_entry_t;

@@ -2258,7 +2231,10 @@ typedef enum {
	FCT_BROADCAST,
	FCT_INITIATOR,
	FCT_TARGET,
	FCT_NVME
	FCT_NVME_INITIATOR = 0x10,
	FCT_NVME_TARGET = 0x20,
	FCT_NVME_DISCOVERY = 0x40,
	FCT_NVME = 0xf0,
} fc_port_type_t;

enum qla_sess_deletion {

@@ -2463,13 +2439,7 @@ struct event_arg {
#define FCS_DEVICE_LOST		3
#define FCS_ONLINE		4

static const char * const port_state_str[] = {
	"Unknown",
	"UNCONFIGURED",
	"DEAD",
	"LOST",
	"ONLINE"
};
extern const char *const port_state_str[5];

/*
 * FC port flags.

@@ -2672,6 +2642,7 @@ struct ct_fdmiv2_hba_attributes {
#define FDMI_PORT_SPEED_8GB		0x10
#define FDMI_PORT_SPEED_16GB		0x20
#define FDMI_PORT_SPEED_32GB		0x40
#define FDMI_PORT_SPEED_64GB		0x80
#define FDMI_PORT_SPEED_UNKNOWN		0x8000

#define FC_CLASS_2	0x04

@@ -3060,7 +3031,7 @@ struct sns_cmd_pkt {
	struct {
		uint16_t buffer_length;
		uint16_t reserved_1;
		uint32_t buffer_address[2];
		__le64 buffer_address __packed;
		uint16_t subcommand_length;
		uint16_t reserved_2;
		uint16_t subcommand;

@@ -3130,10 +3101,10 @@ struct rsp_que;
struct isp_operations {

	int (*pci_config) (struct scsi_qla_host *);
	void (*reset_chip) (struct scsi_qla_host *);
	int (*reset_chip)(struct scsi_qla_host *);
	int (*chip_diag) (struct scsi_qla_host *);
	void (*config_rings) (struct scsi_qla_host *);
	void (*reset_adapter) (struct scsi_qla_host *);
	int (*reset_adapter)(struct scsi_qla_host *);
	int (*nvram_config) (struct scsi_qla_host *);
	void (*update_fw_options) (struct scsi_qla_host *);
	int (*load_risc) (struct scsi_qla_host *, uint32_t *);

@@ -3159,9 +3130,9 @@ struct isp_operations {
	void *(*prep_ms_fdmi_iocb) (struct scsi_qla_host *, uint32_t,
	    uint32_t);

	uint8_t *(*read_nvram) (struct scsi_qla_host *, uint8_t *,
	uint8_t *(*read_nvram)(struct scsi_qla_host *, void *,
	    uint32_t, uint32_t);
	int (*write_nvram) (struct scsi_qla_host *, uint8_t *, uint32_t,
	int (*write_nvram)(struct scsi_qla_host *, void *, uint32_t,
	    uint32_t);

	void (*fw_dump) (struct scsi_qla_host *, int);

@@ -3170,16 +3141,16 @@ struct isp_operations {
	int (*beacon_off) (struct scsi_qla_host *);
	void (*beacon_blink) (struct scsi_qla_host *);

	uint8_t * (*read_optrom) (struct scsi_qla_host *, uint8_t *,
	void *(*read_optrom)(struct scsi_qla_host *, void *,
	    uint32_t, uint32_t);
	int (*write_optrom) (struct scsi_qla_host *, uint8_t *, uint32_t,
	int (*write_optrom)(struct scsi_qla_host *, void *, uint32_t,
	    uint32_t);

	int (*get_flash_version) (struct scsi_qla_host *, void *);
	int (*start_scsi) (srb_t *);
	int (*start_scsi_mq) (srb_t *);
	int (*abort_isp) (struct scsi_qla_host *);
	int (*iospace_config)(struct qla_hw_data*);
	int (*iospace_config)(struct qla_hw_data *);
	int (*initialize_adapter)(struct scsi_qla_host *);
};

@@ -3368,7 +3339,8 @@ struct qla_tc_param {
#define QLA_MQ_SIZE 32
#define QLA_MAX_QUEUES 256
#define ISP_QUE_REG(ha, id) \
	((ha->mqenable || IS_QLA83XX(ha) || IS_QLA27XX(ha)) ? \
	((ha->mqenable || IS_QLA83XX(ha) || \
	  IS_QLA27XX(ha) || IS_QLA28XX(ha)) ? \
	 ((void __iomem *)ha->mqiobase + (QLA_QUE_PAGE * id)) :\
	 ((void __iomem *)ha->iobase))
#define QLA_REQ_QUE_ID(tag) \

@@ -3621,6 +3593,8 @@ struct qla_hw_data {
		uint32_t rida_fmt2:1;
		uint32_t purge_mbox:1;
		uint32_t n2n_bigger:1;
		uint32_t secure_adapter:1;
		uint32_t secure_fw:1;
	} flags;

	uint16_t max_exchg;

@@ -3703,6 +3677,7 @@ struct qla_hw_data {
#define PORT_SPEED_8GB	0x04
#define PORT_SPEED_16GB	0x05
#define PORT_SPEED_32GB	0x06
#define PORT_SPEED_64GB	0x07
#define PORT_SPEED_10GB	0x13
	uint16_t link_data_rate;	/* F/W operating speed */
	uint16_t set_data_rate;		/* Set by user */

@@ -3729,6 +3704,11 @@ struct qla_hw_data {
#define PCI_DEVICE_ID_QLOGIC_ISP2071	0x2071
#define PCI_DEVICE_ID_QLOGIC_ISP2271	0x2271
#define PCI_DEVICE_ID_QLOGIC_ISP2261	0x2261
#define PCI_DEVICE_ID_QLOGIC_ISP2061	0x2061
#define PCI_DEVICE_ID_QLOGIC_ISP2081	0x2081
#define PCI_DEVICE_ID_QLOGIC_ISP2089	0x2089
#define PCI_DEVICE_ID_QLOGIC_ISP2281	0x2281
#define PCI_DEVICE_ID_QLOGIC_ISP2289	0x2289

	uint32_t isp_type;
#define DT_ISP2100	BIT_0

@@ -3753,7 +3733,12 @@ struct qla_hw_data {
#define DT_ISP2071	BIT_19
#define DT_ISP2271	BIT_20
#define DT_ISP2261	BIT_21
#define DT_ISP_LAST	(DT_ISP2261 << 1)
#define DT_ISP2061	BIT_22
#define DT_ISP2081	BIT_23
#define DT_ISP2089	BIT_24
#define DT_ISP2281	BIT_25
#define DT_ISP2289	BIT_26
#define DT_ISP_LAST	(DT_ISP2289 << 1)

	uint32_t device_type;
#define DT_T10_PI	BIT_25

@@ -3788,6 +3773,8 @@ struct qla_hw_data {
#define IS_QLA2071(ha)	(DT_MASK(ha) & DT_ISP2071)
#define IS_QLA2271(ha)	(DT_MASK(ha) & DT_ISP2271)
#define IS_QLA2261(ha)	(DT_MASK(ha) & DT_ISP2261)
#define IS_QLA2081(ha)	(DT_MASK(ha) & DT_ISP2081)
#define IS_QLA2281(ha)	(DT_MASK(ha) & DT_ISP2281)

#define IS_QLA23XX(ha)	(IS_QLA2300(ha) || IS_QLA2312(ha) || IS_QLA2322(ha) || \
			 IS_QLA6312(ha) || IS_QLA6322(ha))

@@ -3797,6 +3784,7 @@ struct qla_hw_data {
#define IS_QLA83XX(ha)	(IS_QLA2031(ha) || IS_QLA8031(ha))
#define IS_QLA84XX(ha)	(IS_QLA8432(ha))
#define IS_QLA27XX(ha)	(IS_QLA2071(ha) || IS_QLA2271(ha) || IS_QLA2261(ha))
#define IS_QLA28XX(ha)	(IS_QLA2081(ha) || IS_QLA2281(ha))
#define IS_QLA24XX_TYPE(ha)	(IS_QLA24XX(ha) || IS_QLA54XX(ha) || \
				 IS_QLA84XX(ha))
#define IS_CNA_CAPABLE(ha)	(IS_QLA81XX(ha) || IS_QLA82XX(ha) || \

@@ -3805,14 +3793,15 @@ struct qla_hw_data {
#define IS_QLA2XXX_MIDTYPE(ha)	(IS_QLA24XX(ha) || IS_QLA84XX(ha) || \
				 IS_QLA25XX(ha) || IS_QLA81XX(ha) || \
				 IS_QLA82XX(ha) || IS_QLA83XX(ha) || \
				 IS_QLA8044(ha) || IS_QLA27XX(ha))
				 IS_QLA8044(ha) || IS_QLA27XX(ha) || \
				 IS_QLA28XX(ha))
#define IS_MSIX_NACK_CAPABLE(ha) (IS_QLA81XX(ha) || IS_QLA83XX(ha) || \
				 IS_QLA27XX(ha))
				 IS_QLA27XX(ha) || IS_QLA28XX(ha))
#define IS_NOPOLLING_TYPE(ha)	(IS_QLA81XX(ha) && (ha)->flags.msix_enabled)
#define IS_FAC_REQUIRED(ha)	(IS_QLA81XX(ha) || IS_QLA83XX(ha) || \
				 IS_QLA27XX(ha))
				 IS_QLA27XX(ha) || IS_QLA28XX(ha))
#define IS_NOCACHE_VPD_TYPE(ha)	(IS_QLA81XX(ha) || IS_QLA83XX(ha) || \
				 IS_QLA27XX(ha))
				 IS_QLA27XX(ha) || IS_QLA28XX(ha))
#define IS_ALOGIO_CAPABLE(ha)	(IS_QLA23XX(ha) || IS_FWI2_CAPABLE(ha))

#define IS_T10_PI_CAPABLE(ha)	((ha)->device_type & DT_T10_PI)

@@ -3823,28 +3812,34 @@ struct qla_hw_data {
@ -3823,28 +3812,34 @@ struct qla_hw_data {
|
|||
#define HAS_EXTENDED_IDS(ha) ((ha)->device_type & DT_EXTENDED_IDS)
|
||||
#define IS_CT6_SUPPORTED(ha) ((ha)->device_type & DT_CT6_SUPPORTED)
|
||||
#define IS_MQUE_CAPABLE(ha) ((ha)->mqenable || IS_QLA83XX(ha) || \
|
||||
IS_QLA27XX(ha))
|
||||
#define IS_BIDI_CAPABLE(ha) ((IS_QLA25XX(ha) || IS_QLA2031(ha)))
|
||||
IS_QLA27XX(ha) || IS_QLA28XX(ha))
|
||||
#define IS_BIDI_CAPABLE(ha) \
|
||||
(IS_QLA25XX(ha) || IS_QLA2031(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
|
||||
/* Bit 21 of fw_attributes decides the MCTP capabilities */
|
||||
#define IS_MCTP_CAPABLE(ha) (IS_QLA2031(ha) && \
|
||||
((ha)->fw_attributes_ext[0] & BIT_0))
|
||||
#define IS_PI_UNINIT_CAPABLE(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha))
|
||||
#define IS_PI_IPGUARD_CAPABLE(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha))
|
||||
#define IS_PI_DIFB_DIX0_CAPABLE(ha) (0)
|
||||
#define IS_PI_SPLIT_DET_CAPABLE_HBA(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha))
|
||||
#define IS_PI_SPLIT_DET_CAPABLE_HBA(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha) || \
|
||||
IS_QLA28XX(ha))
|
||||
#define IS_PI_SPLIT_DET_CAPABLE(ha) (IS_PI_SPLIT_DET_CAPABLE_HBA(ha) && \
|
||||
(((ha)->fw_attributes_h << 16 | (ha)->fw_attributes) & BIT_22))
|
||||
#define IS_ATIO_MSIX_CAPABLE(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha))
|
||||
#define IS_ATIO_MSIX_CAPABLE(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha) || \
|
||||
IS_QLA28XX(ha))
|
||||
#define IS_TGT_MODE_CAPABLE(ha) (ha->tgt.atio_q_length)
|
||||
#define IS_SHADOW_REG_CAPABLE(ha) (IS_QLA27XX(ha))
|
||||
#define IS_DPORT_CAPABLE(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha))
|
||||
#define IS_FAWWN_CAPABLE(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha))
|
||||
#define IS_SHADOW_REG_CAPABLE(ha) (IS_QLA27XX(ha) || IS_QLA28XX(ha))
|
||||
#define IS_DPORT_CAPABLE(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha) || \
|
||||
IS_QLA28XX(ha))
|
||||
#define IS_FAWWN_CAPABLE(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha) || \
|
||||
IS_QLA28XX(ha))
|
||||
#define IS_EXCHG_OFFLD_CAPABLE(ha) \
|
||||
(IS_QLA81XX(ha) || IS_QLA83XX(ha) || IS_QLA27XX(ha))
|
||||
(IS_QLA81XX(ha) || IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
|
||||
#define IS_EXLOGIN_OFFLD_CAPABLE(ha) \
|
||||
(IS_QLA25XX(ha) || IS_QLA81XX(ha) || IS_QLA83XX(ha) || IS_QLA27XX(ha))
|
||||
(IS_QLA25XX(ha) || IS_QLA81XX(ha) || IS_QLA83XX(ha) || \
|
||||
IS_QLA27XX(ha) || IS_QLA28XX(ha))
|
||||
#define USE_ASYNC_SCAN(ha) (IS_QLA25XX(ha) || IS_QLA81XX(ha) ||\
|
||||
IS_QLA83XX(ha) || IS_QLA27XX(ha))
|
||||
IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
|
||||
|
||||
/* HBA serial number */
|
||||
uint8_t serial0;
|
||||
|
@ -3888,6 +3883,9 @@ struct qla_hw_data {
|
|||
void *sfp_data;
|
||||
dma_addr_t sfp_data_dma;
|
||||
|
||||
void *flt;
|
||||
dma_addr_t flt_dma;
|
||||
|
||||
#define XGMAC_DATA_SIZE 4096
|
||||
void *xgmac_data;
|
||||
dma_addr_t xgmac_data_dma;
|
||||
|
@ -3999,18 +3997,23 @@ struct qla_hw_data {
|
|||
uint8_t fw_seriallink_options[4];
|
||||
uint16_t fw_seriallink_options24[4];
|
||||
|
||||
uint8_t serdes_version[3];
|
||||
uint8_t mpi_version[3];
|
||||
uint32_t mpi_capabilities;
|
||||
uint8_t phy_version[3];
|
||||
uint8_t pep_version[3];
|
||||
|
||||
/* Firmware dump template */
|
||||
void *fw_dump_template;
|
||||
uint32_t fw_dump_template_len;
|
||||
/* Firmware dump information. */
|
||||
struct fwdt {
|
||||
void *template;
|
||||
ulong length;
|
||||
ulong dump_size;
|
||||
} fwdt[2];
|
||||
struct qla2xxx_fw_dump *fw_dump;
|
||||
uint32_t fw_dump_len;
|
||||
int fw_dumped;
|
||||
u32 fw_dump_alloc_len;
|
||||
bool fw_dumped;
|
||||
bool fw_dump_mpi;
|
||||
unsigned long fw_dump_cap_flags;
|
||||
#define RISC_PAUSE_CMPL 0
|
||||
#define DMA_SHUTDOWN_CMPL 1
|
||||
|
@ -4049,7 +4052,6 @@ struct qla_hw_data {
|
|||
uint16_t product_id[4];
|
||||
|
||||
uint8_t model_number[16+1];
|
||||
#define BINZERO "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
|
||||
char model_desc[80];
|
||||
uint8_t adapter_id[16+1];
|
||||
|
||||
|
@ -4089,22 +4091,28 @@ struct qla_hw_data {
|
|||
uint32_t fdt_protect_sec_cmd;
|
||||
uint32_t fdt_wrt_sts_reg_cmd;
|
||||
|
||||
uint32_t flt_region_flt;
|
||||
uint32_t flt_region_fdt;
|
||||
uint32_t flt_region_boot;
|
||||
uint32_t flt_region_boot_sec;
|
||||
uint32_t flt_region_fw;
|
||||
uint32_t flt_region_fw_sec;
|
||||
uint32_t flt_region_vpd_nvram;
|
||||
uint32_t flt_region_vpd;
|
||||
uint32_t flt_region_vpd_sec;
|
||||
uint32_t flt_region_nvram;
|
||||
uint32_t flt_region_npiv_conf;
|
||||
uint32_t flt_region_gold_fw;
|
||||
uint32_t flt_region_fcp_prio;
|
||||
uint32_t flt_region_bootload;
|
||||
uint32_t flt_region_img_status_pri;
|
||||
uint32_t flt_region_img_status_sec;
|
||||
struct {
|
||||
uint32_t flt_region_flt;
|
||||
uint32_t flt_region_fdt;
|
||||
uint32_t flt_region_boot;
|
||||
uint32_t flt_region_boot_sec;
|
||||
uint32_t flt_region_fw;
|
||||
uint32_t flt_region_fw_sec;
|
||||
uint32_t flt_region_vpd_nvram;
|
||||
uint32_t flt_region_vpd_nvram_sec;
|
||||
uint32_t flt_region_vpd;
|
||||
uint32_t flt_region_vpd_sec;
|
||||
uint32_t flt_region_nvram;
|
||||
uint32_t flt_region_nvram_sec;
|
||||
uint32_t flt_region_npiv_conf;
|
||||
uint32_t flt_region_gold_fw;
|
||||
uint32_t flt_region_fcp_prio;
|
||||
uint32_t flt_region_bootload;
|
||||
uint32_t flt_region_img_status_pri;
|
||||
uint32_t flt_region_img_status_sec;
|
||||
uint32_t flt_region_aux_img_status_pri;
|
||||
uint32_t flt_region_aux_img_status_sec;
|
||||
};
|
||||
uint8_t active_image;
|
||||
|
||||
/* Needed for BEACON */
|
||||
|
@ -4197,8 +4205,8 @@ struct qla_hw_data {
|
|||
struct qlt_hw_data tgt;
|
||||
int allow_cna_fw_dump;
|
||||
uint32_t fw_ability_mask;
|
||||
uint16_t min_link_speed;
|
||||
uint16_t max_speed_sup;
|
||||
uint16_t min_supported_speed;
|
||||
uint16_t max_supported_speed;
|
||||
|
||||
/* DMA pool for the DIF bundling buffers */
|
||||
struct dma_pool *dif_bundl_pool;
|
||||
|
@ -4225,9 +4233,20 @@ struct qla_hw_data {
|
|||
|
||||
atomic_t zio_threshold;
|
||||
uint16_t last_zio_threshold;
|
||||
|
||||
#define DEFAULT_ZIO_THRESHOLD 5
|
||||
};
|
||||
|
||||
struct active_regions {
|
||||
uint8_t global;
|
||||
struct {
|
||||
uint8_t board_config;
|
||||
uint8_t vpd_nvram;
|
||||
uint8_t npiv_config_0_1;
|
||||
uint8_t npiv_config_2_3;
|
||||
} aux;
|
||||
};
|
||||
|
||||
#define FW_ABILITY_MAX_SPEED_MASK 0xFUL
|
||||
#define FW_ABILITY_MAX_SPEED_16G 0x0
|
||||
#define FW_ABILITY_MAX_SPEED_32G 0x1
|
||||
|
@ -4315,6 +4334,7 @@ typedef struct scsi_qla_host {
|
|||
#define N2N_LOGIN_NEEDED 30
|
||||
#define IOCB_WORK_ACTIVE 31
|
||||
#define SET_ZIO_THRESHOLD_NEEDED 32
|
||||
#define ISP_ABORT_TO_ROM 33
|
||||
|
||||
unsigned long pci_flags;
|
||||
#define PFLG_DISCONNECTED 0 /* PCI device removed */
|
||||
|
@ -4429,7 +4449,7 @@ typedef struct scsi_qla_host {
|
|||
int fcport_count;
|
||||
wait_queue_head_t fcport_waitQ;
|
||||
wait_queue_head_t vref_waitq;
|
||||
uint8_t min_link_speed_feat;
|
||||
uint8_t min_supported_speed;
|
||||
uint8_t n2n_node_name[WWN_SIZE];
|
||||
uint8_t n2n_port_name[WWN_SIZE];
|
||||
uint16_t n2n_id;
|
||||
|
@ -4441,14 +4461,21 @@ typedef struct scsi_qla_host {
|
|||
|
||||
struct qla27xx_image_status {
|
||||
uint8_t image_status_mask;
|
||||
uint16_t generation_number;
|
||||
uint8_t reserved[3];
|
||||
uint8_t ver_minor;
|
||||
uint16_t generation;
|
||||
uint8_t ver_major;
|
||||
uint8_t ver_minor;
|
||||
uint8_t bitmap; /* 28xx only */
|
||||
uint8_t reserved[2];
|
||||
uint32_t checksum;
|
||||
uint32_t signature;
|
||||
} __packed;
|
||||
|
||||
/* 28xx aux image status bimap values */
|
||||
#define QLA28XX_AUX_IMG_BOARD_CONFIG BIT_0
|
||||
#define QLA28XX_AUX_IMG_VPD_NVRAM BIT_1
|
||||
#define QLA28XX_AUX_IMG_NPIV_CONFIG_0_1 BIT_2
|
||||
#define QLA28XX_AUX_IMG_NPIV_CONFIG_2_3 BIT_3
|
#define SET_VP_IDX 1
#define SET_AL_PA 2
#define RESET_VP_IDX 3

@@ -4495,6 +4522,24 @@ struct qla2_sgx {
} \
}

#define SFUB_CHECKSUM_SIZE 4

struct secure_flash_update_block {
uint32_t block_info;
uint32_t signature_lo;
uint32_t signature_hi;
uint32_t signature_upper[0x3e];
};

struct secure_flash_update_block_pk {
uint32_t block_info;
uint32_t signature_lo;
uint32_t signature_hi;
uint32_t signature_upper[0x3e];
uint32_t public_key[0x41];
};

/*
 * Macros to help code, maintain, etc.
 */

@@ -4595,6 +4640,7 @@ struct qla2_sgx {
#define OPTROM_SIZE_81XX 0x400000
#define OPTROM_SIZE_82XX 0x800000
#define OPTROM_SIZE_83XX 0x1000000
#define OPTROM_SIZE_28XX 0x2000000

#define OPTROM_BURST_SIZE 0x1000
#define OPTROM_BURST_DWORDS (OPTROM_BURST_SIZE / 4)

@@ -4691,10 +4737,13 @@ struct sff_8247_a0 {
#define AUTO_DETECT_SFP_SUPPORT(_vha)\
(ql2xautodetectsfp && !_vha->vp_idx && \
(IS_QLA25XX(_vha->hw) || IS_QLA81XX(_vha->hw) ||\
IS_QLA83XX(_vha->hw) || IS_QLA27XX(_vha->hw)))
IS_QLA83XX(_vha->hw) || IS_QLA27XX(_vha->hw) || \
IS_QLA28XX(_vha->hw)))

#define FLASH_SEMAPHORE_REGISTER_ADDR 0x00101016

#define USER_CTRL_IRQ(_ha) (ql2xuctrlirq && QLA_TGT_MODE_ENABLED() && \
(IS_QLA27XX(_ha) || IS_QLA83XX(_ha)))
(IS_QLA27XX(_ha) || IS_QLA28XX(_ha) || IS_QLA83XX(_ha)))

#define SAVE_TOPO(_ha) { \
if (_ha->current_topology) \
@@ -41,6 +41,7 @@ static int
qla2x00_dfs_tgt_sess_open(struct inode *inode, struct file *file)
{
scsi_qla_host_t *vha = inode->i_private;

return single_open(file, qla2x00_dfs_tgt_sess_show, vha);
}

@@ -161,6 +162,7 @@ static int
qla_dfs_fw_resource_cnt_open(struct inode *inode, struct file *file)
{
struct scsi_qla_host *vha = inode->i_private;

return single_open(file, qla_dfs_fw_resource_cnt_show, vha);
}

@@ -250,6 +252,7 @@ static int
qla_dfs_tgt_counters_open(struct inode *inode, struct file *file)
{
struct scsi_qla_host *vha = inode->i_private;

return single_open(file, qla_dfs_tgt_counters_show, vha);
}

@@ -386,7 +389,7 @@ qla_dfs_naqp_write(struct file *file, const char __user *buffer,
int rc = 0;
unsigned long num_act_qp;

if (!(IS_QLA27XX(ha) || IS_QLA83XX(ha))) {
if (!(IS_QLA27XX(ha) || IS_QLA83XX(ha) || IS_QLA28XX(ha))) {
pr_err("host%ld: this adapter does not support Multi Q.",
vha->host_no);
return -EINVAL;

@@ -438,7 +441,7 @@ qla2x00_dfs_setup(scsi_qla_host_t *vha)
struct qla_hw_data *ha = vha->hw;

if (!IS_QLA25XX(ha) && !IS_QLA81XX(ha) && !IS_QLA83XX(ha) &&
!IS_QLA27XX(ha))
!IS_QLA27XX(ha) && !IS_QLA28XX(ha))
goto out;
if (!ha->fce)
goto out;

@@ -474,7 +477,7 @@ qla2x00_dfs_setup(scsi_qla_host_t *vha)
ha->tgt.dfs_tgt_sess = debugfs_create_file("tgt_sess",
S_IRUSR, ha->dfs_dir, vha, &dfs_tgt_sess_ops);

if (IS_QLA27XX(ha) || IS_QLA83XX(ha))
if (IS_QLA27XX(ha) || IS_QLA83XX(ha) || IS_QLA28XX(ha))
ha->tgt.dfs_naqp = debugfs_create_file("naqp",
0400, ha->dfs_dir, vha, &dfs_naqp_ops);
out:
@@ -0,0 +1,30 @@
#ifndef _QLA_DSD_H_
#define _QLA_DSD_H_

/* 32-bit data segment descriptor (8 bytes) */
struct dsd32 {
__le32 address;
__le32 length;
};

static inline void append_dsd32(struct dsd32 **dsd, struct scatterlist *sg)
{
put_unaligned_le32(sg_dma_address(sg), &(*dsd)->address);
put_unaligned_le32(sg_dma_len(sg), &(*dsd)->length);
(*dsd)++;
}

/* 64-bit data segment descriptor (12 bytes) */
struct dsd64 {
__le64 address;
__le32 length;
} __packed;

static inline void append_dsd64(struct dsd64 **dsd, struct scatterlist *sg)
{
put_unaligned_le64(sg_dma_address(sg), &(*dsd)->address);
put_unaligned_le32(sg_dma_len(sg), &(*dsd)->length);
(*dsd)++;
}

#endif
@@ -10,6 +10,8 @@
#include <linux/nvme.h>
#include <linux/nvme-fc.h>

#include "qla_dsd.h"

#define MBS_CHECKSUM_ERROR 0x4010
#define MBS_INVALID_PRODUCT_KEY 0x4020

@@ -339,9 +341,9 @@ struct init_cb_24xx {

uint16_t prio_request_q_length;

uint32_t request_q_address[2];
uint32_t response_q_address[2];
uint32_t prio_request_q_address[2];
__le64 request_q_address __packed;
__le64 response_q_address __packed;
__le64 prio_request_q_address __packed;

uint16_t msix;
uint16_t msix_atio;

@@ -349,7 +351,7 @@ struct init_cb_24xx {

uint16_t atio_q_inpointer;
uint16_t atio_q_length;
uint32_t atio_q_address[2];
__le64 atio_q_address __packed;

uint16_t interrupt_delay_timer; /* 100us increments. */
uint16_t login_timeout;

@@ -453,7 +455,7 @@ struct cmd_bidir {
#define BD_WRITE_DATA BIT_0

uint16_t fcp_cmnd_dseg_len; /* Data segment length. */
uint32_t fcp_cmnd_dseg_address[2]; /* Data segment address. */
__le64 fcp_cmnd_dseg_address __packed;/* Data segment address. */

uint16_t reserved[2]; /* Reserved */

@@ -463,8 +465,7 @@ struct cmd_bidir {
uint8_t port_id[3]; /* PortID of destination port.*/
uint8_t vp_index;

uint32_t fcp_data_dseg_address[2]; /* Data segment address. */
uint16_t fcp_data_dseg_len; /* Data segment length. */
struct dsd64 fcp_dsd;
};

#define COMMAND_TYPE_6 0x48 /* Command Type 6 entry */

@@ -491,18 +492,18 @@ struct cmd_type_6 {
#define CF_READ_DATA BIT_1
#define CF_WRITE_DATA BIT_0

uint16_t fcp_cmnd_dseg_len; /* Data segment length. */
uint32_t fcp_cmnd_dseg_address[2]; /* Data segment address. */

uint32_t fcp_rsp_dseg_address[2]; /* Data segment address. */
uint16_t fcp_cmnd_dseg_len; /* Data segment length. */
/* Data segment address. */
__le64 fcp_cmnd_dseg_address __packed;
/* Data segment address. */
__le64 fcp_rsp_dseg_address __packed;

uint32_t byte_count; /* Total byte count. */

uint8_t port_id[3]; /* PortID of destination port. */
uint8_t vp_index;

uint32_t fcp_data_dseg_address[2]; /* Data segment address. */
uint32_t fcp_data_dseg_len; /* Data segment length. */
struct dsd64 fcp_dsd;
};

#define COMMAND_TYPE_7 0x18 /* Command Type 7 entry */

@@ -548,8 +549,7 @@ struct cmd_type_7 {
uint8_t port_id[3]; /* PortID of destination port. */
uint8_t vp_index;

uint32_t dseg_0_address[2]; /* Data segment 0 address. */
uint32_t dseg_0_len; /* Data segment 0 length. */
struct dsd64 dsd;
};

#define COMMAND_TYPE_CRC_2 0x6A /* Command Type CRC_2 (Type 6)

@@ -573,17 +573,17 @@ struct cmd_type_crc_2 {

uint16_t control_flags; /* Control flags. */

uint16_t fcp_cmnd_dseg_len; /* Data segment length. */
uint32_t fcp_cmnd_dseg_address[2]; /* Data segment address. */

uint32_t fcp_rsp_dseg_address[2]; /* Data segment address. */
uint16_t fcp_cmnd_dseg_len; /* Data segment length. */
__le64 fcp_cmnd_dseg_address __packed;
/* Data segment address. */
__le64 fcp_rsp_dseg_address __packed;

uint32_t byte_count; /* Total byte count. */

uint8_t port_id[3]; /* PortID of destination port. */
uint8_t vp_index;

uint32_t crc_context_address[2]; /* Data segment address. */
__le64 crc_context_address __packed; /* Data segment address. */
uint16_t crc_context_len; /* Data segment length. */
uint16_t reserved_1; /* MUST be set to 0. */
};

@@ -717,10 +717,7 @@ struct ct_entry_24xx {
uint32_t rsp_byte_count;
uint32_t cmd_byte_count;

uint32_t dseg_0_address[2]; /* Data segment 0 address. */
uint32_t dseg_0_len; /* Data segment 0 length. */
uint32_t dseg_1_address[2]; /* Data segment 1 address. */
uint32_t dseg_1_len; /* Data segment 1 length. */
struct dsd64 dsd[2];
};

/*

@@ -767,9 +764,9 @@ struct els_entry_24xx {
uint32_t rx_byte_count;
uint32_t tx_byte_count;

uint32_t tx_address[2]; /* Data segment 0 address. */
__le64 tx_address __packed; /* Data segment 0 address. */
uint32_t tx_len; /* Data segment 0 length. */
uint32_t rx_address[2]; /* Data segment 1 address. */
__le64 rx_address __packed; /* Data segment 1 address. */
uint32_t rx_len; /* Data segment 1 length. */
};

@@ -1422,9 +1419,9 @@ struct vf_evfp_entry_24xx {
uint16_t control_flags;
uint32_t io_parameter_0;
uint32_t io_parameter_1;
uint32_t tx_address[2]; /* Data segment 0 address. */
__le64 tx_address __packed; /* Data segment 0 address. */
uint32_t tx_len; /* Data segment 0 length. */
uint32_t rx_address[2]; /* Data segment 1 address. */
__le64 rx_address __packed; /* Data segment 1 address. */
uint32_t rx_len; /* Data segment 1 length. */
};

@@ -1515,13 +1512,31 @@ struct qla_flt_header {
#define FLT_REG_VPD_SEC_27XX_2 0xD8
#define FLT_REG_VPD_SEC_27XX_3 0xDA

/* 28xx */
#define FLT_REG_AUX_IMG_PRI_28XX 0x125
#define FLT_REG_AUX_IMG_SEC_28XX 0x126
#define FLT_REG_VPD_SEC_28XX_0 0x10C
#define FLT_REG_VPD_SEC_28XX_1 0x10E
#define FLT_REG_VPD_SEC_28XX_2 0x110
#define FLT_REG_VPD_SEC_28XX_3 0x112
#define FLT_REG_NVRAM_SEC_28XX_0 0x10D
#define FLT_REG_NVRAM_SEC_28XX_1 0x10F
#define FLT_REG_NVRAM_SEC_28XX_2 0x111
#define FLT_REG_NVRAM_SEC_28XX_3 0x113

struct qla_flt_region {
uint32_t code;
uint16_t code;
uint8_t attribute;
uint8_t reserved;
uint32_t size;
uint32_t start;
uint32_t end;
};

#define FLT_REGION_SIZE 16
#define FLT_MAX_REGIONS 0xFF
#define FLT_REGIONS_SIZE (FLT_REGION_SIZE * FLT_MAX_REGIONS)
/* Flash NPIV Configuration Table ********************************************/

struct qla_npiv_header {

@@ -1588,8 +1603,7 @@ struct verify_chip_entry_84xx {
uint32_t fw_seq_size;
uint32_t relative_offset;

uint32_t dseg_address[2];
uint32_t dseg_length;
struct dsd64 dsd;
};

struct verify_chip_rsp_84xx {

@@ -1646,8 +1660,7 @@ struct access_chip_84xx {
uint32_t total_byte_cnt;
uint32_t reserved4;

uint32_t dseg_address[2];
uint32_t dseg_length;
struct dsd64 dsd;
};

struct access_chip_rsp_84xx {

@@ -1711,6 +1724,10 @@ struct access_chip_rsp_84xx {
#define LR_DIST_FW_SHIFT (LR_DIST_FW_POS - LR_DIST_NV_POS)
#define LR_DIST_FW_FIELD(x) ((x) << LR_DIST_FW_SHIFT & 0xf000)

/* FAC semaphore defines */
#define FAC_SEMAPHORE_UNLOCK 0
#define FAC_SEMAPHORE_LOCK 1

struct nvram_81xx {
/* NVRAM header. */
uint8_t id[4];

@@ -1757,7 +1774,7 @@ struct nvram_81xx {
uint16_t reserved_6_3[14];

/* Offset 192. */
uint8_t min_link_speed;
uint8_t min_supported_speed;
uint8_t reserved_7_0;
uint16_t reserved_7[31];

@@ -1911,15 +1928,15 @@ struct init_cb_81xx {

uint16_t prio_request_q_length;

uint32_t request_q_address[2];
uint32_t response_q_address[2];
uint32_t prio_request_q_address[2];
__le64 request_q_address __packed;
__le64 response_q_address __packed;
__le64 prio_request_q_address __packed;

uint8_t reserved_4[8];

uint16_t atio_q_inpointer;
uint16_t atio_q_length;
uint32_t atio_q_address[2];
__le64 atio_q_address __packed;

uint16_t interrupt_delay_timer; /* 100us increments. */
uint16_t login_timeout;

@@ -2005,6 +2022,8 @@ struct ex_init_cb_81xx {

#define FARX_ACCESS_FLASH_CONF_81XX 0x7FFD0000
#define FARX_ACCESS_FLASH_DATA_81XX 0x7F800000
#define FARX_ACCESS_FLASH_CONF_28XX 0x7FFD0000
#define FARX_ACCESS_FLASH_DATA_28XX 0x7F7D0000

/* FCP priority config defines *************************************/
/* operations */

@@ -2079,6 +2098,7 @@ struct qla_fcp_prio_cfg {
#define FA_NPIV_CONF1_ADDR_81 0xD2000

/* 83XX Flash locations -- occupies second 8MB region. */
#define FA_FLASH_LAYOUT_ADDR_83 0xFC400
#define FA_FLASH_LAYOUT_ADDR_83 (0x3F1000/4)
#define FA_FLASH_LAYOUT_ADDR_28 (0x11000/4)

#endif
@ -18,14 +18,14 @@ extern int qla2100_pci_config(struct scsi_qla_host *);
|
|||
extern int qla2300_pci_config(struct scsi_qla_host *);
|
||||
extern int qla24xx_pci_config(scsi_qla_host_t *);
|
||||
extern int qla25xx_pci_config(scsi_qla_host_t *);
|
||||
extern void qla2x00_reset_chip(struct scsi_qla_host *);
|
||||
extern void qla24xx_reset_chip(struct scsi_qla_host *);
|
||||
extern int qla2x00_reset_chip(struct scsi_qla_host *);
|
||||
extern int qla24xx_reset_chip(struct scsi_qla_host *);
|
||||
extern int qla2x00_chip_diag(struct scsi_qla_host *);
|
||||
extern int qla24xx_chip_diag(struct scsi_qla_host *);
|
||||
extern void qla2x00_config_rings(struct scsi_qla_host *);
|
||||
extern void qla24xx_config_rings(struct scsi_qla_host *);
|
||||
extern void qla2x00_reset_adapter(struct scsi_qla_host *);
|
||||
extern void qla24xx_reset_adapter(struct scsi_qla_host *);
|
||||
extern int qla2x00_reset_adapter(struct scsi_qla_host *);
|
||||
extern int qla24xx_reset_adapter(struct scsi_qla_host *);
|
||||
extern int qla2x00_nvram_config(struct scsi_qla_host *);
|
||||
extern int qla24xx_nvram_config(struct scsi_qla_host *);
|
||||
extern int qla81xx_nvram_config(struct scsi_qla_host *);
|
||||
|
@ -38,8 +38,7 @@ extern int qla81xx_load_risc(scsi_qla_host_t *, uint32_t *);
|
|||
|
||||
extern int qla2x00_perform_loop_resync(scsi_qla_host_t *);
|
||||
extern int qla2x00_loop_resync(scsi_qla_host_t *);
|
||||
|
||||
extern int qla2x00_find_new_loop_id(scsi_qla_host_t *, fc_port_t *);
|
||||
extern void qla2x00_clear_loop_id(fc_port_t *fcport);
|
||||
|
||||
extern int qla2x00_fabric_login(scsi_qla_host_t *, fc_port_t *, uint16_t *);
|
||||
extern int qla2x00_local_device_login(scsi_qla_host_t *, fc_port_t *);
|
||||
|
@ -80,6 +79,7 @@ int qla2x00_post_work(struct scsi_qla_host *vha, struct qla_work_evt *e);
|
|||
extern void *qla2x00_alloc_iocbs_ready(struct qla_qpair *, srb_t *);
|
||||
extern int qla24xx_update_fcport_fcp_prio(scsi_qla_host_t *, fc_port_t *);
|
||||
|
||||
extern void qla2x00_set_fcport_state(fc_port_t *fcport, int state);
|
||||
extern fc_port_t *
|
||||
qla2x00_alloc_fcport(scsi_qla_host_t *, gfp_t );
|
||||
|
||||
|
@ -93,7 +93,6 @@ extern int qla2xxx_mctp_dump(scsi_qla_host_t *);
|
|||
extern int
|
||||
qla2x00_alloc_outstanding_cmds(struct qla_hw_data *, struct req_que *);
|
||||
extern int qla2x00_init_rings(scsi_qla_host_t *);
|
||||
extern uint8_t qla27xx_find_valid_image(struct scsi_qla_host *);
|
||||
extern struct qla_qpair *qla2xxx_create_qpair(struct scsi_qla_host *,
|
||||
int, int, bool);
|
||||
extern int qla2xxx_delete_qpair(struct scsi_qla_host *, struct qla_qpair *);
|
||||
|
@ -108,6 +107,11 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *, fc_port_t *);
|
|||
int qla24xx_detect_sfp(scsi_qla_host_t *vha);
|
||||
int qla24xx_post_gpdb_work(struct scsi_qla_host *, fc_port_t *, u8);
|
||||
|
||||
extern void qla28xx_get_aux_images(struct scsi_qla_host *,
|
||||
struct active_regions *);
|
||||
extern void qla27xx_get_active_image(struct scsi_qla_host *,
|
||||
struct active_regions *);
|
||||
|
||||
void qla2x00_async_prlo_done(struct scsi_qla_host *, fc_port_t *,
|
||||
uint16_t *);
|
||||
extern int qla2x00_post_async_prlo_work(struct scsi_qla_host *, fc_port_t *,
|
||||
|
@ -118,6 +122,7 @@ int qla_post_iidma_work(struct scsi_qla_host *vha, fc_port_t *fcport);
|
|||
void qla_do_iidma_work(struct scsi_qla_host *vha, fc_port_t *fcport);
|
||||
int qla2x00_reserve_mgmt_server_loop_id(scsi_qla_host_t *);
|
||||
void qla_rscn_replay(fc_port_t *fcport);
|
||||
extern bool qla24xx_risc_firmware_invalid(uint32_t *);
|
||||
|
||||
/*
|
||||
* Global Data in qla_os.c source file.
|
||||
|
@ -215,7 +220,6 @@ extern void qla24xx_sched_upd_fcport(fc_port_t *);
|
|||
void qla2x00_handle_login_done_event(struct scsi_qla_host *, fc_port_t *,
|
||||
uint16_t *);
|
||||
int qla24xx_post_gnl_work(struct scsi_qla_host *, fc_port_t *);
|
||||
int qla24xx_async_abort_cmd(srb_t *, bool);
|
||||
int qla24xx_post_relogin_work(struct scsi_qla_host *vha);
|
||||
void qla2x00_wait_for_sess_deletion(scsi_qla_host_t *);
|
||||
|
||||
|
@ -238,7 +242,7 @@ extern void qla24xx_report_id_acquisition(scsi_qla_host_t *,
|
|||
struct vp_rpt_id_entry_24xx *);
|
||||
extern void qla2x00_do_dpc_all_vps(scsi_qla_host_t *);
|
||||
extern int qla24xx_vport_create_req_sanity_check(struct fc_vport *);
|
||||
extern scsi_qla_host_t * qla24xx_create_vhost(struct fc_vport *);
|
||||
extern scsi_qla_host_t *qla24xx_create_vhost(struct fc_vport *);
|
||||
|
||||
extern void qla2x00_sp_free_dma(void *);
|
||||
extern char *qla2x00_get_fw_version_str(struct scsi_qla_host *, char *);
|
||||
|
@ -276,21 +280,20 @@ extern int qla2x00_start_sp(srb_t *);
|
|||
extern int qla24xx_dif_start_scsi(srb_t *);
|
||||
extern int qla2x00_start_bidir(srb_t *, struct scsi_qla_host *, uint32_t);
|
||||
extern int qla2xxx_dif_start_scsi_mq(srb_t *);
|
||||
extern void qla2x00_init_timer(srb_t *sp, unsigned long tmo);
|
||||
extern unsigned long qla2x00_get_async_timeout(struct scsi_qla_host *);
|
||||
|
||||
extern void *qla2x00_alloc_iocbs(struct scsi_qla_host *, srb_t *);
|
||||
extern void *__qla2x00_alloc_iocbs(struct qla_qpair *, srb_t *);
|
||||
extern int qla2x00_issue_marker(scsi_qla_host_t *, int);
|
||||
extern int qla24xx_walk_and_build_sglist_no_difb(struct qla_hw_data *, srb_t *,
|
||||
uint32_t *, uint16_t, struct qla_tc_param *);
|
||||
struct dsd64 *, uint16_t, struct qla_tc_param *);
|
||||
extern int qla24xx_walk_and_build_sglist(struct qla_hw_data *, srb_t *,
|
||||
uint32_t *, uint16_t, struct qla_tc_param *);
|
||||
struct dsd64 *, uint16_t, struct qla_tc_param *);
|
||||
extern int qla24xx_walk_and_build_prot_sglist(struct qla_hw_data *, srb_t *,
|
||||
uint32_t *, uint16_t, struct qla_tgt_cmd *);
|
||||
struct dsd64 *, uint16_t, struct qla_tgt_cmd *);
|
||||
extern int qla24xx_get_one_block_sg(uint32_t, struct qla2_sgx *, uint32_t *);
|
||||
extern int qla24xx_configure_prot_mode(srb_t *, uint16_t *);
|
||||
extern int qla24xx_build_scsi_crc_2_iocbs(srb_t *,
|
||||
struct cmd_type_crc_2 *, uint16_t, uint16_t, uint16_t);
|
||||
|
||||
/*
|
||||
* Global Function Prototypes in qla_mbx.c source file.
|
||||
|
@ -466,6 +469,8 @@ qla81xx_fac_do_write_enable(scsi_qla_host_t *, int);
|
|||
extern int
|
||||
qla81xx_fac_erase_sector(scsi_qla_host_t *, uint32_t, uint32_t);
|
||||
|
||||
extern int qla81xx_fac_semaphore_access(scsi_qla_host_t *, int);
|
||||
|
||||
extern int
|
||||
qla2x00_get_xgmac_stats(scsi_qla_host_t *, dma_addr_t, uint16_t, uint16_t *);
|
||||
|
||||
|
@ -511,6 +516,14 @@ extern int qla27xx_get_zio_threshold(scsi_qla_host_t *, uint16_t *);
|
|||
extern int qla27xx_set_zio_threshold(scsi_qla_host_t *, uint16_t);
|
||||
int qla24xx_res_count_wait(struct scsi_qla_host *, uint16_t *, int);
|
||||
|
||||
extern int qla28xx_secure_flash_update(scsi_qla_host_t *, uint16_t, uint16_t,
|
||||
uint32_t, dma_addr_t, uint32_t);
|
||||
|
||||
extern int qla2xxx_read_remote_register(scsi_qla_host_t *, uint32_t,
|
||||
uint32_t *);
|
||||
extern int qla2xxx_write_remote_register(scsi_qla_host_t *, uint32_t,
|
||||
uint32_t);
|
||||
|
||||
/*
|
||||
* Global Function Prototypes in qla_isr.c source file.
|
||||
*/
|
||||
|
@@ -542,19 +555,20 @@ fc_port_t *qla2x00_find_fcport_by_nportid(scsi_qla_host_t *, port_id_t *, u8);
  */
 extern void qla2x00_release_nvram_protection(scsi_qla_host_t *);
 extern uint32_t *qla24xx_read_flash_data(scsi_qla_host_t *, uint32_t *,
     uint32_t, uint32_t);
-extern uint8_t *qla2x00_read_nvram_data(scsi_qla_host_t *, uint8_t *, uint32_t,
-    uint32_t);
-extern uint8_t *qla24xx_read_nvram_data(scsi_qla_host_t *, uint8_t *, uint32_t,
-    uint32_t);
-extern int qla2x00_write_nvram_data(scsi_qla_host_t *, uint8_t *, uint32_t,
-    uint32_t);
-extern int qla24xx_write_nvram_data(scsi_qla_host_t *, uint8_t *, uint32_t,
-    uint32_t);
-extern uint8_t *qla25xx_read_nvram_data(scsi_qla_host_t *, uint8_t *, uint32_t,
-    uint32_t);
-extern int qla25xx_write_nvram_data(scsi_qla_host_t *, uint8_t *, uint32_t,
-    uint32_t);
+extern uint8_t *qla2x00_read_nvram_data(scsi_qla_host_t *, void *, uint32_t,
+    uint32_t);
+extern uint8_t *qla24xx_read_nvram_data(scsi_qla_host_t *, void *, uint32_t,
+    uint32_t);
+extern int qla2x00_write_nvram_data(scsi_qla_host_t *, void *, uint32_t,
+    uint32_t);
+extern int qla24xx_write_nvram_data(scsi_qla_host_t *, void *, uint32_t,
+    uint32_t);
+extern uint8_t *qla25xx_read_nvram_data(scsi_qla_host_t *, void *, uint32_t,
+    uint32_t);
+extern int qla25xx_write_nvram_data(scsi_qla_host_t *, void *, uint32_t,
+    uint32_t);
 
 extern int qla2x00_is_a_vp_did(scsi_qla_host_t *, uint32_t);
 bool qla2x00_check_reg32_for_disconnect(scsi_qla_host_t *, uint32_t);
 bool qla2x00_check_reg16_for_disconnect(scsi_qla_host_t *, uint16_t);
@@ -574,18 +588,18 @@ extern int qla83xx_restart_nic_firmware(scsi_qla_host_t *);
 extern int qla83xx_access_control(scsi_qla_host_t *, uint16_t, uint32_t,
     uint32_t, uint16_t *);
 
-extern uint8_t *qla2x00_read_optrom_data(struct scsi_qla_host *, uint8_t *,
+extern void *qla2x00_read_optrom_data(struct scsi_qla_host *, void *,
     uint32_t, uint32_t);
-extern int qla2x00_write_optrom_data(struct scsi_qla_host *, uint8_t *,
+extern int qla2x00_write_optrom_data(struct scsi_qla_host *, void *,
     uint32_t, uint32_t);
-extern uint8_t *qla24xx_read_optrom_data(struct scsi_qla_host *, uint8_t *,
+extern void *qla24xx_read_optrom_data(struct scsi_qla_host *, void *,
     uint32_t, uint32_t);
-extern int qla24xx_write_optrom_data(struct scsi_qla_host *, uint8_t *,
+extern int qla24xx_write_optrom_data(struct scsi_qla_host *, void *,
     uint32_t, uint32_t);
-extern uint8_t *qla25xx_read_optrom_data(struct scsi_qla_host *, uint8_t *,
+extern void *qla25xx_read_optrom_data(struct scsi_qla_host *, void *,
     uint32_t, uint32_t);
-extern uint8_t *qla8044_read_optrom_data(struct scsi_qla_host *,
-    uint8_t *, uint32_t, uint32_t);
+extern void *qla8044_read_optrom_data(struct scsi_qla_host *,
+    void *, uint32_t, uint32_t);
 extern void qla8044_watchdog(struct scsi_qla_host *vha);
 
 extern int qla2x00_get_flash_version(scsi_qla_host_t *, void *);
@@ -610,20 +624,13 @@ extern void qla82xx_fw_dump(scsi_qla_host_t *, int);
 extern void qla8044_fw_dump(scsi_qla_host_t *, int);
 
 extern void qla27xx_fwdump(scsi_qla_host_t *, int);
-extern ulong qla27xx_fwdt_calculate_dump_size(struct scsi_qla_host *);
+extern ulong qla27xx_fwdt_calculate_dump_size(struct scsi_qla_host *, void *);
 extern int qla27xx_fwdt_template_valid(void *);
 extern ulong qla27xx_fwdt_template_size(void *);
-extern const void *qla27xx_fwdt_template_default(void);
-extern ulong qla27xx_fwdt_template_default_size(void);
 
 extern void qla2x00_dump_regs(scsi_qla_host_t *);
 extern void qla2x00_dump_buffer(uint8_t *, uint32_t);
 extern void qla2x00_dump_buffer_zipped(uint8_t *, uint32_t);
-extern void ql_dump_regs(uint32_t, scsi_qla_host_t *, int32_t);
-extern void ql_dump_buffer(uint32_t, scsi_qla_host_t *, int32_t,
-    uint8_t *, uint32_t);
 extern void qla2xxx_dump_post_process(scsi_qla_host_t *, int);
+extern void ql_dump_regs(uint, scsi_qla_host_t *, uint);
+extern void ql_dump_buffer(uint, scsi_qla_host_t *, uint, void *, uint);
 
 /*
  * Global Function Prototypes in qla_gs.c source file.
  */
@@ -722,7 +729,7 @@ extern void qla24xx_wrt_rsp_reg(struct qla_hw_data *, uint16_t, uint16_t);
 /* qlafx00 related functions */
 extern int qlafx00_pci_config(struct scsi_qla_host *);
 extern int qlafx00_initialize_adapter(struct scsi_qla_host *);
-extern void qlafx00_soft_reset(scsi_qla_host_t *);
+extern int qlafx00_soft_reset(scsi_qla_host_t *);
 extern int qlafx00_chip_diag(scsi_qla_host_t *);
 extern void qlafx00_config_rings(struct scsi_qla_host *);
 extern char *qlafx00_pci_info_str(struct scsi_qla_host *, char *);
@@ -765,16 +772,16 @@ extern int qla82xx_pci_region_offset(struct pci_dev *, int);
 extern int qla82xx_iospace_config(struct qla_hw_data *);
 
 /* Initialization related functions */
-extern void qla82xx_reset_chip(struct scsi_qla_host *);
+extern int qla82xx_reset_chip(struct scsi_qla_host *);
 extern void qla82xx_config_rings(struct scsi_qla_host *);
 extern void qla82xx_watchdog(scsi_qla_host_t *);
 extern int qla82xx_start_firmware(scsi_qla_host_t *);
 
 /* Firmware and flash related functions */
 extern int qla82xx_load_risc(scsi_qla_host_t *, uint32_t *);
-extern uint8_t *qla82xx_read_optrom_data(struct scsi_qla_host *, uint8_t *,
+extern void *qla82xx_read_optrom_data(struct scsi_qla_host *, void *,
     uint32_t, uint32_t);
-extern int qla82xx_write_optrom_data(struct scsi_qla_host *, uint8_t *,
+extern int qla82xx_write_optrom_data(struct scsi_qla_host *, void *,
     uint32_t, uint32_t);
 
 /* Mailbox related functions */
@@ -870,7 +877,7 @@ extern void qla8044_clear_drv_active(struct qla_hw_data *);
 void qla8044_get_minidump(struct scsi_qla_host *vha);
 int qla8044_collect_md_data(struct scsi_qla_host *vha);
 extern int qla8044_md_get_template(scsi_qla_host_t *);
-extern int qla8044_write_optrom_data(struct scsi_qla_host *, uint8_t *,
+extern int qla8044_write_optrom_data(struct scsi_qla_host *, void *,
     uint32_t, uint32_t);
 extern irqreturn_t qla8044_intr_handler(int, void *);
 extern void qla82xx_mbx_completion(scsi_qla_host_t *, uint16_t);
@@ -45,13 +45,11 @@ qla2x00_prep_ms_iocb(scsi_qla_host_t *vha, struct ct_arg *arg)
 	ms_pkt->rsp_bytecount = cpu_to_le32(arg->rsp_size);
 	ms_pkt->req_bytecount = cpu_to_le32(arg->req_size);
 
-	ms_pkt->dseg_req_address[0] = cpu_to_le32(LSD(arg->req_dma));
-	ms_pkt->dseg_req_address[1] = cpu_to_le32(MSD(arg->req_dma));
-	ms_pkt->dseg_req_length = ms_pkt->req_bytecount;
+	put_unaligned_le64(arg->req_dma, &ms_pkt->req_dsd.address);
+	ms_pkt->req_dsd.length = ms_pkt->req_bytecount;
 
-	ms_pkt->dseg_rsp_address[0] = cpu_to_le32(LSD(arg->rsp_dma));
-	ms_pkt->dseg_rsp_address[1] = cpu_to_le32(MSD(arg->rsp_dma));
-	ms_pkt->dseg_rsp_length = ms_pkt->rsp_bytecount;
+	put_unaligned_le64(arg->rsp_dma, &ms_pkt->rsp_dsd.address);
+	ms_pkt->rsp_dsd.length = ms_pkt->rsp_bytecount;
 
 	vha->qla_stats.control_requests++;
@@ -83,13 +81,11 @@ qla24xx_prep_ms_iocb(scsi_qla_host_t *vha, struct ct_arg *arg)
 	ct_pkt->rsp_byte_count = cpu_to_le32(arg->rsp_size);
 	ct_pkt->cmd_byte_count = cpu_to_le32(arg->req_size);
 
-	ct_pkt->dseg_0_address[0] = cpu_to_le32(LSD(arg->req_dma));
-	ct_pkt->dseg_0_address[1] = cpu_to_le32(MSD(arg->req_dma));
-	ct_pkt->dseg_0_len = ct_pkt->cmd_byte_count;
+	put_unaligned_le64(arg->req_dma, &ct_pkt->dsd[0].address);
+	ct_pkt->dsd[0].length = ct_pkt->cmd_byte_count;
 
-	ct_pkt->dseg_1_address[0] = cpu_to_le32(LSD(arg->rsp_dma));
-	ct_pkt->dseg_1_address[1] = cpu_to_le32(MSD(arg->rsp_dma));
-	ct_pkt->dseg_1_len = ct_pkt->rsp_byte_count;
+	put_unaligned_le64(arg->rsp_dma, &ct_pkt->dsd[1].address);
+	ct_pkt->dsd[1].length = ct_pkt->rsp_byte_count;
 	ct_pkt->vp_index = vha->vp_idx;
 
 	vha->qla_stats.control_requests++;
@@ -152,8 +148,8 @@ qla2x00_chk_ms_status(scsi_qla_host_t *vha, ms_iocb_entry_t *ms_pkt,
 		    vha->d_id.b.area, vha->d_id.b.al_pa,
 		    comp_status, ct_rsp->header.response);
 		ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha,
-		    0x2078, (uint8_t *)&ct_rsp->header,
-		    sizeof(struct ct_rsp_hdr));
+		    0x2078, ct_rsp,
+		    offsetof(typeof(*ct_rsp), rsp));
 		rval = QLA_INVALID_COMMAND;
 	} else
 		rval = QLA_SUCCESS;
@@ -1000,8 +996,7 @@ qla2x00_prep_sns_cmd(scsi_qla_host_t *vha, uint16_t cmd, uint16_t scmd_len,
 	memset(sns_cmd, 0, sizeof(struct sns_cmd_pkt));
 	wc = data_size / 2;			/* Size in 16bit words. */
 	sns_cmd->p.cmd.buffer_length = cpu_to_le16(wc);
-	sns_cmd->p.cmd.buffer_address[0] = cpu_to_le32(LSD(ha->sns_cmd_dma));
-	sns_cmd->p.cmd.buffer_address[1] = cpu_to_le32(MSD(ha->sns_cmd_dma));
+	put_unaligned_le64(ha->sns_cmd_dma, &sns_cmd->p.cmd.buffer_address);
 	sns_cmd->p.cmd.subcommand_length = cpu_to_le16(scmd_len);
 	sns_cmd->p.cmd.subcommand = cpu_to_le16(cmd);
 	wc = (data_size - 16) / 4;		/* Size in 32bit words. */
@@ -1385,6 +1380,7 @@ qla2x00_mgmt_svr_login(scsi_qla_host_t *vha)
 	int ret, rval;
 	uint16_t mb[MAILBOX_REGISTER_COUNT];
+	struct qla_hw_data *ha = vha->hw;
 
 	ret = QLA_SUCCESS;
 	if (vha->flags.management_server_logged_in)
 		return ret;
@@ -1423,6 +1419,7 @@ qla2x00_prep_ms_fdmi_iocb(scsi_qla_host_t *vha, uint32_t req_size,
 {
 	ms_iocb_entry_t *ms_pkt;
 	struct qla_hw_data *ha = vha->hw;
 
 	ms_pkt = ha->ms_iocb;
 	memset(ms_pkt, 0, sizeof(ms_iocb_entry_t));
@@ -1436,13 +1433,11 @@ qla2x00_prep_ms_fdmi_iocb(scsi_qla_host_t *vha, uint32_t req_size,
 	ms_pkt->rsp_bytecount = cpu_to_le32(rsp_size);
 	ms_pkt->req_bytecount = cpu_to_le32(req_size);
 
-	ms_pkt->dseg_req_address[0] = cpu_to_le32(LSD(ha->ct_sns_dma));
-	ms_pkt->dseg_req_address[1] = cpu_to_le32(MSD(ha->ct_sns_dma));
-	ms_pkt->dseg_req_length = ms_pkt->req_bytecount;
+	put_unaligned_le64(ha->ct_sns_dma, &ms_pkt->req_dsd.address);
+	ms_pkt->req_dsd.length = ms_pkt->req_bytecount;
 
-	ms_pkt->dseg_rsp_address[0] = cpu_to_le32(LSD(ha->ct_sns_dma));
-	ms_pkt->dseg_rsp_address[1] = cpu_to_le32(MSD(ha->ct_sns_dma));
-	ms_pkt->dseg_rsp_length = ms_pkt->rsp_bytecount;
+	put_unaligned_le64(ha->ct_sns_dma, &ms_pkt->rsp_dsd.address);
+	ms_pkt->rsp_dsd.length = ms_pkt->rsp_bytecount;
 
 	return ms_pkt;
 }
@@ -1474,13 +1469,11 @@ qla24xx_prep_ms_fdmi_iocb(scsi_qla_host_t *vha, uint32_t req_size,
 	ct_pkt->rsp_byte_count = cpu_to_le32(rsp_size);
 	ct_pkt->cmd_byte_count = cpu_to_le32(req_size);
 
-	ct_pkt->dseg_0_address[0] = cpu_to_le32(LSD(ha->ct_sns_dma));
-	ct_pkt->dseg_0_address[1] = cpu_to_le32(MSD(ha->ct_sns_dma));
-	ct_pkt->dseg_0_len = ct_pkt->cmd_byte_count;
+	put_unaligned_le64(ha->ct_sns_dma, &ct_pkt->dsd[0].address);
+	ct_pkt->dsd[0].length = ct_pkt->cmd_byte_count;
 
-	ct_pkt->dseg_1_address[0] = cpu_to_le32(LSD(ha->ct_sns_dma));
-	ct_pkt->dseg_1_address[1] = cpu_to_le32(MSD(ha->ct_sns_dma));
-	ct_pkt->dseg_1_len = ct_pkt->rsp_byte_count;
+	put_unaligned_le64(ha->ct_sns_dma, &ct_pkt->dsd[1].address);
+	ct_pkt->dsd[1].length = ct_pkt->rsp_byte_count;
 	ct_pkt->vp_index = vha->vp_idx;
 
 	return ct_pkt;
@@ -1495,10 +1488,10 @@ qla2x00_update_ms_fdmi_iocb(scsi_qla_host_t *vha, uint32_t req_size)
 
 	if (IS_FWI2_CAPABLE(ha)) {
 		ct_pkt->cmd_byte_count = cpu_to_le32(req_size);
-		ct_pkt->dseg_0_len = ct_pkt->cmd_byte_count;
+		ct_pkt->dsd[0].length = ct_pkt->cmd_byte_count;
 	} else {
 		ms_pkt->req_bytecount = cpu_to_le32(req_size);
-		ms_pkt->dseg_req_length = ms_pkt->req_bytecount;
+		ms_pkt->req_dsd.length = ms_pkt->req_bytecount;
 	}
 
 	return ms_pkt;
@@ -1794,7 +1787,7 @@ qla2x00_fdmi_rpa(scsi_qla_host_t *vha)
 	if (IS_CNA_CAPABLE(ha))
 		eiter->a.sup_speed = cpu_to_be32(
 		    FDMI_PORT_SPEED_10GB);
-	else if (IS_QLA27XX(ha))
+	else if (IS_QLA27XX(ha) || IS_QLA28XX(ha))
 		eiter->a.sup_speed = cpu_to_be32(
 		    FDMI_PORT_SPEED_32GB|
 		    FDMI_PORT_SPEED_16GB|
@@ -2373,7 +2366,7 @@ qla2x00_fdmiv2_rpa(scsi_qla_host_t *vha)
 	if (IS_CNA_CAPABLE(ha))
 		eiter->a.sup_speed = cpu_to_be32(
 		    FDMI_PORT_SPEED_10GB);
-	else if (IS_QLA27XX(ha))
+	else if (IS_QLA27XX(ha) || IS_QLA28XX(ha))
 		eiter->a.sup_speed = cpu_to_be32(
 		    FDMI_PORT_SPEED_32GB|
 		    FDMI_PORT_SPEED_16GB|
@@ -2446,7 +2439,7 @@ qla2x00_fdmiv2_rpa(scsi_qla_host_t *vha)
 	eiter->type = cpu_to_be16(FDMI_PORT_MAX_FRAME_SIZE);
 	eiter->len = cpu_to_be16(4 + 4);
 	eiter->a.max_frame_size = IS_FWI2_CAPABLE(ha) ?
-	    le16_to_cpu(icb24->frame_payload_size):
+	    le16_to_cpu(icb24->frame_payload_size) :
 	    le16_to_cpu(ha->init_cb->frame_payload_size);
 	eiter->a.max_frame_size = cpu_to_be32(eiter->a.max_frame_size);
 	size += 4 + 4;
@@ -2783,6 +2776,31 @@ qla24xx_prep_ct_fm_req(struct ct_sns_pkt *p, uint16_t cmd,
 	return &p->p.req;
 }
 
+static uint16_t
+qla2x00_port_speed_capability(uint16_t speed)
+{
+	switch (speed) {
+	case BIT_15:
+		return PORT_SPEED_1GB;
+	case BIT_14:
+		return PORT_SPEED_2GB;
+	case BIT_13:
+		return PORT_SPEED_4GB;
+	case BIT_12:
+		return PORT_SPEED_10GB;
+	case BIT_11:
+		return PORT_SPEED_8GB;
+	case BIT_10:
+		return PORT_SPEED_16GB;
+	case BIT_8:
+		return PORT_SPEED_32GB;
+	case BIT_7:
+		return PORT_SPEED_64GB;
+	default:
+		return PORT_SPEED_UNKNOWN;
+	}
+}
+
 /**
  * qla2x00_gpsc() - FCS Get Port Speed Capabilities (GPSC) query.
  * @vha: HA context
@@ -2855,31 +2873,8 @@ qla2x00_gpsc(scsi_qla_host_t *vha, sw_info_t *list)
 			}
 			rval = QLA_FUNCTION_FAILED;
 		} else {
-			/* Save port-speed */
-			switch (be16_to_cpu(ct_rsp->rsp.gpsc.speed)) {
-			case BIT_15:
-				list[i].fp_speed = PORT_SPEED_1GB;
-				break;
-			case BIT_14:
-				list[i].fp_speed = PORT_SPEED_2GB;
-				break;
-			case BIT_13:
-				list[i].fp_speed = PORT_SPEED_4GB;
-				break;
-			case BIT_12:
-				list[i].fp_speed = PORT_SPEED_10GB;
-				break;
-			case BIT_11:
-				list[i].fp_speed = PORT_SPEED_8GB;
-				break;
-			case BIT_10:
-				list[i].fp_speed = PORT_SPEED_16GB;
-				break;
-			case BIT_8:
-				list[i].fp_speed = PORT_SPEED_32GB;
-				break;
-			}
-
+			list->fp_speed = qla2x00_port_speed_capability(
+			    be16_to_cpu(ct_rsp->rsp.gpsc.speed));
 
 			ql_dbg(ql_dbg_disc, vha, 0x205b,
 			    "GPSC ext entry - fpn "
 			    "%8phN speeds=%04x speed=%04x.\n",
@@ -3031,6 +3026,8 @@ static void qla24xx_async_gpsc_sp_done(void *s, int res)
 	    "Async done-%s res %x, WWPN %8phC \n",
 	    sp->name, res, fcport->port_name);
 
+	fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+
 	if (res == QLA_FUNCTION_TIMEOUT)
 		return;
@@ -3048,29 +3045,8 @@ static void qla24xx_async_gpsc_sp_done(void *s, int res)
 			goto done;
 		}
 	} else {
-		switch (be16_to_cpu(ct_rsp->rsp.gpsc.speed)) {
-		case BIT_15:
-			fcport->fp_speed = PORT_SPEED_1GB;
-			break;
-		case BIT_14:
-			fcport->fp_speed = PORT_SPEED_2GB;
-			break;
-		case BIT_13:
-			fcport->fp_speed = PORT_SPEED_4GB;
-			break;
-		case BIT_12:
-			fcport->fp_speed = PORT_SPEED_10GB;
-			break;
-		case BIT_11:
-			fcport->fp_speed = PORT_SPEED_8GB;
-			break;
-		case BIT_10:
-			fcport->fp_speed = PORT_SPEED_16GB;
-			break;
-		case BIT_8:
-			fcport->fp_speed = PORT_SPEED_32GB;
-			break;
-		}
+		fcport->fp_speed = qla2x00_port_speed_capability(
+		    be16_to_cpu(ct_rsp->rsp.gpsc.speed));
 
 		ql_dbg(ql_dbg_disc, vha, 0x2054,
 		    "Async-%s OUT WWPN %8phC speeds=%04x speed=%04x.\n",
@@ -4370,6 +4346,7 @@ int qla24xx_async_gnnid(scsi_qla_host_t *vha, fc_port_t *fcport)
 
 done_free_sp:
 	sp->free(sp);
+	fcport->flags &= ~FCF_ASYNC_SENT;
 done:
 	return rval;
 }
(File diff suppressed because it is too large)
@@ -90,43 +90,6 @@ host_to_adap(uint8_t *src, uint8_t *dst, uint32_t bsize)
 		*odest++ = cpu_to_le32(*isrc);
 }
 
-static inline void
-qla2x00_set_reserved_loop_ids(struct qla_hw_data *ha)
-{
-	int i;
-
-	if (IS_FWI2_CAPABLE(ha))
-		return;
-
-	for (i = 0; i < SNS_FIRST_LOOP_ID; i++)
-		set_bit(i, ha->loop_id_map);
-	set_bit(MANAGEMENT_SERVER, ha->loop_id_map);
-	set_bit(BROADCAST, ha->loop_id_map);
-}
-
-static inline int
-qla2x00_is_reserved_id(scsi_qla_host_t *vha, uint16_t loop_id)
-{
-	struct qla_hw_data *ha = vha->hw;
-
-	if (IS_FWI2_CAPABLE(ha))
-		return (loop_id > NPH_LAST_HANDLE);
-
-	return ((loop_id > ha->max_loop_id && loop_id < SNS_FIRST_LOOP_ID) ||
-	    loop_id == MANAGEMENT_SERVER || loop_id == BROADCAST);
-}
-
-static inline void
-qla2x00_clear_loop_id(fc_port_t *fcport) {
-	struct qla_hw_data *ha = fcport->vha->hw;
-
-	if (fcport->loop_id == FC_NO_LOOP_ID ||
-	    qla2x00_is_reserved_id(fcport->vha, fcport->loop_id))
-		return;
-
-	clear_bit(fcport->loop_id, ha->loop_id_map);
-	fcport->loop_id = FC_NO_LOOP_ID;
-}
-
 static inline void
 qla2x00_clean_dsd_pool(struct qla_hw_data *ha, struct crc_context *ctx)
 {
@@ -142,25 +105,6 @@ qla2x00_clean_dsd_pool(struct qla_hw_data *ha, struct crc_context *ctx)
 	INIT_LIST_HEAD(&ctx->dsd_list);
 }
 
-static inline void
-qla2x00_set_fcport_state(fc_port_t *fcport, int state)
-{
-	int old_state;
-
-	old_state = atomic_read(&fcport->state);
-	atomic_set(&fcport->state, state);
-
-	/* Don't print state transitions during initial allocation of fcport */
-	if (old_state && old_state != state) {
-		ql_dbg(ql_dbg_disc, fcport->vha, 0x207d,
-		    "FCPort %8phC state transitioned from %s to %s - "
-		    "portid=%02x%02x%02x.\n", fcport->port_name,
-		    port_state_str[old_state], port_state_str[state],
-		    fcport->d_id.b.domain, fcport->d_id.b.area,
-		    fcport->d_id.b.al_pa);
-	}
-}
-
 static inline int
 qla2x00_hba_err_chk_enabled(srb_t *sp)
 {
@@ -240,6 +184,7 @@ qla2xxx_get_qpair_sp(scsi_qla_host_t *vha, struct qla_qpair *qpair,
 static inline void
 qla2xxx_rel_qpair_sp(struct qla_qpair *qpair, srb_t *sp)
 {
+	sp->qpair = NULL;
 	mempool_free(sp, qpair->srb_mempool);
 	QLA_QPAIR_MARK_NOT_BUSY(qpair);
 }
@@ -274,18 +219,6 @@ qla2x00_rel_sp(srb_t *sp)
 	qla2xxx_rel_qpair_sp(sp->qpair, sp);
 }
 
-static inline void
-qla2x00_init_timer(srb_t *sp, unsigned long tmo)
-{
-	timer_setup(&sp->u.iocb_cmd.timer, qla2x00_sp_timeout, 0);
-	sp->u.iocb_cmd.timer.expires = jiffies + tmo * HZ;
-	sp->free = qla2x00_sp_free;
-	init_completion(&sp->comp);
-	if (IS_QLAFX00(sp->vha->hw) && (sp->type == SRB_FXIOCB_DCMD))
-		init_completion(&sp->u.iocb_cmd.u.fxiocb.fxiocb_comp);
-	add_timer(&sp->u.iocb_cmd.timer);
-}
-
 static inline int
 qla2x00_gid_list_size(struct qla_hw_data *ha)
 {
@@ -107,7 +107,7 @@ qla2x00_prep_cont_type0_iocb(struct scsi_qla_host *vha)
 	cont_pkt = (cont_entry_t *)req->ring_ptr;
 
 	/* Load packet defaults. */
-	*((uint32_t *)(&cont_pkt->entry_type)) = cpu_to_le32(CONTINUE_TYPE);
+	put_unaligned_le32(CONTINUE_TYPE, &cont_pkt->entry_type);
 
 	return (cont_pkt);
 }
@@ -136,9 +136,8 @@ qla2x00_prep_cont_type1_iocb(scsi_qla_host_t *vha, struct req_que *req)
 	cont_pkt = (cont_a64_entry_t *)req->ring_ptr;
 
 	/* Load packet defaults. */
-	*((uint32_t *)(&cont_pkt->entry_type)) = IS_QLAFX00(vha->hw) ?
-	    cpu_to_le32(CONTINUE_A64_TYPE_FX00) :
-	    cpu_to_le32(CONTINUE_A64_TYPE);
+	put_unaligned_le32(IS_QLAFX00(vha->hw) ? CONTINUE_A64_TYPE_FX00 :
+	    CONTINUE_A64_TYPE, &cont_pkt->entry_type);
 
 	return (cont_pkt);
 }
@@ -193,7 +192,7 @@ void qla2x00_build_scsi_iocbs_32(srb_t *sp, cmd_entry_t *cmd_pkt,
     uint16_t tot_dsds)
 {
 	uint16_t avail_dsds;
-	uint32_t *cur_dsd;
+	struct dsd32 *cur_dsd;
 	scsi_qla_host_t *vha;
 	struct scsi_cmnd *cmd;
 	struct scatterlist *sg;
@@ -202,8 +201,7 @@ void qla2x00_build_scsi_iocbs_32(srb_t *sp, cmd_entry_t *cmd_pkt,
 	cmd = GET_CMD_SP(sp);
 
 	/* Update entry type to indicate Command Type 2 IOCB */
-	*((uint32_t *)(&cmd_pkt->entry_type)) =
-	    cpu_to_le32(COMMAND_TYPE);
+	put_unaligned_le32(COMMAND_TYPE, &cmd_pkt->entry_type);
 
 	/* No data transfer */
 	if (!scsi_bufflen(cmd) || cmd->sc_data_direction == DMA_NONE) {
@@ -215,8 +213,8 @@ void qla2x00_build_scsi_iocbs_32(srb_t *sp, cmd_entry_t *cmd_pkt,
 	cmd_pkt->control_flags |= cpu_to_le16(qla2x00_get_cmd_direction(sp));
 
 	/* Three DSDs are available in the Command Type 2 IOCB */
-	avail_dsds = 3;
-	cur_dsd = (uint32_t *)&cmd_pkt->dseg_0_address;
+	avail_dsds = ARRAY_SIZE(cmd_pkt->dsd32);
+	cur_dsd = cmd_pkt->dsd32;
 
 	/* Load data segments */
 	scsi_for_each_sg(cmd, sg, tot_dsds, i) {
@@ -229,12 +227,11 @@ void qla2x00_build_scsi_iocbs_32(srb_t *sp, cmd_entry_t *cmd_pkt,
 			 * Type 0 IOCB.
 			 */
 			cont_pkt = qla2x00_prep_cont_type0_iocb(vha);
-			cur_dsd = (uint32_t *)&cont_pkt->dseg_0_address;
-			avail_dsds = 7;
+			cur_dsd = cont_pkt->dsd;
+			avail_dsds = ARRAY_SIZE(cont_pkt->dsd);
 		}
 
-		*cur_dsd++ = cpu_to_le32(sg_dma_address(sg));
-		*cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
+		append_dsd32(&cur_dsd, sg);
 		avail_dsds--;
 	}
 }
@@ -251,7 +248,7 @@ void qla2x00_build_scsi_iocbs_64(srb_t *sp, cmd_entry_t *cmd_pkt,
     uint16_t tot_dsds)
 {
 	uint16_t avail_dsds;
-	uint32_t *cur_dsd;
+	struct dsd64 *cur_dsd;
 	scsi_qla_host_t *vha;
 	struct scsi_cmnd *cmd;
 	struct scatterlist *sg;
@@ -260,7 +257,7 @@ void qla2x00_build_scsi_iocbs_64(srb_t *sp, cmd_entry_t *cmd_pkt,
 	cmd = GET_CMD_SP(sp);
 
 	/* Update entry type to indicate Command Type 3 IOCB */
-	*((uint32_t *)(&cmd_pkt->entry_type)) = cpu_to_le32(COMMAND_A64_TYPE);
+	put_unaligned_le32(COMMAND_A64_TYPE, &cmd_pkt->entry_type);
 
 	/* No data transfer */
 	if (!scsi_bufflen(cmd) || cmd->sc_data_direction == DMA_NONE) {
@@ -272,12 +269,11 @@ void qla2x00_build_scsi_iocbs_64(srb_t *sp, cmd_entry_t *cmd_pkt,
 	cmd_pkt->control_flags |= cpu_to_le16(qla2x00_get_cmd_direction(sp));
 
 	/* Two DSDs are available in the Command Type 3 IOCB */
-	avail_dsds = 2;
-	cur_dsd = (uint32_t *)&cmd_pkt->dseg_0_address;
+	avail_dsds = ARRAY_SIZE(cmd_pkt->dsd64);
+	cur_dsd = cmd_pkt->dsd64;
 
 	/* Load data segments */
 	scsi_for_each_sg(cmd, sg, tot_dsds, i) {
-		dma_addr_t sle_dma;
 		cont_a64_entry_t *cont_pkt;
 
 		/* Allocate additional continuation packets? */
@@ -287,14 +283,11 @@ void qla2x00_build_scsi_iocbs_64(srb_t *sp, cmd_entry_t *cmd_pkt,
 			 * Type 1 IOCB.
 			 */
 			cont_pkt = qla2x00_prep_cont_type1_iocb(vha, vha->req);
-			cur_dsd = (uint32_t *)cont_pkt->dseg_0_address;
-			avail_dsds = 5;
+			cur_dsd = cont_pkt->dsd;
+			avail_dsds = ARRAY_SIZE(cont_pkt->dsd);
 		}
 
-		sle_dma = sg_dma_address(sg);
-		*cur_dsd++ = cpu_to_le32(LSD(sle_dma));
-		*cur_dsd++ = cpu_to_le32(MSD(sle_dma));
-		*cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
+		append_dsd64(&cur_dsd, sg);
 		avail_dsds--;
 	}
 }
@@ -467,7 +460,7 @@ qla2x00_start_iocbs(struct scsi_qla_host *vha, struct req_que *req)
 		req->ring_ptr++;
 
 	/* Set chip new ring index. */
-	if (ha->mqenable || IS_QLA27XX(ha)) {
+	if (ha->mqenable || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
 		WRT_REG_DWORD(req->req_q_in, req->ring_index);
 	} else if (IS_QLA83XX(ha)) {
 		WRT_REG_DWORD(req->req_q_in, req->ring_index);
@@ -580,13 +573,11 @@ static inline int
 qla24xx_build_scsi_type_6_iocbs(srb_t *sp, struct cmd_type_6 *cmd_pkt,
     uint16_t tot_dsds)
 {
-	uint32_t *cur_dsd = NULL;
+	struct dsd64 *cur_dsd = NULL, *next_dsd;
 	scsi_qla_host_t *vha;
 	struct qla_hw_data *ha;
 	struct scsi_cmnd *cmd;
 	struct scatterlist *cur_seg;
-	uint32_t *dsd_seg;
-	void *next_dsd;
 	uint8_t avail_dsds;
 	uint8_t first_iocb = 1;
 	uint32_t dsd_list_len;
@@ -596,7 +587,7 @@ qla24xx_build_scsi_type_6_iocbs(srb_t *sp, struct cmd_type_6 *cmd_pkt,
 	cmd = GET_CMD_SP(sp);
 
 	/* Update entry type to indicate Command Type 3 IOCB */
-	*((uint32_t *)(&cmd_pkt->entry_type)) = cpu_to_le32(COMMAND_TYPE_6);
+	put_unaligned_le32(COMMAND_TYPE_6, &cmd_pkt->entry_type);
 
 	/* No data transfer */
 	if (!scsi_bufflen(cmd) || cmd->sc_data_direction == DMA_NONE) {
@@ -638,32 +629,27 @@ qla24xx_build_scsi_type_6_iocbs(srb_t *sp, struct cmd_type_6 *cmd_pkt,
 
 		if (first_iocb) {
 			first_iocb = 0;
-			dsd_seg = (uint32_t *)&cmd_pkt->fcp_data_dseg_address;
-			*dsd_seg++ = cpu_to_le32(LSD(dsd_ptr->dsd_list_dma));
-			*dsd_seg++ = cpu_to_le32(MSD(dsd_ptr->dsd_list_dma));
-			cmd_pkt->fcp_data_dseg_len = cpu_to_le32(dsd_list_len);
+			put_unaligned_le64(dsd_ptr->dsd_list_dma,
+			    &cmd_pkt->fcp_dsd.address);
+			cmd_pkt->fcp_dsd.length = cpu_to_le32(dsd_list_len);
 		} else {
-			*cur_dsd++ = cpu_to_le32(LSD(dsd_ptr->dsd_list_dma));
-			*cur_dsd++ = cpu_to_le32(MSD(dsd_ptr->dsd_list_dma));
-			*cur_dsd++ = cpu_to_le32(dsd_list_len);
+			put_unaligned_le64(dsd_ptr->dsd_list_dma,
+			    &cur_dsd->address);
+			cur_dsd->length = cpu_to_le32(dsd_list_len);
+			cur_dsd++;
 		}
-		cur_dsd = (uint32_t *)next_dsd;
+		cur_dsd = next_dsd;
 		while (avail_dsds) {
-			dma_addr_t sle_dma;
-
-			sle_dma = sg_dma_address(cur_seg);
-			*cur_dsd++ = cpu_to_le32(LSD(sle_dma));
-			*cur_dsd++ = cpu_to_le32(MSD(sle_dma));
-			*cur_dsd++ = cpu_to_le32(sg_dma_len(cur_seg));
+			append_dsd64(&cur_dsd, cur_seg);
 			cur_seg = sg_next(cur_seg);
 			avail_dsds--;
 		}
 	}
 
 	/* Null termination */
-	*cur_dsd++ = 0;
-	*cur_dsd++ = 0;
-	*cur_dsd++ = 0;
+	cur_dsd->address = 0;
+	cur_dsd->length = 0;
+	cur_dsd++;
 	cmd_pkt->control_flags |= CF_DATA_SEG_DESCR_ENABLE;
 	return 0;
 }
@@ -702,7 +688,7 @@ qla24xx_build_scsi_iocbs(srb_t *sp, struct cmd_type_7 *cmd_pkt,
     uint16_t tot_dsds, struct req_que *req)
 {
 	uint16_t avail_dsds;
-	uint32_t *cur_dsd;
+	struct dsd64 *cur_dsd;
 	scsi_qla_host_t *vha;
 	struct scsi_cmnd *cmd;
 	struct scatterlist *sg;
@@ -711,7 +697,7 @@ qla24xx_build_scsi_iocbs(srb_t *sp, struct cmd_type_7 *cmd_pkt,
 	cmd = GET_CMD_SP(sp);
 
 	/* Update entry type to indicate Command Type 3 IOCB */
-	*((uint32_t *)(&cmd_pkt->entry_type)) = cpu_to_le32(COMMAND_TYPE_7);
+	put_unaligned_le32(COMMAND_TYPE_7, &cmd_pkt->entry_type);
 
 	/* No data transfer */
 	if (!scsi_bufflen(cmd) || cmd->sc_data_direction == DMA_NONE) {
@@ -734,12 +720,11 @@ qla24xx_build_scsi_iocbs(srb_t *sp, struct cmd_type_7 *cmd_pkt,
 
 	/* One DSD is available in the Command Type 3 IOCB */
 	avail_dsds = 1;
-	cur_dsd = (uint32_t *)&cmd_pkt->dseg_0_address;
+	cur_dsd = &cmd_pkt->dsd;
 
 	/* Load data segments */
 
 	scsi_for_each_sg(cmd, sg, tot_dsds, i) {
-		dma_addr_t sle_dma;
 		cont_a64_entry_t *cont_pkt;
 
 		/* Allocate additional continuation packets? */
@@ -749,14 +734,11 @@ qla24xx_build_scsi_iocbs(srb_t *sp, struct cmd_type_7 *cmd_pkt,
 			 * Type 1 IOCB.
 			 */
 			cont_pkt = qla2x00_prep_cont_type1_iocb(vha, req);
-			cur_dsd = (uint32_t *)cont_pkt->dseg_0_address;
-			avail_dsds = 5;
+			cur_dsd = cont_pkt->dsd;
+			avail_dsds = ARRAY_SIZE(cont_pkt->dsd);
 		}
 
-		sle_dma = sg_dma_address(sg);
-		*cur_dsd++ = cpu_to_le32(LSD(sle_dma));
-		*cur_dsd++ = cpu_to_le32(MSD(sle_dma));
-		*cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
+		append_dsd64(&cur_dsd, sg);
 		avail_dsds--;
 	}
 }
@@ -892,14 +874,14 @@ qla24xx_get_one_block_sg(uint32_t blk_sz, struct qla2_sgx *sgx,
 
 int
 qla24xx_walk_and_build_sglist_no_difb(struct qla_hw_data *ha, srb_t *sp,
-	uint32_t *dsd, uint16_t tot_dsds, struct qla_tc_param *tc)
+	struct dsd64 *dsd, uint16_t tot_dsds, struct qla_tc_param *tc)
 {
 	void *next_dsd;
 	uint8_t avail_dsds = 0;
 	uint32_t dsd_list_len;
 	struct dsd_dma *dsd_ptr;
 	struct scatterlist *sg_prot;
-	uint32_t *cur_dsd = dsd;
+	struct dsd64 *cur_dsd = dsd;
 	uint16_t used_dsds = tot_dsds;
 	uint32_t prot_int; /* protection interval */
 	uint32_t partial;
@@ -973,14 +955,14 @@ qla24xx_walk_and_build_sglist_no_difb(struct qla_hw_data *ha, srb_t *sp,
 
 
 			/* add new list to cmd iocb or last list */
-			*cur_dsd++ = cpu_to_le32(LSD(dsd_ptr->dsd_list_dma));
-			*cur_dsd++ = cpu_to_le32(MSD(dsd_ptr->dsd_list_dma));
-			*cur_dsd++ = dsd_list_len;
-			cur_dsd = (uint32_t *)next_dsd;
+			put_unaligned_le64(dsd_ptr->dsd_list_dma,
+			    &cur_dsd->address);
+			cur_dsd->length = cpu_to_le32(dsd_list_len);
+			cur_dsd = next_dsd;
 		}
-		*cur_dsd++ = cpu_to_le32(LSD(sle_dma));
-		*cur_dsd++ = cpu_to_le32(MSD(sle_dma));
-		*cur_dsd++ = cpu_to_le32(sle_dma_len);
+		put_unaligned_le64(sle_dma, &cur_dsd->address);
+		cur_dsd->length = cpu_to_le32(sle_dma_len);
+		cur_dsd++;
 		avail_dsds--;
 
 		if (partial == 0) {
@@ -999,22 +981,22 @@ qla24xx_walk_and_build_sglist_no_difb(struct qla_hw_data *ha, srb_t *sp,
 		}
 	}
 	/* Null termination */
-	*cur_dsd++ = 0;
-	*cur_dsd++ = 0;
-	*cur_dsd++ = 0;
+	cur_dsd->address = 0;
+	cur_dsd->length = 0;
+	cur_dsd++;
 	return 0;
 }
 
 int
-qla24xx_walk_and_build_sglist(struct qla_hw_data *ha, srb_t *sp, uint32_t *dsd,
-	uint16_t tot_dsds, struct qla_tc_param *tc)
+qla24xx_walk_and_build_sglist(struct qla_hw_data *ha, srb_t *sp,
+	struct dsd64 *dsd, uint16_t tot_dsds, struct qla_tc_param *tc)
 {
 	void *next_dsd;
 	uint8_t avail_dsds = 0;
 	uint32_t dsd_list_len;
 	struct dsd_dma *dsd_ptr;
 	struct scatterlist *sg, *sgl;
-	uint32_t *cur_dsd = dsd;
+	struct dsd64 *cur_dsd = dsd;
 	int i;
 	uint16_t used_dsds = tot_dsds;
 	struct scsi_cmnd *cmd;
@@ -1031,8 +1013,6 @@ qla24xx_walk_and_build_sglist(struct qla_hw_data *ha, srb_t *sp, uint32_t *dsd,
 
 
 	for_each_sg(sgl, sg, tot_dsds, i) {
-		dma_addr_t sle_dma;
-
 		/* Allocate additional continuation packets? */
 		if (avail_dsds == 0) {
 			avail_dsds = (used_dsds > QLA_DSDS_PER_IOCB) ?
@ -1072,29 +1052,25 @@ qla24xx_walk_and_build_sglist(struct qla_hw_data *ha, srb_t *sp, uint32_t *dsd,
|
|||
}
|
||||
|
||||
/* add new list to cmd iocb or last list */
|
||||
*cur_dsd++ = cpu_to_le32(LSD(dsd_ptr->dsd_list_dma));
|
||||
*cur_dsd++ = cpu_to_le32(MSD(dsd_ptr->dsd_list_dma));
|
||||
*cur_dsd++ = dsd_list_len;
|
||||
cur_dsd = (uint32_t *)next_dsd;
|
||||
put_unaligned_le64(dsd_ptr->dsd_list_dma,
|
||||
&cur_dsd->address);
|
||||
cur_dsd->length = cpu_to_le32(dsd_list_len);
|
||||
cur_dsd = next_dsd;
|
||||
}
|
||||
sle_dma = sg_dma_address(sg);
|
||||
|
||||
*cur_dsd++ = cpu_to_le32(LSD(sle_dma));
|
||||
*cur_dsd++ = cpu_to_le32(MSD(sle_dma));
|
||||
*cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
|
||||
append_dsd64(&cur_dsd, sg);
|
||||
avail_dsds--;
|
||||
|
||||
}
|
||||
/* Null termination */
|
||||
*cur_dsd++ = 0;
|
||||
*cur_dsd++ = 0;
|
||||
*cur_dsd++ = 0;
|
||||
cur_dsd->address = 0;
|
||||
cur_dsd->length = 0;
|
||||
cur_dsd++;
|
||||
return 0;
|
||||
}
|
||||
|
||||
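The recurring change in the hunks above replaces each pair of 32-bit `LSD()`/`MSD()` stores with a single `struct dsd64` write through `put_unaligned_le64()`. The following userspace sketch models that layout; the struct field names mirror the patch, while `put_unaligned_le64_sketch()` and `fill_dsd()` are hypothetical stand-ins for the kernel helpers, shown only to illustrate the byte order produced.

```c
#include <stdint.h>

/* 64-bit data-segment descriptor as introduced by this series (sketch). */
struct dsd64 {
    uint64_t address;   /* DMA address, stored little-endian */
    uint32_t length;    /* byte count */
} __attribute__((packed));

/* Store a CPU-order 64-bit value as little-endian bytes at an
 * arbitrarily aligned location -- what put_unaligned_le64() does. */
static void put_unaligned_le64_sketch(uint64_t val, void *p)
{
    uint8_t *b = p;
    int i;

    for (i = 0; i < 8; i++)
        b[i] = (uint8_t)(val >> (8 * i));
}

/* One descriptor write replaces the old LSD()/MSD() 32-bit pair. */
static void fill_dsd(struct dsd64 *dsd, uint64_t dma, uint32_t len)
{
    put_unaligned_le64_sketch(dma, &dsd->address);
    dsd->length = len;  /* cpu_to_le32() conversion elided in this sketch */
}
```

Because the descriptor is written as explicit little-endian bytes, the same firmware-visible layout is produced regardless of host endianness, which is the point of the `__le64` conversion in this series.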
 int
 qla24xx_walk_and_build_prot_sglist(struct qla_hw_data *ha, srb_t *sp,
-    uint32_t *cur_dsd, uint16_t tot_dsds, struct qla_tgt_cmd *tc)
+    struct dsd64 *cur_dsd, uint16_t tot_dsds, struct qla_tgt_cmd *tc)
 {
     struct dsd_dma *dsd_ptr = NULL, *dif_dsd, *nxt_dsd;
     struct scatterlist *sg, *sgl;
@@ -1109,6 +1085,7 @@ qla24xx_walk_and_build_prot_sglist(struct qla_hw_data *ha, srb_t *sp,
 
     if (sp) {
         struct scsi_cmnd *cmd = GET_CMD_SP(sp);
+
         sgl = scsi_prot_sglist(cmd);
         vha = sp->vha;
         difctx = sp->u.scmd.ctx;
@@ -1314,16 +1291,15 @@ qla24xx_walk_and_build_prot_sglist(struct qla_hw_data *ha, srb_t *sp,
             }
 
                 /* add new list to cmd iocb or last list */
-                *cur_dsd++ =
-                    cpu_to_le32(LSD(dsd_ptr->dsd_list_dma));
-                *cur_dsd++ =
-                    cpu_to_le32(MSD(dsd_ptr->dsd_list_dma));
-                *cur_dsd++ = dsd_list_len;
+                put_unaligned_le64(dsd_ptr->dsd_list_dma,
+                           &cur_dsd->address);
+                cur_dsd->length = cpu_to_le32(dsd_list_len);
                 cur_dsd = dsd_ptr->dsd_addr;
             }
-            *cur_dsd++ = cpu_to_le32(LSD(dif_dsd->dsd_list_dma));
-            *cur_dsd++ = cpu_to_le32(MSD(dif_dsd->dsd_list_dma));
-            *cur_dsd++ = cpu_to_le32(sglen);
+            put_unaligned_le64(dif_dsd->dsd_list_dma,
+                       &cur_dsd->address);
+            cur_dsd->length = cpu_to_le32(sglen);
+            cur_dsd++;
             avail_dsds--;
             difctx->dif_bundl_len -= sglen;
             track_difbundl_buf--;
@@ -1334,8 +1310,6 @@ qla24xx_walk_and_build_prot_sglist(struct qla_hw_data *ha, srb_t *sp,
             difctx->no_ldif_dsd, difctx->no_dif_bundl);
     } else {
         for_each_sg(sgl, sg, tot_dsds, i) {
-            dma_addr_t sle_dma;
-
             /* Allocate additional continuation packets? */
             if (avail_dsds == 0) {
                 avail_dsds = (used_dsds > QLA_DSDS_PER_IOCB) ?
@@ -1375,24 +1349,19 @@ qla24xx_walk_and_build_prot_sglist(struct qla_hw_data *ha, srb_t *sp,
             }
 
                 /* add new list to cmd iocb or last list */
-                *cur_dsd++ =
-                    cpu_to_le32(LSD(dsd_ptr->dsd_list_dma));
-                *cur_dsd++ =
-                    cpu_to_le32(MSD(dsd_ptr->dsd_list_dma));
-                *cur_dsd++ = dsd_list_len;
+                put_unaligned_le64(dsd_ptr->dsd_list_dma,
+                           &cur_dsd->address);
+                cur_dsd->length = cpu_to_le32(dsd_list_len);
                 cur_dsd = dsd_ptr->dsd_addr;
             }
-            sle_dma = sg_dma_address(sg);
-            *cur_dsd++ = cpu_to_le32(LSD(sle_dma));
-            *cur_dsd++ = cpu_to_le32(MSD(sle_dma));
-            *cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
+            append_dsd64(&cur_dsd, sg);
             avail_dsds--;
         }
     }
     /* Null termination */
-    *cur_dsd++ = 0;
-    *cur_dsd++ = 0;
-    *cur_dsd++ = 0;
+    cur_dsd->address = 0;
+    cur_dsd->length = 0;
+    cur_dsd++;
     return 0;
 }
 /**
@@ -1405,11 +1374,12 @@ qla24xx_walk_and_build_prot_sglist(struct qla_hw_data *ha, srb_t *sp,
  * @tot_prot_dsds: Total number of segments with protection information
  * @fw_prot_opts: Protection options to be passed to firmware
  */
-inline int
+static inline int
 qla24xx_build_scsi_crc_2_iocbs(srb_t *sp, struct cmd_type_crc_2 *cmd_pkt,
     uint16_t tot_dsds, uint16_t tot_prot_dsds, uint16_t fw_prot_opts)
 {
-    uint32_t *cur_dsd, *fcp_dl;
+    struct dsd64 *cur_dsd;
+    uint32_t *fcp_dl;
     scsi_qla_host_t *vha;
     struct scsi_cmnd *cmd;
     uint32_t total_bytes = 0;
@@ -1427,7 +1397,7 @@ qla24xx_build_scsi_crc_2_iocbs(srb_t *sp, struct cmd_type_crc_2 *cmd_pkt,
     cmd = GET_CMD_SP(sp);
 
     /* Update entry type to indicate Command Type CRC_2 IOCB */
-    *((uint32_t *)(&cmd_pkt->entry_type)) = cpu_to_le32(COMMAND_TYPE_CRC_2);
+    put_unaligned_le32(COMMAND_TYPE_CRC_2, &cmd_pkt->entry_type);
 
     vha = sp->vha;
     ha = vha->hw;
@@ -1475,8 +1445,7 @@ qla24xx_build_scsi_crc_2_iocbs(srb_t *sp, struct cmd_type_crc_2 *cmd_pkt,
     qla24xx_set_t10dif_tags(sp, (struct fw_dif_context *)
         &crc_ctx_pkt->ref_tag, tot_prot_dsds);
 
-    cmd_pkt->crc_context_address[0] = cpu_to_le32(LSD(crc_ctx_dma));
-    cmd_pkt->crc_context_address[1] = cpu_to_le32(MSD(crc_ctx_dma));
+    put_unaligned_le64(crc_ctx_dma, &cmd_pkt->crc_context_address);
     cmd_pkt->crc_context_len = CRC_CONTEXT_LEN_FW;
 
     /* Determine SCSI command length -- align to 4 byte boundary */
@@ -1503,10 +1472,8 @@ qla24xx_build_scsi_crc_2_iocbs(srb_t *sp, struct cmd_type_crc_2 *cmd_pkt,
     int_to_scsilun(cmd->device->lun, &fcp_cmnd->lun);
     memcpy(fcp_cmnd->cdb, cmd->cmnd, cmd->cmd_len);
     cmd_pkt->fcp_cmnd_dseg_len = cpu_to_le16(fcp_cmnd_len);
-    cmd_pkt->fcp_cmnd_dseg_address[0] = cpu_to_le32(
-        LSD(crc_ctx_dma + CRC_CONTEXT_FCPCMND_OFF));
-    cmd_pkt->fcp_cmnd_dseg_address[1] = cpu_to_le32(
-        MSD(crc_ctx_dma + CRC_CONTEXT_FCPCMND_OFF));
+    put_unaligned_le64(crc_ctx_dma + CRC_CONTEXT_FCPCMND_OFF,
+               &cmd_pkt->fcp_cmnd_dseg_address);
     fcp_cmnd->task_management = 0;
     fcp_cmnd->task_attribute = TSK_SIMPLE;
 
@@ -1520,18 +1487,18 @@ qla24xx_build_scsi_crc_2_iocbs(srb_t *sp, struct cmd_type_crc_2 *cmd_pkt,
     switch (scsi_get_prot_op(GET_CMD_SP(sp))) {
     case SCSI_PROT_READ_INSERT:
     case SCSI_PROT_WRITE_STRIP:
-      total_bytes = data_bytes;
-      data_bytes += dif_bytes;
-      break;
+        total_bytes = data_bytes;
+        data_bytes += dif_bytes;
+        break;
 
     case SCSI_PROT_READ_STRIP:
     case SCSI_PROT_WRITE_INSERT:
     case SCSI_PROT_READ_PASS:
    case SCSI_PROT_WRITE_PASS:
-      total_bytes = data_bytes + dif_bytes;
-      break;
+        total_bytes = data_bytes + dif_bytes;
+        break;
     default:
-      BUG();
+        BUG();
     }
 
     if (!qla2x00_hba_err_chk_enabled(sp))
@@ -1548,7 +1515,7 @@ qla24xx_build_scsi_crc_2_iocbs(srb_t *sp, struct cmd_type_crc_2 *cmd_pkt,
     }
 
     if (!bundling) {
-        cur_dsd = (uint32_t *) &crc_ctx_pkt->u.nobundling.data_address;
+        cur_dsd = &crc_ctx_pkt->u.nobundling.data_dsd;
     } else {
         /*
          * Configure Bundling if we need to fetch interlaving
@@ -1558,7 +1525,7 @@ qla24xx_build_scsi_crc_2_iocbs(srb_t *sp, struct cmd_type_crc_2 *cmd_pkt,
         crc_ctx_pkt->u.bundling.dif_byte_count = cpu_to_le32(dif_bytes);
         crc_ctx_pkt->u.bundling.dseg_count = cpu_to_le16(tot_dsds -
             tot_prot_dsds);
-        cur_dsd = (uint32_t *) &crc_ctx_pkt->u.bundling.data_address;
+        cur_dsd = &crc_ctx_pkt->u.bundling.data_dsd;
     }
 
     /* Finish the common fields of CRC pkt */
@@ -1591,7 +1558,7 @@ qla24xx_build_scsi_crc_2_iocbs(srb_t *sp, struct cmd_type_crc_2 *cmd_pkt,
     if (bundling && tot_prot_dsds) {
         /* Walks dif segments */
         cmd_pkt->control_flags |= cpu_to_le16(CF_DIF_SEG_DESCR_ENABLE);
-        cur_dsd = (uint32_t *) &crc_ctx_pkt->u.bundling.dif_address;
+        cur_dsd = &crc_ctx_pkt->u.bundling.dif_dsd;
         if (qla24xx_walk_and_build_prot_sglist(ha, sp, cur_dsd,
             tot_prot_dsds, NULL))
             goto crc_queuing_error;
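The `total_bytes` switch in `qla24xx_build_scsi_crc_2_iocbs()` distinguishes protection operations where only the data bytes count toward the transfer (host inserts or strips the DIF bytes) from those where data and DIF travel together. This hypothetical helper mirrors that arithmetic in isolation; the enum names are illustrative, not the kernel's `SCSI_PROT_*` constants.

```c
#include <stdint.h>

/* Illustrative stand-ins for the SCSI protection-op constants. */
enum prot_op_sketch {
    OP_READ_INSERT,
    OP_WRITE_STRIP,
    OP_READ_STRIP,
    OP_WRITE_INSERT,
    OP_READ_PASS,
    OP_WRITE_PASS,
};

/* Bytes the firmware counts for the command, per the switch above:
 * for INSERT-on-read / STRIP-on-write only the data moves end to end;
 * for the remaining ops the DIF bytes are part of the transfer. */
static uint32_t crc2_total_bytes(enum prot_op_sketch op,
                                 uint32_t data_bytes, uint32_t dif_bytes)
{
    switch (op) {
    case OP_READ_INSERT:
    case OP_WRITE_STRIP:
        return data_bytes;
    default:
        return data_bytes + dif_bytes;
    }
}
```

For example, a 4096-byte payload with 64 bytes of DIF counts as 4096 for READ_INSERT but 4160 for READ_PASS.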
@@ -2325,7 +2292,8 @@ __qla2x00_alloc_iocbs(struct qla_qpair *qpair, srb_t *sp)
     if (req->cnt < req_cnt + 2) {
         if (qpair->use_shadow_reg)
             cnt = *req->out_ptr;
-        else if (ha->mqenable || IS_QLA83XX(ha) || IS_QLA27XX(ha))
+        else if (ha->mqenable || IS_QLA83XX(ha) || IS_QLA27XX(ha) ||
+            IS_QLA28XX(ha))
             cnt = RD_REG_DWORD(&reg->isp25mq.req_q_out);
         else if (IS_P3P_TYPE(ha))
             cnt = RD_REG_DWORD(&reg->isp82.req_q_out);
@@ -2494,7 +2462,7 @@ qla2x00_logout_iocb(srb_t *sp, struct mbx_entry *mbx)
     SET_TARGET_ID(ha, mbx->loop_id, sp->fcport->loop_id);
     mbx->mb0 = cpu_to_le16(MBC_LOGOUT_FABRIC_PORT);
     mbx->mb1 = HAS_EXTENDED_IDS(ha) ?
-        cpu_to_le16(sp->fcport->loop_id):
+        cpu_to_le16(sp->fcport->loop_id) :
         cpu_to_le16(sp->fcport->loop_id << 8);
     mbx->mb2 = cpu_to_le16(sp->fcport->d_id.b.domain);
     mbx->mb3 = cpu_to_le16(sp->fcport->d_id.b.area << 8 |
@@ -2565,6 +2533,16 @@ qla24xx_tm_iocb(srb_t *sp, struct tsk_mgmt_entry *tsk)
     }
 }
 
+void qla2x00_init_timer(srb_t *sp, unsigned long tmo)
+{
+    timer_setup(&sp->u.iocb_cmd.timer, qla2x00_sp_timeout, 0);
+    sp->u.iocb_cmd.timer.expires = jiffies + tmo * HZ;
+    sp->free = qla2x00_sp_free;
+    if (IS_QLAFX00(sp->vha->hw) && sp->type == SRB_FXIOCB_DCMD)
+        init_completion(&sp->u.iocb_cmd.u.fxiocb.fxiocb_comp);
+    add_timer(&sp->u.iocb_cmd.timer);
+}
+
 static void
 qla2x00_els_dcmd_sp_free(void *data)
 {
@@ -2726,18 +2704,13 @@ qla24xx_els_logo_iocb(srb_t *sp, struct els_entry_24xx *els_iocb)
     if (elsio->u.els_logo.els_cmd == ELS_DCMD_PLOGI) {
         els_iocb->tx_byte_count = els_iocb->tx_len =
             sizeof(struct els_plogi_payload);
-        els_iocb->tx_address[0] =
-            cpu_to_le32(LSD(elsio->u.els_plogi.els_plogi_pyld_dma));
-        els_iocb->tx_address[1] =
-            cpu_to_le32(MSD(elsio->u.els_plogi.els_plogi_pyld_dma));
-
+        put_unaligned_le64(elsio->u.els_plogi.els_plogi_pyld_dma,
+                   &els_iocb->tx_address);
         els_iocb->rx_dsd_count = 1;
         els_iocb->rx_byte_count = els_iocb->rx_len =
             sizeof(struct els_plogi_payload);
-        els_iocb->rx_address[0] =
-            cpu_to_le32(LSD(elsio->u.els_plogi.els_resp_pyld_dma));
-        els_iocb->rx_address[1] =
-            cpu_to_le32(MSD(elsio->u.els_plogi.els_resp_pyld_dma));
+        put_unaligned_le64(elsio->u.els_plogi.els_resp_pyld_dma,
+                   &els_iocb->rx_address);
 
         ql_dbg(ql_dbg_io + ql_dbg_buffer, vha, 0x3073,
             "PLOGI ELS IOCB:\n");
@@ -2745,15 +2718,12 @@ qla24xx_els_logo_iocb(srb_t *sp, struct els_entry_24xx *els_iocb)
             (uint8_t *)els_iocb, 0x70);
     } else {
         els_iocb->tx_byte_count = sizeof(struct els_logo_payload);
-        els_iocb->tx_address[0] =
-            cpu_to_le32(LSD(elsio->u.els_logo.els_logo_pyld_dma));
-        els_iocb->tx_address[1] =
-            cpu_to_le32(MSD(elsio->u.els_logo.els_logo_pyld_dma));
+        put_unaligned_le64(elsio->u.els_logo.els_logo_pyld_dma,
+                   &els_iocb->tx_address);
         els_iocb->tx_len = cpu_to_le32(sizeof(struct els_logo_payload));
 
         els_iocb->rx_byte_count = 0;
-        els_iocb->rx_address[0] = 0;
-        els_iocb->rx_address[1] = 0;
+        els_iocb->rx_address = 0;
         els_iocb->rx_len = 0;
     }
 
@@ -2976,17 +2946,13 @@ qla24xx_els_iocb(srb_t *sp, struct els_entry_24xx *els_iocb)
     els_iocb->tx_byte_count =
         cpu_to_le32(bsg_job->request_payload.payload_len);
 
-    els_iocb->tx_address[0] = cpu_to_le32(LSD(sg_dma_address
-        (bsg_job->request_payload.sg_list)));
-    els_iocb->tx_address[1] = cpu_to_le32(MSD(sg_dma_address
-        (bsg_job->request_payload.sg_list)));
+    put_unaligned_le64(sg_dma_address(bsg_job->request_payload.sg_list),
+               &els_iocb->tx_address);
     els_iocb->tx_len = cpu_to_le32(sg_dma_len
         (bsg_job->request_payload.sg_list));
 
-    els_iocb->rx_address[0] = cpu_to_le32(LSD(sg_dma_address
-        (bsg_job->reply_payload.sg_list)));
-    els_iocb->rx_address[1] = cpu_to_le32(MSD(sg_dma_address
-        (bsg_job->reply_payload.sg_list)));
+    put_unaligned_le64(sg_dma_address(bsg_job->reply_payload.sg_list),
+               &els_iocb->rx_address);
     els_iocb->rx_len = cpu_to_le32(sg_dma_len
         (bsg_job->reply_payload.sg_list));
 
@@ -2997,14 +2963,13 @@ static void
 qla2x00_ct_iocb(srb_t *sp, ms_iocb_entry_t *ct_iocb)
 {
     uint16_t avail_dsds;
-    uint32_t *cur_dsd;
+    struct dsd64 *cur_dsd;
     struct scatterlist *sg;
     int index;
     uint16_t tot_dsds;
     scsi_qla_host_t *vha = sp->vha;
     struct qla_hw_data *ha = vha->hw;
     struct bsg_job *bsg_job = sp->u.bsg_job;
-    int loop_iterartion = 0;
     int entry_count = 1;
 
     memset(ct_iocb, 0, sizeof(ms_iocb_entry_t));
@@ -3024,25 +2989,20 @@ qla2x00_ct_iocb(srb_t *sp, ms_iocb_entry_t *ct_iocb)
     ct_iocb->rsp_bytecount =
         cpu_to_le32(bsg_job->reply_payload.payload_len);
 
-    ct_iocb->dseg_req_address[0] = cpu_to_le32(LSD(sg_dma_address
-        (bsg_job->request_payload.sg_list)));
-    ct_iocb->dseg_req_address[1] = cpu_to_le32(MSD(sg_dma_address
-        (bsg_job->request_payload.sg_list)));
-    ct_iocb->dseg_req_length = ct_iocb->req_bytecount;
+    put_unaligned_le64(sg_dma_address(bsg_job->request_payload.sg_list),
+               &ct_iocb->req_dsd.address);
+    ct_iocb->req_dsd.length = ct_iocb->req_bytecount;
 
-    ct_iocb->dseg_rsp_address[0] = cpu_to_le32(LSD(sg_dma_address
-        (bsg_job->reply_payload.sg_list)));
-    ct_iocb->dseg_rsp_address[1] = cpu_to_le32(MSD(sg_dma_address
-        (bsg_job->reply_payload.sg_list)));
-    ct_iocb->dseg_rsp_length = ct_iocb->rsp_bytecount;
+    put_unaligned_le64(sg_dma_address(bsg_job->reply_payload.sg_list),
+               &ct_iocb->rsp_dsd.address);
+    ct_iocb->rsp_dsd.length = ct_iocb->rsp_bytecount;
 
     avail_dsds = 1;
-    cur_dsd = (uint32_t *)ct_iocb->dseg_rsp_address;
+    cur_dsd = &ct_iocb->rsp_dsd;
     index = 0;
     tot_dsds = bsg_job->reply_payload.sg_cnt;
 
     for_each_sg(bsg_job->reply_payload.sg_list, sg, tot_dsds, index) {
-        dma_addr_t sle_dma;
         cont_a64_entry_t *cont_pkt;
 
         /* Allocate additional continuation packets? */
@@ -3053,16 +3013,12 @@ qla2x00_ct_iocb(srb_t *sp, ms_iocb_entry_t *ct_iocb)
             */
             cont_pkt = qla2x00_prep_cont_type1_iocb(vha,
                 vha->hw->req_q_map[0]);
-            cur_dsd = (uint32_t *) cont_pkt->dseg_0_address;
+            cur_dsd = cont_pkt->dsd;
             avail_dsds = 5;
             entry_count++;
         }
 
-        sle_dma = sg_dma_address(sg);
-        *cur_dsd++ = cpu_to_le32(LSD(sle_dma));
-        *cur_dsd++ = cpu_to_le32(MSD(sle_dma));
-        *cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
-        loop_iterartion++;
+        append_dsd64(&cur_dsd, sg);
         avail_dsds--;
     }
     ct_iocb->entry_count = entry_count;
@@ -3074,7 +3030,7 @@ static void
 qla24xx_ct_iocb(srb_t *sp, struct ct_entry_24xx *ct_iocb)
 {
     uint16_t avail_dsds;
-    uint32_t *cur_dsd;
+    struct dsd64 *cur_dsd;
     struct scatterlist *sg;
     int index;
     uint16_t cmd_dsds, rsp_dsds;
@@ -3103,12 +3059,10 @@ qla24xx_ct_iocb(srb_t *sp, struct ct_entry_24xx *ct_iocb)
         cpu_to_le32(bsg_job->request_payload.payload_len);
 
     avail_dsds = 2;
-    cur_dsd = (uint32_t *)ct_iocb->dseg_0_address;
+    cur_dsd = ct_iocb->dsd;
     index = 0;
 
     for_each_sg(bsg_job->request_payload.sg_list, sg, cmd_dsds, index) {
-        dma_addr_t sle_dma;
-
         /* Allocate additional continuation packets? */
         if (avail_dsds == 0) {
             /*
@@ -3117,23 +3071,18 @@ qla24xx_ct_iocb(srb_t *sp, struct ct_entry_24xx *ct_iocb)
             */
             cont_pkt = qla2x00_prep_cont_type1_iocb(
                 vha, ha->req_q_map[0]);
-            cur_dsd = (uint32_t *) cont_pkt->dseg_0_address;
+            cur_dsd = cont_pkt->dsd;
             avail_dsds = 5;
             entry_count++;
         }
 
-        sle_dma = sg_dma_address(sg);
-        *cur_dsd++ = cpu_to_le32(LSD(sle_dma));
-        *cur_dsd++ = cpu_to_le32(MSD(sle_dma));
-        *cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
+        append_dsd64(&cur_dsd, sg);
         avail_dsds--;
     }
 
     index = 0;
 
     for_each_sg(bsg_job->reply_payload.sg_list, sg, rsp_dsds, index) {
-        dma_addr_t sle_dma;
-
         /* Allocate additional continuation packets? */
         if (avail_dsds == 0) {
             /*
@@ -3142,15 +3091,12 @@ qla24xx_ct_iocb(srb_t *sp, struct ct_entry_24xx *ct_iocb)
             */
             cont_pkt = qla2x00_prep_cont_type1_iocb(vha,
                 ha->req_q_map[0]);
-            cur_dsd = (uint32_t *) cont_pkt->dseg_0_address;
+            cur_dsd = cont_pkt->dsd;
             avail_dsds = 5;
             entry_count++;
         }
 
-        sle_dma = sg_dma_address(sg);
-        *cur_dsd++ = cpu_to_le32(LSD(sle_dma));
-        *cur_dsd++ = cpu_to_le32(MSD(sle_dma));
-        *cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
+        append_dsd64(&cur_dsd, sg);
         avail_dsds--;
     }
     ct_iocb->entry_count = entry_count;
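The scatter-gather walkers above hand out descriptor slots from the command IOCB first (one or two DSDs, depending on the IOCB type) and then from type-1 continuation IOCBs holding five DSDs each, bumping `entry_count` whenever `avail_dsds` runs out. The standalone helper below reproduces that accounting; it is a sketch of the arithmetic, not the driver's `qla24xx_calc_iocbs()`.

```c
#include <stdint.h>

/* Number of IOCBs needed for tot_dsds data segments when the command
 * IOCB itself carries first_iocb_dsds slots and each type-1
 * continuation IOCB carries five more. */
static uint16_t calc_iocbs_sketch(uint16_t first_iocb_dsds,
                                  uint16_t tot_dsds)
{
    uint16_t iocbs = 1;             /* the command IOCB itself */

    if (tot_dsds > first_iocb_dsds) {
        uint16_t rest = tot_dsds - first_iocb_dsds;

        iocbs += rest / 5;          /* full continuation packets */
        if (rest % 5)
            iocbs++;                /* partially filled tail packet */
    }
    return iocbs;
}
```

So a bidirectional command with one inline DSD and six segments total needs one continuation packet; seven segments need two.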
@@ -3371,10 +3317,8 @@ qla82xx_start_scsi(srb_t *sp)
         *fcp_dl = htonl((uint32_t)scsi_bufflen(cmd));
 
         cmd_pkt->fcp_cmnd_dseg_len = cpu_to_le16(ctx->fcp_cmnd_len);
-        cmd_pkt->fcp_cmnd_dseg_address[0] =
-            cpu_to_le32(LSD(ctx->fcp_cmnd_dma));
-        cmd_pkt->fcp_cmnd_dseg_address[1] =
-            cpu_to_le32(MSD(ctx->fcp_cmnd_dma));
+        put_unaligned_le64(ctx->fcp_cmnd_dma,
+                   &cmd_pkt->fcp_cmnd_dseg_address);
 
         sp->flags |= SRB_FCP_CMND_DMA_VALID;
         cmd_pkt->byte_count = cpu_to_le32((uint32_t)scsi_bufflen(cmd));
@@ -3386,6 +3330,7 @@ qla82xx_start_scsi(srb_t *sp)
         cmd_pkt->entry_status = (uint8_t) rsp->id;
     } else {
         struct cmd_type_7 *cmd_pkt;
+
         req_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
         if (req->cnt < (req_cnt + 2)) {
             cnt = (uint16_t)RD_REG_DWORD_RELAXED(
@@ -3590,15 +3535,13 @@ qla_nvme_ls(srb_t *sp, struct pt_ls4_request *cmd_pkt)
 
     cmd_pkt->tx_dseg_count = 1;
     cmd_pkt->tx_byte_count = nvme->u.nvme.cmd_len;
-    cmd_pkt->dseg0_len = nvme->u.nvme.cmd_len;
-    cmd_pkt->dseg0_address[0] = cpu_to_le32(LSD(nvme->u.nvme.cmd_dma));
-    cmd_pkt->dseg0_address[1] = cpu_to_le32(MSD(nvme->u.nvme.cmd_dma));
+    cmd_pkt->dsd[0].length = nvme->u.nvme.cmd_len;
+    put_unaligned_le64(nvme->u.nvme.cmd_dma, &cmd_pkt->dsd[0].address);
 
     cmd_pkt->rx_dseg_count = 1;
     cmd_pkt->rx_byte_count = nvme->u.nvme.rsp_len;
-    cmd_pkt->dseg1_len = nvme->u.nvme.rsp_len;
-    cmd_pkt->dseg1_address[0] = cpu_to_le32(LSD(nvme->u.nvme.rsp_dma));
-    cmd_pkt->dseg1_address[1] = cpu_to_le32(MSD(nvme->u.nvme.rsp_dma));
+    cmd_pkt->dsd[1].length = nvme->u.nvme.rsp_len;
+    put_unaligned_le64(nvme->u.nvme.rsp_dma, &cmd_pkt->dsd[1].address);
 
     return rval;
 }
@@ -3737,7 +3680,7 @@ qla25xx_build_bidir_iocb(srb_t *sp, struct scsi_qla_host *vha,
     struct cmd_bidir *cmd_pkt, uint32_t tot_dsds)
 {
     uint16_t avail_dsds;
-    uint32_t *cur_dsd;
+    struct dsd64 *cur_dsd;
     uint32_t req_data_len = 0;
     uint32_t rsp_data_len = 0;
     struct scatterlist *sg;
@@ -3746,8 +3689,7 @@ qla25xx_build_bidir_iocb(srb_t *sp, struct scsi_qla_host *vha,
     struct bsg_job *bsg_job = sp->u.bsg_job;
 
     /*Update entry type to indicate bidir command */
-    *((uint32_t *)(&cmd_pkt->entry_type)) =
-        cpu_to_le32(COMMAND_BIDIRECTIONAL);
+    put_unaligned_le32(COMMAND_BIDIRECTIONAL, &cmd_pkt->entry_type);
 
     /* Set the transfer direction, in this set both flags
      * Also set the BD_WRAP_BACK flag, firmware will take care
@@ -3773,13 +3715,12 @@ qla25xx_build_bidir_iocb(srb_t *sp, struct scsi_qla_host *vha,
      * are bundled in continuation iocb
      */
     avail_dsds = 1;
-    cur_dsd = (uint32_t *)&cmd_pkt->fcp_data_dseg_address;
+    cur_dsd = &cmd_pkt->fcp_dsd;
 
     index = 0;
 
     for_each_sg(bsg_job->request_payload.sg_list, sg,
         bsg_job->request_payload.sg_cnt, index) {
-        dma_addr_t sle_dma;
         cont_a64_entry_t *cont_pkt;
 
         /* Allocate additional continuation packets */
@@ -3788,14 +3729,11 @@ qla25xx_build_bidir_iocb(srb_t *sp, struct scsi_qla_host *vha,
              * 5 DSDS
              */
            cont_pkt = qla2x00_prep_cont_type1_iocb(vha, vha->req);
-            cur_dsd = (uint32_t *) cont_pkt->dseg_0_address;
+            cur_dsd = cont_pkt->dsd;
             avail_dsds = 5;
             entry_count++;
         }
-        sle_dma = sg_dma_address(sg);
-        *cur_dsd++ = cpu_to_le32(LSD(sle_dma));
-        *cur_dsd++ = cpu_to_le32(MSD(sle_dma));
-        *cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
+        append_dsd64(&cur_dsd, sg);
         avail_dsds--;
     }
     /* For read request DSD will always goes to continuation IOCB
@@ -3805,7 +3743,6 @@ qla25xx_build_bidir_iocb(srb_t *sp, struct scsi_qla_host *vha,
      */
     for_each_sg(bsg_job->reply_payload.sg_list, sg,
         bsg_job->reply_payload.sg_cnt, index) {
-        dma_addr_t sle_dma;
         cont_a64_entry_t *cont_pkt;
 
         /* Allocate additional continuation packets */
@@ -3814,14 +3751,11 @@ qla25xx_build_bidir_iocb(srb_t *sp, struct scsi_qla_host *vha,
              * 5 DSDS
              */
            cont_pkt = qla2x00_prep_cont_type1_iocb(vha, vha->req);
-            cur_dsd = (uint32_t *) cont_pkt->dseg_0_address;
+            cur_dsd = cont_pkt->dsd;
             avail_dsds = 5;
             entry_count++;
         }
-        sle_dma = sg_dma_address(sg);
-        *cur_dsd++ = cpu_to_le32(LSD(sle_dma));
-        *cur_dsd++ = cpu_to_le32(MSD(sle_dma));
-        *cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
+        append_dsd64(&cur_dsd, sg);
         avail_dsds--;
     }
     /* This value should be same as number of IOCB required for this cmd */
 
@@ -23,6 +23,14 @@ static void qla2x00_status_cont_entry(struct rsp_que *, sts_cont_entry_t *);
 static int qla2x00_error_entry(scsi_qla_host_t *, struct rsp_que *,
     sts_entry_t *);
 
+const char *const port_state_str[] = {
+    "Unknown",
+    "UNCONFIGURED",
+    "DEAD",
+    "LOST",
+    "ONLINE"
+};
+
 /**
  * qla2100_intr_handler() - Process interrupts for the ISP2100 and ISP2200.
  * @irq: interrupt number
@@ -41,7 +49,7 @@ qla2100_intr_handler(int irq, void *dev_id)
     int status;
     unsigned long iter;
     uint16_t hccr;
-    uint16_t mb[4];
+    uint16_t mb[8];
     struct rsp_que *rsp;
     unsigned long flags;
 
@@ -160,7 +168,7 @@ qla2300_intr_handler(int irq, void *dev_id)
     unsigned long iter;
     uint32_t stat;
     uint16_t hccr;
-    uint16_t mb[4];
+    uint16_t mb[8];
     struct rsp_que *rsp;
     struct qla_hw_data *ha;
     unsigned long flags;
@@ -366,7 +374,7 @@ qla2x00_get_link_speed_str(struct qla_hw_data *ha, uint16_t speed)
     static const char *const link_speeds[] = {
         "1", "2", "?", "4", "8", "16", "32", "10"
     };
-#define QLA_LAST_SPEED 7
+#define QLA_LAST_SPEED (ARRAY_SIZE(link_speeds) - 1)
 
     if (IS_QLA2100(ha) || IS_QLA2200(ha))
         return link_speeds[0];
@@ -708,12 +716,15 @@ qla2x00_async_event(scsi_qla_host_t *vha, struct rsp_que *rsp, uint16_t *mb)
         break;
 
     case MBA_SYSTEM_ERR:        /* System Error */
-        mbx = (IS_QLA81XX(ha) || IS_QLA83XX(ha) || IS_QLA27XX(ha)) ?
+        mbx = (IS_QLA81XX(ha) || IS_QLA83XX(ha) || IS_QLA27XX(ha) ||
+            IS_QLA28XX(ha)) ?
             RD_REG_WORD(&reg24->mailbox7) : 0;
         ql_log(ql_log_warn, vha, 0x5003,
             "ISP System Error - mbx1=%xh mbx2=%xh mbx3=%xh "
             "mbx7=%xh.\n", mb[1], mb[2], mb[3], mbx);
 
+        ha->fw_dump_mpi =
+            (IS_QLA27XX(ha) || IS_QLA28XX(ha)) &&
+            RD_REG_WORD(&reg24->mailbox7) & BIT_8;
         ha->isp_ops->fw_dump(vha, 1);
         ha->flags.fw_init_done = 0;
         QLA_FW_STOPPED(ha);
@@ -837,6 +848,7 @@ qla2x00_async_event(scsi_qla_host_t *vha, struct rsp_que *rsp, uint16_t *mb)
         if (ha->flags.fawwpn_enabled &&
             (ha->current_topology == ISP_CFG_F)) {
             void *wwpn = ha->init_cb->port_name;
+
             memcpy(vha->port_name, wwpn, WWN_SIZE);
             fc_host_port_name(vha->host) =
                 wwn_to_u64(vha->port_name);
@@ -1372,7 +1384,7 @@ qla2x00_mbx_iocb_entry(scsi_qla_host_t *vha, struct req_que *req,
             le16_to_cpu(mbx->status_flags));
 
         ql_dump_buffer(ql_dbg_async + ql_dbg_buffer, vha, 0x5029,
-            (uint8_t *)mbx, sizeof(*mbx));
+            mbx, sizeof(*mbx));
 
         goto logio_done;
     }
@@ -1516,7 +1528,7 @@ qla2x00_ct_entry(scsi_qla_host_t *vha, struct req_que *req,
             bsg_reply->reply_payload_rcv_len = 0;
         }
         ql_dump_buffer(ql_dbg_async + ql_dbg_buffer, vha, 0x5035,
-            (uint8_t *)pkt, sizeof(*pkt));
+            pkt, sizeof(*pkt));
     } else {
         res = DID_OK << 16;
         bsg_reply->reply_payload_rcv_len =
@@ -1591,8 +1603,8 @@ qla24xx_els_ct_entry(scsi_qla_host_t *vha, struct req_que *req,
     }
 
     comp_status = fw_status[0] = le16_to_cpu(pkt->comp_status);
-    fw_status[1] = le16_to_cpu(((struct els_sts_entry_24xx*)pkt)->error_subcode_1);
-    fw_status[2] = le16_to_cpu(((struct els_sts_entry_24xx*)pkt)->error_subcode_2);
+    fw_status[1] = le16_to_cpu(((struct els_sts_entry_24xx *)pkt)->error_subcode_1);
+    fw_status[2] = le16_to_cpu(((struct els_sts_entry_24xx *)pkt)->error_subcode_2);
 
     if (iocb_type == ELS_IOCB_TYPE) {
         els = &sp->u.iocb_cmd;
@@ -1613,7 +1625,7 @@ qla24xx_els_ct_entry(scsi_qla_host_t *vha, struct req_que *req,
                 res = DID_ERROR << 16;
             }
         }
-        ql_log(ql_log_info, vha, 0x503f,
+        ql_dbg(ql_dbg_user, vha, 0x503f,
             "ELS IOCB Done -%s error hdl=%x comp_status=0x%x error subcode 1=0x%x error subcode 2=0x%x total_byte=0x%x\n",
             type, sp->handle, comp_status, fw_status[1], fw_status[2],
             le16_to_cpu(((struct els_sts_entry_24xx *)
@@ -1656,7 +1668,7 @@ qla24xx_els_ct_entry(scsi_qla_host_t *vha, struct req_que *req,
             memcpy(bsg_job->reply + sizeof(struct fc_bsg_reply),
                 fw_status, sizeof(fw_status));
             ql_dump_buffer(ql_dbg_user + ql_dbg_buffer, vha, 0x5056,
-                (uint8_t *)pkt, sizeof(*pkt));
+                pkt, sizeof(*pkt));
         }
         else {
             res = DID_OK << 16;
@@ -1700,7 +1712,7 @@ qla24xx_logio_entry(scsi_qla_host_t *vha, struct req_que *req,
             fcport->d_id.b.area, fcport->d_id.b.al_pa,
             logio->entry_status);
         ql_dump_buffer(ql_dbg_async + ql_dbg_buffer, vha, 0x504d,
-            (uint8_t *)logio, sizeof(*logio));
+            logio, sizeof(*logio));
 
         goto logio_done;
     }
@@ -1846,8 +1858,8 @@ qla24xx_tm_iocb_entry(scsi_qla_host_t *vha, struct req_que *req, void *tsk)
     }
 
     if (iocb->u.tmf.data != QLA_SUCCESS)
-        ql_dump_buffer(ql_dbg_async + ql_dbg_buffer, vha, 0x5055,
-            (uint8_t *)sts, sizeof(*sts));
+        ql_dump_buffer(ql_dbg_async + ql_dbg_buffer, sp->vha, 0x5055,
+            sts, sizeof(*sts));
 
     sp->done(sp, 0);
 }
@ -1969,6 +1981,52 @@ static void qla_ctrlvp_completed(scsi_qla_host_t *vha, struct req_que *req,
|
|||
sp->done(sp, rval);
|
||||
}
|
||||
|
||||
/* Process a single response queue entry. */
|
||||
static void qla2x00_process_response_entry(struct scsi_qla_host *vha,
|
||||
struct rsp_que *rsp,
|
||||
sts_entry_t *pkt)
|
||||
{
|
||||
sts21_entry_t *sts21_entry;
|
||||
sts22_entry_t *sts22_entry;
|
||||
uint16_t handle_cnt;
|
||||
uint16_t cnt;
|
||||
|
||||
switch (pkt->entry_type) {
|
||||
case STATUS_TYPE:
|
||||
qla2x00_status_entry(vha, rsp, pkt);
|
||||
break;
|
||||
case STATUS_TYPE_21:
|
||||
sts21_entry = (sts21_entry_t *)pkt;
|
||||
handle_cnt = sts21_entry->handle_count;
|
||||
for (cnt = 0; cnt < handle_cnt; cnt++)
|
||||
qla2x00_process_completed_request(vha, rsp->req,
|
||||
sts21_entry->handle[cnt]);
|
||||
break;
|
||||
case STATUS_TYPE_22:
|
||||
sts22_entry = (sts22_entry_t *)pkt;
|
||||
handle_cnt = sts22_entry->handle_count;
|
||||
for (cnt = 0; cnt < handle_cnt; cnt++)
|
||||
qla2x00_process_completed_request(vha, rsp->req,
|
||||
sts22_entry->handle[cnt]);
|
||||
break;
|
||||
case STATUS_CONT_TYPE:
|
||||
qla2x00_status_cont_entry(rsp, (sts_cont_entry_t *)pkt);
|
||||
break;
|
||||
case MBX_IOCB_TYPE:
|
||||
qla2x00_mbx_iocb_entry(vha, rsp->req, (struct mbx_entry *)pkt);
|
||||
break;
|
||||
case CT_IOCB_TYPE:
|
||||
qla2x00_ct_entry(vha, rsp->req, pkt, CT_IOCB_TYPE);
|
||||
break;
|
||||
default:
|
||||
/* Type Not Supported. */
|
||||
ql_log(ql_log_warn, vha, 0x504a,
|
||||
"Received unknown response pkt type %x entry status=%x.\n",
|
||||
pkt->entry_type, pkt->entry_status);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* qla2x00_process_response_queue() - Process response queue entries.
|
||||
* @rsp: response queue
|
||||
|
@ -1980,8 +2038,6 @@ qla2x00_process_response_queue(struct rsp_que *rsp)
|
|||
struct qla_hw_data *ha = rsp->hw;
|
||||
struct device_reg_2xxx __iomem *reg = &ha->iobase->isp;
|
||||
sts_entry_t *pkt;
|
||||
uint16_t handle_cnt;
|
||||
uint16_t cnt;
|
||||
|
||||
vha = pci_get_drvdata(ha->pdev);
|
||||
|
||||
|
@ -2006,42 +2062,7 @@ qla2x00_process_response_queue(struct rsp_que *rsp)
|
|||
continue;
|
||||
}
|
||||
|
||||
switch (pkt->entry_type) {
|
||||
case STATUS_TYPE:
|
||||
qla2x00_status_entry(vha, rsp, pkt);
|
||||
break;
|
||||
case STATUS_TYPE_21:
|
||||
handle_cnt = ((sts21_entry_t *)pkt)->handle_count;
|
||||
for (cnt = 0; cnt < handle_cnt; cnt++) {
|
||||
qla2x00_process_completed_request(vha, rsp->req,
|
||||
((sts21_entry_t *)pkt)->handle[cnt]);
|
||||
}
|
||||
break;
|
||||
case STATUS_TYPE_22:
|
||||
handle_cnt = ((sts22_entry_t *)pkt)->handle_count;
|
||||
for (cnt = 0; cnt < handle_cnt; cnt++) {
|
||||
qla2x00_process_completed_request(vha, rsp->req,
|
||||
((sts22_entry_t *)pkt)->handle[cnt]);
|
||||
}
|
||||
break;
|
||||
case STATUS_CONT_TYPE:
|
||||
qla2x00_status_cont_entry(rsp, (sts_cont_entry_t *)pkt);
|
||||
break;
|
||||
case MBX_IOCB_TYPE:
|
||||
qla2x00_mbx_iocb_entry(vha, rsp->req,
|
||||
(struct mbx_entry *)pkt);
|
||||
break;
|
||||
case CT_IOCB_TYPE:
|
||||
qla2x00_ct_entry(vha, rsp->req, pkt, CT_IOCB_TYPE);
|
||||
break;
|
||||
default:
|
||||
/* Type Not Supported. */
|
||||
ql_log(ql_log_warn, vha, 0x504a,
|
||||
"Received unknown response pkt type %x "
|
||||
"entry status=%x.\n",
|
||||
pkt->entry_type, pkt->entry_status);
|
||||
break;
|
||||
}
|
||||
qla2x00_process_response_entry(vha, rsp, pkt);
|
||||
((response_t *)pkt)->signature = RESPONSE_PROCESSED;
|
||||
wmb();
|
||||
}
|
||||
|
@ -2238,6 +2259,7 @@ qla25xx_process_bidir_status_iocb(scsi_qla_host_t *vha, void *pkt,
|
|||
struct fc_bsg_reply *bsg_reply;
|
||||
sts_entry_t *sts;
|
||||
struct sts_entry_24xx *sts24;
|
||||
|
||||
sts = (sts_entry_t *) pkt;
|
||||
sts24 = (struct sts_entry_24xx *) pkt;
|
||||
|
||||
|
@ -3014,7 +3036,8 @@ void qla24xx_process_response_queue(struct scsi_qla_host *vha,
|
|||
qla24xx_els_ct_entry(vha, rsp->req, pkt, ELS_IOCB_TYPE);
|
||||
break;
|
||||
case ABTS_RECV_24XX:
|
||||
if (IS_QLA83XX(ha) || IS_QLA27XX(ha)) {
|
||||
if (IS_QLA83XX(ha) || IS_QLA27XX(ha) ||
|
||||
IS_QLA28XX(ha)) {
|
||||
/* ensure that the ATIO queue is empty */
|
||||
qlt_handle_abts_recv(vha, rsp,
|
||||
(response_t *)pkt);
|
||||
|
@ -3072,6 +3095,7 @@ void qla24xx_process_response_queue(struct scsi_qla_host *vha,
|
|||
/* Adjust ring index */
|
||||
if (IS_P3P_TYPE(ha)) {
|
||||
struct device_reg_82xx __iomem *reg = &ha->iobase->isp82;
|
||||
|
||||
WRT_REG_DWORD(®->rsp_q_out[0], rsp->ring_index);
|
||||
} else {
|
||||
WRT_REG_DWORD(rsp->rsp_q_out, rsp->ring_index);
|
||||
|
@ -3087,7 +3111,7 @@ qla2xxx_check_risc_status(scsi_qla_host_t *vha)
|
|||
struct device_reg_24xx __iomem *reg = &ha->iobase->isp24;
|
||||
|
||||
if (!IS_QLA25XX(ha) && !IS_QLA81XX(ha) && !IS_QLA83XX(ha) &&
|
||||
!IS_QLA27XX(ha))
|
||||
!IS_QLA27XX(ha) && !IS_QLA28XX(ha))
|
||||
return;
|
||||
|
||||
rval = QLA_SUCCESS;
|
||||
|
@ -3475,7 +3499,7 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
|
|||
ql_log(ql_log_fatal, vha, 0x00c8,
|
||||
"Failed to allocate memory for ha->msix_entries.\n");
|
||||
ret = -ENOMEM;
|
||||
goto msix_out;
|
||||
goto free_irqs;
|
||||
}
|
||||
ha->flags.msix_enabled = 1;
|
||||
|
||||
|
@@ -3539,7 +3563,7 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
}

/* Enable MSI-X vector for response queue update for queue 0 */
- if (IS_QLA83XX(ha) || IS_QLA27XX(ha)) {
+ if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
if (ha->msixbase && ha->mqiobase &&
(ha->max_rsp_queues > 1 || ha->max_req_queues > 1 ||
ql2xmqsupport))
@@ -3558,6 +3582,10 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)

msix_out:
return ret;
+
+free_irqs:
+ pci_free_irq_vectors(ha->pdev);
+ goto msix_out;
}

int
@@ -3570,7 +3598,7 @@ qla2x00_request_irqs(struct qla_hw_data *ha, struct rsp_que *rsp)
/* If possible, enable MSI-X. */
if (ql2xenablemsix == 0 || (!IS_QLA2432(ha) && !IS_QLA2532(ha) &&
!IS_QLA8432(ha) && !IS_CNA_CAPABLE(ha) && !IS_QLA2031(ha) &&
-     !IS_QLAFX00(ha) && !IS_QLA27XX(ha)))
+     !IS_QLAFX00(ha) && !IS_QLA27XX(ha) && !IS_QLA28XX(ha)))
goto skip_msi;

if (ql2xenablemsix == 2)
@@ -3609,7 +3637,7 @@ qla2x00_request_irqs(struct qla_hw_data *ha, struct rsp_que *rsp)

if (!IS_QLA24XX(ha) && !IS_QLA2532(ha) && !IS_QLA8432(ha) &&
!IS_QLA8001(ha) && !IS_P3P_TYPE(ha) && !IS_QLAFX00(ha) &&
-     !IS_QLA27XX(ha))
+     !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
goto skip_msi;

ret = pci_alloc_irq_vectors(ha->pdev, 1, 1, PCI_IRQ_MSI);
@@ -567,9 +567,9 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
mcp->mb[0]);
} else if (rval) {
if (ql2xextended_error_logging & (ql_dbg_disc|ql_dbg_mbx)) {
- pr_warn("%s [%s]-%04x:%ld: **** Failed", QL_MSGHDR,
+ pr_warn("%s [%s]-%04x:%ld: **** Failed=%x", QL_MSGHDR,
dev_name(&ha->pdev->dev), 0x1020+0x800,
- vha->host_no);
+ vha->host_no, rval);
mboxes = mcp->in_mb;
cnt = 4;
for (i = 0; i < ha->mbx_count && cnt; i++, mboxes >>= 1)
@@ -634,14 +634,15 @@ qla2x00_load_ram(scsi_qla_host_t *vha, dma_addr_t req_dma, uint32_t risc_addr,
mcp->out_mb |= MBX_4;
}

- mcp->in_mb = MBX_0;
+ mcp->in_mb = MBX_1|MBX_0;
mcp->tov = MBX_TOV_SECONDS;
mcp->flags = 0;
rval = qla2x00_mailbox_command(vha, mcp);

if (rval != QLA_SUCCESS) {
ql_dbg(ql_dbg_mbx, vha, 0x1023,
- "Failed=%x mb[0]=%x.\n", rval, mcp->mb[0]);
+ "Failed=%x mb[0]=%x mb[1]=%x.\n",
+ rval, mcp->mb[0], mcp->mb[1]);
} else {
ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1024,
"Done %s.\n", __func__);
@@ -656,7 +657,7 @@ static inline uint16_t qla25xx_set_sfp_lr_dist(struct qla_hw_data *ha)
{
uint16_t mb4 = BIT_0;

- if (IS_QLA83XX(ha) || IS_QLA27XX(ha))
+ if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
mb4 |= ha->long_range_distance << LR_DIST_FW_POS;

return mb4;
@@ -666,7 +667,7 @@ static inline uint16_t qla25xx_set_nvr_lr_dist(struct qla_hw_data *ha)
{
uint16_t mb4 = BIT_0;

- if (IS_QLA83XX(ha) || IS_QLA27XX(ha)) {
+ if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
struct nvram_81xx *nv = ha->nvram;

mb4 |= LR_DIST_FW_FIELD(nv->enhanced_features);
@@ -711,7 +712,7 @@ qla2x00_execute_fw(scsi_qla_host_t *vha, uint32_t risc_addr)
mcp->mb[4] = 0;
ha->flags.using_lr_setting = 0;
if (IS_QLA25XX(ha) || IS_QLA81XX(ha) || IS_QLA83XX(ha) ||
-     IS_QLA27XX(ha)) {
+     IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
if (ql2xautodetectsfp) {
if (ha->flags.detected_lr_sfp) {
mcp->mb[4] |=
@@ -730,19 +731,20 @@ qla2x00_execute_fw(scsi_qla_host_t *vha, uint32_t risc_addr)
}
}

- if (ql2xnvmeenable && IS_QLA27XX(ha))
+ if (ql2xnvmeenable && (IS_QLA27XX(ha) || IS_QLA28XX(ha)))
mcp->mb[4] |= NVME_ENABLE_FLAG;

- if (IS_QLA83XX(ha) || IS_QLA27XX(ha)) {
+ if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
struct nvram_81xx *nv = ha->nvram;
/* set minimum speed if specified in nvram */
- if (nv->min_link_speed >= 2 &&
-     nv->min_link_speed <= 5) {
+ if (nv->min_supported_speed >= 2 &&
+     nv->min_supported_speed <= 5) {
mcp->mb[4] |= BIT_4;
- mcp->mb[11] = nv->min_link_speed;
+ mcp->mb[11] |= nv->min_supported_speed & 0xF;
mcp->out_mb |= MBX_11;
mcp->in_mb |= BIT_5;
- vha->min_link_speed_feat = nv->min_link_speed;
+ vha->min_supported_speed =
+     nv->min_supported_speed;
}
}
@@ -770,34 +772,39 @@ qla2x00_execute_fw(scsi_qla_host_t *vha, uint32_t risc_addr)
if (rval != QLA_SUCCESS) {
ql_dbg(ql_dbg_mbx, vha, 0x1026,
"Failed=%x mb[0]=%x.\n", rval, mcp->mb[0]);
- } else {
- if (IS_FWI2_CAPABLE(ha)) {
- ha->fw_ability_mask = mcp->mb[3] << 16 | mcp->mb[2];
- ql_dbg(ql_dbg_mbx, vha, 0x119a,
- "fw_ability_mask=%x.\n", ha->fw_ability_mask);
- ql_dbg(ql_dbg_mbx, vha, 0x1027,
- "exchanges=%x.\n", mcp->mb[1]);
- if (IS_QLA83XX(ha) || IS_QLA27XX(ha)) {
- ha->max_speed_sup = mcp->mb[2] & BIT_0;
- ql_dbg(ql_dbg_mbx, vha, 0x119b,
- "Maximum speed supported=%s.\n",
- ha->max_speed_sup ? "32Gps" : "16Gps");
- if (vha->min_link_speed_feat) {
- ha->min_link_speed = mcp->mb[5];
- ql_dbg(ql_dbg_mbx, vha, 0x119c,
- "Minimum speed set=%s.\n",
- mcp->mb[5] == 5 ? "32Gps" :
- mcp->mb[5] == 4 ? "16Gps" :
- mcp->mb[5] == 3 ? "8Gps" :
- mcp->mb[5] == 2 ? "4Gps" :
- "unknown");
- }
- }
- }
- ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1028,
- "Done.\n");
- return rval;
- }
+ return rval;
+ }
+
+ if (!IS_FWI2_CAPABLE(ha))
+ goto done;
+
+ ha->fw_ability_mask = mcp->mb[3] << 16 | mcp->mb[2];
+ ql_dbg(ql_dbg_mbx, vha, 0x119a,
+ "fw_ability_mask=%x.\n", ha->fw_ability_mask);
+ ql_dbg(ql_dbg_mbx, vha, 0x1027, "exchanges=%x.\n", mcp->mb[1]);
+ if (IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
+ ha->max_supported_speed = mcp->mb[2] & (BIT_0|BIT_1);
+ ql_dbg(ql_dbg_mbx, vha, 0x119b, "max_supported_speed=%s.\n",
+ ha->max_supported_speed == 0 ? "16Gps" :
+ ha->max_supported_speed == 1 ? "32Gps" :
+ ha->max_supported_speed == 2 ? "64Gps" : "unknown");
+ if (vha->min_supported_speed) {
+ ha->min_supported_speed = mcp->mb[5] &
+ (BIT_0 | BIT_1 | BIT_2);
+ ql_dbg(ql_dbg_mbx, vha, 0x119c,
+ "min_supported_speed=%s.\n",
+ ha->min_supported_speed == 6 ? "64Gps" :
+ ha->min_supported_speed == 5 ? "32Gps" :
+ ha->min_supported_speed == 4 ? "16Gps" :
+ ha->min_supported_speed == 3 ? "8Gps" :
+ ha->min_supported_speed == 2 ? "4Gps" : "unknown");
+ }
+ }
+
+done:
+ ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1028,
+ "Done %s.\n", __func__);
+
+ return rval;
+ }
@@ -1053,10 +1060,10 @@ qla2x00_get_fw_version(scsi_qla_host_t *vha)
mcp->in_mb |= MBX_13|MBX_12|MBX_11|MBX_10|MBX_9|MBX_8;
if (IS_FWI2_CAPABLE(ha))
mcp->in_mb |= MBX_17|MBX_16|MBX_15;
- if (IS_QLA27XX(ha))
+ if (IS_QLA27XX(ha) || IS_QLA28XX(ha))
mcp->in_mb |=
MBX_25|MBX_24|MBX_23|MBX_22|MBX_21|MBX_20|MBX_19|MBX_18|
- MBX_14|MBX_13|MBX_11|MBX_10|MBX_9|MBX_8;
+ MBX_14|MBX_13|MBX_11|MBX_10|MBX_9|MBX_8|MBX_7;

mcp->flags = 0;
mcp->tov = MBX_TOV_SECONDS;
@@ -1122,7 +1129,10 @@ qla2x00_get_fw_version(scsi_qla_host_t *vha)
}
}

- if (IS_QLA27XX(ha)) {
+ if (IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
+ ha->serdes_version[0] = mcp->mb[7] & 0xff;
+ ha->serdes_version[1] = mcp->mb[8] >> 8;
+ ha->serdes_version[2] = mcp->mb[8] & 0xff;
ha->mpi_version[0] = mcp->mb[10] & 0xff;
ha->mpi_version[1] = mcp->mb[11] >> 8;
ha->mpi_version[2] = mcp->mb[11] & 0xff;
@@ -1133,6 +1143,13 @@ qla2x00_get_fw_version(scsi_qla_host_t *vha)
ha->fw_shared_ram_end = (mcp->mb[21] << 16) | mcp->mb[20];
ha->fw_ddr_ram_start = (mcp->mb[23] << 16) | mcp->mb[22];
ha->fw_ddr_ram_end = (mcp->mb[25] << 16) | mcp->mb[24];
+ if (IS_QLA28XX(ha)) {
+ if (mcp->mb[16] & BIT_10) {
+ ql_log(ql_log_info, vha, 0xffff,
+ "FW support secure flash updates\n");
+ ha->flags.secure_fw = 1;
+ }
+ }
}

failed:
@@ -1638,7 +1655,7 @@ qla2x00_get_adapter_id(scsi_qla_host_t *vha, uint16_t *id, uint8_t *al_pa,
mcp->in_mb |= MBX_13|MBX_12|MBX_11|MBX_10;
if (IS_FWI2_CAPABLE(vha->hw))
mcp->in_mb |= MBX_19|MBX_18|MBX_17|MBX_16;
- if (IS_QLA27XX(vha->hw))
+ if (IS_QLA27XX(vha->hw) || IS_QLA28XX(vha->hw))
mcp->in_mb |= MBX_15;
mcp->tov = MBX_TOV_SECONDS;
mcp->flags = 0;
@@ -1692,7 +1709,7 @@ qla2x00_get_adapter_id(scsi_qla_host_t *vha, uint16_t *id, uint8_t *al_pa,
}
}

- if (IS_QLA27XX(vha->hw))
+ if (IS_QLA27XX(vha->hw) || IS_QLA28XX(vha->hw))
vha->bbcr = mcp->mb[15];
}
@@ -1808,7 +1825,7 @@ qla2x00_init_firmware(scsi_qla_host_t *vha, uint16_t size)
}
/* 1 and 2 should normally be captured. */
mcp->in_mb = MBX_2|MBX_1|MBX_0;
- if (IS_QLA83XX(ha) || IS_QLA27XX(ha))
+ if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
/* mb3 is additional info about the installed SFP. */
mcp->in_mb |= MBX_3;
mcp->buf_size = size;
@@ -1819,10 +1836,20 @@ qla2x00_init_firmware(scsi_qla_host_t *vha, uint16_t size)
if (rval != QLA_SUCCESS) {
/*EMPTY*/
ql_dbg(ql_dbg_mbx, vha, 0x104d,
- "Failed=%x mb[0]=%x, mb[1]=%x, mb[2]=%x, mb[3]=%x,.\n",
+ "Failed=%x mb[0]=%x, mb[1]=%x, mb[2]=%x, mb[3]=%x.\n",
rval, mcp->mb[0], mcp->mb[1], mcp->mb[2], mcp->mb[3]);
+ if (ha->init_cb) {
+ ql_dbg(ql_dbg_mbx, vha, 0x104d, "init_cb:\n");
+ ql_dump_buffer(ql_dbg_init + ql_dbg_verbose, vha,
+ 0x0104d, ha->init_cb, sizeof(*ha->init_cb));
+ }
+ if (ha->ex_init_cb && ha->ex_init_cb->ex_version) {
+ ql_dbg(ql_dbg_mbx, vha, 0x104d, "ex_init_cb:\n");
+ ql_dump_buffer(ql_dbg_init + ql_dbg_verbose, vha,
+ 0x0104d, ha->ex_init_cb, sizeof(*ha->ex_init_cb));
+ }
} else {
- if (IS_QLA27XX(ha)) {
+ if (IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
if (mcp->mb[2] == 6 || mcp->mb[3] == 2)
ql_dbg(ql_dbg_mbx, vha, 0x119d,
"Invalid SFP/Validation Failed\n")
@@ -2006,7 +2033,7 @@ qla2x00_get_port_database(scsi_qla_host_t *vha, fc_port_t *fcport, uint8_t opt)

/* Passback COS information. */
fcport->supported_classes = (pd->options & BIT_4) ?
- FC_COS_CLASS2: FC_COS_CLASS3;
+ FC_COS_CLASS2 : FC_COS_CLASS3;
}

gpd_error_out:
@@ -2076,7 +2103,7 @@ qla2x00_get_firmware_state(scsi_qla_host_t *vha, uint16_t *states)
/*EMPTY*/
ql_dbg(ql_dbg_mbx, vha, 0x1055, "Failed=%x.\n", rval);
} else {
- if (IS_QLA27XX(ha)) {
+ if (IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
if (mcp->mb[2] == 6 || mcp->mb[3] == 2)
ql_dbg(ql_dbg_mbx, vha, 0x119e,
"Invalid SFP/Validation Failed\n")
@@ -2859,7 +2886,8 @@ qla2x00_get_resource_cnts(scsi_qla_host_t *vha)
mcp->mb[0] = MBC_GET_RESOURCE_COUNTS;
mcp->out_mb = MBX_0;
mcp->in_mb = MBX_11|MBX_10|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
- if (IS_QLA81XX(vha->hw) || IS_QLA83XX(vha->hw) || IS_QLA27XX(vha->hw))
+ if (IS_QLA81XX(ha) || IS_QLA83XX(ha) ||
+     IS_QLA27XX(ha) || IS_QLA28XX(ha))
mcp->in_mb |= MBX_12;
mcp->tov = MBX_TOV_SECONDS;
mcp->flags = 0;
@@ -2884,7 +2912,8 @@ qla2x00_get_resource_cnts(scsi_qla_host_t *vha)
ha->orig_fw_iocb_count = mcp->mb[10];
if (ha->flags.npiv_supported)
ha->max_npiv_vports = mcp->mb[11];
- if (IS_QLA81XX(ha) || IS_QLA83XX(ha) || IS_QLA27XX(ha))
+ if (IS_QLA81XX(ha) || IS_QLA83XX(ha) || IS_QLA27XX(ha) ||
+     IS_QLA28XX(ha))
ha->fw_max_fcf_count = mcp->mb[12];
}
@@ -3248,7 +3277,7 @@ __qla24xx_issue_tmf(char *name, uint32_t type, struct fc_port *fcport,

/* Issue marker IOCB. */
rval2 = qla2x00_marker(vha, ha->base_qpair, fcport->loop_id, l,
- type == TCF_LUN_RESET ? MK_SYNC_ID_LUN: MK_SYNC_ID);
+ type == TCF_LUN_RESET ? MK_SYNC_ID_LUN : MK_SYNC_ID);
if (rval2 != QLA_SUCCESS) {
ql_dbg(ql_dbg_mbx, vha, 0x1099,
"Failed to issue marker IOCB (%x).\n", rval2);
@@ -3323,7 +3352,7 @@ qla2x00_write_serdes_word(scsi_qla_host_t *vha, uint16_t addr, uint16_t data)
mbx_cmd_t *mcp = &mc;

if (!IS_QLA25XX(vha->hw) && !IS_QLA2031(vha->hw) &&
-     !IS_QLA27XX(vha->hw))
+     !IS_QLA27XX(vha->hw) && !IS_QLA28XX(vha->hw))
return QLA_FUNCTION_FAILED;

ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1182,
@@ -3362,7 +3391,7 @@ qla2x00_read_serdes_word(scsi_qla_host_t *vha, uint16_t addr, uint16_t *data)
mbx_cmd_t *mcp = &mc;

if (!IS_QLA25XX(vha->hw) && !IS_QLA2031(vha->hw) &&
-     !IS_QLA27XX(vha->hw))
+     !IS_QLA27XX(vha->hw) && !IS_QLA28XX(vha->hw))
return QLA_FUNCTION_FAILED;

ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1185,
@@ -3631,7 +3660,8 @@ qla2x00_enable_fce_trace(scsi_qla_host_t *vha, dma_addr_t fce_dma,
"Entered %s.\n", __func__);

if (!IS_QLA25XX(vha->hw) && !IS_QLA81XX(vha->hw) &&
-     !IS_QLA83XX(vha->hw) && !IS_QLA27XX(vha->hw))
+     !IS_QLA83XX(vha->hw) && !IS_QLA27XX(vha->hw) &&
+     !IS_QLA28XX(vha->hw))
return QLA_FUNCTION_FAILED;

if (unlikely(pci_channel_offline(vha->hw->pdev)))
@@ -3744,7 +3774,7 @@ qla2x00_get_idma_speed(scsi_qla_host_t *vha, uint16_t loop_id,
rval = qla2x00_mailbox_command(vha, mcp);

/* Return mailbox statuses. */
- if (mb != NULL) {
+ if (mb) {
mb[0] = mcp->mb[0];
mb[1] = mcp->mb[1];
mb[3] = mcp->mb[3];
@@ -3779,7 +3809,7 @@ qla2x00_set_idma_speed(scsi_qla_host_t *vha, uint16_t loop_id,
mcp->mb[0] = MBC_PORT_PARAMS;
mcp->mb[1] = loop_id;
mcp->mb[2] = BIT_0;
- mcp->mb[3] = port_speed & (BIT_5|BIT_4|BIT_3|BIT_2|BIT_1|BIT_0);
+ mcp->mb[3] = port_speed & 0x3F;
mcp->mb[9] = vha->vp_idx;
mcp->out_mb = MBX_9|MBX_3|MBX_2|MBX_1|MBX_0;
mcp->in_mb = MBX_3|MBX_1|MBX_0;
@@ -3788,7 +3818,7 @@ qla2x00_set_idma_speed(scsi_qla_host_t *vha, uint16_t loop_id,
rval = qla2x00_mailbox_command(vha, mcp);

/* Return mailbox statuses. */
- if (mb != NULL) {
+ if (mb) {
mb[0] = mcp->mb[0];
mb[1] = mcp->mb[1];
mb[3] = mcp->mb[3];
@@ -4230,7 +4260,7 @@ qla84xx_verify_chip(struct scsi_qla_host *vha, uint16_t *status)
ql_dbg(ql_dbg_mbx + ql_dbg_buffer, vha, 0x111c,
"Dump of Verify Request.\n");
ql_dump_buffer(ql_dbg_mbx + ql_dbg_buffer, vha, 0x111e,
- (uint8_t *)mn, sizeof(*mn));
+ mn, sizeof(*mn));

rval = qla2x00_issue_iocb_timeout(vha, mn, mn_dma, 0, 120);
if (rval != QLA_SUCCESS) {
@@ -4242,7 +4272,7 @@ qla84xx_verify_chip(struct scsi_qla_host *vha, uint16_t *status)
ql_dbg(ql_dbg_mbx + ql_dbg_buffer, vha, 0x1110,
"Dump of Verify Response.\n");
ql_dump_buffer(ql_dbg_mbx + ql_dbg_buffer, vha, 0x1118,
- (uint8_t *)mn, sizeof(*mn));
+ mn, sizeof(*mn));

status[0] = le16_to_cpu(mn->p.rsp.comp_status);
status[1] = status[0] == CS_VCS_CHIP_FAILURE ?
@@ -4318,7 +4348,7 @@ qla25xx_init_req_que(struct scsi_qla_host *vha, struct req_que *req)
mcp->mb[12] = req->qos;
mcp->mb[11] = req->vp_idx;
mcp->mb[13] = req->rid;
- if (IS_QLA83XX(ha) || IS_QLA27XX(ha))
+ if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
mcp->mb[15] = 0;

mcp->mb[4] = req->id;
@@ -4332,9 +4362,10 @@ qla25xx_init_req_que(struct scsi_qla_host *vha, struct req_que *req)
mcp->flags = MBX_DMA_OUT;
mcp->tov = MBX_TOV_SECONDS * 2;

- if (IS_QLA81XX(ha) || IS_QLA83XX(ha) || IS_QLA27XX(ha))
+ if (IS_QLA81XX(ha) || IS_QLA83XX(ha) || IS_QLA27XX(ha) ||
+     IS_QLA28XX(ha))
mcp->in_mb |= MBX_1;
- if (IS_QLA83XX(ha) || IS_QLA27XX(ha)) {
+ if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
mcp->out_mb |= MBX_15;
/* debug q create issue in SR-IOV */
mcp->in_mb |= MBX_9 | MBX_8 | MBX_7;
@@ -4343,7 +4374,7 @@ qla25xx_init_req_que(struct scsi_qla_host *vha, struct req_que *req)
spin_lock_irqsave(&ha->hardware_lock, flags);
if (!(req->options & BIT_0)) {
WRT_REG_DWORD(req->req_q_in, 0);
- if (!IS_QLA83XX(ha) && !IS_QLA27XX(ha))
+ if (!IS_QLA83XX(ha) && !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
WRT_REG_DWORD(req->req_q_out, 0);
}
spin_unlock_irqrestore(&ha->hardware_lock, flags);
@@ -4387,7 +4418,7 @@ qla25xx_init_rsp_que(struct scsi_qla_host *vha, struct rsp_que *rsp)
mcp->mb[5] = rsp->length;
mcp->mb[14] = rsp->msix->entry;
mcp->mb[13] = rsp->rid;
- if (IS_QLA83XX(ha) || IS_QLA27XX(ha))
+ if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
mcp->mb[15] = 0;

mcp->mb[4] = rsp->id;
@@ -4404,7 +4435,7 @@ qla25xx_init_rsp_que(struct scsi_qla_host *vha, struct rsp_que *rsp)
if (IS_QLA81XX(ha)) {
mcp->out_mb |= MBX_12|MBX_11|MBX_10;
mcp->in_mb |= MBX_1;
- } else if (IS_QLA83XX(ha) || IS_QLA27XX(ha)) {
+ } else if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
mcp->out_mb |= MBX_15|MBX_12|MBX_11|MBX_10;
mcp->in_mb |= MBX_1;
/* debug q create issue in SR-IOV */
@@ -4414,7 +4445,7 @@ qla25xx_init_rsp_que(struct scsi_qla_host *vha, struct rsp_que *rsp)
spin_lock_irqsave(&ha->hardware_lock, flags);
if (!(rsp->options & BIT_0)) {
WRT_REG_DWORD(rsp->rsp_q_out, 0);
- if (!IS_QLA83XX(ha) && !IS_QLA27XX(ha))
+ if (!IS_QLA83XX(ha) && !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
WRT_REG_DWORD(rsp->rsp_q_in, 0);
}
@@ -4472,7 +4503,7 @@ qla81xx_fac_get_sector_size(scsi_qla_host_t *vha, uint32_t *sector_size)
"Entered %s.\n", __func__);

if (!IS_QLA81XX(vha->hw) && !IS_QLA83XX(vha->hw) &&
-     !IS_QLA27XX(vha->hw))
+     !IS_QLA27XX(vha->hw) && !IS_QLA28XX(vha->hw))
return QLA_FUNCTION_FAILED;

mcp->mb[0] = MBC_FLASH_ACCESS_CTRL;
@@ -4504,7 +4535,7 @@ qla81xx_fac_do_write_enable(scsi_qla_host_t *vha, int enable)
mbx_cmd_t *mcp = &mc;

if (!IS_QLA81XX(vha->hw) && !IS_QLA83XX(vha->hw) &&
-     !IS_QLA27XX(vha->hw))
+     !IS_QLA27XX(vha->hw) && !IS_QLA28XX(vha->hw))
return QLA_FUNCTION_FAILED;

ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x10df,
@@ -4539,7 +4570,7 @@ qla81xx_fac_erase_sector(scsi_qla_host_t *vha, uint32_t start, uint32_t finish)
mbx_cmd_t *mcp = &mc;

if (!IS_QLA81XX(vha->hw) && !IS_QLA83XX(vha->hw) &&
-     !IS_QLA27XX(vha->hw))
+     !IS_QLA27XX(vha->hw) && !IS_QLA28XX(vha->hw))
return QLA_FUNCTION_FAILED;

ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x10e2,
@@ -4569,6 +4600,42 @@ qla81xx_fac_erase_sector(scsi_qla_host_t *vha, uint32_t start, uint32_t finish)
return rval;
}

+int
+qla81xx_fac_semaphore_access(scsi_qla_host_t *vha, int lock)
+{
+ int rval = QLA_SUCCESS;
+ mbx_cmd_t mc;
+ mbx_cmd_t *mcp = &mc;
+ struct qla_hw_data *ha = vha->hw;
+
+ if (!IS_QLA81XX(ha) && !IS_QLA83XX(ha) &&
+     !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
+ return rval;
+
+ ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x10e2,
+ "Entered %s.\n", __func__);
+
+ mcp->mb[0] = MBC_FLASH_ACCESS_CTRL;
+ mcp->mb[1] = (lock ? FAC_OPT_CMD_LOCK_SEMAPHORE :
+     FAC_OPT_CMD_UNLOCK_SEMAPHORE);
+ mcp->out_mb = MBX_1|MBX_0;
+ mcp->in_mb = MBX_1|MBX_0;
+ mcp->tov = MBX_TOV_SECONDS;
+ mcp->flags = 0;
+ rval = qla2x00_mailbox_command(vha, mcp);
+
+ if (rval != QLA_SUCCESS) {
+ ql_dbg(ql_dbg_mbx, vha, 0x10e3,
+ "Failed=%x mb[0]=%x mb[1]=%x mb[2]=%x.\n",
+ rval, mcp->mb[0], mcp->mb[1], mcp->mb[2]);
+ } else {
+ ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x10e4,
+ "Done %s.\n", __func__);
+ }
+
+ return rval;
+}

int
qla81xx_restart_mpi_firmware(scsi_qla_host_t *vha)
{
|
|||
if (rval != QLA_SUCCESS) {
|
||||
ql_dbg(ql_dbg_mbx, vha, 0x10e9,
|
||||
"Failed=%x mb[0]=%x.\n", rval, mcp->mb[0]);
|
||||
if (mcp->mb[0] == MBS_COMMAND_ERROR &&
|
||||
mcp->mb[1] == 0x22)
|
||||
if (mcp->mb[0] == MBS_COMMAND_ERROR && mcp->mb[1] == 0x22) {
|
||||
/* sfp is not there */
|
||||
rval = QLA_INTERFACE_ERROR;
|
||||
}
|
||||
} else {
|
||||
ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x10ea,
|
||||
"Done %s.\n", __func__);
|
||||
|
@@ -5161,13 +5228,14 @@ qla2x00_write_ram_word(scsi_qla_host_t *vha, uint32_t risc_addr, uint32_t data)
mcp->mb[3] = MSW(data);
mcp->mb[8] = MSW(risc_addr);
mcp->out_mb = MBX_8|MBX_3|MBX_2|MBX_1|MBX_0;
- mcp->in_mb = MBX_0;
+ mcp->in_mb = MBX_1|MBX_0;
mcp->tov = 30;
mcp->flags = 0;
rval = qla2x00_mailbox_command(vha, mcp);
if (rval != QLA_SUCCESS) {
ql_dbg(ql_dbg_mbx, vha, 0x1101,
- "Failed=%x mb[0]=%x.\n", rval, mcp->mb[0]);
+ "Failed=%x mb[0]=%x mb[1]=%x.\n",
+ rval, mcp->mb[0], mcp->mb[1]);
} else {
ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1102,
"Done %s.\n", __func__);
@@ -5278,7 +5346,7 @@ qla2x00_set_data_rate(scsi_qla_host_t *vha, uint16_t mode)

mcp->out_mb = MBX_2|MBX_1|MBX_0;
mcp->in_mb = MBX_2|MBX_1|MBX_0;
- if (IS_QLA83XX(ha) || IS_QLA27XX(ha))
+ if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
mcp->in_mb |= MBX_4|MBX_3;
mcp->tov = MBX_TOV_SECONDS;
mcp->flags = 0;
@@ -5316,7 +5384,7 @@ qla2x00_get_data_rate(scsi_qla_host_t *vha)
mcp->mb[1] = QLA_GET_DATA_RATE;
mcp->out_mb = MBX_1|MBX_0;
mcp->in_mb = MBX_2|MBX_1|MBX_0;
- if (IS_QLA83XX(ha) || IS_QLA27XX(ha))
+ if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
mcp->in_mb |= MBX_3;
mcp->tov = MBX_TOV_SECONDS;
mcp->flags = 0;
@@ -5346,7 +5414,7 @@ qla81xx_get_port_config(scsi_qla_host_t *vha, uint16_t *mb)
"Entered %s.\n", __func__);

if (!IS_QLA81XX(ha) && !IS_QLA83XX(ha) && !IS_QLA8044(ha) &&
-     !IS_QLA27XX(ha))
+     !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
return QLA_FUNCTION_FAILED;
mcp->mb[0] = MBC_GET_PORT_CONFIG;
mcp->out_mb = MBX_0;
@@ -5662,6 +5730,7 @@ qla8044_md_get_template(scsi_qla_host_t *vha)
mbx_cmd_t *mcp = &mc;
int rval = QLA_FUNCTION_FAILED;
+ int offset = 0, size = MINIDUMP_SIZE_36K;

ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0xb11f,
"Entered %s.\n", __func__);
@@ -5842,7 +5911,7 @@ qla83xx_wr_reg(scsi_qla_host_t *vha, uint32_t reg, uint32_t data)
mbx_cmd_t mc;
mbx_cmd_t *mcp = &mc;

- if (!IS_QLA83XX(ha) && !IS_QLA27XX(ha))
+ if (!IS_QLA83XX(ha) && !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
return QLA_FUNCTION_FAILED;

ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1130,
@@ -5917,7 +5986,7 @@ qla83xx_rd_reg(scsi_qla_host_t *vha, uint32_t reg, uint32_t *data)
struct qla_hw_data *ha = vha->hw;
unsigned long retry_max_time = jiffies + (2 * HZ);

- if (!IS_QLA83XX(ha) && !IS_QLA27XX(ha))
+ if (!IS_QLA83XX(ha) && !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
return QLA_FUNCTION_FAILED;

ql_dbg(ql_dbg_mbx, vha, 0x114b, "Entered %s.\n", __func__);
@@ -5967,7 +6036,7 @@ qla83xx_restart_nic_firmware(scsi_qla_host_t *vha)
mbx_cmd_t *mcp = &mc;
struct qla_hw_data *ha = vha->hw;

- if (!IS_QLA83XX(ha) && !IS_QLA27XX(ha))
+ if (!IS_QLA83XX(ha))
return QLA_FUNCTION_FAILED;

ql_dbg(ql_dbg_mbx, vha, 0x1143, "Entered %s.\n", __func__);
@@ -6101,7 +6170,8 @@ qla26xx_dport_diagnostics(scsi_qla_host_t *vha,
mbx_cmd_t *mcp = &mc;
dma_addr_t dd_dma;

- if (!IS_QLA83XX(vha->hw) && !IS_QLA27XX(vha->hw))
+ if (!IS_QLA83XX(vha->hw) && !IS_QLA27XX(vha->hw) &&
+     !IS_QLA28XX(vha->hw))
return QLA_FUNCTION_FAILED;

ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x119f,
@@ -6318,7 +6388,13 @@ int __qla24xx_parse_gpdb(struct scsi_qla_host *vha, fc_port_t *fcport,
fcport->d_id.b.rsvd_1 = 0;

if (fcport->fc4f_nvme) {
- fcport->port_type = FCT_NVME;
+ fcport->port_type = 0;
+ if ((pd->prli_svc_param_word_3[0] & BIT_5) == 0)
+ fcport->port_type |= FCT_NVME_INITIATOR;
+ if ((pd->prli_svc_param_word_3[0] & BIT_4) == 0)
+ fcport->port_type |= FCT_NVME_TARGET;
+ if ((pd->prli_svc_param_word_3[0] & BIT_3) == 0)
+ fcport->port_type |= FCT_NVME_DISCOVERY;
} else {
/* If not target must be initiator or unknown type. */
if ((pd->prli_svc_param_word_3[0] & BIT_4) == 0)
@@ -6507,3 +6583,101 @@ int qla24xx_res_count_wait(struct scsi_qla_host *vha,
done:
return rval;
}
+
+int qla28xx_secure_flash_update(scsi_qla_host_t *vha, uint16_t opts,
+     uint16_t region, uint32_t len, dma_addr_t sfub_dma_addr,
+     uint32_t sfub_len)
+{
+ int rval;
+ mbx_cmd_t mc;
+ mbx_cmd_t *mcp = &mc;
+
+ mcp->mb[0] = MBC_SECURE_FLASH_UPDATE;
+ mcp->mb[1] = opts;
+ mcp->mb[2] = region;
+ mcp->mb[3] = MSW(len);
+ mcp->mb[4] = LSW(len);
+ mcp->mb[5] = MSW(sfub_dma_addr);
+ mcp->mb[6] = LSW(sfub_dma_addr);
+ mcp->mb[7] = MSW(MSD(sfub_dma_addr));
+ mcp->mb[8] = LSW(MSD(sfub_dma_addr));
+ mcp->mb[9] = sfub_len;
+ mcp->out_mb =
+     MBX_9|MBX_8|MBX_7|MBX_6|MBX_5|MBX_4|MBX_3|MBX_2|MBX_1|MBX_0;
+ mcp->in_mb = MBX_2|MBX_1|MBX_0;
+ mcp->tov = MBX_TOV_SECONDS;
+ mcp->flags = 0;
+ rval = qla2x00_mailbox_command(vha, mcp);
+
+ if (rval != QLA_SUCCESS) {
+ ql_dbg(ql_dbg_mbx, vha, 0xffff, "%s(%ld): failed rval 0x%x, %x %x %x",
+     __func__, vha->host_no, rval, mcp->mb[0], mcp->mb[1],
+     mcp->mb[2]);
+ }
+
+ return rval;
+}
+
+int qla2xxx_write_remote_register(scsi_qla_host_t *vha, uint32_t addr,
+     uint32_t data)
+{
+ int rval;
+ mbx_cmd_t mc;
+ mbx_cmd_t *mcp = &mc;
+
+ ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x10e8,
+ "Entered %s.\n", __func__);
+
+ mcp->mb[0] = MBC_WRITE_REMOTE_REG;
+ mcp->mb[1] = LSW(addr);
+ mcp->mb[2] = MSW(addr);
+ mcp->mb[3] = LSW(data);
+ mcp->mb[4] = MSW(data);
+ mcp->out_mb = MBX_4|MBX_3|MBX_2|MBX_1|MBX_0;
+ mcp->in_mb = MBX_1|MBX_0;
+ mcp->tov = MBX_TOV_SECONDS;
+ mcp->flags = 0;
+ rval = qla2x00_mailbox_command(vha, mcp);
+
+ if (rval != QLA_SUCCESS) {
+ ql_dbg(ql_dbg_mbx, vha, 0x10e9,
+ "Failed=%x mb[0]=%x.\n", rval, mcp->mb[0]);
+ } else {
+ ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x10ea,
+ "Done %s.\n", __func__);
+ }
+
+ return rval;
+}
+
+int qla2xxx_read_remote_register(scsi_qla_host_t *vha, uint32_t addr,
+     uint32_t *data)
+{
+ int rval;
+ mbx_cmd_t mc;
+ mbx_cmd_t *mcp = &mc;
+
+ ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x10e8,
+ "Entered %s.\n", __func__);
+
+ mcp->mb[0] = MBC_READ_REMOTE_REG;
+ mcp->mb[1] = LSW(addr);
+ mcp->mb[2] = MSW(addr);
+ mcp->out_mb = MBX_2|MBX_1|MBX_0;
+ mcp->in_mb = MBX_4|MBX_3|MBX_2|MBX_1|MBX_0;
+ mcp->tov = MBX_TOV_SECONDS;
+ mcp->flags = 0;
+ rval = qla2x00_mailbox_command(vha, mcp);
+
+ *data = (uint32_t)((((uint32_t)mcp->mb[4]) << 16) | mcp->mb[3]);
+
+ if (rval != QLA_SUCCESS) {
+ ql_dbg(ql_dbg_mbx, vha, 0x10e9,
+ "Failed=%x mb[0]=%x.\n", rval, mcp->mb[0]);
+ } else {
+ ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x10ea,
+ "Done %s.\n", __func__);
+ }
+
+ return rval;
+}
@@ -905,7 +905,8 @@ static void qla_ctrlvp_sp_done(void *s, int res)
{
struct srb *sp = s;

- complete(&sp->comp);
+ if (sp->comp)
+ complete(sp->comp);
/* don't free sp here. Let the caller do the free */
}
@@ -922,6 +923,7 @@ int qla24xx_control_vp(scsi_qla_host_t *vha, int cmd)
struct qla_hw_data *ha = vha->hw;
int vp_index = vha->vp_idx;
struct scsi_qla_host *base_vha = pci_get_drvdata(ha->pdev);
+ DECLARE_COMPLETION_ONSTACK(comp);
srb_t *sp;

ql_dbg(ql_dbg_vport, vha, 0x10c1,
@@ -936,6 +938,7 @@ int qla24xx_control_vp(scsi_qla_host_t *vha, int cmd)

sp->type = SRB_CTRL_VP;
sp->name = "ctrl_vp";
+ sp->comp = &comp;
sp->done = qla_ctrlvp_sp_done;
sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
@@ -953,7 +956,9 @@ int qla24xx_control_vp(scsi_qla_host_t *vha, int cmd)
ql_dbg(ql_dbg_vport, vha, 0x113f, "%s hndl %x submitted\n",
sp->name, sp->handle);

- wait_for_completion(&sp->comp);
+ wait_for_completion(&comp);
+ sp->comp = NULL;

rval = sp->rc;
switch (rval) {
case QLA_FUNCTION_TIMEOUT:
@@ -273,9 +273,9 @@ qlafx00_mailbox_command(scsi_qla_host_t *vha, struct mbx_cmd_32 *mcp)

if (rval) {
ql_log(ql_log_warn, base_vha, 0x1163,
- "**** Failed mbx[0]=%x, mb[1]=%x, mb[2]=%x, "
- "mb[3]=%x, cmd=%x ****.\n",
- mcp->mb[0], mcp->mb[1], mcp->mb[2], mcp->mb[3], command);
+ "**** Failed=%x mbx[0]=%x, mb[1]=%x, mb[2]=%x, mb[3]=%x, cmd=%x ****.\n",
+ rval, mcp->mb[0], mcp->mb[1], mcp->mb[2], mcp->mb[3],
+ command);
} else {
ql_dbg(ql_dbg_mbx, base_vha, 0x1164, "Done %s.\n", __func__);
}
@@ -629,17 +629,20 @@ qlafx00_soc_cpu_reset(scsi_qla_host_t *vha)
*
* Returns 0 on success.
*/
-void
+int
qlafx00_soft_reset(scsi_qla_host_t *vha)
{
struct qla_hw_data *ha = vha->hw;
+ int rval = QLA_FUNCTION_FAILED;

if (unlikely(pci_channel_offline(ha->pdev) &&
ha->flags.pci_channel_io_perm_failure))
- return;
+ return rval;

ha->isp_ops->disable_intrs(ha);
qlafx00_soc_cpu_reset(vha);
+
+ return QLA_SUCCESS;
}

/**
@@ -1138,8 +1141,8 @@ qlafx00_find_all_targets(scsi_qla_host_t *vha,

ql_dbg(ql_dbg_disc + ql_dbg_init, vha, 0x2088,
"Listing Target bit map...\n");
- ql_dump_buffer(ql_dbg_disc + ql_dbg_init, vha,
-     0x2089, (uint8_t *)ha->gid_list, 32);
+ ql_dump_buffer(ql_dbg_disc + ql_dbg_init, vha, 0x2089,
+     ha->gid_list, 32);

/* Allocate temporary rmtport for any new rmtports discovered. */
new_fcport = qla2x00_alloc_fcport(vha, GFP_KERNEL);
@@ -1320,6 +1323,7 @@ qlafx00_configure_devices(scsi_qla_host_t *vha)
{
int rval;
unsigned long flags;
+
rval = QLA_SUCCESS;

flags = vha->dpc_flags;
@@ -1913,8 +1917,7 @@ qlafx00_fx_disc(scsi_qla_host_t *vha, fc_port_t *fcport, uint16_t fx_type)
phost_info->domainname,
phost_info->hostdriver);
ql_dump_buffer(ql_dbg_init + ql_dbg_disc, vha, 0x014d,
-     (uint8_t *)phost_info,
-     sizeof(struct host_system_info));
+     phost_info, sizeof(*phost_info));
}
}
@ -1968,7 +1971,7 @@ qlafx00_fx_disc(scsi_qla_host_t *vha, fc_port_t *fcport, uint16_t fx_type)
|
|||
vha->d_id.b.al_pa = pinfo->port_id[2];
|
||||
qlafx00_update_host_attr(vha, pinfo);
|
||||
ql_dump_buffer(ql_dbg_init + ql_dbg_buffer, vha, 0x0141,
|
||||
(uint8_t *)pinfo, 16);
|
||||
pinfo, 16);
|
||||
} else if (fx_type == FXDISC_GET_TGT_NODE_INFO) {
|
||||
struct qlafx00_tgt_node_info *pinfo =
|
||||
(struct qlafx00_tgt_node_info *) fdisc->u.fxiocb.rsp_addr;
|
||||
|
@ -1976,12 +1979,12 @@ qlafx00_fx_disc(scsi_qla_host_t *vha, fc_port_t *fcport, uint16_t fx_type)
|
|||
memcpy(fcport->port_name, pinfo->tgt_node_wwpn, WWN_SIZE);
|
||||
fcport->port_type = FCT_TARGET;
|
||||
ql_dump_buffer(ql_dbg_init + ql_dbg_buffer, vha, 0x0144,
|
||||
(uint8_t *)pinfo, 16);
|
||||
pinfo, 16);
|
||||
} else if (fx_type == FXDISC_GET_TGT_NODE_LIST) {
|
||||
struct qlafx00_tgt_node_info *pinfo =
|
||||
(struct qlafx00_tgt_node_info *) fdisc->u.fxiocb.rsp_addr;
|
||||
ql_dump_buffer(ql_dbg_init + ql_dbg_buffer, vha, 0x0146,
|
||||
(uint8_t *)pinfo, 16);
|
||||
pinfo, 16);
|
||||
memcpy(vha->hw->gid_list, pinfo, QLAFX00_TGT_NODE_LIST_SIZE);
|
||||
} else if (fx_type == FXDISC_ABORT_IOCTL)
|
||||
fdisc->u.fxiocb.result =
|
||||
|
@ -2248,18 +2251,16 @@ qlafx00_ioctl_iosb_entry(scsi_qla_host_t *vha, struct req_que *req,
|
|||
|
||||
fw_sts_ptr = bsg_job->reply + sizeof(struct fc_bsg_reply);
|
||||
|
||||
memcpy(fw_sts_ptr, (uint8_t *)&fstatus,
|
||||
sizeof(struct qla_mt_iocb_rsp_fx00));
|
||||
memcpy(fw_sts_ptr, &fstatus, sizeof(fstatus));
|
||||
bsg_job->reply_len = sizeof(struct fc_bsg_reply) +
|
||||
sizeof(struct qla_mt_iocb_rsp_fx00) + sizeof(uint8_t);
|
||||
|
||||
ql_dump_buffer(ql_dbg_user + ql_dbg_verbose,
|
||||
sp->fcport->vha, 0x5080,
|
||||
(uint8_t *)pkt, sizeof(struct ioctl_iocb_entry_fx00));
|
||||
sp->vha, 0x5080, pkt, sizeof(*pkt));
|
||||
|
||||
ql_dump_buffer(ql_dbg_user + ql_dbg_verbose,
|
||||
sp->fcport->vha, 0x5074,
|
||||
(uint8_t *)fw_sts_ptr, sizeof(struct qla_mt_iocb_rsp_fx00));
|
||||
sp->vha, 0x5074,
|
||||
fw_sts_ptr, sizeof(fstatus));
|
||||
|
||||
res = bsg_reply->result = DID_OK << 16;
|
||||
bsg_reply->reply_payload_rcv_len =
|
||||
|
@ -2597,7 +2598,7 @@ qlafx00_status_cont_entry(struct rsp_que *rsp, sts_cont_entry_t *pkt)
|
|||
|
||||
/* Move sense data. */
|
||||
ql_dump_buffer(ql_dbg_io + ql_dbg_buffer, vha, 0x304e,
|
||||
(uint8_t *)pkt, sizeof(sts_cont_entry_t));
|
||||
pkt, sizeof(*pkt));
|
||||
memcpy(sense_ptr, pkt->data, sense_sz);
|
||||
ql_dump_buffer(ql_dbg_io + ql_dbg_buffer, vha, 0x304a,
|
||||
sense_ptr, sense_sz);
|
||||
|
@ -2992,7 +2993,7 @@ qlafx00_build_scsi_iocbs(srb_t *sp, struct cmd_type_7_fx00 *cmd_pkt,
|
|||
uint16_t tot_dsds, struct cmd_type_7_fx00 *lcmd_pkt)
|
||||
{
|
||||
uint16_t avail_dsds;
|
||||
__le32 *cur_dsd;
|
||||
struct dsd64 *cur_dsd;
|
||||
scsi_qla_host_t *vha;
|
||||
struct scsi_cmnd *cmd;
|
||||
struct scatterlist *sg;
|
||||
|
@ -3028,12 +3029,10 @@ qlafx00_build_scsi_iocbs(srb_t *sp, struct cmd_type_7_fx00 *cmd_pkt,
|
|||
|
||||
/* One DSD is available in the Command Type 3 IOCB */
|
||||
avail_dsds = 1;
|
||||
cur_dsd = (__le32 *)&lcmd_pkt->dseg_0_address;
|
||||
cur_dsd = &lcmd_pkt->dsd;
|
||||
|
||||
/* Load data segments */
|
||||
scsi_for_each_sg(cmd, sg, tot_dsds, i) {
|
||||
dma_addr_t sle_dma;
|
||||
|
||||
/* Allocate additional continuation packets? */
|
||||
if (avail_dsds == 0) {
|
||||
/*
|
||||
|
@ -3043,26 +3042,23 @@ qlafx00_build_scsi_iocbs(srb_t *sp, struct cmd_type_7_fx00 *cmd_pkt,
|
|||
memset(&lcont_pkt, 0, REQUEST_ENTRY_SIZE);
|
||||
cont_pkt =
|
||||
qlafx00_prep_cont_type1_iocb(req, &lcont_pkt);
|
||||
cur_dsd = (__le32 *)lcont_pkt.dseg_0_address;
|
||||
cur_dsd = lcont_pkt.dsd;
|
||||
avail_dsds = 5;
|
||||
cont = 1;
|
||||
}
|
||||
|
||||
sle_dma = sg_dma_address(sg);
|
||||
*cur_dsd++ = cpu_to_le32(LSD(sle_dma));
|
||||
*cur_dsd++ = cpu_to_le32(MSD(sle_dma));
|
||||
*cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
|
||||
append_dsd64(&cur_dsd, sg);
|
||||
avail_dsds--;
|
||||
if (avail_dsds == 0 && cont == 1) {
|
||||
cont = 0;
|
||||
memcpy_toio((void __iomem *)cont_pkt, &lcont_pkt,
|
||||
REQUEST_ENTRY_SIZE);
|
||||
sizeof(lcont_pkt));
|
||||
}
|
||||
|
||||
}
|
||||
if (avail_dsds != 0 && cont == 1) {
|
||||
memcpy_toio((void __iomem *)cont_pkt, &lcont_pkt,
|
||||
REQUEST_ENTRY_SIZE);
|
||||
sizeof(lcont_pkt));
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -3172,9 +3168,9 @@ qlafx00_start_scsi(srb_t *sp)
|
|||
lcmd_pkt.entry_status = (uint8_t) rsp->id;
|
||||
|
||||
ql_dump_buffer(ql_dbg_io + ql_dbg_buffer, vha, 0x302e,
|
||||
(uint8_t *)cmd->cmnd, cmd->cmd_len);
|
||||
cmd->cmnd, cmd->cmd_len);
|
||||
ql_dump_buffer(ql_dbg_io + ql_dbg_buffer, vha, 0x3032,
|
||||
(uint8_t *)&lcmd_pkt, REQUEST_ENTRY_SIZE);
|
||||
&lcmd_pkt, sizeof(lcmd_pkt));
|
||||
|
||||
memcpy_toio((void __iomem *)cmd_pkt, &lcmd_pkt, REQUEST_ENTRY_SIZE);
|
||||
wmb();
|
||||
|
@ -3282,11 +3278,9 @@ qlafx00_fxdisc_iocb(srb_t *sp, struct fxdisc_entry_fx00 *pfxiocb)
|
|||
fx_iocb.req_dsdcnt = cpu_to_le16(1);
|
||||
fx_iocb.req_xfrcnt =
|
||||
cpu_to_le16(fxio->u.fxiocb.req_len);
|
||||
fx_iocb.dseg_rq_address[0] =
|
||||
cpu_to_le32(LSD(fxio->u.fxiocb.req_dma_handle));
|
||||
fx_iocb.dseg_rq_address[1] =
|
||||
cpu_to_le32(MSD(fxio->u.fxiocb.req_dma_handle));
|
||||
fx_iocb.dseg_rq_len =
|
||||
put_unaligned_le64(fxio->u.fxiocb.req_dma_handle,
|
||||
&fx_iocb.dseg_rq.address);
|
||||
fx_iocb.dseg_rq.length =
|
||||
cpu_to_le32(fxio->u.fxiocb.req_len);
|
||||
}
|
||||
|
||||
|
@ -3294,11 +3288,9 @@ qlafx00_fxdisc_iocb(srb_t *sp, struct fxdisc_entry_fx00 *pfxiocb)
|
|||
fx_iocb.rsp_dsdcnt = cpu_to_le16(1);
|
||||
fx_iocb.rsp_xfrcnt =
|
||||
cpu_to_le16(fxio->u.fxiocb.rsp_len);
|
||||
fx_iocb.dseg_rsp_address[0] =
|
||||
cpu_to_le32(LSD(fxio->u.fxiocb.rsp_dma_handle));
|
||||
fx_iocb.dseg_rsp_address[1] =
|
||||
cpu_to_le32(MSD(fxio->u.fxiocb.rsp_dma_handle));
|
||||
fx_iocb.dseg_rsp_len =
|
||||
put_unaligned_le64(fxio->u.fxiocb.rsp_dma_handle,
|
||||
&fx_iocb.dseg_rsp.address);
|
||||
fx_iocb.dseg_rsp.length =
|
||||
cpu_to_le32(fxio->u.fxiocb.rsp_len);
|
||||
}
|
||||
|
||||
|
@ -3308,6 +3300,7 @@ qlafx00_fxdisc_iocb(srb_t *sp, struct fxdisc_entry_fx00 *pfxiocb)
|
|||
fx_iocb.flags = fxio->u.fxiocb.flags;
|
||||
} else {
|
||||
struct scatterlist *sg;
|
||||
|
||||
bsg_job = sp->u.bsg_job;
|
||||
bsg_request = bsg_job->request;
|
||||
piocb_rqst = (struct qla_mt_iocb_rqst_fx00 *)
|
||||
|
@ -3327,19 +3320,17 @@ qlafx00_fxdisc_iocb(srb_t *sp, struct fxdisc_entry_fx00 *pfxiocb)
|
|||
int avail_dsds, tot_dsds;
|
||||
cont_a64_entry_t lcont_pkt;
|
||||
cont_a64_entry_t *cont_pkt = NULL;
|
||||
__le32 *cur_dsd;
|
||||
struct dsd64 *cur_dsd;
|
||||
int index = 0, cont = 0;
|
||||
|
||||
fx_iocb.req_dsdcnt =
|
||||
cpu_to_le16(bsg_job->request_payload.sg_cnt);
|
||||
tot_dsds =
|
||||
bsg_job->request_payload.sg_cnt;
|
||||
cur_dsd = (__le32 *)&fx_iocb.dseg_rq_address[0];
|
||||
cur_dsd = &fx_iocb.dseg_rq;
|
||||
avail_dsds = 1;
|
||||
for_each_sg(bsg_job->request_payload.sg_list, sg,
|
||||
tot_dsds, index) {
|
||||
dma_addr_t sle_dma;
|
||||
|
||||
/* Allocate additional continuation packets? */
|
||||
if (avail_dsds == 0) {
|
||||
/*
|
||||
|
@ -3351,17 +3342,13 @@ qlafx00_fxdisc_iocb(srb_t *sp, struct fxdisc_entry_fx00 *pfxiocb)
|
|||
cont_pkt =
|
||||
qlafx00_prep_cont_type1_iocb(
|
||||
sp->vha->req, &lcont_pkt);
|
||||
cur_dsd = (__le32 *)
|
||||
lcont_pkt.dseg_0_address;
|
||||
cur_dsd = lcont_pkt.dsd;
|
||||
avail_dsds = 5;
|
||||
cont = 1;
|
||||
entry_cnt++;
|
||||
}
|
||||
|
||||
sle_dma = sg_dma_address(sg);
|
||||
*cur_dsd++ = cpu_to_le32(LSD(sle_dma));
|
||||
*cur_dsd++ = cpu_to_le32(MSD(sle_dma));
|
||||
*cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
|
||||
append_dsd64(&cur_dsd, sg);
|
||||
avail_dsds--;
|
||||
|
||||
if (avail_dsds == 0 && cont == 1) {
|
||||
|
@ -3389,19 +3376,17 @@ qlafx00_fxdisc_iocb(srb_t *sp, struct fxdisc_entry_fx00 *pfxiocb)
|
|||
int avail_dsds, tot_dsds;
|
||||
cont_a64_entry_t lcont_pkt;
|
||||
cont_a64_entry_t *cont_pkt = NULL;
|
||||
__le32 *cur_dsd;
|
||||
struct dsd64 *cur_dsd;
|
||||
int index = 0, cont = 0;
|
||||
|
||||
fx_iocb.rsp_dsdcnt =
|
||||
cpu_to_le16(bsg_job->reply_payload.sg_cnt);
|
||||
tot_dsds = bsg_job->reply_payload.sg_cnt;
|
||||
cur_dsd = (__le32 *)&fx_iocb.dseg_rsp_address[0];
|
||||
cur_dsd = &fx_iocb.dseg_rsp;
|
||||
avail_dsds = 1;
|
||||
|
||||
for_each_sg(bsg_job->reply_payload.sg_list, sg,
|
||||
tot_dsds, index) {
|
||||
dma_addr_t sle_dma;
|
||||
|
||||
/* Allocate additional continuation packets? */
|
||||
if (avail_dsds == 0) {
|
||||
/*
|
||||
|
@ -3413,17 +3398,13 @@ qlafx00_fxdisc_iocb(srb_t *sp, struct fxdisc_entry_fx00 *pfxiocb)
|
|||
cont_pkt =
|
||||
qlafx00_prep_cont_type1_iocb(
|
||||
sp->vha->req, &lcont_pkt);
|
||||
cur_dsd = (__le32 *)
|
||||
lcont_pkt.dseg_0_address;
|
||||
cur_dsd = lcont_pkt.dsd;
|
||||
avail_dsds = 5;
|
||||
cont = 1;
|
||||
entry_cnt++;
|
||||
}
|
||||
|
||||
sle_dma = sg_dma_address(sg);
|
||||
*cur_dsd++ = cpu_to_le32(LSD(sle_dma));
|
||||
*cur_dsd++ = cpu_to_le32(MSD(sle_dma));
|
||||
*cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
|
||||
append_dsd64(&cur_dsd, sg);
|
||||
avail_dsds--;
|
||||
|
||||
if (avail_dsds == 0 && cont == 1) {
|
||||
|
@ -3454,10 +3435,8 @@ qlafx00_fxdisc_iocb(srb_t *sp, struct fxdisc_entry_fx00 *pfxiocb)
|
|||
}
|
||||
|
||||
ql_dump_buffer(ql_dbg_user + ql_dbg_verbose,
|
||||
sp->vha, 0x3047,
|
||||
(uint8_t *)&fx_iocb, sizeof(struct fxdisc_entry_fx00));
|
||||
sp->vha, 0x3047, &fx_iocb, sizeof(fx_iocb));
|
||||
|
||||
memcpy_toio((void __iomem *)pfxiocb, &fx_iocb,
|
||||
sizeof(struct fxdisc_entry_fx00));
|
||||
memcpy_toio((void __iomem *)pfxiocb, &fx_iocb, sizeof(fx_iocb));
|
||||
wmb();
|
||||
}
--- a/drivers/scsi/qla2xxx/qla_mr.h
+++ b/drivers/scsi/qla2xxx/qla_mr.h
@@ -7,6 +7,8 @@
 #ifndef __QLA_MR_H
 #define __QLA_MR_H
 
+#include "qla_dsd.h"
+
 /*
  * The PCI VendorID and DeviceID for our board.
  */
@@ -46,8 +48,7 @@ struct cmd_type_7_fx00 {
 	uint8_t fcp_cdb[MAX_CMDSZ];	/* SCSI command words. */
 	__le32 byte_count;		/* Total byte count. */
 
-	uint32_t dseg_0_address[2];	/* Data segment 0 address. */
-	uint32_t dseg_0_len;		/* Data segment 0 length. */
+	struct dsd64 dsd;
 };
 
 #define STATUS_TYPE_FX00	0x01	/* Status entry. */
@@ -176,10 +177,8 @@ struct fxdisc_entry_fx00 {
 	uint8_t flags;
 	uint8_t reserved_1;
 
-	__le32 dseg_rq_address[2];	/* Data segment 0 address. */
-	__le32 dseg_rq_len;		/* Data segment 0 length. */
-	__le32 dseg_rsp_address[2];	/* Data segment 1 address. */
-	__le32 dseg_rsp_len;		/* Data segment 1 length. */
+	struct dsd64 dseg_rq;
+	struct dsd64 dseg_rsp;
 
 	__le32 dataword;
 	__le32 adapid;
--- a/drivers/scsi/qla2xxx/qla_nvme.c
+++ b/drivers/scsi/qla2xxx/qla_nvme.c
@@ -131,14 +131,10 @@ static void qla_nvme_sp_ls_done(void *ptr, int res)
 	struct nvmefc_ls_req *fd;
 	struct nvme_private *priv;
 
-	if (atomic_read(&sp->ref_count) == 0) {
-		ql_log(ql_log_warn, sp->fcport->vha, 0x2123,
-		    "SP reference-count to ZERO on LS_done -- sp=%p.\n", sp);
+	if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0))
 		return;
-	}
 
-	if (!atomic_dec_and_test(&sp->ref_count))
-		return;
+	atomic_dec(&sp->ref_count);
 
 	if (res)
 		res = -EINVAL;
@@ -161,15 +157,18 @@ static void qla_nvme_sp_done(void *ptr, int res)
 	nvme = &sp->u.iocb_cmd;
 	fd = nvme->u.nvme.desc;
 
-	if (!atomic_dec_and_test(&sp->ref_count))
+	if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0))
 		return;
 
-	if (res == QLA_SUCCESS)
-		fd->status = 0;
-	else
-		fd->status = NVME_SC_INTERNAL;
+	atomic_dec(&sp->ref_count);
 
-	fd->rcv_rsplen = nvme->u.nvme.rsp_pyld_len;
+	if (res == QLA_SUCCESS) {
+		fd->rcv_rsplen = nvme->u.nvme.rsp_pyld_len;
+	} else {
+		fd->rcv_rsplen = 0;
+		fd->transferred_length = 0;
+	}
+	fd->status = 0;
 	fd->done(fd);
 	qla2xxx_rel_qpair_sp(sp->qpair, sp);
 
@@ -185,14 +184,24 @@ static void qla_nvme_abort_work(struct work_struct *work)
 	struct qla_hw_data *ha = fcport->vha->hw;
 	int rval;
 
-	if (fcport)
-		ql_dbg(ql_dbg_io, fcport->vha, 0xffff,
-		    "%s called for sp=%p, hndl=%x on fcport=%p deleted=%d\n",
-		    __func__, sp, sp->handle, fcport, fcport->deleted);
+	ql_dbg(ql_dbg_io, fcport->vha, 0xffff,
+	    "%s called for sp=%p, hndl=%x on fcport=%p deleted=%d\n",
+	    __func__, sp, sp->handle, fcport, fcport->deleted);
 
 	if (!ha->flags.fw_started && (fcport && fcport->deleted))
 		return;
 
+	if (ha->flags.host_shutting_down) {
+		ql_log(ql_log_info, sp->fcport->vha, 0xffff,
+		    "%s Calling done on sp: %p, type: 0x%x, sp->ref_count: 0x%x\n",
+		    __func__, sp, sp->type, atomic_read(&sp->ref_count));
+		sp->done(sp, 0);
+		return;
+	}
+
+	if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0))
+		return;
+
 	rval = ha->isp_ops->abort_command(sp);
 
 	ql_dbg(ql_dbg_io, fcport->vha, 0x212b,
@@ -291,7 +300,7 @@ static inline int qla2x00_start_nvme_mq(srb_t *sp)
 	uint16_t req_cnt;
 	uint16_t tot_dsds;
 	uint16_t avail_dsds;
-	uint32_t *cur_dsd;
+	struct dsd64 *cur_dsd;
 	struct req_que *req = NULL;
 	struct scsi_qla_host *vha = sp->fcport->vha;
 	struct qla_hw_data *ha = vha->hw;
@@ -340,6 +349,7 @@ static inline int qla2x00_start_nvme_mq(srb_t *sp)
 
 	if (unlikely(!fd->sqid)) {
 		struct nvme_fc_cmd_iu *cmd = fd->cmdaddr;
+
 		if (cmd->sqe.common.opcode == nvme_admin_async_event) {
 			nvme->u.nvme.aen_op = 1;
 			atomic_inc(&ha->nvme_active_aen_cnt);
@@ -395,25 +405,22 @@ static inline int qla2x00_start_nvme_mq(srb_t *sp)
 
 	/* NVME RSP IU */
 	cmd_pkt->nvme_rsp_dsd_len = cpu_to_le16(fd->rsplen);
-	cmd_pkt->nvme_rsp_dseg_address[0] = cpu_to_le32(LSD(fd->rspdma));
-	cmd_pkt->nvme_rsp_dseg_address[1] = cpu_to_le32(MSD(fd->rspdma));
+	put_unaligned_le64(fd->rspdma, &cmd_pkt->nvme_rsp_dseg_address);
 
 	/* NVME CNMD IU */
 	cmd_pkt->nvme_cmnd_dseg_len = cpu_to_le16(fd->cmdlen);
-	cmd_pkt->nvme_cmnd_dseg_address[0] = cpu_to_le32(LSD(fd->cmddma));
-	cmd_pkt->nvme_cmnd_dseg_address[1] = cpu_to_le32(MSD(fd->cmddma));
+	cmd_pkt->nvme_cmnd_dseg_address = cpu_to_le64(fd->cmddma);
 
 	cmd_pkt->dseg_count = cpu_to_le16(tot_dsds);
 	cmd_pkt->byte_count = cpu_to_le32(fd->payload_length);
 
 	/* One DSD is available in the Command Type NVME IOCB */
 	avail_dsds = 1;
-	cur_dsd = (uint32_t *)&cmd_pkt->nvme_data_dseg_address[0];
+	cur_dsd = &cmd_pkt->nvme_dsd;
 	sgl = fd->first_sgl;
 
 	/* Load data segments */
 	for_each_sg(sgl, sg, tot_dsds, i) {
-		dma_addr_t sle_dma;
 		cont_a64_entry_t *cont_pkt;
 
 		/* Allocate additional continuation packets? */
@@ -432,17 +439,14 @@ static inline int qla2x00_start_nvme_mq(srb_t *sp)
 				req->ring_ptr++;
 			}
 			cont_pkt = (cont_a64_entry_t *)req->ring_ptr;
-			*((uint32_t *)(&cont_pkt->entry_type)) =
-			    cpu_to_le32(CONTINUE_A64_TYPE);
+			put_unaligned_le32(CONTINUE_A64_TYPE,
+					   &cont_pkt->entry_type);
 
-			cur_dsd = (uint32_t *)cont_pkt->dseg_0_address;
-			avail_dsds = 5;
+			cur_dsd = cont_pkt->dsd;
+			avail_dsds = ARRAY_SIZE(cont_pkt->dsd);
 		}
 
-		sle_dma = sg_dma_address(sg);
-		*cur_dsd++ = cpu_to_le32(LSD(sle_dma));
-		*cur_dsd++ = cpu_to_le32(MSD(sle_dma));
-		*cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
+		append_dsd64(&cur_dsd, sg);
 		avail_dsds--;
 	}
 
@@ -573,7 +577,7 @@ static struct nvme_fc_port_template qla_nvme_fc_transport = {
 	.fcp_io		= qla_nvme_post_cmd,
 	.fcp_abort	= qla_nvme_fcp_abort,
 	.max_hw_queues	= 8,
-	.max_sgl_segments = 128,
+	.max_sgl_segments = 1024,
 	.max_dif_sgl_segments = 64,
 	.dma_boundary = 0xFFFFFFFF,
 	.local_priv_sz = 8,
@@ -582,40 +586,11 @@ static struct nvme_fc_port_template qla_nvme_fc_transport = {
 	.fcprqst_priv_sz = sizeof(struct nvme_private),
 };
 
-#define NVME_ABORT_POLLING_PERIOD    2
-static int qla_nvme_wait_on_command(srb_t *sp)
-{
-	int ret = QLA_SUCCESS;
-
-	wait_event_timeout(sp->nvme_ls_waitq, (atomic_read(&sp->ref_count) > 1),
-	    NVME_ABORT_POLLING_PERIOD*HZ);
-
-	if (atomic_read(&sp->ref_count) > 1)
-		ret = QLA_FUNCTION_FAILED;
-
-	return ret;
-}
-
-void qla_nvme_abort(struct qla_hw_data *ha, struct srb *sp, int res)
-{
-	int rval;
-
-	if (ha->flags.fw_started) {
-		rval = ha->isp_ops->abort_command(sp);
-		if (!rval && !qla_nvme_wait_on_command(sp))
-			ql_log(ql_log_warn, NULL, 0x2112,
-			    "timed out waiting on sp=%p\n", sp);
-	} else {
-		sp->done(sp, res);
-	}
-}
-
 static void qla_nvme_unregister_remote_port(struct work_struct *work)
 {
 	struct fc_port *fcport = container_of(work, struct fc_port,
 	    nvme_del_work);
 	struct qla_nvme_rport *qla_rport, *trport;
-	scsi_qla_host_t *base_vha;
 
 	if (!IS_ENABLED(CONFIG_NVME_FC))
 		return;
@@ -623,23 +598,19 @@ static void qla_nvme_unregister_remote_port(struct work_struct *work)
 	ql_log(ql_log_warn, NULL, 0x2112,
 	    "%s: unregister remoteport on %p\n",__func__, fcport);
 
-	base_vha = pci_get_drvdata(fcport->vha->hw->pdev);
-	if (test_bit(PFLG_DRIVER_REMOVING, &base_vha->pci_flags)) {
-		ql_dbg(ql_dbg_disc, fcport->vha, 0x2114,
-		    "%s: Notify FC-NVMe transport, set devloss=0\n",
-		    __func__);
-
-		nvme_fc_set_remoteport_devloss(fcport->nvme_remote_port, 0);
-	}
-
 	list_for_each_entry_safe(qla_rport, trport,
 	    &fcport->vha->nvme_rport_list, list) {
 		if (qla_rport->fcport == fcport) {
 			ql_log(ql_log_info, fcport->vha, 0x2113,
 			    "%s: fcport=%p\n", __func__, fcport);
+			nvme_fc_set_remoteport_devloss
+				(fcport->nvme_remote_port, 0);
 			init_completion(&fcport->nvme_del_done);
-			nvme_fc_unregister_remoteport(
-				fcport->nvme_remote_port);
+			if (nvme_fc_unregister_remoteport
+			    (fcport->nvme_remote_port))
+				ql_log(ql_log_info, fcport->vha, 0x2114,
+				    "%s: Failed to unregister nvme_remote_port\n",
+				    __func__);
 			wait_for_completion(&fcport->nvme_del_done);
 			break;
 		}
--- a/drivers/scsi/qla2xxx/qla_nvme.h
+++ b/drivers/scsi/qla2xxx/qla_nvme.h
@@ -13,6 +13,7 @@
 #include <linux/nvme-fc-driver.h>
 
 #include "qla_def.h"
+#include "qla_dsd.h"
 
 /* default dev loss time (seconds) before transport tears down ctrl */
 #define NVME_FC_DEV_LOSS_TMO 30
@@ -64,16 +65,15 @@ struct cmd_nvme {
 #define CF_WRITE_DATA BIT_0
 
 	uint16_t nvme_cmnd_dseg_len;             /* Data segment length. */
-	uint32_t nvme_cmnd_dseg_address[2];      /* Data segment address. */
-	uint32_t nvme_rsp_dseg_address[2];       /* Data segment address. */
+	__le64	 nvme_cmnd_dseg_address __packed;/* Data segment address. */
+	__le64	 nvme_rsp_dseg_address __packed; /* Data segment address. */
 
 	uint32_t byte_count;		/* Total byte count. */
 
 	uint8_t port_id[3];		/* PortID of destination port. */
 	uint8_t vp_index;
 
-	uint32_t nvme_data_dseg_address[2];	/* Data segment address. */
-	uint32_t nvme_data_dseg_len;		/* Data segment length. */
+	struct dsd64 nvme_dsd;
 };
 
 #define PT_LS4_REQUEST	0x89	/* Link Service pass-through IOCB (request) */
@@ -101,10 +101,7 @@ struct pt_ls4_request {
 	uint32_t rsvd3;
 	uint32_t rx_byte_count;
 	uint32_t tx_byte_count;
-	uint32_t dseg0_address[2];
-	uint32_t dseg0_len;
-	uint32_t dseg1_address[2];
-	uint32_t dseg1_len;
+	struct dsd64 dsd[2];
 };
 
 #define PT_LS4_UNSOL 0x56	/* pass-up unsolicited rec FC-NVMe request */
@@ -145,7 +142,6 @@ struct pt_ls4_rx_unsol {
 int qla_nvme_register_hba(struct scsi_qla_host *);
 int qla_nvme_register_remote(struct scsi_qla_host *, struct fc_port *);
 void qla_nvme_delete(struct scsi_qla_host *);
-void qla_nvme_abort(struct qla_hw_data *, struct srb *sp, int res);
 void qla24xx_nvme_ls4_iocb(struct scsi_qla_host *, struct pt_ls4_request *,
     struct req_que *);
 void qla24xx_async_gffid_sp_done(void *, int);
--- a/drivers/scsi/qla2xxx/qla_nx.c
+++ b/drivers/scsi/qla2xxx/qla_nx.c
@@ -6,6 +6,7 @@
 */
 #include "qla_def.h"
 #include <linux/delay.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
 #include <linux/pci.h>
 #include <linux/ratelimit.h>
 #include <linux/vmalloc.h>
@@ -608,6 +609,7 @@ qla82xx_pci_set_window(struct qla_hw_data *ha, unsigned long long addr)
 	} else if (addr_in_range(addr, QLA82XX_ADDR_OCM0,
 		QLA82XX_ADDR_OCM0_MAX)) {
 		unsigned int temp1;
+
 		if ((addr & 0x00ff800) == 0xff800) {
 			ql_log(ql_log_warn, vha, 0xb004,
 			    "%s: QM access not handled.\n", __func__);
@@ -990,6 +992,7 @@ static int
 qla82xx_read_status_reg(struct qla_hw_data *ha, uint32_t *val)
 {
 	scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
+
 	qla82xx_wr_32(ha, QLA82XX_ROMUSB_ROM_INSTR_OPCODE, M25P_INSTR_RDSR);
 	qla82xx_wait_rom_busy(ha);
 	if (qla82xx_wait_rom_done(ha)) {
@@ -1030,6 +1033,7 @@ static int
 qla82xx_flash_set_write_enable(struct qla_hw_data *ha)
 {
 	uint32_t val;
+
 	qla82xx_wait_rom_busy(ha);
 	qla82xx_wr_32(ha, QLA82XX_ROMUSB_ROM_ABYTE_CNT, 0);
 	qla82xx_wr_32(ha, QLA82XX_ROMUSB_ROM_INSTR_OPCODE, M25P_INSTR_WREN);
@@ -1047,6 +1051,7 @@ static int
 qla82xx_write_status_reg(struct qla_hw_data *ha, uint32_t val)
 {
 	scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
+
 	if (qla82xx_flash_set_write_enable(ha))
 		return -1;
 	qla82xx_wr_32(ha, QLA82XX_ROMUSB_ROM_WDATA, val);
@@ -1063,6 +1068,7 @@ static int
 qla82xx_write_disable_flash(struct qla_hw_data *ha)
 {
 	scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
+
 	qla82xx_wr_32(ha, QLA82XX_ROMUSB_ROM_INSTR_OPCODE, M25P_INSTR_WRDI);
 	if (qla82xx_wait_rom_done(ha)) {
 		ql_log(ql_log_warn, vha, 0xb00f,
@@ -1435,6 +1441,7 @@ qla82xx_fw_load_from_flash(struct qla_hw_data *ha)
 	long memaddr = BOOTLD_START;
 	u64 data;
 	u32 high, low;
+
 	size = (IMAGE_START - BOOTLD_START) / 8;
 
 	for (i = 0; i < size; i++) {
@@ -1757,11 +1764,14 @@ qla82xx_pci_config(scsi_qla_host_t *vha)
 *
 * Returns 0 on success.
 */
-void
+int
 qla82xx_reset_chip(scsi_qla_host_t *vha)
 {
 	struct qla_hw_data *ha = vha->hw;
+
 	ha->isp_ops->disable_intrs(ha);
+
+	return QLA_SUCCESS;
 }
 
 void qla82xx_config_rings(struct scsi_qla_host *vha)
@@ -1778,10 +1788,8 @@ void qla82xx_config_rings(struct scsi_qla_host *vha)
 	icb->response_q_inpointer = cpu_to_le16(0);
 	icb->request_q_length = cpu_to_le16(req->length);
 	icb->response_q_length = cpu_to_le16(rsp->length);
-	icb->request_q_address[0] = cpu_to_le32(LSD(req->dma));
-	icb->request_q_address[1] = cpu_to_le32(MSD(req->dma));
-	icb->response_q_address[0] = cpu_to_le32(LSD(rsp->dma));
-	icb->response_q_address[1] = cpu_to_le32(MSD(rsp->dma));
+	put_unaligned_le64(req->dma, &icb->request_q_address);
+	put_unaligned_le64(rsp->dma, &icb->response_q_address);
 
 	WRT_REG_DWORD(&reg->req_q_out[0], 0);
 	WRT_REG_DWORD(&reg->rsp_q_in[0], 0);
@@ -1992,6 +2000,7 @@ qla82xx_mbx_completion(scsi_qla_host_t *vha, uint16_t mb0)
 	uint16_t __iomem *wptr;
 	struct qla_hw_data *ha = vha->hw;
 	struct device_reg_82xx __iomem *reg = &ha->iobase->isp82;
+
 	wptr = (uint16_t __iomem *)&reg->mailbox_out[1];
 
 	/* Load return mailbox registers. */
@@ -2028,7 +2037,7 @@ qla82xx_intr_handler(int irq, void *dev_id)
 	unsigned long flags;
 	unsigned long iter;
 	uint32_t stat = 0;
-	uint16_t mb[4];
+	uint16_t mb[8];
 
 	rsp = (struct rsp_que *) dev_id;
 	if (!rsp) {
@@ -2112,7 +2121,7 @@ qla82xx_msix_default(int irq, void *dev_id)
 	unsigned long flags;
 	uint32_t stat = 0;
 	uint32_t host_int = 0;
-	uint16_t mb[4];
+	uint16_t mb[8];
 
 	rsp = (struct rsp_que *) dev_id;
 	if (!rsp) {
@@ -2208,7 +2217,7 @@ qla82xx_poll(int irq, void *dev_id)
 	int status = 0;
 	uint32_t stat;
 	uint32_t host_int = 0;
-	uint16_t mb[4];
+	uint16_t mb[8];
 	unsigned long flags;
 
 	rsp = (struct rsp_que *) dev_id;
@@ -2262,6 +2271,7 @@ void
 qla82xx_enable_intrs(struct qla_hw_data *ha)
 {
 	scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
+
 	qla82xx_mbx_intr_enable(vha);
 	spin_lock_irq(&ha->hardware_lock);
 	if (IS_QLA8044(ha))
@@ -2276,6 +2286,7 @@ void
 qla82xx_disable_intrs(struct qla_hw_data *ha)
 {
 	scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
+
 	qla82xx_mbx_intr_disable(vha);
 	spin_lock_irq(&ha->hardware_lock);
 	if (IS_QLA8044(ha))
@@ -2658,8 +2669,8 @@ qla82xx_erase_sector(struct qla_hw_data *ha, int addr)
 /*
  * Address and length are byte address
  */
-uint8_t *
-qla82xx_read_optrom_data(struct scsi_qla_host *vha, uint8_t *buf,
+void *
+qla82xx_read_optrom_data(struct scsi_qla_host *vha, void *buf,
 	uint32_t offset, uint32_t length)
 {
 	scsi_block_requests(vha->host);
@@ -2767,15 +2778,14 @@ qla82xx_write_flash_data(struct scsi_qla_host *vha, uint32_t *dwptr,
 }
 
 int
-qla82xx_write_optrom_data(struct scsi_qla_host *vha, uint8_t *buf,
+qla82xx_write_optrom_data(struct scsi_qla_host *vha, void *buf,
	uint32_t offset, uint32_t length)
 {
 	int rval;
 
 	/* Suspend HBA. */
 	scsi_block_requests(vha->host);
-	rval = qla82xx_write_flash_data(vha, (uint32_t *)buf, offset,
-	    length >> 2);
+	rval = qla82xx_write_flash_data(vha, buf, offset, length >> 2);
 	scsi_unblock_requests(vha->host);
 
 	/* Convert return ISP82xx to generic */
@@ -4464,6 +4474,7 @@ qla82xx_beacon_on(struct scsi_qla_host *vha)
 
 	int rval;
 	struct qla_hw_data *ha = vha->hw;
+
 	qla82xx_idc_lock(ha);
 	rval = qla82xx_mbx_beacon_ctl(vha, 1);
 
@@ -4484,6 +4495,7 @@ qla82xx_beacon_off(struct scsi_qla_host *vha)
 
 	int rval;
 	struct qla_hw_data *ha = vha->hw;
+
 	qla82xx_idc_lock(ha);
 	rval = qla82xx_mbx_beacon_ctl(vha, 0);