mirror of https://gitee.com/openkylin/linux.git
SCSI misc on 20190306
This is mostly update of the usual drivers: arcmsr, qla2xxx, lpfc, hisi_sas, target/iscsi and target/core. Additionally Christoph refactored gdth as part of the dma changes. The major mid-layer change this time is the removal of bidi commands and with them the whole of the osd/exofs driver and filesystem. Signed-off-by: James E.J. Bottomley <jejb@linux.ibm.com> -----BEGIN PGP SIGNATURE----- iJwEABMIAEQWIQTnYEDbdso9F2cI+arnQslM7pishQUCXIC54SYcamFtZXMuYm90 dG9tbGV5QGhhbnNlbnBhcnRuZXJzaGlwLmNvbQAKCRDnQslM7pishT1GAPwJEV23 ExPiPsnuVgKj49nLTagZ3rILRQcYNbL+MNYqxQEA0cT8FHzSDBfWY5OKPNE+RQ8z f69LpXGmMpuagKGvvd4= =Fhy1 -----END PGP SIGNATURE----- Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi Pull SCSI updates from James Bottomley: "This is mostly update of the usual drivers: arcmsr, qla2xxx, lpfc, hisi_sas, target/iscsi and target/core. Additionally Christoph refactored gdth as part of the dma changes. The major mid-layer change this time is the removal of bidi commands and with them the whole of the osd/exofs driver and filesystem. This is a major simplification for block and mq in particular" * tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (240 commits) scsi: cxgb4i: validate tcp sequence number only if chip version <= T5 scsi: cxgb4i: get pf number from lldi->pf scsi: core: replace GFP_ATOMIC with GFP_KERNEL in scsi_scan.c scsi: mpt3sas: Add missing breaks in switch statements scsi: aacraid: Fix missing break in switch statement scsi: kill command serial number scsi: csiostor: drop serial_number usage scsi: mvumi: use request tag instead of serial_number scsi: dpt_i2o: remove serial number usage scsi: st: osst: Remove negative constant left-shifts scsi: ufs-bsg: Allow reading descriptors scsi: ufs: Allow reading descriptor via raw upiu scsi: ufs-bsg: Change the calling convention for write descriptor scsi: ufs: Remove unused device quirks Revert "scsi: ufs: disable vccq if it's not needed by UFS device" scsi: megaraid_sas: Remove a bunch of set but not used variables scsi: clean obsolete return values of eh_timed_out scsi: sd: Optimal I/O size should be a multiple of physical block size scsi: MAINTAINERS: SCSI initiator and target tweaks scsi: fcoe: make use of fip_mode enum complete ...
This commit is contained in:
commit 92fff53b71
@@ -6,9 +6,10 @@ Each UFS Host Controller should have its own node.

Required properties:
- compatible        : compatible list, contains one of the following -
                      "hisilicon,hi3660-ufs", "jedec,ufs-1.1" for hisi ufs
                      host controller present on Hi36xx chipset.
                      host controller present on Hi3660 chipset.
                      "hisilicon,hi3670-ufs", "jedec,ufs-2.1" for hisi ufs
                      host controller present on Hi3670 chipset.
- reg               : should contain UFS register address space & UFS SYS CTRL register address,
- interrupt-parent  : interrupt device
- interrupts        : interrupt number
- clocks            : List of phandle and clock specifier pairs
- clock-names       : List of clock input name strings sorted in the same
@@ -4,11 +4,14 @@ UFSHC nodes are defined to describe on-chip UFS host controllers.
Each UFS controller instance should have its own node.

Required properties:
- compatible        : must contain "jedec,ufs-1.1" or "jedec,ufs-2.0", may
                      also list one or more of the following:
                      "qcom,msm8994-ufshc"
                      "qcom,msm8996-ufshc"
                      "qcom,ufshc"
- compatible        : must contain "jedec,ufs-1.1" or "jedec,ufs-2.0"

                      For Qualcomm SoCs must contain, as below, an
                      SoC-specific compatible along with "qcom,ufshc" and
                      the appropriate jedec string:
                      "qcom,msm8994-ufshc", "qcom,ufshc", "jedec,ufs-2.0"
                      "qcom,msm8996-ufshc", "qcom,ufshc", "jedec,ufs-2.0"
                      "qcom,sdm845-ufshc", "qcom,ufshc", "jedec,ufs-2.0"
- interrupts        : <interrupt mapping for UFS host controller IRQ>
- reg               : <registers mapping>
@@ -1,185 +0,0 @@
===============================================================================
WHAT IS EXOFS?
===============================================================================

exofs is a file system that uses an OSD and exports the API of a normal Linux
file system. Users access exofs like any other local file system, and exofs
will in turn issue commands to the local OSD initiator.

OSD is a new T10 command set that views storage devices not as a large/flat
array of sectors but as a container of objects, each having a length, quota,
time attributes and more. Each object is addressed by a 64bit ID, and is
contained in a 64bit ID partition. Each object has associated attributes
attached to it, which are an integral part of the object and provide metadata
about the object. The standard defines some common obligatory attributes, but
user attributes can be added as needed.

===============================================================================
ENVIRONMENT
===============================================================================

To use this file system, you need to have an object store to run it on. You
may download a target from:
http://open-osd.org

See Documentation/scsi/osd.txt for how to set up a working osd environment.

===============================================================================
USAGE
===============================================================================

1. Download and compile exofs and open-osd initiator:
   You need an external Kernel source tree or kernel headers from your
   distribution (anything based on 2.6.26 or later).

   a. download open-osd including exofs source using:
      [parent-directory]$ git clone git://git.open-osd.org/open-osd.git

   b. Build the library module like this:
      [parent-directory]$ make -C KSRC=$(KER_DIR) open-osd

      This will build both the open-osd initiator as well as the exofs kernel
      module. Use whatever parameters you compiled your Kernel with and
      $(KER_DIR) above pointing to the Kernel you compile against. See the file
      open-osd/top-level-Makefile for an example.

2. Get the OSD initiator and target set up properly, and login to the target.
   See Documentation/scsi/osd.txt for further instructions. Also see ./do-osd
   for an example script that does all these steps.

3. Insmod the exofs.ko module:
   [exofs]$ insmod exofs.ko

4. Make sure the directory where you want to mount exists. If not, create it.
   (For example, mkdir /mnt/exofs)

5. At first run you will need to invoke the mkfs.exofs application

   As an example, this will create the file system on:
   /dev/osd0 partition ID 65536

   mkfs.exofs --pid=65536 --format /dev/osd0

   The --format is optional. If not specified, no OSD_FORMAT will be
   performed and a clean file system will be created in the specified pid,
   in the available space of the target. (Use --format=size_in_meg to limit
   the total LUN space available)

   If pid already exists, it will be deleted and a new one will be created in
   its place. Be careful.

   An exofs lives inside a single OSD partition. You can create multiple exofs
   filesystems on the same device using multiple pids.

   (run mkfs.exofs without any parameters for a usage help message)

6. Mount the file system.

   For example, to mount /dev/osd0, partition ID 0x10000 on /mnt/exofs:

   mount -t exofs -o pid=65536 /dev/osd0 /mnt/exofs/

7. For reference (See do-exofs example script):
   do-exofs start  - an example of how to perform the above steps.
   do-exofs stop   - an example of how to unmount the file system.
   do-exofs format - an example of how to format and mkfs a new exofs.

8. Extra compilation flags (uncomment in fs/exofs/Kbuild):
   CONFIG_EXOFS_DEBUG - for debug messages and extra checks.

===============================================================================
exofs mount options
===============================================================================
Similar to any mount command:
mount -t exofs -o exofs_options /dev/osdX mount_exofs_directory

Where:
-t exofs: specifies the exofs file system

/dev/osdX: X is a decimal number. /dev/osdX was created after a successful
           login into an OSD target.

mount_exofs_directory: The directory to mount the file system on

exofs specific options: Options are separated by commas (,)
        pid=<integer>  - The partition number to mount/create as
                         container of the filesystem.
                         This option is mandatory. integer can be
                         Hex by pre-pending an 0x to the number.
        osdname=<id>   - Mount by a device's osdname.
                         osdname is usually a 36 character uuid of the
                         form "d2683732-c906-4ee1-9dbd-c10c27bb40df".
                         It is one of the device's uuids specified in the
                         mkfs.exofs format command.
                         If this option is specified then the /dev/osdX
                         above can be empty and is ignored.
        to=<integer>   - Timeout in ticks for a single command.
                         default is (60 * HZ) [for debugging only]

===============================================================================
DESIGN
===============================================================================

* The file system control block (AKA on-disk superblock) resides in an object
  with a special ID (defined in common.h).
  Information included in the file system control block is used to fill the
  in-memory superblock structure at mount time. This object is created before
  the file system is used by mkexofs.c. It contains information such as:
        - The file system's magic number
        - The next inode number to be allocated

* Each file resides in its own object and contains the data (and it will be
  possible to extend the file over multiple objects, though this has not been
  implemented yet).

* A directory is treated as a file, and essentially contains a list of <file
  name, inode #> pairs for files that are found in that directory. The object
  IDs correspond to the files' inode numbers and will be allocated according to
  a bitmap (stored in a separate object). For now they are allocated using a
  counter.

* Each file's control block (AKA on-disk inode) is stored in its object's
  attributes. This applies to both regular files and other types (directories,
  device files, symlinks, etc.).

* Credentials are generated per object (inode and superblock) when they are
  created in memory (read from disk or created). The credential works for all
  operations and is used as long as the object remains in memory.

* Async OSD operations are used whenever possible, but the target may execute
  them out of order. The operations that concern us are create, delete,
  readpage, writepage, update_inode, and truncate. The following pairs of
  operations should execute in the order written, and we need to prevent them
  from executing in reverse order:
        - The following are handled with the OBJ_CREATED and OBJ_2BCREATED
          flags. OBJ_CREATED is set when we know the object exists on the OSD -
          in create's callback function, and when we successfully do a
          read_inode.
          OBJ_2BCREATED is set in the beginning of the create function, so we
          know that we should wait.
        - create/delete: delete should wait until the object is created
          on the OSD.
        - create/readpage: readpage should be able to return a page
          full of zeroes in this case. If there was a write already
          en-route (i.e. create, writepage, readpage) then the page
          would be locked, and so it would really be the same as
          create/writepage.
        - create/writepage: if writepage is called for a sync write, it
          should wait until the object is created on the OSD.
          Otherwise, it should just return.
        - create/truncate: truncate should wait until the object is
          created on the OSD.
        - create/update_inode: update_inode should wait until the
          object is created on the OSD.
        - Handled by VFS locks:
        - readpage/delete: shouldn't happen because of page lock.
        - writepage/delete: shouldn't happen because of page lock.
        - readpage/writepage: shouldn't happen because of page lock.

===============================================================================
LICENSE/COPYRIGHT
===============================================================================
The exofs file system is based on ext2 v0.5b (distributed with the Linux kernel
version 2.6.10). All files include the original copyrights, and the license
is GPL version 2 (only version 2, as is true for the Linux kernel). The
Linux kernel can be downloaded from www.kernel.org.
@@ -1,197 +0,0 @@
The OSD Standard
================
OSD (Object-Based Storage Device) is a T10 SCSI command set that is designed
to provide efficient operation of input/output logical units that manage the
allocation, placement, and accessing of variable-size data-storage containers,
called objects. Objects are intended to contain operating system and application
constructs. Each object has associated attributes attached to it, which are an
integral part of the object and provide metadata about the object. The standard
defines some common obligatory attributes, but user attributes can be added as
needed.

See: http://www.t10.org/ftp/t10/drafts/osd2/ for the latest draft for OSD 2
or search the web for "OSD SCSI"

OSD in the Linux Kernel
=======================
osd-initiator:
  The main component of OSD in the Kernel is the osd-initiator library. Its main
  user is intended to be the pNFS-over-objects layout driver, which uses objects
  as its back-end data storage. Other clients are the other osd parts listed below.

osd-uld:
  This is a SCSI ULD that registers for OSD type devices and provides a testing
  platform, both for the in-kernel initiator as well as connected targets. It
  currently has no useful user-mode API, though it could have if need be.

exofs:
  Is an OSD based Linux file system. It uses the osd-initiator and osd-uld
  to export a usable file system for users.
  See Documentation/filesystems/exofs.txt for more details.

osd target:
  There are no current plans for an OSD target implementation in kernel. For all
  needs, a user-mode target that is based on the scsi tgt target framework is
  available from Ohio Supercomputer Center (OSC) at:
  http://www.open-osd.org/bin/view/Main/OscOsdProject
  There are several other target implementations. See http://open-osd.org for more
  links.

Files and Folders
=================
This is the complete list of files included in this work:
include/scsi/
        osd_initiator.h         Main API for the initiator library
        osd_types.h             Common OSD types
        osd_sec.h               Security Manager API
        osd_protocol.h          Wire definitions of the OSD standard protocol
        osd_attributes.h        Wire definitions of OSD attributes

drivers/scsi/osd/
        osd_initiator.c         OSD-Initiator library implementation
        osd_uld.c               The OSD scsi ULD
        osd_ktest.{h,c}         In-kernel test suite (called by osd_uld)
        osd_debug.h             Some printk macros
        Makefile                For both in-tree and out-of-tree compilation
        Kconfig                 Enables inclusion of the different pieces
        osd_test.c              User-mode application to call the kernel tests

The OSD-Initiator Library
=========================
osd_initiator is a low-level implementation of an osd initiator encoder.
Even so, it should be intuitive and easy to use. Perhaps over time a
higher level will form that automates some of the more common recipes.

init/fini:
- osd_dev_init() associates a scsi_device with an osd_dev structure
  and initializes some global pools. This should be done once per scsi_device
  (OSD LUN). The osd_dev structure is needed for calling osd_start_request().

- osd_dev_fini() cleans up before an osd_dev/scsi_device destruction.

OSD commands encoding, execution, and decoding of results:

struct osd_request is used to iteratively encode an OSD command and carry
its state throughout execution. Each request goes through these stages:

a. osd_start_request() allocates the request.

b. Any of the osd_req_* methods is used to encode a request of the specified
   type.

c. osd_req_add_{get,set}_attr_* may be called to add get/set attributes to the
   CDB. "List" or "Page" mode can be used exclusively. The attribute-list API
   can be called multiple times on the same request. However, only one
   attribute-page can be read, as mandated by the OSD standard.

d. osd_finalize_request() computes offsets into the data-in and data-out buffers
   and signs the request using the provided capability key and integrity-
   check parameters.

e. osd_execute_request() may be called to execute the request via the block
   layer and wait for its completion. The request can be executed
   asynchronously by calling the block layer API directly.

f. After execution, osd_req_decode_sense() can be called to decode the request's
   sense information.

g. osd_req_decode_get_attr() may be called to retrieve osd_add_get_attr_list()
   values.

h. osd_end_request() must be called to deallocate the request and any resource
   associated with it. Note that osd_end_request cleans up the request at any
   stage and it must always be called after a successful osd_start_request().

osd_request's structure:

The OSD standard defines a complex structure of IO segments pointed to by
members in the CDB. Up to 3 segments can be deployed in the IN-Buffer and up to
4 in the OUT-Buffer. The ASCII illustration below depicts a secure-read with
associated get+set of attributes-lists. Other combinations vary on the same
basic theme, from no-segments-used up to all-segments-used.

        |________OSD-CDB__________|
        |                         |
        |read_len (offset=0)     -|---------\
        |                         |         |
        |get_attrs_list_length    |         |
        |get_attrs_list_offset   -|----\    |
        |                         |    |    |
        |retrieved_attrs_alloc_len|    |    |
        |retrieved_attrs_offset  -|----|----|-\
        |                         |    |    | |
        |set_attrs_list_length    |    |    | |
        |set_attrs_list_offset   -|-\  |    | |
        |                         | |  |    | |
        |in_data_integ_offset    -|-|--|----|-|-\
        |out_data_integ_offset   -|-|--|--\ | | |
        \_________________________/ | |  | | | |
                                    | |  | | | |
        |_______OUT-BUFFER________| | |  | | | |
        | Set attr list           |</ |  | | | |
        |                         |   |  | | | |
        |-------------------------|   |  | | | |
        | Get attr descriptors    |<--/  | | | |
        |                         |      | | | |
        |-------------------------|      | | | |
        | Out-data integrity      |<-----/ | | |
        |                         |        | | |
        \_________________________/        | | |
                                           | | |
        |________IN-BUFFER________|        | | |
        | In-Data read            |<-------/ | |
        |                         |          | |
        |-------------------------|          | |
        | Get attr list           |<---------/ |
        |                         |            |
        |-------------------------|            |
        | In-data integrity       |<-----------/
        |                         |
        \_________________________/

A block device request can carry bidirectional payload by means of associating
a bidi_read request with a main write-request. Each in/out request is described
by a chain of BIOs associated with each request.
The CDB is of a SCSI VARLEN CDB format, as described by the OSD standard.
The OSD standard also mandates alignment restrictions at the start of each
segment.

In the code, in struct osd_request, there are two _osd_io_info structures to
describe the IN/OUT buffers above, two BIOs for the data payload and up to five
_osd_req_data_segment structures to hold the different segments allocation and
information.

Important: We have chosen to disregard the assumption that a BIO-chain (and
the resulting sg-list) describes a linear memory buffer, meaning only the first
and last scatter chain can be incomplete and all the middle chains are of
PAGE_SIZE. For us, a scatter-gather-list, as its name implies and as used by
the Networking layer, is to describe a vector of buffers that will be
transferred to/from the wire. It works very well with the current iSCSI
transport. iSCSI is currently the only deployed OSD transport. In the future we
anticipate SAS and FC attached OSD devices as well.

The OSD Testing ULD
===================
TODO: More user-mode control on tests.

Authors, Mailing list
=====================
Please communicate with us on any deployment of osd, whether using this code
or not.

Any problems, questions, bug reports, lonely OSD nights, please email:
        OSD Dev List <osd-dev@open-osd.org>

More up-to-date information can be found on:
http://open-osd.org

Boaz Harrosh <ooo@electrozaur.com>

References
==========
Weber, R., "SCSI Object-Based Storage Device Commands",
T10/1355-D ANSI/INCITS 400-2004,
http://www.t10.org/ftp/t10/drafts/osd/osd-r10.pdf

Weber, R., "SCSI Object-Based Storage Device Commands -2 (OSD-2)",
T10/1729-D, Working Draft, rev. 3,
http://www.t10.org/ftp/t10/drafts/osd2/osd2r03.pdf
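For context, the a-h stages documented above correspond to a call sequence
roughly like the following sketch of a synchronous object read. This is a
minimal illustration only: the exact parameter lists (notably osd_req_read()
and the NULL capability/integrity arguments to osd_finalize_request()) are
assumptions based on the stage descriptions, not copied verbatim from
osd_initiator.h, and the object IDs are placeholders.

        /* Hypothetical sketch of stages a-h for a simple synchronous read. */
        #include <scsi/osd_initiator.h>
        #include <scsi/osd_sense.h>

        static int example_osd_read(struct osd_dev *od, struct bio *bio, u64 len)
        {
                struct osd_obj_id obj = { .partition = 65536, .id = 0x10000 };
                struct osd_sense_info osi;
                struct osd_request *or;
                int ret;

                or = osd_start_request(od);              /* stage a */
                if (!or)
                        return -ENOMEM;

                osd_req_read(or, &obj, 0, bio, len);     /* stage b: encode a READ */

                ret = osd_finalize_request(or, 0, NULL, NULL); /* stage d: sign CDB */
                if (ret)
                        goto out;

                ret = osd_execute_request(or);           /* stage e: run and wait */
                osd_req_decode_sense(or, &osi);          /* stage f: decode sense */
        out:
                osd_end_request(or);                     /* stage h: always called */
                return ret;
        }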
@@ -147,6 +147,17 @@ send SG_IO with the applicable sg_io_v4:
	io_hdr_v4.max_response_len = reply_len;
	io_hdr_v4.request_len = request_len;
	io_hdr_v4.request = (__u64)request_upiu;
	if (dir == SG_DXFER_TO_DEV) {
		io_hdr_v4.dout_xfer_len = (uint32_t)byte_cnt;
		io_hdr_v4.dout_xferp = (uintptr_t)(__u64)buff;
	} else {
		io_hdr_v4.din_xfer_len = (uint32_t)byte_cnt;
		io_hdr_v4.din_xferp = (uintptr_t)(__u64)buff;
	}

If you wish to read or write a descriptor, use the appropriate xferp of
sg_io_v4.

UFS Specifications can be found at:
UFS - http://www.jedec.org/sites/default/files/docs/JESD220.pdf
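A complete user-space version of that snippet looks roughly like the sketch
below. The device node name and the zeroed request_upiu bytes are placeholders
(the real UPIU layout lives in include/uapi/scsi/scsi_bsg_ufs.h); the sg_io_v4
fields and the SG_IO ioctl are the standard bsg interface.

        /* Minimal sketch: issue one UPIU through the UFS BSG node and read
         * data (e.g. a descriptor) back via din_xferp. */
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <linux/bsg.h>
        #include <scsi/sg.h>

        int main(void)
        {
                unsigned char request_upiu[32] = { 0 }; /* placeholder UPIU */
                unsigned char reply[32], buff[512];
                struct sg_io_v4 io_hdr_v4;
                int fd, ret;

                fd = open("/dev/bsg/ufs-bsg", O_RDWR);  /* assumed node name */
                if (fd < 0)
                        return 1;

                memset(&io_hdr_v4, 0, sizeof(io_hdr_v4));
                io_hdr_v4.guard = 'Q';
                io_hdr_v4.protocol = BSG_PROTOCOL_SCSI;
                io_hdr_v4.subprotocol = BSG_SUB_PROTOCOL_SCSI_TRANSPORT;
                io_hdr_v4.response = (__u64)(uintptr_t)reply;
                io_hdr_v4.max_response_len = sizeof(reply);
                io_hdr_v4.request_len = sizeof(request_upiu);
                io_hdr_v4.request = (__u64)(uintptr_t)request_upiu;
                io_hdr_v4.din_xfer_len = sizeof(buff);  /* read path */
                io_hdr_v4.din_xferp = (__u64)(uintptr_t)buff;

                ret = ioctl(fd, SG_IO, &io_hdr_v4);
                printf("SG_IO returned %d\n", ret);
                close(fd);
                return ret < 0;
        }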
@@ -297,7 +297,6 @@ def tcm_mod_build_configfs(proto_ident, fabric_mod_dir_var, fabric_mod_name):
	buf += "	.sess_get_index			= " + fabric_mod_name + "_sess_get_index,\n"
	buf += "	.sess_get_initiator_sid		= NULL,\n"
	buf += "	.write_pending			= " + fabric_mod_name + "_write_pending,\n"
	buf += "	.write_pending_status		= " + fabric_mod_name + "_write_pending_status,\n"
	buf += "	.set_default_node_attributes	= " + fabric_mod_name + "_set_default_node_attrs,\n"
	buf += "	.get_cmd_state			= " + fabric_mod_name + "_get_cmd_state,\n"
	buf += "	.queue_data_in			= " + fabric_mod_name + "_queue_data_in,\n"

@@ -479,13 +478,6 @@ def tcm_mod_dump_fabric_ops(proto_ident, fabric_mod_dir_var, fabric_mod_name):
		buf += "}\n\n"
		bufi += "int " + fabric_mod_name + "_write_pending(struct se_cmd *);\n"

	if re.search('write_pending_status\)\(', fo):
		buf += "int " + fabric_mod_name + "_write_pending_status(struct se_cmd *se_cmd)\n"
		buf += "{\n"
		buf += "	return 0;\n"
		buf += "}\n\n"
		bufi += "int " + fabric_mod_name + "_write_pending_status(struct se_cmd *);\n"

	if re.search('set_default_node_attributes\)\(', fo):
		buf += "void " + fabric_mod_name + "_set_default_node_attrs(struct se_node_acl *nacl)\n"
		buf += "{\n"
MAINTAINERS
@@ -5990,7 +5990,7 @@ S:	Maintained
F:	drivers/media/tuners/fc2580*

FCOE SUBSYSTEM (libfc, libfcoe, fcoe)
M:	Johannes Thumshirn <jth@kernel.org>
M:	Hannes Reinecke <hare@suse.de>
L:	linux-scsi@vger.kernel.org
W:	www.Open-FCoE.org
S:	Supported

@@ -11629,13 +11629,6 @@ W:	http://www.nongnu.org/orinoco/
S:	Orphan
F:	drivers/net/wireless/intersil/orinoco/

OSD LIBRARY and FILESYSTEM
M:	Boaz Harrosh <ooo@electrozaur.com>
S:	Maintained
F:	drivers/scsi/osd/
F:	include/scsi/osd_*
F:	fs/exofs/

OV2659 OMNIVISION SENSOR DRIVER
M:	"Lad, Prabhakar" <prabhakar.csengg@gmail.com>
L:	linux-media@vger.kernel.org

@@ -13766,6 +13759,7 @@ M:	"James E.J. Bottomley" <jejb@linux.ibm.com>
T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi.git
M:	"Martin K. Petersen" <martin.petersen@oracle.com>
T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git
Q:	https://patchwork.kernel.org/project/linux-scsi/list/
L:	linux-scsi@vger.kernel.org
S:	Maintained
F:	Documentation/devicetree/bindings/scsi/

@@ -13780,6 +13774,18 @@ F:	Documentation/scsi/st.txt
F:	drivers/scsi/st.*
F:	drivers/scsi/st_*.h

SCSI TARGET SUBSYSTEM
M:	"Martin K. Petersen" <martin.petersen@oracle.com>
L:	linux-scsi@vger.kernel.org
L:	target-devel@vger.kernel.org
W:	http://www.linux-iscsi.org
T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git
Q:	https://patchwork.kernel.org/project/target-devel/list/
S:	Supported
F:	drivers/target/
F:	include/target/
F:	Documentation/target/

SCTP PROTOCOL
M:	Vlad Yasevich <vyasevich@gmail.com>
M:	Neil Horman <nhorman@tuxdriver.com>

@@ -15051,18 +15057,6 @@ F:	Documentation/filesystems/sysv-fs.txt
F:	fs/sysv/
F:	include/linux/sysv_fs.h

TARGET SUBSYSTEM
M:	"Nicholas A. Bellinger" <nab@linux-iscsi.org>
L:	linux-scsi@vger.kernel.org
L:	target-devel@vger.kernel.org
W:	http://www.linux-iscsi.org
W:	http://groups.google.com/group/linux-iscsi-target-dev
T:	git git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git master
S:	Supported
F:	drivers/target/
F:	include/target/
F:	Documentation/target/

TASKSTATS STATISTICS INTERFACE
M:	Balbir Singh <bsingharora@gmail.com>
S:	Maintained

@@ -15960,14 +15954,16 @@ F:	drivers/visorbus/
F:	drivers/staging/unisys/

UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER
M:	Vinayak Holikatti <vinholikatti@gmail.com>
R:	Alim Akhtar <alim.akhtar@samsung.com>
R:	Avri Altman <avri.altman@wdc.com>
R:	Pedro Sousa <pedrom.sousa@synopsys.com>
L:	linux-scsi@vger.kernel.org
S:	Supported
F:	Documentation/scsi/ufs.txt
F:	drivers/scsi/ufs/

UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER DWC HOOKS
M:	Joao Pinto <jpinto@synopsys.com>
M:	Pedro Sousa <pedrom.sousa@synopsys.com>
L:	linux-scsi@vger.kernel.org
S:	Supported
F:	drivers/scsi/ufs/*dwc*
@@ -115,7 +115,6 @@ static int queue_pm_only_show(void *data, struct seq_file *m)
static const char *const blk_queue_flag_name[] = {
	QUEUE_FLAG_NAME(STOPPED),
	QUEUE_FLAG_NAME(DYING),
	QUEUE_FLAG_NAME(BIDI),
	QUEUE_FLAG_NAME(NOMERGES),
	QUEUE_FLAG_NAME(SAME_COMP),
	QUEUE_FLAG_NAME(FAIL_IO),
@@ -331,7 +331,6 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
#if defined(CONFIG_BLK_DEV_INTEGRITY)
	rq->nr_integrity_segments = 0;
#endif
	rq->special = NULL;
	/* tag was already set */
	rq->extra_len = 0;
	WRITE_ONCE(rq->deadline, 0);

@@ -340,7 +339,6 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,

	rq->end_io = NULL;
	rq->end_io_data = NULL;
	rq->next_rq = NULL;

	data->ctx->rq_dispatched[op_is_sync(op)]++;
	refcount_set(&rq->ref, 1);

@@ -550,8 +548,6 @@ inline void __blk_mq_end_request(struct request *rq, blk_status_t error)
		rq_qos_done(rq->q, rq);
		rq->end_io(rq, error);
	} else {
		if (unlikely(blk_bidi_rq(rq)))
			blk_mq_free_request(rq->next_rq);
		blk_mq_free_request(rq);
	}
}
@@ -51,11 +51,40 @@ static int bsg_transport_fill_hdr(struct request *rq, struct sg_io_v4 *hdr,
		fmode_t mode)
{
	struct bsg_job *job = blk_mq_rq_to_pdu(rq);
	int ret;

	job->request_len = hdr->request_len;
	job->request = memdup_user(uptr64(hdr->request), hdr->request_len);
	if (IS_ERR(job->request))
		return PTR_ERR(job->request);

	return PTR_ERR_OR_ZERO(job->request);
	if (hdr->dout_xfer_len && hdr->din_xfer_len) {
		job->bidi_rq = blk_get_request(rq->q, REQ_OP_SCSI_IN, 0);
		if (IS_ERR(job->bidi_rq)) {
			ret = PTR_ERR(job->bidi_rq);
			goto out;
		}

		ret = blk_rq_map_user(rq->q, job->bidi_rq, NULL,
				uptr64(hdr->din_xferp), hdr->din_xfer_len,
				GFP_KERNEL);
		if (ret)
			goto out_free_bidi_rq;

		job->bidi_bio = job->bidi_rq->bio;
	} else {
		job->bidi_rq = NULL;
		job->bidi_bio = NULL;
	}

	return 0;

out_free_bidi_rq:
	if (job->bidi_rq)
		blk_put_request(job->bidi_rq);
out:
	kfree(job->request);
	return ret;
}

static int bsg_transport_complete_rq(struct request *rq, struct sg_io_v4 *hdr)

@@ -93,7 +122,7 @@ static int bsg_transport_complete_rq(struct request *rq, struct sg_io_v4 *hdr)
	/* we assume all request payload was transferred, residual == 0 */
	hdr->dout_resid = 0;

	if (rq->next_rq) {
	if (job->bidi_rq) {
		unsigned int rsp_len = job->reply_payload.payload_len;

		if (WARN_ON(job->reply_payload_rcv_len > rsp_len))

@@ -111,6 +140,11 @@ static void bsg_transport_free_rq(struct request *rq)
{
	struct bsg_job *job = blk_mq_rq_to_pdu(rq);

	if (job->bidi_rq) {
		blk_rq_unmap_user(job->bidi_bio);
		blk_put_request(job->bidi_rq);
	}

	kfree(job->request);
}

@@ -200,7 +234,6 @@ static int bsg_map_buffer(struct bsg_buffer *buf, struct request *req)
 */
static bool bsg_prepare_job(struct device *dev, struct request *req)
{
	struct request *rsp = req->next_rq;
	struct bsg_job *job = blk_mq_rq_to_pdu(req);
	int ret;

@@ -211,8 +244,8 @@ static bool bsg_prepare_job(struct device *dev, struct request *req)
		if (ret)
			goto failjob_rls_job;
	}
	if (rsp && rsp->bio) {
		ret = bsg_map_buffer(&job->reply_payload, rsp);
	if (job->bidi_rq) {
		ret = bsg_map_buffer(&job->reply_payload, job->bidi_rq);
		if (ret)
			goto failjob_rls_rqst_payload;
	}

@@ -369,7 +402,6 @@ struct request_queue *bsg_setup_queue(struct device *dev, const char *name,
	}

	q->queuedata = dev;
	blk_queue_flag_set(QUEUE_FLAG_BIDI, q);
	blk_queue_rq_timeout(q, BLK_DEFAULT_SG_TIMEOUT);

	ret = bsg_register_queue(q, dev, name, &bsg_transport_ops);
block/bsg.c
@@ -74,6 +74,11 @@ static int bsg_scsi_fill_hdr(struct request *rq, struct sg_io_v4 *hdr,
{
	struct scsi_request *sreq = scsi_req(rq);

	if (hdr->dout_xfer_len && hdr->din_xfer_len) {
		pr_warn_once("BIDI support in bsg has been removed.\n");
		return -EOPNOTSUPP;
	}

	sreq->cmd_len = hdr->request_len;
	if (sreq->cmd_len > BLK_MAX_CDB) {
		sreq->cmd = kzalloc(sreq->cmd_len, GFP_KERNEL);

@@ -114,14 +119,10 @@ static int bsg_scsi_complete_rq(struct request *rq, struct sg_io_v4 *hdr)
		hdr->response_len = len;
	}

	if (rq->next_rq) {
		hdr->dout_resid = sreq->resid_len;
		hdr->din_resid = scsi_req(rq->next_rq)->resid_len;
	} else if (rq_data_dir(rq) == READ) {
	if (rq_data_dir(rq) == READ)
		hdr->din_resid = sreq->resid_len;
	} else {
	else
		hdr->dout_resid = sreq->resid_len;
	}

	return ret;
}

@@ -138,32 +139,35 @@ static const struct bsg_ops bsg_scsi_ops = {
	.free_rq		= bsg_scsi_free_rq,
};

static struct request *
bsg_map_hdr(struct request_queue *q, struct sg_io_v4 *hdr, fmode_t mode)
static int bsg_sg_io(struct request_queue *q, fmode_t mode, void __user *uarg)
{
	struct request *rq, *next_rq = NULL;
	struct request *rq;
	struct bio *bio;
	struct sg_io_v4 hdr;
	int ret;

	if (copy_from_user(&hdr, uarg, sizeof(hdr)))
		return -EFAULT;

	if (!q->bsg_dev.class_dev)
		return ERR_PTR(-ENXIO);
		return -ENXIO;

	if (hdr->guard != 'Q')
		return ERR_PTR(-EINVAL);

	ret = q->bsg_dev.ops->check_proto(hdr);
	if (hdr.guard != 'Q')
		return -EINVAL;
	ret = q->bsg_dev.ops->check_proto(&hdr);
	if (ret)
		return ERR_PTR(ret);
		return ret;

	rq = blk_get_request(q, hdr->dout_xfer_len ?
	rq = blk_get_request(q, hdr.dout_xfer_len ?
			     REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN, 0);
	if (IS_ERR(rq))
		return rq;
		return PTR_ERR(rq);

	ret = q->bsg_dev.ops->fill_hdr(rq, hdr, mode);
	ret = q->bsg_dev.ops->fill_hdr(rq, &hdr, mode);
	if (ret)
		goto out;
		return ret;

	rq->timeout = msecs_to_jiffies(hdr->timeout);
	rq->timeout = msecs_to_jiffies(hdr.timeout);
	if (!rq->timeout)
		rq->timeout = q->sg_timeout;
	if (!rq->timeout)

@@ -171,68 +175,28 @@ bsg_map_hdr(struct request_queue *q, struct sg_io_v4 *hdr, fmode_t mode)
	if (rq->timeout < BLK_MIN_SG_TIMEOUT)
		rq->timeout = BLK_MIN_SG_TIMEOUT;

	if (hdr->dout_xfer_len && hdr->din_xfer_len) {
		if (!test_bit(QUEUE_FLAG_BIDI, &q->queue_flags)) {
			ret = -EOPNOTSUPP;
			goto out;
		}

		pr_warn_once(
			"BIDI support in bsg has been deprecated and might be removed. "
			"Please report your use case to linux-scsi@vger.kernel.org\n");

		next_rq = blk_get_request(q, REQ_OP_SCSI_IN, 0);
		if (IS_ERR(next_rq)) {
			ret = PTR_ERR(next_rq);
			goto out;
		}

		rq->next_rq = next_rq;
		ret = blk_rq_map_user(q, next_rq, NULL, uptr64(hdr->din_xferp),
				      hdr->din_xfer_len, GFP_KERNEL);
		if (ret)
			goto out_free_nextrq;
	}

	if (hdr->dout_xfer_len) {
		ret = blk_rq_map_user(q, rq, NULL, uptr64(hdr->dout_xferp),
				hdr->dout_xfer_len, GFP_KERNEL);
	} else if (hdr->din_xfer_len) {
		ret = blk_rq_map_user(q, rq, NULL, uptr64(hdr->din_xferp),
				hdr->din_xfer_len, GFP_KERNEL);
	if (hdr.dout_xfer_len) {
		ret = blk_rq_map_user(q, rq, NULL, uptr64(hdr.dout_xferp),
				hdr.dout_xfer_len, GFP_KERNEL);
	} else if (hdr.din_xfer_len) {
		ret = blk_rq_map_user(q, rq, NULL, uptr64(hdr.din_xferp),
				hdr.din_xfer_len, GFP_KERNEL);
	}

	if (ret)
		goto out_unmap_nextrq;
	return rq;
		goto out_free_rq;

out_unmap_nextrq:
	if (rq->next_rq)
		blk_rq_unmap_user(rq->next_rq->bio);
out_free_nextrq:
	if (rq->next_rq)
		blk_put_request(rq->next_rq);
out:
	q->bsg_dev.ops->free_rq(rq);
	blk_put_request(rq);
	return ERR_PTR(ret);
}

static int blk_complete_sgv4_hdr_rq(struct request *rq, struct sg_io_v4 *hdr,
				    struct bio *bio, struct bio *bidi_bio)
{
	int ret;

	ret = rq->q->bsg_dev.ops->complete_rq(rq, hdr);

	if (rq->next_rq) {
		blk_rq_unmap_user(bidi_bio);
		blk_put_request(rq->next_rq);
	}
	bio = rq->bio;

	blk_execute_rq(q, NULL, rq, !(hdr.flags & BSG_FLAG_Q_AT_TAIL));
	ret = rq->q->bsg_dev.ops->complete_rq(rq, &hdr);
	blk_rq_unmap_user(bio);

out_free_rq:
	rq->q->bsg_dev.ops->free_rq(rq);
	blk_put_request(rq);
	if (!ret && copy_to_user(uarg, &hdr, sizeof(hdr)))
		return -EFAULT;
	return ret;
}

@@ -367,31 +331,39 @@ static int bsg_release(struct inode *inode, struct file *file)
	return bsg_put_device(bd);
}

static int bsg_get_command_q(struct bsg_device *bd, int __user *uarg)
{
	return put_user(bd->max_queue, uarg);
}

static int bsg_set_command_q(struct bsg_device *bd, int __user *uarg)
{
	int queue;

	if (get_user(queue, uarg))
		return -EFAULT;
	if (queue < 1)
		return -EINVAL;

	spin_lock_irq(&bd->lock);
	bd->max_queue = queue;
	spin_unlock_irq(&bd->lock);
	return 0;
}

static long bsg_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
	struct bsg_device *bd = file->private_data;
	int __user *uarg = (int __user *) arg;
	int ret;
	void __user *uarg = (void __user *) arg;

	switch (cmd) {
	/*
	 * our own ioctls
	 */
	/*
	 * Our own ioctls
	 */
	case SG_GET_COMMAND_Q:
		return put_user(bd->max_queue, uarg);
		return bsg_get_command_q(bd, uarg);
	case SG_SET_COMMAND_Q: {
		int queue;

		if (get_user(queue, uarg))
			return -EFAULT;
		if (queue < 1)
			return -EINVAL;

		spin_lock_irq(&bd->lock);
		bd->max_queue = queue;
		spin_unlock_irq(&bd->lock);
		return 0;
	}
	case SG_SET_COMMAND_Q:
		return bsg_set_command_q(bd, uarg);

	/*
	 * SCSI/sg ioctls
	 */

@@ -404,36 +376,10 @@ static long bsg_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
	case SG_GET_RESERVED_SIZE:
	case SG_SET_RESERVED_SIZE:
	case SG_EMULATED_HOST:
	case SCSI_IOCTL_SEND_COMMAND: {
		void __user *uarg = (void __user *) arg;
	case SCSI_IOCTL_SEND_COMMAND:
		return scsi_cmd_ioctl(bd->queue, NULL, file->f_mode, cmd, uarg);
	}
	case SG_IO: {
		struct request *rq;
		struct bio *bio, *bidi_bio = NULL;
		struct sg_io_v4 hdr;
		int at_head;

		if (copy_from_user(&hdr, uarg, sizeof(hdr)))
			return -EFAULT;

		rq = bsg_map_hdr(bd->queue, &hdr, file->f_mode);
		if (IS_ERR(rq))
			return PTR_ERR(rq);

		bio = rq->bio;
		if (rq->next_rq)
			bidi_bio = rq->next_rq->bio;

		at_head = (0 == (hdr.flags & BSG_FLAG_Q_AT_TAIL));
		blk_execute_rq(bd->queue, NULL, rq, at_head);
		ret = blk_complete_sgv4_hdr_rq(rq, &hdr, bio, bidi_bio);

		if (copy_to_user(uarg, &hdr, sizeof(hdr)))
			return -EFAULT;

		return ret;
	}
	case SG_IO:
		return bsg_sg_io(bd->queue, file->f_mode, uarg);
	default:
		return -ENOTTY;
	}
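The two bsg-private ioctls factored out above (SG_GET_COMMAND_Q and
SG_SET_COMMAND_Q) are exercised from user space as in the sketch below; the
device node path is a placeholder for whatever bsg node your system exposes.

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <scsi/sg.h>

        int main(void)
        {
                int fd, q = 0;

                fd = open("/dev/bsg/0:0:0:0", O_RDWR);  /* placeholder node */
                if (fd < 0)
                        return 1;

                if (ioctl(fd, SG_GET_COMMAND_Q, &q) == 0)
                        printf("current max_queue: %d\n", q);

                q = 64;         /* values < 1 are rejected with -EINVAL */
                if (ioctl(fd, SG_SET_COMMAND_Q, &q) != 0)
                        perror("SG_SET_COMMAND_Q");

                close(fd);
                return 0;
        }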
@@ -778,7 +778,7 @@ static int ata_ioc32(struct ata_port *ap)
}

int ata_sas_scsi_ioctl(struct ata_port *ap, struct scsi_device *scsidev,
		       int cmd, void __user *arg)
		       unsigned int cmd, void __user *arg)
{
	unsigned long val;
	int rc = -EINVAL;

@@ -829,7 +829,8 @@ int ata_sas_scsi_ioctl(struct ata_port *ap, struct scsi_device *scsidev,
}
EXPORT_SYMBOL_GPL(ata_sas_scsi_ioctl);

int ata_scsi_ioctl(struct scsi_device *scsidev, int cmd, void __user *arg)
int ata_scsi_ioctl(struct scsi_device *scsidev, unsigned int cmd,
		   void __user *arg)
{
	return ata_sas_scsi_ioctl(ata_shost_to_port(scsidev->host),
				scsidev, cmd, arg);
@@ -1186,7 +1186,7 @@ isert_handle_scsi_cmd(struct isert_conn *isert_conn,
	rc = iscsit_sequence_cmd(conn, cmd, buf, hdr->cmdsn);

	if (!rc && dump_payload == false && unsol_data)
		iscsit_set_unsoliticed_dataout(cmd);
		iscsit_set_unsolicited_dataout(cmd);
	else if (dump_payload && imm_data)
		target_put_sess_cmd(&cmd->se_cmd);
@@ -1217,22 +1217,15 @@ static int srpt_ch_qp_err(struct srpt_rdma_ch *ch)
static struct srpt_send_ioctx *srpt_get_send_ioctx(struct srpt_rdma_ch *ch)
{
	struct srpt_send_ioctx *ioctx;
	unsigned long flags;
	int tag, cpu;

	BUG_ON(!ch);

	ioctx = NULL;
	spin_lock_irqsave(&ch->spinlock, flags);
	if (!list_empty(&ch->free_list)) {
		ioctx = list_first_entry(&ch->free_list,
					 struct srpt_send_ioctx, free_list);
		list_del(&ioctx->free_list);
	}
	spin_unlock_irqrestore(&ch->spinlock, flags);

	if (!ioctx)
		return ioctx;
	tag = sbitmap_queue_get(&ch->sess->sess_tag_pool, &cpu);
	if (tag < 0)
		return NULL;

	ioctx = ch->ioctx_ring[tag];
	BUG_ON(ioctx->ch != ch);
	ioctx->state = SRPT_STATE_NEW;
	WARN_ON_ONCE(ioctx->recv_ioctx);

@@ -1245,6 +1238,8 @@ static struct srpt_send_ioctx *srpt_get_send_ioctx(struct srpt_rdma_ch *ch)
	 */
	memset(&ioctx->cmd, 0, sizeof(ioctx->cmd));
	memset(&ioctx->sense_data, 0, sizeof(ioctx->sense_data));
	ioctx->cmd.map_tag = tag;
	ioctx->cmd.map_cpu = cpu;

	return ioctx;
}

@@ -1505,7 +1500,7 @@ static void srpt_handle_cmd(struct srpt_rdma_ch *ch,
			pr_err("0x%llx: parsing SRP descriptor table failed.\n",
			       srp_cmd->tag);
		}
		goto release_ioctx;
		goto busy;
	}

	rc = target_submit_cmd_map_sgls(cmd, ch->sess, srp_cmd->cdb,

@@ -1516,13 +1511,12 @@ static void srpt_handle_cmd(struct srpt_rdma_ch *ch,
	if (rc != 0) {
		pr_debug("target_submit_cmd() returned %d for tag %#llx\n", rc,
			 srp_cmd->tag);
		goto release_ioctx;
		goto busy;
	}
	return;

release_ioctx:
	send_ioctx->state = SRPT_STATE_DONE;
	srpt_release_cmd(cmd);
busy:
	target_send_busy(cmd);
}

static int srp_tmr_to_tcm(int fn)

@@ -1582,11 +1576,9 @@ static void srpt_handle_tsk_mgmt(struct srpt_rdma_ch *ch,
			    TARGET_SCF_ACK_KREF);
	if (rc != 0) {
		send_ioctx->cmd.se_tmr_req->response = TMR_FUNCTION_REJECTED;
		goto fail;
		cmd->se_tfo->queue_tm_rsp(cmd);
	}
	return;
fail:
	transport_send_check_condition_and_sense(cmd, 0, 0); // XXX:
}

/**

@@ -2151,7 +2143,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
	struct srpt_rdma_ch *ch = NULL;
	char i_port_id[36];
	u32 it_iu_len;
	int i, ret;
	int i, tag_num, tag_size, ret;

	WARN_ON_ONCE(irqs_disabled());

@@ -2251,11 +2243,8 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
		goto free_rsp_cache;
	}

	INIT_LIST_HEAD(&ch->free_list);
	for (i = 0; i < ch->rq_size; i++) {
	for (i = 0; i < ch->rq_size; i++)
		ch->ioctx_ring[i]->ch = ch;
		list_add_tail(&ch->ioctx_ring[i]->free_list, &ch->free_list);
	}
	if (!sdev->use_srq) {
		u16 imm_data_offset = req->req_flags & SRP_IMMED_REQUESTED ?
			be16_to_cpu(req->imm_data_offset) : 0;

@@ -2309,18 +2298,20 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,

	pr_debug("registering session %s\n", ch->sess_name);

	tag_num = ch->rq_size;
	tag_size = 1; /* ib_srpt does not use se_sess->sess_cmd_map */
	if (sport->port_guid_tpg.se_tpg_wwn)
		ch->sess = target_setup_session(&sport->port_guid_tpg, 0, 0,
						TARGET_PROT_NORMAL,
		ch->sess = target_setup_session(&sport->port_guid_tpg, tag_num,
						tag_size, TARGET_PROT_NORMAL,
						ch->sess_name, ch, NULL);
	if (sport->port_gid_tpg.se_tpg_wwn && IS_ERR_OR_NULL(ch->sess))
		ch->sess = target_setup_session(&sport->port_gid_tpg, 0, 0,
					TARGET_PROT_NORMAL, i_port_id, ch,
					NULL);
		ch->sess = target_setup_session(&sport->port_gid_tpg, tag_num,
					tag_size, TARGET_PROT_NORMAL, i_port_id,
					ch, NULL);
	/* Retry without leading "0x" */
	if (sport->port_gid_tpg.se_tpg_wwn && IS_ERR_OR_NULL(ch->sess))
		ch->sess = target_setup_session(&sport->port_gid_tpg, 0, 0,
						TARGET_PROT_NORMAL,
		ch->sess = target_setup_session(&sport->port_gid_tpg, tag_num,
						tag_size, TARGET_PROT_NORMAL,
						i_port_id + 2, ch, NULL);
	if (IS_ERR_OR_NULL(ch->sess)) {
		WARN_ON_ONCE(ch->sess == NULL);

@@ -2703,14 +2694,6 @@ static int srpt_rdma_cm_handler(struct rdma_cm_id *cm_id,
	return ret;
}

static int srpt_write_pending_status(struct se_cmd *se_cmd)
{
	struct srpt_send_ioctx *ioctx;

	ioctx = container_of(se_cmd, struct srpt_send_ioctx, cmd);
	return ioctx->state == SRPT_STATE_NEED_DATA;
}

/*
 * srpt_write_pending - Start data transfer from initiator to target (write).
 */

@@ -2887,8 +2870,19 @@ static void srpt_queue_tm_rsp(struct se_cmd *cmd)
	srpt_queue_response(cmd);
}

/*
 * This function is called for aborted commands if no response is sent to the
 * initiator. Make sure that the credits freed by aborting a command are
 * returned to the initiator the next time a response is sent by incrementing
 * ch->req_lim_delta.
 */
static void srpt_aborted_task(struct se_cmd *cmd)
{
	struct srpt_send_ioctx *ioctx = container_of(cmd,
				struct srpt_send_ioctx, cmd);
	struct srpt_rdma_ch *ch = ioctx->ch;

	atomic_inc(&ch->req_lim_delta);
}

static int srpt_queue_status(struct se_cmd *cmd)

@@ -3290,7 +3284,6 @@ static void srpt_release_cmd(struct se_cmd *se_cmd)
						 struct srpt_send_ioctx, cmd);
	struct srpt_rdma_ch *ch = ioctx->ch;
	struct srpt_recv_ioctx *recv_ioctx = ioctx->recv_ioctx;
	unsigned long flags;

	WARN_ON_ONCE(ioctx->state != SRPT_STATE_DONE &&
		     !(ioctx->cmd.transport_state & CMD_T_ABORTED));

@@ -3306,9 +3299,7 @@ static void srpt_release_cmd(struct se_cmd *se_cmd)
		ioctx->n_rw_ctx = 0;
	}

	spin_lock_irqsave(&ch->spinlock, flags);
	list_add(&ioctx->free_list, &ch->free_list);
	spin_unlock_irqrestore(&ch->spinlock, flags);
	target_free_tag(se_cmd->se_sess, se_cmd);
}

/**

@@ -3806,7 +3797,6 @@ static const struct target_core_fabric_ops srpt_template = {
	.sess_get_index			= srpt_sess_get_index,
	.sess_get_initiator_sid		= NULL,
	.write_pending			= srpt_write_pending,
	.write_pending_status		= srpt_write_pending_status,
	.set_default_node_attributes	= srpt_set_default_node_attrs,
	.get_cmd_state			= srpt_get_tcm_cmd_state,
	.queue_data_in			= srpt_queue_data_in,

@@ -207,7 +207,6 @@ struct srpt_rw_ctx {
 * @rw_ctxs:   RDMA read/write contexts.
 * @imm_sg:    Scatterlist for immediate data.
 * @rdma_cqe:  RDMA completion queue element.
 * @free_list: Node in srpt_rdma_ch.free_list.
 * @state:     I/O context state.
 * @cmd:       Target core command data structure.
 * @sense_data: SCSI sense data.

@@ -227,7 +226,6 @@ struct srpt_send_ioctx {
	struct scatterlist	imm_sg;

	struct ib_cqe		rdma_cqe;
	struct list_head	free_list;
	enum srpt_command_state	state;
	struct se_cmd		cmd;
	u8			n_rdma;

@@ -277,7 +275,6 @@ enum rdma_ch_state {
 * @req_lim_delta:  Number of credits not yet sent back to the initiator.
 * @imm_data_offset: Offset from start of SRP_CMD for immediate data.
 * @spinlock:      Protects free_list and state.
 * @free_list:     Head of list with free send I/O contexts.
 * @state:         channel state. See also enum rdma_ch_state.
 * @using_rdma_cm: Whether the RDMA/CM or IB/CM is used for this channel.
 * @processing_wait_list: Whether or not cmd_wait_list is being processed.

@@ -318,7 +315,6 @@ struct srpt_rdma_ch {
	atomic_t		req_lim_delta;
	u16			imm_data_offset;
	spinlock_t		spinlock;
	struct list_head	free_list;
	enum rdma_ch_state	state;
	struct kmem_cache	*rsp_buf_cache;
	struct srpt_send_ioctx	**ioctx_ring;
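The pattern these ib_srpt hunks switch to — replacing a driver-private free
list guarded by a spinlock with the session's sbitmap-backed tag pool — looks
like the sketch below in isolation. The my_ch/my_ioctx types are hypothetical
stand-ins for the srpt structures; sbitmap_queue_get() and target_free_tag()
are the real kernel entry points used in the diff.

        #include <linux/sbitmap.h>
        #include <target/target_core_base.h>
        #include <target/target_core_fabric.h>

        struct my_ioctx {
                struct se_cmd cmd;
        };

        struct my_ch {
                struct se_session *sess;
                struct my_ioctx **ioctx_ring;
        };

        static struct my_ioctx *my_get_ioctx(struct my_ch *ch)
        {
                int tag, cpu;

                /* O(1) tag allocation instead of list_del under a lock */
                tag = sbitmap_queue_get(&ch->sess->sess_tag_pool, &cpu);
                if (tag < 0)
                        return NULL;

                ch->ioctx_ring[tag]->cmd.map_tag = tag; /* kept for freeing */
                ch->ioctx_ring[tag]->cmd.map_cpu = cpu;
                return ch->ioctx_ring[tag];
        }

        static void my_put_ioctx(struct se_cmd *cmd)
        {
                /* wraps sbitmap_queue_clear() on map_tag/map_cpu */
                target_free_tag(cmd->se_sess, cmd);
        }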
@@ -665,7 +665,7 @@ config SCSI_DMX3191D

config SCSI_GDTH
	tristate "Intel/ICP (former GDT SCSI Disk Array) RAID Controller support"
	depends on (ISA || EISA || PCI) && SCSI && ISA_DMA_API
	depends on PCI && SCSI
	---help---
	  Formerly called GDT SCSI Disk Array Controller Support.

@@ -1196,8 +1196,6 @@ config SCSI_AM53C974
	  PCscsi/PCnet (Am53/79C974) solutions.
	  This is a new implementation based on the generic esp_scsi driver.

	  Documentation can be found in <file:Documentation/scsi/tmscsim.txt>.

	  Note that this driver does NOT support Tekram DC390W/U/F, which are
	  based on NCR/Symbios chips. Use "NCR53C8XX SCSI support" for those.

@@ -1517,6 +1515,4 @@ source "drivers/scsi/pcmcia/Kconfig"

source "drivers/scsi/device_handler/Kconfig"

source "drivers/scsi/osd/Kconfig"

endmenu
@@ -150,7 +150,6 @@ obj-$(CONFIG_CHR_DEV_SG)	+= sg.o
obj-$(CONFIG_CHR_DEV_SCH)	+= ch.o
obj-$(CONFIG_SCSI_ENCLOSURE)	+= ses.o

obj-$(CONFIG_SCSI_OSD_INITIATOR) += osd/
obj-$(CONFIG_SCSI_HISI_SAS) += hisi_sas/

# This goes last, so that "real" scsi devices probe earlier
@@ -4,5 +4,3 @@ obj-$(CONFIG_SCSI_AACRAID) := aacraid.o

aacraid-objs	:= linit.o aachba.o commctrl.o comminit.o commsup.o \
		   dpcsup.o rx.o sa.o rkt.o nark.o src.o

ccflags-y	:= -Idrivers/scsi
@@ -3455,7 +3455,7 @@ static int delete_disk(struct aac_dev *dev, void __user *arg)
	}
}

int aac_dev_ioctl(struct aac_dev *dev, int cmd, void __user *arg)
int aac_dev_ioctl(struct aac_dev *dev, unsigned int cmd, void __user *arg)
{
	switch (cmd) {
	case FSACTL_QUERY_DISK:

@@ -2706,12 +2706,12 @@ void aac_set_intx_mode(struct aac_dev *dev);
int aac_get_config_status(struct aac_dev *dev, int commit_flag);
int aac_get_containers(struct aac_dev *dev);
int aac_scsi_cmd(struct scsi_cmnd *cmd);
int aac_dev_ioctl(struct aac_dev *dev, int cmd, void __user *arg);
int aac_dev_ioctl(struct aac_dev *dev, unsigned int cmd, void __user *arg);
#ifndef shost_to_class
#define shost_to_class(shost) &shost->shost_dev
#endif
ssize_t aac_get_serial_number(struct device *dev, char *buf);
int aac_do_ioctl(struct aac_dev * dev, int cmd, void __user *arg);
int aac_do_ioctl(struct aac_dev *dev, unsigned int cmd, void __user *arg);
int aac_rx_init(struct aac_dev *dev);
int aac_rkt_init(struct aac_dev *dev);
int aac_nark_init(struct aac_dev *dev);
@@ -1060,7 +1060,7 @@ static int aac_send_reset_adapter(struct aac_dev *dev, void __user *arg)
	return retval;
}

int aac_do_ioctl(struct aac_dev * dev, int cmd, void __user *arg)
int aac_do_ioctl(struct aac_dev *dev, unsigned int cmd, void __user *arg)
{
	int status;
@@ -1303,8 +1303,9 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
				  ADD : DELETE;
			break;
		}
		case AifBuManagerEvent:
			aac_handle_aif_bu(dev, aifcmd);
			break;
		case AifBuManagerEvent:
			aac_handle_aif_bu(dev, aifcmd);
			break;
	}

@@ -1376,18 +1377,19 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)

	container = 0;
retry_next:
	if (device_config_needed == NOTHING)
	for (; container < dev->maximum_num_containers; ++container) {
		if ((dev->fsa_dev[container].config_waiting_on == 0) &&
			(dev->fsa_dev[container].config_needed != NOTHING) &&
			time_before(jiffies, dev->fsa_dev[container].config_waiting_stamp + AIF_SNIFF_TIMEOUT)) {
			device_config_needed =
				dev->fsa_dev[container].config_needed;
			dev->fsa_dev[container].config_needed = NOTHING;
			channel = CONTAINER_TO_CHANNEL(container);
			id = CONTAINER_TO_ID(container);
			lun = CONTAINER_TO_LUN(container);
			break;
	if (device_config_needed == NOTHING) {
		for (; container < dev->maximum_num_containers; ++container) {
			if ((dev->fsa_dev[container].config_waiting_on == 0) &&
			    (dev->fsa_dev[container].config_needed != NOTHING) &&
			    time_before(jiffies, dev->fsa_dev[container].config_waiting_stamp + AIF_SNIFF_TIMEOUT)) {
				device_config_needed =
					dev->fsa_dev[container].config_needed;
				dev->fsa_dev[container].config_needed = NOTHING;
				channel = CONTAINER_TO_CHANNEL(container);
				id = CONTAINER_TO_ID(container);
				lun = CONTAINER_TO_LUN(container);
				break;
			}
		}
	}
	if (device_config_needed == NOTHING)
@@ -616,7 +616,8 @@ static struct device_attribute *aac_dev_attrs[] = {
	NULL,
};

static int aac_ioctl(struct scsi_device *sdev, int cmd, void __user * arg)
static int aac_ioctl(struct scsi_device *sdev, unsigned int cmd,
		     void __user *arg)
{
	struct aac_dev *dev = (struct aac_dev *)sdev->host->hostdata;
	if (!capable(CAP_SYS_RAWIO))

@@ -852,8 +853,7 @@ static u8 aac_eh_tmf_hard_reset_fib(struct aac_hba_map_info *info,

	address = (u64)fib->hw_error_pa;
	rst->error_ptr_hi = cpu_to_le32((u32)(address >> 32));
	rst->error_ptr_lo = cpu_to_le32
		((u32)(address & 0xffffffff));
	rst->error_ptr_lo = cpu_to_le32((u32)(address & 0xffffffff));
	rst->error_length = cpu_to_le32(FW_ERROR_BUFFER_SIZE);
	fib->hbacmd_size = sizeof(*rst);

@@ -1206,7 +1206,8 @@ static long aac_compat_do_ioctl(struct aac_dev *dev, unsigned cmd, unsigned long
	return ret;
}

static int aac_compat_ioctl(struct scsi_device *sdev, int cmd, void __user *arg)
static int aac_compat_ioctl(struct scsi_device *sdev, unsigned int cmd,
			    void __user *arg)
{
	struct aac_dev *dev = (struct aac_dev *)sdev->host->hostdata;
	if (!capable(CAP_SYS_RAWIO))
@@ -1157,7 +1157,7 @@ static int aac_src_soft_reset(struct aac_dev *dev)
		dev_err(&dev->pdev->dev, "%s: %s status = %d", __func__,
			state_str[state], rc);

return rc;
	return rc;
}
/**
 *  aac_srcv_init	-	initialize an SRCv card
@@ -34,7 +34,6 @@ aic79xx-y += aic79xx_osm.o \
	     aic79xx_proc.o \
	     aic79xx_osm_pci.o

ccflags-y += -Idrivers/scsi
ifdef WARNINGS_BECOME_ERRORS
ccflags-y += -Werror
endif
@ -2285,6 +2285,7 @@ ahd_handle_seqint(struct ahd_softc *ahd, u_int intstat)
|
|||
switch (scb->hscb->task_management) {
|
||||
case SIU_TASKMGMT_ABORT_TASK:
|
||||
tag = SCB_GET_TAG(scb);
|
||||
/* fall through */
|
||||
case SIU_TASKMGMT_ABORT_TASK_SET:
|
||||
case SIU_TASKMGMT_CLEAR_TASK_SET:
|
||||
lun = scb->hscb->lun;
|
||||
|
@ -2295,6 +2296,7 @@ ahd_handle_seqint(struct ahd_softc *ahd, u_int intstat)
|
|||
break;
|
||||
case SIU_TASKMGMT_LUN_RESET:
|
||||
lun = scb->hscb->lun;
|
||||
/* fall through */
|
||||
case SIU_TASKMGMT_TARGET_RESET:
|
||||
{
|
||||
struct ahd_devinfo devinfo;
|
||||
|
@ -6550,8 +6552,8 @@ ahd_fini_scbdata(struct ahd_softc *ahd)
|
|||
kfree(sns_map);
|
||||
}
|
||||
ahd_dma_tag_destroy(ahd, scb_data->sense_dmat);
|
||||
/* FALLTHROUGH */
|
||||
}
|
||||
/* fall through */
|
||||
case 6:
|
||||
{
|
||||
struct map_node *sg_map;
|
||||
|
@ -6565,8 +6567,8 @@ ahd_fini_scbdata(struct ahd_softc *ahd)
|
|||
kfree(sg_map);
|
||||
}
|
||||
ahd_dma_tag_destroy(ahd, scb_data->sg_dmat);
|
||||
/* FALLTHROUGH */
|
||||
}
|
||||
/* fall through */
|
||||
case 5:
|
||||
{
|
||||
struct map_node *hscb_map;
|
||||
|
@ -7209,6 +7211,7 @@ ahd_init(struct ahd_softc *ahd)
|
|||
case FLX_CSTAT_OVER:
|
||||
case FLX_CSTAT_UNDER:
|
||||
warn_user++;
|
||||
/* fall through */
|
||||
case FLX_CSTAT_INVALID:
|
||||
case FLX_CSTAT_OKAY:
|
||||
if (warn_user == 0 && bootverbose == 0)
|
||||
|
@ -8413,7 +8416,7 @@ ahd_search_scb_list(struct ahd_softc *ahd, int target, char channel,
|
|||
if ((scb->flags & SCB_ACTIVE) == 0)
|
||||
printk("Inactive SCB in Waiting List\n");
|
||||
ahd_done_with_status(ahd, scb, status);
|
||||
/* FALLTHROUGH */
|
||||
/* fall through */
|
||||
case SEARCH_REMOVE:
|
||||
ahd_rem_wscb(ahd, scbid, prev, next, tid);
|
||||
*list_tail = prev;
|
||||
|
@ -8422,6 +8425,7 @@ ahd_search_scb_list(struct ahd_softc *ahd, int target, char channel,
|
|||
break;
|
||||
case SEARCH_PRINT:
|
||||
printk("0x%x ", scbid);
|
||||
/* fall through */
|
||||
case SEARCH_COUNT:
|
||||
prev = scbid;
|
||||
break;
|
||||
|
@ -9547,8 +9551,8 @@ ahd_download_instr(struct ahd_softc *ahd, u_int instrptr, uint8_t *dconsts)
|
|||
{
|
||||
fmt3_ins = &instr.format3;
|
||||
fmt3_ins->address = ahd_resolve_seqaddr(ahd, fmt3_ins->address);
|
||||
/* FALLTHROUGH */
|
||||
}
|
||||
/* fall through */
|
||||
case AIC_OP_OR:
|
||||
case AIC_OP_AND:
|
||||
case AIC_OP_XOR:
|
||||
|
@ -9559,7 +9563,7 @@ ahd_download_instr(struct ahd_softc *ahd, u_int instrptr, uint8_t *dconsts)
|
|||
fmt1_ins->immediate = dconsts[fmt1_ins->immediate];
|
||||
}
|
||||
fmt1_ins->parity = 0;
|
||||
/* FALLTHROUGH */
|
||||
/* fall through */
|
||||
case AIC_OP_ROL:
|
||||
{
|
||||
int i, count;
|
||||
|
|
|
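The aic79xx hunks above rewrite `/* FALLTHROUGH */` as `/* fall through */` and place the comment directly before the next label. A minimal sketch of why, assuming GCC's comment-matching -Wimplicit-fallthrough mode that the kernel build enables: only certain spellings, sitting immediately above the following case, silence the warning. All names below are stubs for a self-contained example.

static void prepare(void) { }	/* stub so the example compiles alone */
static void common(void) { }

static void handle(int event)
{
	switch (event) {
	case 1:
		prepare();	/* event 1 needs extra setup ... */
		/* fall through */
	case 2:
		common();	/* ... before the shared path */
		break;
	}
}
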
@@ -49,7 +49,7 @@ struct device_attribute;
#define ARCMSR_MAX_OUTSTANDING_CMD 1024
#define ARCMSR_DEFAULT_OUTSTANDING_CMD 128
#define ARCMSR_MIN_OUTSTANDING_CMD 32
#define ARCMSR_DRIVER_VERSION "v1.40.00.09-20180709"
#define ARCMSR_DRIVER_VERSION "v1.40.00.10-20190116"
#define ARCMSR_SCSI_INITIATOR_ID 255
#define ARCMSR_MAX_XFER_SECTORS 512
#define ARCMSR_MAX_XFER_SECTORS_B 4096

@@ -739,7 +739,7 @@ struct AdapterControlBlock
#define ACB_ADAPTER_TYPE_C 0x00000002 /* hbc L IOP */
#define ACB_ADAPTER_TYPE_D 0x00000003 /* hbd M IOP */
#define ACB_ADAPTER_TYPE_E 0x00000004 /* hba L IOP */
u32 roundup_ccbsize;
u32 ioqueue_size;
struct pci_dev * pdev;
struct Scsi_Host * host;
unsigned long vir2phy_offset;

@@ -747,6 +747,7 @@ struct AdapterControlBlock
uint32_t outbound_int_enable;
uint32_t cdb_phyaddr_hi32;
uint32_t reg_mu_acc_handle0;
uint64_t cdb_phyadd_hipart;
spinlock_t eh_lock;
spinlock_t ccblist_lock;
spinlock_t postq_lock;

@@ -855,11 +856,11 @@ struct AdapterControlBlock
*******************************************************************************
*/
struct CommandControlBlock{
/*x32:sizeof struct_CCB=(32+60)byte, x64:sizeof struct_CCB=(64+60)byte*/
/*x32:sizeof struct_CCB=(64+60)byte, x64:sizeof struct_CCB=(64+60)byte*/
struct list_head list; /*x32: 8byte, x64: 16byte*/
struct scsi_cmnd *pcmd; /*8 bytes pointer of linux scsi command */
struct AdapterControlBlock *acb; /*x32: 4byte, x64: 8byte*/
uint32_t cdb_phyaddr; /*x32: 4byte, x64: 4byte*/
unsigned long cdb_phyaddr; /*x32: 4byte, x64: 8byte*/
uint32_t arc_cdb_size; /*x32:4byte,x64:4byte*/
uint16_t ccb_flags; /*x32: 2byte, x64: 2byte*/
#define CCB_FLAG_READ 0x0000

@@ -875,10 +876,10 @@ struct CommandControlBlock{
uint32_t smid;
#if BITS_PER_LONG == 64
/* ======================512+64 bytes======================== */
uint32_t reserved[4]; /*16 byte*/
uint32_t reserved[3]; /*12 byte*/
#else
/* ======================512+32 bytes======================== */
// uint32_t reserved; /*4 byte*/
uint32_t reserved[8]; /*32 byte*/
#endif
/* ======================================================= */
struct ARCMSR_CDB arcmsr_cdb;

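In the CommandControlBlock hunks above, cdb_phyaddr grows from uint32_t to unsigned long while reserved[] shrinks from 4 to 3 words on 64-bit builds. A sketch of the size bookkeeping, under the assumption (consistent with the struct's own size comments) that the overall CCB footprint must stay fixed; the struct name is hypothetical.

#include <linux/kernel.h>

struct ccb_addr_layout {
	unsigned long cdb_phyaddr;	/* was uint32_t: +4 bytes on x64 */
#if BITS_PER_LONG == 64
	u32 reserved[3];		/* was reserved[4]: -4 bytes, net 0 */
#else
	u32 reserved[8];		/* 32-bit layout is unchanged */
#endif
};
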
@@ -91,6 +91,10 @@ static int cmd_per_lun = ARCMSR_DEFAULT_CMD_PERLUN;
module_param(cmd_per_lun, int, S_IRUGO);
MODULE_PARM_DESC(cmd_per_lun, " device queue depth(1 ~ 128), default is 32");

static int dma_mask_64 = 0;
module_param(dma_mask_64, int, S_IRUGO);
MODULE_PARM_DESC(dma_mask_64, " set DMA mask to 64 bits(0 ~ 1), dma_mask_64=1(64 bits), =0(32 bits)");

static int set_date_time = 0;
module_param(set_date_time, int, S_IRUGO);
MODULE_PARM_DESC(set_date_time, " send date, time to iop(0 ~ 1), set_date_time=1(enable), default(=0) is disable");

@@ -223,13 +227,13 @@ static struct pci_driver arcmsr_pci_driver = {
****************************************************************************
*/

static void arcmsr_free_mu(struct AdapterControlBlock *acb)
static void arcmsr_free_io_queue(struct AdapterControlBlock *acb)
{
switch (acb->adapter_type) {
case ACB_ADAPTER_TYPE_B:
case ACB_ADAPTER_TYPE_D:
case ACB_ADAPTER_TYPE_E: {
dma_free_coherent(&acb->pdev->dev, acb->roundup_ccbsize,
dma_free_coherent(&acb->pdev->dev, acb->ioqueue_size,
acb->dma_coherent2, acb->dma_coherent_handle2);
break;
}

@@ -576,6 +580,58 @@ static void arcmsr_flush_adapter_cache(struct AdapterControlBlock *acb)
}
}

static void arcmsr_hbaB_assign_regAddr(struct AdapterControlBlock *acb)
{
struct MessageUnit_B *reg = acb->pmuB;

if (acb->pdev->device == PCI_DEVICE_ID_ARECA_1203) {
reg->drv2iop_doorbell = MEM_BASE0(ARCMSR_DRV2IOP_DOORBELL_1203);
reg->drv2iop_doorbell_mask = MEM_BASE0(ARCMSR_DRV2IOP_DOORBELL_MASK_1203);
reg->iop2drv_doorbell = MEM_BASE0(ARCMSR_IOP2DRV_DOORBELL_1203);
reg->iop2drv_doorbell_mask = MEM_BASE0(ARCMSR_IOP2DRV_DOORBELL_MASK_1203);
} else {
reg->drv2iop_doorbell= MEM_BASE0(ARCMSR_DRV2IOP_DOORBELL);
reg->drv2iop_doorbell_mask = MEM_BASE0(ARCMSR_DRV2IOP_DOORBELL_MASK);
reg->iop2drv_doorbell = MEM_BASE0(ARCMSR_IOP2DRV_DOORBELL);
reg->iop2drv_doorbell_mask = MEM_BASE0(ARCMSR_IOP2DRV_DOORBELL_MASK);
}
reg->message_wbuffer = MEM_BASE1(ARCMSR_MESSAGE_WBUFFER);
reg->message_rbuffer = MEM_BASE1(ARCMSR_MESSAGE_RBUFFER);
reg->message_rwbuffer = MEM_BASE1(ARCMSR_MESSAGE_RWBUFFER);
}

static void arcmsr_hbaD_assign_regAddr(struct AdapterControlBlock *acb)
{
struct MessageUnit_D *reg = acb->pmuD;

reg->chip_id = MEM_BASE0(ARCMSR_ARC1214_CHIP_ID);
reg->cpu_mem_config = MEM_BASE0(ARCMSR_ARC1214_CPU_MEMORY_CONFIGURATION);
reg->i2o_host_interrupt_mask = MEM_BASE0(ARCMSR_ARC1214_I2_HOST_INTERRUPT_MASK);
reg->sample_at_reset = MEM_BASE0(ARCMSR_ARC1214_SAMPLE_RESET);
reg->reset_request = MEM_BASE0(ARCMSR_ARC1214_RESET_REQUEST);
reg->host_int_status = MEM_BASE0(ARCMSR_ARC1214_MAIN_INTERRUPT_STATUS);
reg->pcief0_int_enable = MEM_BASE0(ARCMSR_ARC1214_PCIE_F0_INTERRUPT_ENABLE);
reg->inbound_msgaddr0 = MEM_BASE0(ARCMSR_ARC1214_INBOUND_MESSAGE0);
reg->inbound_msgaddr1 = MEM_BASE0(ARCMSR_ARC1214_INBOUND_MESSAGE1);
reg->outbound_msgaddr0 = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_MESSAGE0);
reg->outbound_msgaddr1 = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_MESSAGE1);
reg->inbound_doorbell = MEM_BASE0(ARCMSR_ARC1214_INBOUND_DOORBELL);
reg->outbound_doorbell = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_DOORBELL);
reg->outbound_doorbell_enable = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_DOORBELL_ENABLE);
reg->inboundlist_base_low = MEM_BASE0(ARCMSR_ARC1214_INBOUND_LIST_BASE_LOW);
reg->inboundlist_base_high = MEM_BASE0(ARCMSR_ARC1214_INBOUND_LIST_BASE_HIGH);
reg->inboundlist_write_pointer = MEM_BASE0(ARCMSR_ARC1214_INBOUND_LIST_WRITE_POINTER);
reg->outboundlist_base_low = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_LIST_BASE_LOW);
reg->outboundlist_base_high = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_LIST_BASE_HIGH);
reg->outboundlist_copy_pointer = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_LIST_COPY_POINTER);
reg->outboundlist_read_pointer = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_LIST_READ_POINTER);
reg->outboundlist_interrupt_cause = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_INTERRUPT_CAUSE);
reg->outboundlist_interrupt_enable = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_INTERRUPT_ENABLE);
reg->message_wbuffer = MEM_BASE0(ARCMSR_ARC1214_MESSAGE_WBUFFER);
reg->message_rbuffer = MEM_BASE0(ARCMSR_ARC1214_MESSAGE_RBUFFER);
reg->msgcode_rwbuffer = MEM_BASE0(ARCMSR_ARC1214_MESSAGE_RWBUFFER);
}

static bool arcmsr_alloc_io_queue(struct AdapterControlBlock *acb)
{
bool rtn = true;

@@ -585,88 +641,39 @@ static bool arcmsr_alloc_io_queue(struct AdapterControlBlock *acb)

switch (acb->adapter_type) {
case ACB_ADAPTER_TYPE_B: {
struct MessageUnit_B *reg;
acb->roundup_ccbsize = roundup(sizeof(struct MessageUnit_B), 32);
dma_coherent = dma_alloc_coherent(&pdev->dev,
acb->roundup_ccbsize,
&dma_coherent_handle,
GFP_KERNEL);
acb->ioqueue_size = roundup(sizeof(struct MessageUnit_B), 32);
dma_coherent = dma_alloc_coherent(&pdev->dev, acb->ioqueue_size,
&dma_coherent_handle, GFP_KERNEL);
if (!dma_coherent) {
pr_notice("arcmsr%d: DMA allocation failed\n", acb->host->host_no);
return false;
}
acb->dma_coherent_handle2 = dma_coherent_handle;
acb->dma_coherent2 = dma_coherent;
reg = (struct MessageUnit_B *)dma_coherent;
acb->pmuB = reg;
if (acb->pdev->device == PCI_DEVICE_ID_ARECA_1203) {
reg->drv2iop_doorbell = MEM_BASE0(ARCMSR_DRV2IOP_DOORBELL_1203);
reg->drv2iop_doorbell_mask = MEM_BASE0(ARCMSR_DRV2IOP_DOORBELL_MASK_1203);
reg->iop2drv_doorbell = MEM_BASE0(ARCMSR_IOP2DRV_DOORBELL_1203);
reg->iop2drv_doorbell_mask = MEM_BASE0(ARCMSR_IOP2DRV_DOORBELL_MASK_1203);
} else {
reg->drv2iop_doorbell = MEM_BASE0(ARCMSR_DRV2IOP_DOORBELL);
reg->drv2iop_doorbell_mask = MEM_BASE0(ARCMSR_DRV2IOP_DOORBELL_MASK);
reg->iop2drv_doorbell = MEM_BASE0(ARCMSR_IOP2DRV_DOORBELL);
reg->iop2drv_doorbell_mask = MEM_BASE0(ARCMSR_IOP2DRV_DOORBELL_MASK);
}
reg->message_wbuffer = MEM_BASE1(ARCMSR_MESSAGE_WBUFFER);
reg->message_rbuffer = MEM_BASE1(ARCMSR_MESSAGE_RBUFFER);
reg->message_rwbuffer = MEM_BASE1(ARCMSR_MESSAGE_RWBUFFER);
acb->pmuB = (struct MessageUnit_B *)dma_coherent;
arcmsr_hbaB_assign_regAddr(acb);
}
break;
case ACB_ADAPTER_TYPE_D: {
struct MessageUnit_D *reg;

acb->roundup_ccbsize = roundup(sizeof(struct MessageUnit_D), 32);
dma_coherent = dma_alloc_coherent(&pdev->dev,
acb->roundup_ccbsize,
&dma_coherent_handle,
GFP_KERNEL);
acb->ioqueue_size = roundup(sizeof(struct MessageUnit_D), 32);
dma_coherent = dma_alloc_coherent(&pdev->dev, acb->ioqueue_size,
&dma_coherent_handle, GFP_KERNEL);
if (!dma_coherent) {
pr_notice("arcmsr%d: DMA allocation failed\n", acb->host->host_no);
return false;
}
acb->dma_coherent_handle2 = dma_coherent_handle;
acb->dma_coherent2 = dma_coherent;
reg = (struct MessageUnit_D *)dma_coherent;
acb->pmuD = reg;
reg->chip_id = MEM_BASE0(ARCMSR_ARC1214_CHIP_ID);
reg->cpu_mem_config = MEM_BASE0(ARCMSR_ARC1214_CPU_MEMORY_CONFIGURATION);
reg->i2o_host_interrupt_mask = MEM_BASE0(ARCMSR_ARC1214_I2_HOST_INTERRUPT_MASK);
reg->sample_at_reset = MEM_BASE0(ARCMSR_ARC1214_SAMPLE_RESET);
reg->reset_request = MEM_BASE0(ARCMSR_ARC1214_RESET_REQUEST);
reg->host_int_status = MEM_BASE0(ARCMSR_ARC1214_MAIN_INTERRUPT_STATUS);
reg->pcief0_int_enable = MEM_BASE0(ARCMSR_ARC1214_PCIE_F0_INTERRUPT_ENABLE);
reg->inbound_msgaddr0 = MEM_BASE0(ARCMSR_ARC1214_INBOUND_MESSAGE0);
reg->inbound_msgaddr1 = MEM_BASE0(ARCMSR_ARC1214_INBOUND_MESSAGE1);
reg->outbound_msgaddr0 = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_MESSAGE0);
reg->outbound_msgaddr1 = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_MESSAGE1);
reg->inbound_doorbell = MEM_BASE0(ARCMSR_ARC1214_INBOUND_DOORBELL);
reg->outbound_doorbell = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_DOORBELL);
reg->outbound_doorbell_enable = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_DOORBELL_ENABLE);
reg->inboundlist_base_low = MEM_BASE0(ARCMSR_ARC1214_INBOUND_LIST_BASE_LOW);
reg->inboundlist_base_high = MEM_BASE0(ARCMSR_ARC1214_INBOUND_LIST_BASE_HIGH);
reg->inboundlist_write_pointer = MEM_BASE0(ARCMSR_ARC1214_INBOUND_LIST_WRITE_POINTER);
reg->outboundlist_base_low = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_LIST_BASE_LOW);
reg->outboundlist_base_high = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_LIST_BASE_HIGH);
reg->outboundlist_copy_pointer = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_LIST_COPY_POINTER);
reg->outboundlist_read_pointer = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_LIST_READ_POINTER);
reg->outboundlist_interrupt_cause = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_INTERRUPT_CAUSE);
reg->outboundlist_interrupt_enable = MEM_BASE0(ARCMSR_ARC1214_OUTBOUND_INTERRUPT_ENABLE);
reg->message_wbuffer = MEM_BASE0(ARCMSR_ARC1214_MESSAGE_WBUFFER);
reg->message_rbuffer = MEM_BASE0(ARCMSR_ARC1214_MESSAGE_RBUFFER);
reg->msgcode_rwbuffer = MEM_BASE0(ARCMSR_ARC1214_MESSAGE_RWBUFFER);
acb->pmuD = (struct MessageUnit_D *)dma_coherent;
arcmsr_hbaD_assign_regAddr(acb);
}
break;
case ACB_ADAPTER_TYPE_E: {
uint32_t completeQ_size;
completeQ_size = sizeof(struct deliver_completeQ) * ARCMSR_MAX_HBE_DONEQUEUE + 128;
acb->roundup_ccbsize = roundup(completeQ_size, 32);
dma_coherent = dma_alloc_coherent(&pdev->dev,
acb->roundup_ccbsize,
&dma_coherent_handle,
GFP_KERNEL);
acb->ioqueue_size = roundup(completeQ_size, 32);
dma_coherent = dma_alloc_coherent(&pdev->dev, acb->ioqueue_size,
&dma_coherent_handle, GFP_KERNEL);
if (!dma_coherent){
pr_notice("arcmsr%d: DMA allocation failed\n", acb->host->host_no);
return false;

@@ -674,7 +681,7 @@ static bool arcmsr_alloc_io_queue(struct AdapterControlBlock *acb)
acb->dma_coherent_handle2 = dma_coherent_handle;
acb->dma_coherent2 = dma_coherent;
acb->pCompletionQ = dma_coherent;
acb->completionQ_entry = acb->roundup_ccbsize / sizeof(struct deliver_completeQ);
acb->completionQ_entry = acb->ioqueue_size / sizeof(struct deliver_completeQ);
acb->doneq_index = 0;
}
break;

@@ -691,11 +698,11 @@ static int arcmsr_alloc_ccb_pool(struct AdapterControlBlock *acb)
dma_addr_t dma_coherent_handle;
struct CommandControlBlock *ccb_tmp;
int i = 0, j = 0;
dma_addr_t cdb_phyaddr;
unsigned long cdb_phyaddr, next_ccb_phy;
unsigned long roundup_ccbsize;
unsigned long max_xfer_len;
unsigned long max_sg_entrys;
uint32_t firm_config_version;
uint32_t firm_config_version, curr_phy_upper32;

for (i = 0; i < ARCMSR_MAX_TARGETID; i++)
for (j = 0; j < ARCMSR_MAX_TARGETLUN; j++)

@@ -712,6 +719,7 @@ static int arcmsr_alloc_ccb_pool(struct AdapterControlBlock *acb)
acb->host->sg_tablesize = max_sg_entrys;
roundup_ccbsize = roundup(sizeof(struct CommandControlBlock) + (max_sg_entrys - 1) * sizeof(struct SG64ENTRY), 32);
acb->uncache_size = roundup_ccbsize * acb->maxFreeCCB;
acb->uncache_size += acb->ioqueue_size;
dma_coherent = dma_alloc_coherent(&pdev->dev, acb->uncache_size, &dma_coherent_handle, GFP_KERNEL);
if(!dma_coherent){
printk(KERN_NOTICE "arcmsr%d: dma_alloc_coherent got error\n", acb->host->host_no);

@@ -722,9 +730,10 @@ static int arcmsr_alloc_ccb_pool(struct AdapterControlBlock *acb)
memset(dma_coherent, 0, acb->uncache_size);
acb->ccbsize = roundup_ccbsize;
ccb_tmp = dma_coherent;
curr_phy_upper32 = upper_32_bits(dma_coherent_handle);
acb->vir2phy_offset = (unsigned long)dma_coherent - (unsigned long)dma_coherent_handle;
for(i = 0; i < acb->maxFreeCCB; i++){
cdb_phyaddr = dma_coherent_handle + offsetof(struct CommandControlBlock, arcmsr_cdb);
cdb_phyaddr = (unsigned long)dma_coherent_handle + offsetof(struct CommandControlBlock, arcmsr_cdb);
switch (acb->adapter_type) {
case ACB_ADAPTER_TYPE_A:
case ACB_ADAPTER_TYPE_B:

@@ -740,10 +749,34 @@ static int arcmsr_alloc_ccb_pool(struct AdapterControlBlock *acb)
ccb_tmp->acb = acb;
ccb_tmp->smid = (u32)i << 16;
INIT_LIST_HEAD(&ccb_tmp->list);
list_add_tail(&ccb_tmp->list, &acb->ccb_free_list);
next_ccb_phy = dma_coherent_handle + roundup_ccbsize;
if (upper_32_bits(next_ccb_phy) != curr_phy_upper32) {
acb->maxFreeCCB = i;
acb->host->can_queue = i;
break;
}
else
list_add_tail(&ccb_tmp->list, &acb->ccb_free_list);
ccb_tmp = (struct CommandControlBlock *)((unsigned long)ccb_tmp + roundup_ccbsize);
dma_coherent_handle = dma_coherent_handle + roundup_ccbsize;
dma_coherent_handle = next_ccb_phy;
}
acb->dma_coherent_handle2 = dma_coherent_handle;
acb->dma_coherent2 = ccb_tmp;
switch (acb->adapter_type) {
case ACB_ADAPTER_TYPE_B:
acb->pmuB = (struct MessageUnit_B *)acb->dma_coherent2;
arcmsr_hbaB_assign_regAddr(acb);
break;
case ACB_ADAPTER_TYPE_D:
acb->pmuD = (struct MessageUnit_D *)acb->dma_coherent2;
arcmsr_hbaD_assign_regAddr(acb);
break;
case ACB_ADAPTER_TYPE_E:
acb->pCompletionQ = acb->dma_coherent2;
acb->completionQ_entry = acb->ioqueue_size / sizeof(struct deliver_completeQ);
acb->doneq_index = 0;
break;
}
return 0;
}

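arcmsr_alloc_ccb_pool() above now stops handing out CCBs as soon as the next block would land in a different upper-32-bit region, trimming maxFreeCCB and can_queue to match. A standalone sketch of that boundary check, with illustrative names rather than the driver's code:

#include <linux/kernel.h>	/* upper_32_bits() */
#include <linux/types.h>

/* Count how many blksz-sized blocks fit before bits 63:32 of the bus
 * address change; the hardware identifies a block by its low 32 bits only. */
static unsigned int usable_blocks(dma_addr_t base, size_t blksz, unsigned int max)
{
	u32 hi = upper_32_bits(base);
	unsigned int i;

	for (i = 0; i < max; i++) {
		if (upper_32_bits(base + blksz) != hi)
			break;		/* block i would leave the 4 GiB window */
		base += blksz;
	}
	return i;			/* blocks 0 .. i-1 are usable */
}
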
@@ -894,6 +927,31 @@ static void arcmsr_init_set_datetime_timer(struct AdapterControlBlock *pacb)
add_timer(&pacb->refresh_timer);
}

static int arcmsr_set_dma_mask(struct AdapterControlBlock *acb)
{
struct pci_dev *pcidev = acb->pdev;

if (IS_DMA64) {
if (((acb->adapter_type == ACB_ADAPTER_TYPE_A) && !dma_mask_64) ||
dma_set_mask(&pcidev->dev, DMA_BIT_MASK(64)))
goto dma32;
if (dma_set_coherent_mask(&pcidev->dev, DMA_BIT_MASK(64)) ||
dma_set_mask_and_coherent(&pcidev->dev, DMA_BIT_MASK(64))) {
printk("arcmsr: set DMA 64 mask failed\n");
return -ENXIO;
}
} else {
dma32:
if (dma_set_mask(&pcidev->dev, DMA_BIT_MASK(32)) ||
dma_set_coherent_mask(&pcidev->dev, DMA_BIT_MASK(32)) ||
dma_set_mask_and_coherent(&pcidev->dev, DMA_BIT_MASK(32))) {
printk("arcmsr: set DMA 32-bit mask failed\n");
return -ENXIO;
}
}
return 0;
}

static int arcmsr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct Scsi_Host *host;

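The new arcmsr_set_dma_mask() above also honors the dma_mask_64 module parameter and an adapter-type quirk. Stripped of those, the probe-time pattern it follows is the stock 64-then-32 fallback; a minimal sketch, not the driver's exact code:

#include <linux/dma-mapping.h>
#include <linux/pci.h>

static int set_dma_mask(struct pci_dev *pdev)
{
	if (!dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)))
		return 0;			/* 64-bit DMA available */
	if (!dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
		return 0;			/* fall back to 32-bit DMA */
	return -ENXIO;				/* no usable mask */
}
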
@@ -908,22 +966,15 @@ static int arcmsr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if(!host){
goto pci_disable_dev;
}
error = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
if(error){
error = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
if(error){
printk(KERN_WARNING
"scsi%d: No suitable DMA mask available\n",
host->host_no);
goto scsi_host_release;
}
}
init_waitqueue_head(&wait_q);
bus = pdev->bus->number;
dev_fun = pdev->devfn;
acb = (struct AdapterControlBlock *) host->hostdata;
memset(acb,0,sizeof(struct AdapterControlBlock));
acb->pdev = pdev;
acb->adapter_type = id->driver_data;
if (arcmsr_set_dma_mask(acb))
goto scsi_host_release;
acb->host = host;
host->max_lun = ARCMSR_MAX_TARGETLUN;
host->max_id = ARCMSR_MAX_TARGETID; /*16:8*/

@@ -953,7 +1004,6 @@ static int arcmsr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
ACB_F_MESSAGE_WQBUFFER_READED);
acb->acb_flags &= ~ACB_F_SCSISTOPADAPTER;
INIT_LIST_HEAD(&acb->ccb_free_list);
acb->adapter_type = id->driver_data;
error = arcmsr_remap_pciregion(acb);
if(!error){
goto pci_release_regs;

@@ -965,9 +1015,10 @@ static int arcmsr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if(!error){
goto free_hbb_mu;
}
arcmsr_free_io_queue(acb);
error = arcmsr_alloc_ccb_pool(acb);
if(error){
goto free_hbb_mu;
goto unmap_pci_region;
}
error = scsi_add_host(host, &pdev->dev);
if(error){

@@ -995,8 +1046,9 @@ static int arcmsr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
scsi_remove_host(host);
free_ccb_pool:
arcmsr_free_ccb_pool(acb);
goto unmap_pci_region;
free_hbb_mu:
arcmsr_free_mu(acb);
arcmsr_free_io_queue(acb);
unmap_pci_region:
arcmsr_unmap_pciregion(acb);
pci_release_regs:

@@ -1042,7 +1094,6 @@ static int arcmsr_suspend(struct pci_dev *pdev, pm_message_t state)

static int arcmsr_resume(struct pci_dev *pdev)
{
int error;
struct Scsi_Host *host = pci_get_drvdata(pdev);
struct AdapterControlBlock *acb =
(struct AdapterControlBlock *)host->hostdata;

@@ -1054,24 +1105,30 @@ static int arcmsr_resume(struct pci_dev *pdev)
pr_warn("%s: pci_enable_device error\n", __func__);
return -ENODEV;
}
error = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
if (error) {
error = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
if (error) {
pr_warn("scsi%d: No suitable DMA mask available\n",
host->host_no);
goto controller_unregister;
}
}
if (arcmsr_set_dma_mask(acb))
goto controller_unregister;
pci_set_master(pdev);
if (arcmsr_request_irq(pdev, acb) == FAILED)
goto controller_stop;
if (acb->adapter_type == ACB_ADAPTER_TYPE_E) {
switch (acb->adapter_type) {
case ACB_ADAPTER_TYPE_B: {
struct MessageUnit_B *reg = acb->pmuB;
uint32_t i;
for (i = 0; i < ARCMSR_MAX_HBB_POSTQUEUE; i++) {
reg->post_qbuffer[i] = 0;
reg->done_qbuffer[i] = 0;
}
reg->postq_index = 0;
reg->doneq_index = 0;
break;
}
case ACB_ADAPTER_TYPE_E:
writel(0, &acb->pmuE->host_int_status);
writel(ARCMSR_HBEMU_DOORBELL_SYNC, &acb->pmuE->iobound_doorbell);
acb->in_doorbell = 0;
acb->out_doorbell = 0;
acb->doneq_index = 0;
break;
}
arcmsr_iop_init(acb);
arcmsr_init_get_devmap_timer(acb);

@@ -1351,10 +1408,12 @@ static void arcmsr_drain_donequeue(struct AdapterControlBlock *acb, struct Comma
static void arcmsr_done4abort_postqueue(struct AdapterControlBlock *acb)
{
int i = 0;
uint32_t flag_ccb, ccb_cdb_phy;
uint32_t flag_ccb;
struct ARCMSR_CDB *pARCMSR_CDB;
bool error;
struct CommandControlBlock *pCCB;
unsigned long ccb_cdb_phy, cdb_phy_hipart;

switch (acb->adapter_type) {

case ACB_ADAPTER_TYPE_A: {

@@ -1366,7 +1425,10 @@ static void arcmsr_done4abort_postqueue(struct AdapterControlBlock *acb)
writel(outbound_intstatus, &reg->outbound_intstatus);/*clear interrupt*/
while(((flag_ccb = readl(&reg->outbound_queueport)) != 0xFFFFFFFF)
&& (i++ < acb->maxOutstanding)) {
pARCMSR_CDB = (struct ARCMSR_CDB *)(acb->vir2phy_offset + (flag_ccb << 5));/*frame must be 32 bytes aligned*/
ccb_cdb_phy = (flag_ccb << 5) & 0xffffffff;
if (acb->cdb_phyadd_hipart)
ccb_cdb_phy = ccb_cdb_phy | acb->cdb_phyadd_hipart;
pARCMSR_CDB = (struct ARCMSR_CDB *)(acb->vir2phy_offset + ccb_cdb_phy);
pCCB = container_of(pARCMSR_CDB, struct CommandControlBlock, arcmsr_cdb);
error = (flag_ccb & ARCMSR_CCBREPLY_FLAG_ERROR_MODE0) ? true : false;
arcmsr_drain_donequeue(acb, pCCB, error);

@@ -1382,7 +1444,10 @@ static void arcmsr_done4abort_postqueue(struct AdapterControlBlock *acb)
flag_ccb = reg->done_qbuffer[i];
if (flag_ccb != 0) {
reg->done_qbuffer[i] = 0;
pARCMSR_CDB = (struct ARCMSR_CDB *)(acb->vir2phy_offset+(flag_ccb << 5));/*frame must be 32 bytes aligned*/
ccb_cdb_phy = (flag_ccb << 5) & 0xffffffff;
if (acb->cdb_phyadd_hipart)
ccb_cdb_phy = ccb_cdb_phy | acb->cdb_phyadd_hipart;
pARCMSR_CDB = (struct ARCMSR_CDB *)(acb->vir2phy_offset + ccb_cdb_phy);
pCCB = container_of(pARCMSR_CDB, struct CommandControlBlock, arcmsr_cdb);
error = (flag_ccb & ARCMSR_CCBREPLY_FLAG_ERROR_MODE0) ? true : false;
arcmsr_drain_donequeue(acb, pCCB, error);

@@ -1399,7 +1464,9 @@ static void arcmsr_done4abort_postqueue(struct AdapterControlBlock *acb)
/*need to do*/
flag_ccb = readl(&reg->outbound_queueport_low);
ccb_cdb_phy = (flag_ccb & 0xFFFFFFF0);
pARCMSR_CDB = (struct ARCMSR_CDB *)(acb->vir2phy_offset+ccb_cdb_phy);/*frame must be 32 bytes aligned*/
if (acb->cdb_phyadd_hipart)
ccb_cdb_phy = ccb_cdb_phy | acb->cdb_phyadd_hipart;
pARCMSR_CDB = (struct ARCMSR_CDB *)(acb->vir2phy_offset + ccb_cdb_phy);
pCCB = container_of(pARCMSR_CDB, struct CommandControlBlock, arcmsr_cdb);
error = (flag_ccb & ARCMSR_CCBREPLY_FLAG_ERROR_MODE1) ? true : false;
arcmsr_drain_donequeue(acb, pCCB, error);

@@ -1427,9 +1494,13 @@ static void arcmsr_done4abort_postqueue(struct AdapterControlBlock *acb)
((toggle ^ 0x4000) + 1);
doneq_index = pmu->doneq_index;
spin_unlock_irqrestore(&acb->doneq_lock, flags);
cdb_phy_hipart = pmu->done_qbuffer[doneq_index &
0xFFF].addressHigh;
addressLow = pmu->done_qbuffer[doneq_index &
0xFFF].addressLow;
ccb_cdb_phy = (addressLow & 0xFFFFFFF0);
if (acb->cdb_phyadd_hipart)
ccb_cdb_phy = ccb_cdb_phy | acb->cdb_phyadd_hipart;
pARCMSR_CDB = (struct ARCMSR_CDB *)
(acb->vir2phy_offset + ccb_cdb_phy);
pCCB = container_of(pARCMSR_CDB,

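These abort and done-queue hunks all repeat one idiom: rebuild the full CCB bus address from the 32-bit completion token plus the cached cdb_phyadd_hipart, then convert it to a kernel pointer through vir2phy_offset. Sketched standalone below, with the A/B-type 32-byte frame shift; names are illustrative, not the driver's.

#include <linux/types.h>

static void *ccb_from_token(unsigned long vir2phy_offset, u64 hipart, u32 token)
{
	unsigned long phys = ((unsigned long)token << 5) & 0xffffffff;

	if (hipart)
		phys |= hipart;		/* restore bits 63:32 of the pool base */
	return (void *)(vir2phy_offset + phys);	/* bus address -> virtual */
}
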
@@ -1506,7 +1577,6 @@ static void arcmsr_free_pcidev(struct AdapterControlBlock *acb)
pdev = acb->pdev;
arcmsr_free_irq(pdev, acb);
arcmsr_free_ccb_pool(acb);
arcmsr_free_mu(acb);
arcmsr_unmap_pciregion(acb);
pci_release_regions(pdev);
scsi_host_put(host);

@@ -1564,7 +1634,6 @@ static void arcmsr_remove(struct pci_dev *pdev)
}
arcmsr_free_irq(pdev, acb);
arcmsr_free_ccb_pool(acb);
arcmsr_free_mu(acb);
arcmsr_unmap_pciregion(acb);
pci_release_regions(pdev);
scsi_host_put(host);

@@ -1749,12 +1818,8 @@ static void arcmsr_post_ccb(struct AdapterControlBlock *acb, struct CommandContr

arc_cdb_size = (ccb->arc_cdb_size > 0x300) ? 0x300 : ccb->arc_cdb_size;
ccb_post_stamp = (cdb_phyaddr | ((arc_cdb_size - 1) >> 6) | 1);
if (acb->cdb_phyaddr_hi32) {
writel(acb->cdb_phyaddr_hi32, &phbcmu->inbound_queueport_high);
writel(ccb_post_stamp, &phbcmu->inbound_queueport_low);
} else {
writel(ccb_post_stamp, &phbcmu->inbound_queueport_low);
}
writel(upper_32_bits(ccb->cdb_phyaddr), &phbcmu->inbound_queueport_high);
writel(ccb_post_stamp, &phbcmu->inbound_queueport_low);
}
break;
case ACB_ADAPTER_TYPE_D: {

@@ -1767,8 +1832,8 @@ static void arcmsr_post_ccb(struct AdapterControlBlock *acb, struct CommandContr
spin_lock_irqsave(&acb->postq_lock, flags);
postq_index = pmu->postq_index;
pinbound_srb = (struct InBound_SRB *)&(pmu->post_qbuffer[postq_index & 0xFF]);
pinbound_srb->addressHigh = dma_addr_hi32(cdb_phyaddr);
pinbound_srb->addressLow = dma_addr_lo32(cdb_phyaddr);
pinbound_srb->addressHigh = upper_32_bits(ccb->cdb_phyaddr);
pinbound_srb->addressLow = cdb_phyaddr;
pinbound_srb->length = ccb->arc_cdb_size >> 2;
arcmsr_cdb->msgContext = dma_addr_lo32(cdb_phyaddr);
toggle = postq_index & 0x4000;

@@ -2304,8 +2369,13 @@ static void arcmsr_hbaA_postqueue_isr(struct AdapterControlBlock *acb)
struct ARCMSR_CDB *pARCMSR_CDB;
struct CommandControlBlock *pCCB;
bool error;
unsigned long cdb_phy_addr;

while ((flag_ccb = readl(&reg->outbound_queueport)) != 0xFFFFFFFF) {
pARCMSR_CDB = (struct ARCMSR_CDB *)(acb->vir2phy_offset + (flag_ccb << 5));/*frame must be 32 bytes aligned*/
cdb_phy_addr = (flag_ccb << 5) & 0xffffffff;
if (acb->cdb_phyadd_hipart)
cdb_phy_addr = cdb_phy_addr | acb->cdb_phyadd_hipart;
pARCMSR_CDB = (struct ARCMSR_CDB *)(acb->vir2phy_offset + cdb_phy_addr);
pCCB = container_of(pARCMSR_CDB, struct CommandControlBlock, arcmsr_cdb);
error = (flag_ccb & ARCMSR_CCBREPLY_FLAG_ERROR_MODE0) ? true : false;
arcmsr_drain_donequeue(acb, pCCB, error);

@@ -2319,13 +2389,18 @@ static void arcmsr_hbaB_postqueue_isr(struct AdapterControlBlock *acb)
struct ARCMSR_CDB *pARCMSR_CDB;
struct CommandControlBlock *pCCB;
bool error;
unsigned long cdb_phy_addr;

index = reg->doneq_index;
while ((flag_ccb = reg->done_qbuffer[index]) != 0) {
reg->done_qbuffer[index] = 0;
pARCMSR_CDB = (struct ARCMSR_CDB *)(acb->vir2phy_offset+(flag_ccb << 5));/*frame must be 32 bytes aligned*/
cdb_phy_addr = (flag_ccb << 5) & 0xffffffff;
if (acb->cdb_phyadd_hipart)
cdb_phy_addr = cdb_phy_addr | acb->cdb_phyadd_hipart;
pARCMSR_CDB = (struct ARCMSR_CDB *)(acb->vir2phy_offset + cdb_phy_addr);
pCCB = container_of(pARCMSR_CDB, struct CommandControlBlock, arcmsr_cdb);
error = (flag_ccb & ARCMSR_CCBREPLY_FLAG_ERROR_MODE0) ? true : false;
arcmsr_drain_donequeue(acb, pCCB, error);
reg->done_qbuffer[index] = 0;
index++;
index %= ARCMSR_MAX_HBB_POSTQUEUE;
reg->doneq_index = index;

@@ -2337,7 +2412,8 @@ static void arcmsr_hbaC_postqueue_isr(struct AdapterControlBlock *acb)
struct MessageUnit_C __iomem *phbcmu;
struct ARCMSR_CDB *arcmsr_cdb;
struct CommandControlBlock *ccb;
uint32_t flag_ccb, ccb_cdb_phy, throttling = 0;
uint32_t flag_ccb, throttling = 0;
unsigned long ccb_cdb_phy;
int error;

phbcmu = acb->pmuC;

@@ -2347,6 +2423,8 @@ static void arcmsr_hbaC_postqueue_isr(struct AdapterControlBlock *acb)
while ((flag_ccb = readl(&phbcmu->outbound_queueport_low)) !=
0xFFFFFFFF) {
ccb_cdb_phy = (flag_ccb & 0xFFFFFFF0);
if (acb->cdb_phyadd_hipart)
ccb_cdb_phy = ccb_cdb_phy | acb->cdb_phyadd_hipart;
arcmsr_cdb = (struct ARCMSR_CDB *)(acb->vir2phy_offset
+ ccb_cdb_phy);
ccb = container_of(arcmsr_cdb, struct CommandControlBlock,

@@ -2367,12 +2445,12 @@ static void arcmsr_hbaC_postqueue_isr(struct AdapterControlBlock *acb)
static void arcmsr_hbaD_postqueue_isr(struct AdapterControlBlock *acb)
{
u32 outbound_write_pointer, doneq_index, index_stripped, toggle;
uint32_t addressLow, ccb_cdb_phy;
uint32_t addressLow;
int error;
struct MessageUnit_D *pmu;
struct ARCMSR_CDB *arcmsr_cdb;
struct CommandControlBlock *ccb;
unsigned long flags;
unsigned long flags, ccb_cdb_phy, cdb_phy_hipart;

spin_lock_irqsave(&acb->doneq_lock, flags);
pmu = acb->pmuD;

@@ -2386,9 +2464,13 @@ static void arcmsr_hbaD_postqueue_isr(struct AdapterControlBlock *acb)
pmu->doneq_index = index_stripped ? (index_stripped | toggle) :
((toggle ^ 0x4000) + 1);
doneq_index = pmu->doneq_index;
cdb_phy_hipart = pmu->done_qbuffer[doneq_index &
0xFFF].addressHigh;
addressLow = pmu->done_qbuffer[doneq_index &
0xFFF].addressLow;
ccb_cdb_phy = (addressLow & 0xFFFFFFF0);
if (acb->cdb_phyadd_hipart)
ccb_cdb_phy = ccb_cdb_phy | acb->cdb_phyadd_hipart;
arcmsr_cdb = (struct ARCMSR_CDB *)(acb->vir2phy_offset
+ ccb_cdb_phy);
ccb = container_of(arcmsr_cdb,

@@ -3229,7 +3311,9 @@ static int arcmsr_hbaA_polling_ccbdone(struct AdapterControlBlock *acb,
uint32_t flag_ccb, outbound_intstatus, poll_ccb_done = 0, poll_count = 0;
int rtn;
bool error;
polling_hba_ccb_retry:
unsigned long ccb_cdb_phy;

polling_hba_ccb_retry:
poll_count++;
outbound_intstatus = readl(&reg->outbound_intstatus) & acb->outbound_int_enable;
writel(outbound_intstatus, &reg->outbound_intstatus);/*clear interrupt*/

@@ -3247,7 +3331,10 @@ static int arcmsr_hbaA_polling_ccbdone(struct AdapterControlBlock *acb,
goto polling_hba_ccb_retry;
}
}
arcmsr_cdb = (struct ARCMSR_CDB *)(acb->vir2phy_offset + (flag_ccb << 5));
ccb_cdb_phy = (flag_ccb << 5) & 0xffffffff;
if (acb->cdb_phyadd_hipart)
ccb_cdb_phy = ccb_cdb_phy | acb->cdb_phyadd_hipart;
arcmsr_cdb = (struct ARCMSR_CDB *)(acb->vir2phy_offset + ccb_cdb_phy);
ccb = container_of(arcmsr_cdb, struct CommandControlBlock, arcmsr_cdb);
poll_ccb_done |= (ccb == poll_ccb) ? 1 : 0;
if ((ccb->acb != acb) || (ccb->startdone != ARCMSR_CCB_START)) {

@@ -3285,8 +3372,9 @@ static int arcmsr_hbaB_polling_ccbdone(struct AdapterControlBlock *acb,
uint32_t flag_ccb, poll_ccb_done = 0, poll_count = 0;
int index, rtn;
bool error;
polling_hbb_ccb_retry:
unsigned long ccb_cdb_phy;

polling_hbb_ccb_retry:
poll_count++;
/* clear doorbell interrupt */
writel(ARCMSR_DOORBELL_INT_CLEAR_PATTERN, reg->iop2drv_doorbell);

@@ -3312,7 +3400,10 @@ static int arcmsr_hbaB_polling_ccbdone(struct AdapterControlBlock *acb,
index %= ARCMSR_MAX_HBB_POSTQUEUE;
reg->doneq_index = index;
/* check if command done with no error*/
arcmsr_cdb = (struct ARCMSR_CDB *)(acb->vir2phy_offset + (flag_ccb << 5));
ccb_cdb_phy = (flag_ccb << 5) & 0xffffffff;
if (acb->cdb_phyadd_hipart)
ccb_cdb_phy = ccb_cdb_phy | acb->cdb_phyadd_hipart;
arcmsr_cdb = (struct ARCMSR_CDB *)(acb->vir2phy_offset + ccb_cdb_phy);
ccb = container_of(arcmsr_cdb, struct CommandControlBlock, arcmsr_cdb);
poll_ccb_done |= (ccb == poll_ccb) ? 1 : 0;
if ((ccb->acb != acb) || (ccb->startdone != ARCMSR_CCB_START)) {

@@ -3345,12 +3436,14 @@ static int arcmsr_hbaC_polling_ccbdone(struct AdapterControlBlock *acb,
struct CommandControlBlock *poll_ccb)
{
struct MessageUnit_C __iomem *reg = acb->pmuC;
uint32_t flag_ccb, ccb_cdb_phy;
uint32_t flag_ccb;
struct ARCMSR_CDB *arcmsr_cdb;
bool error;
struct CommandControlBlock *pCCB;
uint32_t poll_ccb_done = 0, poll_count = 0;
int rtn;
unsigned long ccb_cdb_phy;

polling_hbc_ccb_retry:
poll_count++;
while (1) {

@@ -3369,7 +3462,9 @@ static int arcmsr_hbaC_polling_ccbdone(struct AdapterControlBlock *acb,
}
flag_ccb = readl(&reg->outbound_queueport_low);
ccb_cdb_phy = (flag_ccb & 0xFFFFFFF0);
arcmsr_cdb = (struct ARCMSR_CDB *)(acb->vir2phy_offset + ccb_cdb_phy);/*frame must be 32 bytes aligned*/
if (acb->cdb_phyadd_hipart)
ccb_cdb_phy = ccb_cdb_phy | acb->cdb_phyadd_hipart;
arcmsr_cdb = (struct ARCMSR_CDB *)(acb->vir2phy_offset + ccb_cdb_phy);
pCCB = container_of(arcmsr_cdb, struct CommandControlBlock, arcmsr_cdb);
poll_ccb_done |= (pCCB == poll_ccb) ? 1 : 0;
/* check ifcommand done with no error*/

@@ -3403,9 +3498,9 @@ static int arcmsr_hbaD_polling_ccbdone(struct AdapterControlBlock *acb,
struct CommandControlBlock *poll_ccb)
{
bool error;
uint32_t poll_ccb_done = 0, poll_count = 0, flag_ccb, ccb_cdb_phy;
uint32_t poll_ccb_done = 0, poll_count = 0, flag_ccb;
int rtn, doneq_index, index_stripped, outbound_write_pointer, toggle;
unsigned long flags;
unsigned long flags, ccb_cdb_phy, cdb_phy_hipart;
struct ARCMSR_CDB *arcmsr_cdb;
struct CommandControlBlock *pCCB;
struct MessageUnit_D *pmu = acb->pmuD;

@@ -3437,8 +3532,12 @@ static int arcmsr_hbaD_polling_ccbdone(struct AdapterControlBlock *acb,
((toggle ^ 0x4000) + 1);
doneq_index = pmu->doneq_index;
spin_unlock_irqrestore(&acb->doneq_lock, flags);
cdb_phy_hipart = pmu->done_qbuffer[doneq_index &
0xFFF].addressHigh;
flag_ccb = pmu->done_qbuffer[doneq_index & 0xFFF].addressLow;
ccb_cdb_phy = (flag_ccb & 0xFFFFFFF0);
if (acb->cdb_phyadd_hipart)
ccb_cdb_phy = ccb_cdb_phy | acb->cdb_phyadd_hipart;
arcmsr_cdb = (struct ARCMSR_CDB *)(acb->vir2phy_offset +
ccb_cdb_phy);
pCCB = container_of(arcmsr_cdb, struct CommandControlBlock,

@@ -3680,6 +3779,7 @@ static int arcmsr_iop_confirm(struct AdapterControlBlock *acb)
cdb_phyaddr = lower_32_bits(dma_coherent_handle);
cdb_phyaddr_hi32 = upper_32_bits(dma_coherent_handle);
acb->cdb_phyaddr_hi32 = cdb_phyaddr_hi32;
acb->cdb_phyadd_hipart = ((uint64_t)cdb_phyaddr_hi32) << 32;
/*
***********************************************************************
** if adapter type B, set window of "post command Q"

@@ -3744,7 +3844,6 @@ static int arcmsr_iop_confirm(struct AdapterControlBlock *acb)
}
break;
case ACB_ADAPTER_TYPE_C: {
if (cdb_phyaddr_hi32 != 0) {
struct MessageUnit_C __iomem *reg = acb->pmuC;

printk(KERN_NOTICE "arcmsr%d: cdb_phyaddr_hi32=0x%x\n",

@@ -3759,7 +3858,6 @@ static int arcmsr_iop_confirm(struct AdapterControlBlock *acb)
return 1;
}
}
}
break;
case ACB_ADAPTER_TYPE_D: {
uint32_t __iomem *rwbuffer;

@@ -3793,7 +3891,7 @@ static int arcmsr_iop_confirm(struct AdapterControlBlock *acb)
cdb_phyaddr_hi32 = (uint32_t)((dma_coherent_handle >> 16) >> 16);
writel(cdb_phyaddr, &reg->msgcode_rwbuffer[5]);
writel(cdb_phyaddr_hi32, &reg->msgcode_rwbuffer[6]);
writel(acb->roundup_ccbsize, &reg->msgcode_rwbuffer[7]);
writel(acb->ioqueue_size, &reg->msgcode_rwbuffer[7]);
writel(ARCMSR_INBOUND_MESG0_SET_CONFIG, &reg->inbound_msgaddr0);
acb->out_doorbell ^= ARCMSR_HBEMU_DRV2IOP_MESSAGE_CMD_DONE;
writel(acb->out_doorbell, &reg->iobound_doorbell);

@@ -6430,9 +6430,7 @@ bfa_fcs_vport_sm_logo_for_stop(struct bfa_fcs_vport_s *vport,
switch (event) {
case BFA_FCS_VPORT_SM_OFFLINE:
bfa_sm_send_event(vport->lps, BFA_LPS_SM_OFFLINE);
/*
* !!! fall through !!!
*/
/* fall through */

case BFA_FCS_VPORT_SM_RSP_OK:
case BFA_FCS_VPORT_SM_RSP_ERROR:

@@ -6458,9 +6456,7 @@ bfa_fcs_vport_sm_logo(struct bfa_fcs_vport_s *vport,
switch (event) {
case BFA_FCS_VPORT_SM_OFFLINE:
bfa_sm_send_event(vport->lps, BFA_LPS_SM_OFFLINE);
/*
* !!! fall through !!!
*/
/* fall through */

case BFA_FCS_VPORT_SM_RSP_OK:
case BFA_FCS_VPORT_SM_RSP_ERROR:

@@ -427,17 +427,13 @@ bfa_fcs_rport_sm_plogi(struct bfa_fcs_rport_s *rport, enum rport_event event)

case RPSM_EVENT_LOGO_RCVD:
bfa_fcs_rport_send_logo_acc(rport);
/*
* !! fall through !!
*/
/* fall through */
case RPSM_EVENT_PRLO_RCVD:
if (rport->prlo == BFA_TRUE)
bfa_fcs_rport_send_prlo_acc(rport);

bfa_fcxp_discard(rport->fcxp);
/*
* !! fall through !!
*/
/* fall through */
case RPSM_EVENT_FAILED:
if (rport->plogi_retries < BFA_FCS_RPORT_MAX_RETRIES) {
rport->plogi_retries++;

@@ -868,9 +864,7 @@ bfa_fcs_rport_sm_adisc_online(struct bfa_fcs_rport_s *rport,
* At least go offline when a PLOGI is received.
*/
bfa_fcxp_discard(rport->fcxp);
/*
* !!! fall through !!!
*/
/* fall through */

case RPSM_EVENT_FAILED:
case RPSM_EVENT_ADDRESS_CHANGE:

@@ -1056,6 +1050,7 @@ bfa_fcs_rport_sm_fc4_logosend(struct bfa_fcs_rport_s *rport,

case RPSM_EVENT_LOGO_RCVD:
bfa_fcs_rport_send_logo_acc(rport);
/* fall through */
case RPSM_EVENT_PRLO_RCVD:
if (rport->prlo == BFA_TRUE)
bfa_fcs_rport_send_prlo_acc(rport);

@@ -1144,9 +1139,7 @@ bfa_fcs_rport_sm_hcb_offline(struct bfa_fcs_rport_s *rport,
bfa_fcs_rport_send_plogiacc(rport, NULL);
break;
}
/*
* !! fall through !!
*/
/* fall through */

case RPSM_EVENT_ADDRESS_CHANGE:
if (!bfa_fcs_lport_is_online(rport->port)) {

@@ -1303,6 +1296,7 @@ bfa_fcs_rport_sm_hcb_logosend(struct bfa_fcs_rport_s *rport,

case RPSM_EVENT_LOGO_RCVD:
bfa_fcs_rport_send_logo_acc(rport);
/* fall through */
case RPSM_EVENT_PRLO_RCVD:
if (rport->prlo == BFA_TRUE)
bfa_fcs_rport_send_prlo_acc(rport);

@@ -1346,6 +1340,7 @@ bfa_fcs_rport_sm_logo_sending(struct bfa_fcs_rport_s *rport,

case RPSM_EVENT_LOGO_RCVD:
bfa_fcs_rport_send_logo_acc(rport);
/* fall through */
case RPSM_EVENT_PRLO_RCVD:
if (rport->prlo == BFA_TRUE)
bfa_fcs_rport_send_prlo_acc(rport);

@@ -978,9 +978,7 @@ bfa_iocpf_sm_enabling(struct bfa_iocpf_s *iocpf, enum iocpf_event event)

case IOCPF_E_INITFAIL:
bfa_iocpf_timer_stop(ioc);
/*
* !!! fall through !!!
*/
/* fall through */

case IOCPF_E_TIMEOUT:
writel(1, ioc->ioc_regs.ioc_sem_reg);

@@ -1056,9 +1054,7 @@ bfa_iocpf_sm_disabling(struct bfa_iocpf_s *iocpf, enum iocpf_event event)

case IOCPF_E_FAIL:
bfa_iocpf_timer_stop(ioc);
/*
* !!! fall through !!!
*/
/* fall through */

case IOCPF_E_TIMEOUT:
bfa_ioc_set_cur_ioc_fwstate(ioc, BFI_IOC_FAIL);

@@ -6007,6 +6003,7 @@ bfa_dconf_sm_final_sync(struct bfa_dconf_mod_s *dconf,
case BFA_DCONF_SM_IOCDISABLE:
case BFA_DCONF_SM_FLASH_COMP:
bfa_timer_stop(&dconf->timer);
/* fall through */
case BFA_DCONF_SM_TIMEOUT:
bfa_sm_set_state(dconf, bfa_dconf_sm_uninit);
bfa_fsm_send_event(&dconf->bfa->iocfc, IOCFC_E_DCONF_DONE);

@@ -460,11 +460,6 @@ bfad_debugfs_init(struct bfad_port_s *port)
if (!bfa_debugfs_root) {
bfa_debugfs_root = debugfs_create_dir("bfa", NULL);
atomic_set(&bfa_debugfs_port_count, 0);
if (!bfa_debugfs_root) {
printk(KERN_WARNING
"BFA debugfs root dir creation failed\n");
goto err;
}
}

/* Setup the pci_dev debugfs directory for the port */

@@ -472,12 +467,6 @@ bfad_debugfs_init(struct bfad_port_s *port)
if (!port->port_debugfs_root) {
port->port_debugfs_root =
debugfs_create_dir(name, bfa_debugfs_root);
if (!port->port_debugfs_root) {
printk(KERN_WARNING
"bfa %s: debugfs root creation failed\n",
bfad->pci_name);
goto err;
}

atomic_inc(&bfa_debugfs_port_count);

@@ -489,16 +478,9 @@ bfad_debugfs_init(struct bfad_port_s *port)
port->port_debugfs_root,
port,
file->fops);
if (!bfad->bfad_dentry_files[i]) {
printk(KERN_WARNING
"bfa %s: debugfs %s creation failed\n",
bfad->pci_name, file->name);
goto err;
}
}
}

err:
return;
}

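The bfad_debugfs hunks above (and csio_dfs_init further down) delete error handling after debugfs calls. That tracks the debugfs API contract: the creation helpers are documented as safe to call and pass along even when they fail, so callers should not branch on the result. A minimal sketch with hypothetical names:

#include <linux/debugfs.h>

static struct dentry *example_root;
static u32 example_counter;

static void example_debugfs_init(void)
{
	/* No NULL/IS_ERR checks: a failed dentry is safe to pass along. */
	example_root = debugfs_create_dir("example", NULL);
	debugfs_create_u32("counter", 0444, example_root, &example_counter);
}
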
@@ -1438,7 +1438,7 @@ static struct bnx2fc_hba *bnx2fc_hba_create(struct cnic_dev *cnic)
static struct bnx2fc_interface *
bnx2fc_interface_create(struct bnx2fc_hba *hba,
struct net_device *netdev,
enum fip_state fip_mode)
enum fip_mode fip_mode)
{
struct fcoe_ctlr_device *ctlr_dev;
struct bnx2fc_interface *interface;

@@ -577,7 +577,7 @@ static void bnx2i_free_mp_bdt(struct bnx2i_hba *hba)
hba->dummy_buffer, hba->dummy_buf_dma);
hba->dummy_buffer = NULL;
}
return;
return;
}

/**

@@ -497,7 +497,6 @@ csio_fcoe_alloc_vnp(struct csio_hw *hw, struct csio_lnode *ln)
static int
csio_fcoe_free_vnp(struct csio_hw *hw, struct csio_lnode *ln)
{
struct csio_lnode *pln;
struct csio_mb *mbp;
struct fw_fcoe_vnp_cmd *rsp;
int ret = 0;

@@ -514,8 +513,6 @@ csio_fcoe_free_vnp(struct csio_hw *hw, struct csio_lnode *ln)
goto out;
}

pln = ln->pln;

csio_fcoe_vnp_free_init_mb(ln, mbp, CSIO_MB_DEFAULT_TMO,
ln->fcf_flowid, ln->vnp_flowid,
NULL);

@@ -167,14 +167,10 @@ csio_dfs_destroy(struct csio_hw *hw)
* csio_dfs_init - Debug filesystem initialization for the module.
*
*/
static int
static void
csio_dfs_init(void)
{
csio_debugfs_root = debugfs_create_dir(KBUILD_MODNAME, NULL);
if (!csio_debugfs_root)
pr_warn("Could not create debugfs entry, continuing\n");

return 0;
}

/*

@@ -1984,15 +1984,15 @@ csio_eh_abort_handler(struct scsi_cmnd *cmnd)
/* FW successfully aborted the request */
if (host_byte(cmnd->result) == DID_REQUEUE) {
csio_info(hw,
"Aborted SCSI command to (%d:%llu) serial#:0x%lx\n",
"Aborted SCSI command to (%d:%llu) tag %u\n",
cmnd->device->id, cmnd->device->lun,
cmnd->serial_number);
cmnd->request->tag);
return SUCCESS;
} else {
csio_info(hw,
"Failed to abort SCSI command, (%d:%llu) serial#:0x%lx\n",
"Failed to abort SCSI command, (%d:%llu) tag %u\n",
cmnd->device->id, cmnd->device->lun,
cmnd->serial_number);
cmnd->request->tag);
return FAILED;
}
}

@@ -1,4 +1,4 @@
ccflags-y += -Idrivers/net/ethernet/chelsio/libcxgb
ccflags-y += -I $(srctree)/drivers/net/ethernet/chelsio/libcxgb

obj-$(CONFIG_SCSI_CXGB3_ISCSI) += libcxgbi.o cxgb3i/
obj-$(CONFIG_SCSI_CXGB4_ISCSI) += libcxgbi.o cxgb4i/

@@ -1210,7 +1210,8 @@ static void do_rx_iscsi_hdr(struct cxgbi_device *cdev, struct sk_buff *skb)
csk->skb_ulp_lhdr = skb;
cxgbi_skcb_set_flag(skb, SKCBF_RX_HDR);

if (cxgbi_skcb_tcp_seq(skb) != csk->rcv_nxt) {
if ((CHELSIO_CHIP_VERSION(lldi->adapter_type) <= CHELSIO_T5) &&
(cxgbi_skcb_tcp_seq(skb) != csk->rcv_nxt)) {
pr_info("tid %u, CPL_ISCSI_HDR, bad seq, 0x%x/0x%x.\n",
csk->tid, cxgbi_skcb_tcp_seq(skb),
csk->rcv_nxt);

@@ -2134,8 +2135,7 @@ static void *t4_uld_add(const struct cxgb4_lld_info *lldi)
cdev->itp = &cxgb4i_iscsi_transport;
cdev->owner = THIS_MODULE;

cdev->pfvf = FW_VIID_PFN_G(cxgb4_port_viid(lldi->ports[0]))
<< FW_VIID_PFN_S;
cdev->pfvf = FW_PFVF_CMD_PFN_V(lldi->pf);
pr_info("cdev 0x%p,%s, pfvf %u.\n",
cdev, lldi->ports[0]->name, cdev->pfvf);

@@ -1212,7 +1212,7 @@ scmd_get_params(struct scsi_cmnd *sc, struct scatterlist **sgl,
unsigned int *sgcnt, unsigned int *dlen,
unsigned int prot)
{
struct scsi_data_buffer *sdb = prot ? scsi_prot(sc) : scsi_out(sc);
struct scsi_data_buffer *sdb = prot ? scsi_prot(sc) : &sc->sdb;

*sgl = sdb->table.sgl;
*sgcnt = sdb->table.nents;

@@ -1428,8 +1428,7 @@ static void task_release_itt(struct iscsi_task *task, itt_t hdr_itt)
log_debug(1 << CXGBI_DBG_DDP,
"cdev 0x%p, task 0x%p, release tag 0x%x.\n",
cdev, task, tag);
if (sc &&
(scsi_bidi_cmnd(sc) || sc->sc_data_direction == DMA_FROM_DEVICE) &&
if (sc && sc->sc_data_direction == DMA_FROM_DEVICE &&
cxgbi_ppm_is_ddp_tag(ppm, tag)) {
struct cxgbi_task_data *tdata = iscsi_task_cxgbi_data(task);
struct cxgbi_task_tag_info *ttinfo = &tdata->ttinfo;

@@ -1461,9 +1460,7 @@ static int task_reserve_itt(struct iscsi_task *task, itt_t *hdr_itt)
u32 tag = 0;
int err = -EINVAL;

if (sc &&
(scsi_bidi_cmnd(sc) || sc->sc_data_direction == DMA_FROM_DEVICE)
) {
if (sc && sc->sc_data_direction == DMA_FROM_DEVICE) {
struct cxgbi_task_data *tdata = iscsi_task_cxgbi_data(task);
struct cxgbi_task_tag_info *ttinfo = &tdata->ttinfo;

@@ -1897,7 +1894,7 @@ int cxgbi_conn_alloc_pdu(struct iscsi_task *task, u8 opcode)
if (SKB_MAX_HEAD(cdev->skb_tx_rsvd) > (512 * MAX_SKB_FRAGS) &&
(opcode == ISCSI_OP_SCSI_DATA_OUT ||
(opcode == ISCSI_OP_SCSI_CMD &&
(scsi_bidi_cmnd(sc) || sc->sc_data_direction == DMA_TO_DEVICE))))
sc->sc_data_direction == DMA_TO_DEVICE)))
/* data could goes into skb head */
headroom += min_t(unsigned int,
SKB_MAX_HEAD(cdev->skb_tx_rsvd),

@@ -1972,7 +1969,7 @@ int cxgbi_conn_init_pdu(struct iscsi_task *task, unsigned int offset,
return 0;

if (task->sc) {
struct scsi_data_buffer *sdb = scsi_out(task->sc);
struct scsi_data_buffer *sdb = &task->sc->sdb;
struct scatterlist *sg = NULL;
int err;

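With bidirectional commands removed from the midlayer (the theme of the libcxgbi hunks above), a scsi_cmnd owns exactly one data buffer, so scsi_out()/scsi_in() style accessors collapse to &sc->sdb and the scsi_bidi_cmnd() tests disappear. A sketch of the post-bidi accessor pattern, with a hypothetical helper name:

#include <scsi/scsi_cmnd.h>

static struct scatterlist *cmd_sglist(struct scsi_cmnd *sc)
{
	if (sc->sc_data_direction == DMA_NONE)
		return NULL;		/* no data phase */
	return sc->sdb.table.sgl;	/* the command's only data buffer */
}
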
@@ -334,7 +334,8 @@ int cxlflash_afu_sync(struct afu *afu, ctx_hndl_t c, res_hndl_t r, u8 mode);
void cxlflash_list_init(void);
void cxlflash_term_global_luns(void);
void cxlflash_free_errpage(void);
int cxlflash_ioctl(struct scsi_device *sdev, int cmd, void __user *arg);
int cxlflash_ioctl(struct scsi_device *sdev, unsigned int cmd,
void __user *arg);
void cxlflash_stop_term_user_contexts(struct cxlflash_cfg *cfg);
int cxlflash_mark_contexts_error(struct cxlflash_cfg *cfg);
void cxlflash_term_local_luns(struct cxlflash_cfg *cfg);

@@ -3282,7 +3282,7 @@ static int cxlflash_chr_open(struct inode *inode, struct file *file)
*
* Return: A string identifying the decoded host ioctl.
*/
static char *decode_hioctl(int cmd)
static char *decode_hioctl(unsigned int cmd)
{
switch (cmd) {
case HT_CXLFLASH_LUN_PROVISION:

@@ -1924,7 +1924,7 @@ static int cxlflash_disk_verify(struct scsi_device *sdev,
*
* Return: A string identifying the decoded ioctl.
*/
static char *decode_ioctl(int cmd)
static char *decode_ioctl(unsigned int cmd)
{
switch (cmd) {
case DK_CXLFLASH_ATTACH:

@@ -2051,7 +2051,7 @@ static int cxlflash_disk_direct_open(struct scsi_device *sdev, void *arg)
*
* Return: 0 on success, -errno on failure
*/
static int ioctl_common(struct scsi_device *sdev, int cmd)
static int ioctl_common(struct scsi_device *sdev, unsigned int cmd)
{
struct cxlflash_cfg *cfg = shost_priv(sdev->host);
struct device *dev = &cfg->dev->dev;

@@ -2096,7 +2096,7 @@ static int ioctl_common(struct scsi_device *sdev, int cmd)
*
* Return: 0 on success, -errno on failure
*/
int cxlflash_ioctl(struct scsi_device *sdev, int cmd, void __user *arg)
int cxlflash_ioctl(struct scsi_device *sdev, unsigned int cmd, void __user *arg)
{
typedef int (*sioctl) (struct scsi_device *, void *);

@@ -2179,8 +2179,7 @@ int cxlflash_ioctl(struct scsi_device *sdev, int cmd, void __user *arg)
}

if (unlikely(copy_from_user(&buf, arg, size))) {
dev_err(dev, "%s: copy_from_user() fail "
"size=%lu cmd=%d (%s) arg=%p\n",
dev_err(dev, "%s: copy_from_user() fail size=%lu cmd=%u (%s) arg=%p\n",
__func__, size, cmd, decode_ioctl(cmd), arg);
rc = -EFAULT;
goto cxlflash_ioctl_exit;

@@ -2203,8 +2202,7 @@ int cxlflash_ioctl(struct scsi_device *sdev, int cmd, void __user *arg)
rc = do_ioctl(sdev, (void *)&buf);
if (likely(!rc))
if (unlikely(copy_to_user(arg, &buf, size))) {
dev_err(dev, "%s: copy_to_user() fail "
"size=%lu cmd=%d (%s) arg=%p\n",
dev_err(dev, "%s: copy_to_user() fail size=%lu cmd=%u (%s) arg=%p\n",
__func__, size, cmd, decode_ioctl(cmd), arg);
rc = -EFAULT;
}

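The cxlflash hunks above (like the aacraid and esas2r ones) switch ioctl cmd parameters from int to unsigned int. The reason, sketched with a hypothetical command: _IO* command numbers pack direction bits into the top of the word, so a value can have bit 31 set and must not be sign-extended or printed as signed.

#include <linux/ioctl.h>
#include <linux/errno.h>

#define EX_GET_STATE	_IOR('x', 0x01, int)	/* _IOC_READ sets bit 31 */

static long ex_ioctl(unsigned int cmd)
{
	switch (cmd) {
	case EX_GET_STATE:
		return 0;
	default:
		return -ENOTTY;		/* unknown command */
	}
}
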
@@ -588,46 +588,6 @@ static int adpt_show_info(struct seq_file *m, struct Scsi_Host *host)
return 0;
}

/*
* Turn a struct scsi_cmnd * into a unique 32 bit 'context'.
*/
static u32 adpt_cmd_to_context(struct scsi_cmnd *cmd)
{
return (u32)cmd->serial_number;
}

/*
* Go from a u32 'context' to a struct scsi_cmnd * .
* This could probably be made more efficient.
*/
static struct scsi_cmnd *
adpt_cmd_from_context(adpt_hba * pHba, u32 context)
{
struct scsi_cmnd * cmd;
struct scsi_device * d;

if (context == 0)
return NULL;

spin_unlock(pHba->host->host_lock);
shost_for_each_device(d, pHba->host) {
unsigned long flags;
spin_lock_irqsave(&d->list_lock, flags);
list_for_each_entry(cmd, &d->cmd_list, list) {
if (((u32)cmd->serial_number == context)) {
spin_unlock_irqrestore(&d->list_lock, flags);
scsi_device_put(d);
spin_lock(pHba->host->host_lock);
return cmd;
}
}
spin_unlock_irqrestore(&d->list_lock, flags);
}
spin_lock(pHba->host->host_lock);

return NULL;
}

/*
* Turn a pointer to ioctl reply data into an u32 'context'
*/

@@ -685,9 +645,6 @@ static int adpt_abort(struct scsi_cmnd * cmd)
u32 msg[5];
int rcode;

if(cmd->serial_number == 0){
return FAILED;
}
pHba = (adpt_hba*) cmd->device->host->hostdata[0];
printk(KERN_INFO"%s: Trying to Abort\n",pHba->name);
if ((dptdevice = (void*) (cmd->device->hostdata)) == NULL) {

@@ -699,8 +656,9 @@ static int adpt_abort(struct scsi_cmnd * cmd)
msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
msg[1] = I2O_CMD_SCSI_ABORT<<24|HOST_TID<<12|dptdevice->tid;
msg[2] = 0;
msg[3]= 0;
msg[4] = adpt_cmd_to_context(cmd);
msg[3]= 0;
/* Add 1 to avoid firmware treating it as invalid command */
msg[4] = cmd->request->tag + 1;
if (pHba->host)
spin_lock_irq(pHba->host->host_lock);
rcode = adpt_i2o_post_wait(pHba, msg, sizeof(msg), FOREVER);

@@ -2198,20 +2156,27 @@ static irqreturn_t adpt_isr(int irq, void *dev_id)
status = I2O_POST_WAIT_OK;
}
if(!(context & 0x40000000)) {
cmd = adpt_cmd_from_context(pHba,
readl(reply+12));
/*
* The request tag is one less than the command tag
* as the firmware might treat a 0 tag as invalid
*/
cmd = scsi_host_find_tag(pHba->host,
readl(reply + 12) - 1);
if(cmd != NULL) {
printk(KERN_WARNING"%s: Apparent SCSI cmd in Post Wait Context - cmd=%p context=%x\n", pHba->name, cmd, context);
}
}
adpt_i2o_post_wait_complete(context, status);
} else { // SCSI message
cmd = adpt_cmd_from_context (pHba, readl(reply+12));
/*
* The request tag is one less than the command tag
* as the firmware might treat a 0 tag as invalid
*/
cmd = scsi_host_find_tag(pHba->host,
readl(reply + 12) - 1);
if(cmd != NULL){
scsi_dma_unmap(cmd);
if(cmd->serial_number != 0) { // If not timedout
adpt_i2o_to_scsi(reply, cmd);
}
adpt_i2o_to_scsi(reply, cmd);
}
}
writel(m, pHba->reply_port);

@@ -2277,7 +2242,8 @@ static s32 adpt_scsi_to_i2o(adpt_hba* pHba, struct scsi_cmnd* cmd, struct adpt_d
// I2O_CMD_SCSI_EXEC
msg[1] = ((0xff<<24)|(HOST_TID<<12)|d->tid);
msg[2] = 0;
msg[3] = adpt_cmd_to_context(cmd); /* Want SCSI control block back */
/* Add 1 to avoid firmware treating it as invalid command */
msg[3] = cmd->request->tag + 1;
// Our cards use the transaction context as the tag for queueing
// Adaptec/DPT Private stuff
msg[4] = I2O_CMD_SCSI_EXEC|(DPT_ORGANIZATION_ID<<16);

@@ -2693,9 +2659,6 @@ static void adpt_fail_posted_scbs(adpt_hba* pHba)
unsigned long flags;
spin_lock_irqsave(&d->list_lock, flags);
list_for_each_entry(cmd, &d->cmd_list, list) {
if(cmd->serial_number == 0){
continue;
}
cmd->result = (DID_OK << 16) | (QUEUE_FULL <<1);
cmd->scsi_done(cmd);
}

@@ -965,8 +965,8 @@ struct esas2r_adapter {
const char *esas2r_info(struct Scsi_Host *);
int esas2r_write_params(struct esas2r_adapter *a, struct esas2r_request *rq,
struct esas2r_sas_nvram *data);
int esas2r_ioctl_handler(void *hostdata, int cmd, void __user *arg);
int esas2r_ioctl(struct scsi_device *dev, int cmd, void __user *arg);
int esas2r_ioctl_handler(void *hostdata, unsigned int cmd, void __user *arg);
int esas2r_ioctl(struct scsi_device *dev, unsigned int cmd, void __user *arg);
u8 handle_hba_ioctl(struct esas2r_adapter *a,
struct atto_ioctl *ioctl_hba);
int esas2r_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd);

@@ -1241,6 +1241,7 @@ static bool esas2r_format_init_msg(struct esas2r_adapter *a,
a->init_msg = ESAS2R_INIT_MSG_GET_INIT;
break;
}
/* fall through */

case ESAS2R_INIT_MSG_GET_INIT:
if (msg == ESAS2R_INIT_MSG_GET_INIT) {
@@ -1254,7 +1255,7 @@ static bool esas2r_format_init_msg(struct esas2r_adapter *a,
esas2r_hdebug("FAILED");
}
}
/* fall through */
/* fall through */

default:
rq->req_stat = RS_SUCCESS;

@@ -1274,7 +1274,7 @@ int esas2r_write_params(struct esas2r_adapter *a, struct esas2r_request *rq,


/* This function only cares about ATTO-specific ioctls (atto_express_ioctl) */
int esas2r_ioctl_handler(void *hostdata, int cmd, void __user *arg)
int esas2r_ioctl_handler(void *hostdata, unsigned int cmd, void __user *arg)
{
struct atto_express_ioctl *ioctl = NULL;
struct esas2r_adapter *a;
@@ -1292,9 +1292,8 @@ int esas2r_ioctl_handler(void *hostdata, int cmd, void __user *arg)
ioctl = memdup_user(arg, sizeof(struct atto_express_ioctl));
if (IS_ERR(ioctl)) {
esas2r_log(ESAS2R_LOG_WARN,
"ioctl_handler access_ok failed for cmd %d, "
"address %p", cmd,
arg);
"ioctl_handler access_ok failed for cmd %u, address %p",
cmd, arg);
return PTR_ERR(ioctl);
}

@@ -1493,7 +1492,7 @@ int esas2r_ioctl_handler(void *hostdata, int cmd, void __user *arg)
ioctl_done:

if (err < 0) {
esas2r_log(ESAS2R_LOG_WARN, "err %d on ioctl cmd %d", err,
esas2r_log(ESAS2R_LOG_WARN, "err %d on ioctl cmd %u", err,
cmd);

switch (err) {
@@ -1518,9 +1517,8 @@ int esas2r_ioctl_handler(void *hostdata, int cmd, void __user *arg)
err = __copy_to_user(arg, ioctl, sizeof(struct atto_express_ioctl));
if (err != 0) {
esas2r_log(ESAS2R_LOG_WARN,
"ioctl_handler copy_to_user didn't copy "
"everything (err %d, cmd %d)", err,
cmd);
"ioctl_handler copy_to_user didn't copy everything (err %d, cmd %u)",
err, cmd);
kfree(ioctl);

return -EFAULT;
@@ -1531,7 +1529,7 @@ int esas2r_ioctl_handler(void *hostdata, int cmd, void __user *arg)
return 0;
}

int esas2r_ioctl(struct scsi_device *sd, int cmd, void __user *arg)
int esas2r_ioctl(struct scsi_device *sd, unsigned int cmd, void __user *arg)
{
return esas2r_ioctl_handler(sd->host->hostdata, cmd, arg);
}

@@ -623,7 +623,7 @@ static int esas2r_proc_major;
long esas2r_proc_ioctl(struct file *fp, unsigned int cmd, unsigned long arg)
{
return esas2r_ioctl_handler(esas2r_proc_host->hostdata,
(int)cmd, (void __user *)arg);
cmd, (void __user *)arg);
}

static void __exit esas2r_exit(void)

@@ -389,7 +389,7 @@ static int fcoe_interface_setup(struct fcoe_interface *fcoe,
* Returns: pointer to a struct fcoe_interface or NULL on error
*/
static struct fcoe_interface *fcoe_interface_create(struct net_device *netdev,
enum fip_state fip_mode)
enum fip_mode fip_mode)
{
struct fcoe_ctlr_device *ctlr_dev;
struct fcoe_ctlr *ctlr;

@@ -147,7 +147,7 @@ static void fcoe_ctlr_map_dest(struct fcoe_ctlr *fip)
* fcoe_ctlr_init() - Initialize the FCoE Controller instance
* @fip: The FCoE controller to initialize
*/
void fcoe_ctlr_init(struct fcoe_ctlr *fip, enum fip_state mode)
void fcoe_ctlr_init(struct fcoe_ctlr *fip, enum fip_mode mode)
{
fcoe_ctlr_set_state(fip, FIP_ST_LINK_WAIT);
fip->mode = mode;
@@ -454,7 +454,10 @@ void fcoe_ctlr_link_up(struct fcoe_ctlr *fip)
mutex_unlock(&fip->ctlr_mutex);
fc_linkup(fip->lp);
} else if (fip->state == FIP_ST_LINK_WAIT) {
fcoe_ctlr_set_state(fip, fip->mode);
if (fip->mode == FIP_MODE_NON_FIP)
fcoe_ctlr_set_state(fip, FIP_ST_NON_FIP);
else
fcoe_ctlr_set_state(fip, FIP_ST_AUTO);
switch (fip->mode) {
default:
LIBFCOE_FIP_DBG(fip, "invalid mode %d\n", fip->mode);

@@ -671,8 +671,19 @@ static const struct device_type fcoe_fcf_device_type = {
.release = fcoe_fcf_device_release,
};

static BUS_ATTR(ctlr_create, S_IWUSR, NULL, fcoe_ctlr_create_store);
static BUS_ATTR(ctlr_destroy, S_IWUSR, NULL, fcoe_ctlr_destroy_store);
static ssize_t ctlr_create_store(struct bus_type *bus, const char *buf,
size_t count)
{
return fcoe_ctlr_create_store(bus, buf, count);
}
static BUS_ATTR_WO(ctlr_create);

static ssize_t ctlr_destroy_store(struct bus_type *bus, const char *buf,
size_t count)
{
return fcoe_ctlr_destroy_store(bus, buf, count);
}
static BUS_ATTR_WO(ctlr_destroy);

static struct attribute *fcoe_bus_attrs[] = {
&bus_attr_ctlr_create.attr,

@@ -855,7 +855,6 @@ ssize_t fcoe_ctlr_destroy_store(struct bus_type *bus,
mutex_unlock(&ft_mutex);
return rc;
}
EXPORT_SYMBOL(fcoe_ctlr_destroy_store);

/**
* fcoe_transport_create() - Create a fcoe interface
@@ -873,7 +872,7 @@ static int fcoe_transport_create(const char *buffer,
int rc = -ENODEV;
struct net_device *netdev = NULL;
struct fcoe_transport *ft = NULL;
enum fip_state fip_mode = (enum fip_state)(long)kp->arg;
enum fip_mode fip_mode = (enum fip_mode)kp->arg;

mutex_lock(&ft_mutex);


@@ -39,7 +39,7 @@

#define DRV_NAME "fnic"
#define DRV_DESCRIPTION "Cisco FCoE HBA Driver"
#define DRV_VERSION "1.6.0.34"
#define DRV_VERSION "1.6.0.47"
#define PFX DRV_NAME ": "
#define DFX DRV_NAME "%d: "

@@ -49,7 +49,7 @@
#define FNIC_MAX_IO_REQ 1024 /* scsi_cmnd tag map entries */
#define FNIC_DFLT_IO_REQ 256 /* Default scsi_cmnd tag map entries */
#define FNIC_IO_LOCKS 64 /* IO locks: power of 2 */
#define FNIC_DFLT_QUEUE_DEPTH 32
#define FNIC_DFLT_QUEUE_DEPTH 256
#define FNIC_STATS_RATE_LIMIT 4 /* limit rate at which stats are pulled up */

/*
@@ -128,6 +128,7 @@
__fnic_set_state_flags(fnicp, st_flags, 1)

extern unsigned int fnic_log_level;
extern unsigned int io_completions;

#define FNIC_MAIN_LOGGING 0x01
#define FNIC_FCS_LOGGING 0x02
@@ -196,6 +197,7 @@ enum fnic_state {
#define FNIC_WQ_MAX 1
#define FNIC_RQ_MAX 1
#define FNIC_CQ_MAX (FNIC_WQ_COPY_MAX + FNIC_WQ_MAX + FNIC_RQ_MAX)
#define FNIC_DFLT_IO_COMPLETIONS 256

struct mempool;

@@ -54,23 +54,9 @@ int fnic_debugfs_init(void)
{
int rc = -1;
fnic_trace_debugfs_root = debugfs_create_dir("fnic", NULL);
if (!fnic_trace_debugfs_root) {
printk(KERN_DEBUG "Cannot create debugfs root\n");
return rc;
}

if (!fnic_trace_debugfs_root) {
printk(KERN_DEBUG
"fnic root directory doesn't exist in debugfs\n");
return rc;
}

fnic_stats_debugfs_root = debugfs_create_dir("statistics",
fnic_trace_debugfs_root);
if (!fnic_stats_debugfs_root) {
printk(KERN_DEBUG "Cannot create Statistics directory\n");
return rc;
}

/* Allocate memory to structure */
fc_trc_flag = (struct fc_trace_flag_type *)
@@ -356,39 +342,19 @@ static const struct file_operations fnic_trace_debugfs_fops = {
* it will also create file trace_enable to control enable/disable of
* trace logging into trace buffer.
*/
int fnic_trace_debugfs_init(void)
void fnic_trace_debugfs_init(void)
{
int rc = -1;
if (!fnic_trace_debugfs_root) {
printk(KERN_DEBUG
"FNIC Debugfs root directory doesn't exist\n");
return rc;
}
fnic_trace_enable = debugfs_create_file("tracing_enable",
S_IFREG|S_IRUGO|S_IWUSR,
fnic_trace_debugfs_root,
&(fc_trc_flag->fnic_trace),
&fnic_trace_ctrl_fops);

if (!fnic_trace_enable) {
printk(KERN_DEBUG
"Cannot create trace_enable file under debugfs\n");
return rc;
}

fnic_trace_debugfs_file = debugfs_create_file("trace",
S_IFREG|S_IRUGO|S_IWUSR,
fnic_trace_debugfs_root,
&(fc_trc_flag->fnic_trace),
&fnic_trace_debugfs_fops);

if (!fnic_trace_debugfs_file) {
printk(KERN_DEBUG
"Cannot create trace file under debugfs\n");
return rc;
}
rc = 0;
return rc;
}

/*
@@ -419,37 +385,20 @@ void fnic_trace_debugfs_terminate(void)
* trace logging into trace buffer.
*/

int fnic_fc_trace_debugfs_init(void)
void fnic_fc_trace_debugfs_init(void)
{
int rc = -1;

if (!fnic_trace_debugfs_root) {
pr_err("fnic:Debugfs root directory doesn't exist\n");
return rc;
}

fnic_fc_trace_enable = debugfs_create_file("fc_trace_enable",
S_IFREG|S_IRUGO|S_IWUSR,
fnic_trace_debugfs_root,
&(fc_trc_flag->fc_trace),
&fnic_trace_ctrl_fops);

if (!fnic_fc_trace_enable) {
pr_err("fnic: Failed create fc_trace_enable file\n");
return rc;
}

fnic_fc_trace_clear = debugfs_create_file("fc_trace_clear",
S_IFREG|S_IRUGO|S_IWUSR,
fnic_trace_debugfs_root,
&(fc_trc_flag->fc_clear),
&fnic_trace_ctrl_fops);

if (!fnic_fc_trace_clear) {
pr_err("fnic: Failed to create fc_trace_enable file\n");
return rc;
}

fnic_fc_rdata_trace_debugfs_file =
debugfs_create_file("fc_trace_rdata",
S_IFREG|S_IRUGO|S_IWUSR,
@@ -457,24 +406,12 @@ int fnic_fc_trace_debugfs_init(void)
&(fc_trc_flag->fc_normal_file),
&fnic_trace_debugfs_fops);

if (!fnic_fc_rdata_trace_debugfs_file) {
pr_err("fnic: Failed create fc_rdata_trace file\n");
return rc;
}

fnic_fc_trace_debugfs_file =
debugfs_create_file("fc_trace",
S_IFREG|S_IRUGO|S_IWUSR,
fnic_trace_debugfs_root,
&(fc_trc_flag->fc_row_file),
&fnic_trace_debugfs_fops);

if (!fnic_fc_trace_debugfs_file) {
pr_err("fnic: Failed to create fc_trace file\n");
return rc;
}
rc = 0;
return rc;
}

/*
@@ -757,45 +694,26 @@ static const struct file_operations fnic_reset_debugfs_fops = {
* It will create file stats and reset_stats under statistics/host# directory
* to log per fnic stats.
*/
int fnic_stats_debugfs_init(struct fnic *fnic)
void fnic_stats_debugfs_init(struct fnic *fnic)
{
int rc = -1;
char name[16];

snprintf(name, sizeof(name), "host%d", fnic->lport->host->host_no);

if (!fnic_stats_debugfs_root) {
printk(KERN_DEBUG "fnic_stats root doesn't exist\n");
return rc;
}
fnic->fnic_stats_debugfs_host = debugfs_create_dir(name,
fnic_stats_debugfs_root);
if (!fnic->fnic_stats_debugfs_host) {
printk(KERN_DEBUG "Cannot create host directory\n");
return rc;
}

fnic->fnic_stats_debugfs_file = debugfs_create_file("stats",
S_IFREG|S_IRUGO|S_IWUSR,
fnic->fnic_stats_debugfs_host,
fnic,
&fnic_stats_debugfs_fops);
if (!fnic->fnic_stats_debugfs_file) {
printk(KERN_DEBUG "Cannot create host stats file\n");
return rc;
}

fnic->fnic_reset_debugfs_file = debugfs_create_file("reset_stats",
S_IFREG|S_IRUGO|S_IWUSR,
fnic->fnic_stats_debugfs_host,
fnic,
&fnic_reset_debugfs_fops);
if (!fnic->fnic_reset_debugfs_file) {
printk(KERN_DEBUG "Cannot create host stats file\n");
return rc;
}
rc = 0;
return rc;
}

/*

@@ -65,11 +65,21 @@ void fnic_handle_link(struct work_struct *work)
fnic->link_status = vnic_dev_link_status(fnic->vdev);
fnic->link_down_cnt = vnic_dev_link_down_cnt(fnic->vdev);

atomic64_set(&fnic->fnic_stats.misc_stats.current_port_speed,
vnic_dev_port_speed(fnic->vdev));
shost_printk(KERN_INFO, fnic->lport->host, "Current vnic speed set to : %llu\n",
(u64)atomic64_read(
&fnic->fnic_stats.misc_stats.current_port_speed));

switch (vnic_dev_port_speed(fnic->vdev)) {
case DCEM_PORTSPEED_10G:
fc_host_speed(fnic->lport->host) = FC_PORTSPEED_10GBIT;
fnic->lport->link_supported_speeds = FC_PORTSPEED_10GBIT;
break;
case DCEM_PORTSPEED_20G:
fc_host_speed(fnic->lport->host) = FC_PORTSPEED_20GBIT;
fnic->lport->link_supported_speeds = FC_PORTSPEED_20GBIT;
break;
case DCEM_PORTSPEED_25G:
fc_host_speed(fnic->lport->host) = FC_PORTSPEED_25GBIT;
fnic->lport->link_supported_speeds = FC_PORTSPEED_25GBIT;

@@ -70,9 +70,10 @@ enum fnic_port_speeds {
DCEM_PORTSPEED_NONE = 0,
DCEM_PORTSPEED_1G = 1000,
DCEM_PORTSPEED_10G = 10000,
DCEM_PORTSPEED_20G = 20000,
DCEM_PORTSPEED_25G = 25000,
DCEM_PORTSPEED_40G = 40000,
DCEM_PORTSPEED_4x10G = 41000,
DCEM_PORTSPEED_25G = 25000,
DCEM_PORTSPEED_100G = 100000,
};
#endif /* _FNIC_IO_H_ */

@@ -51,7 +51,7 @@ static irqreturn_t fnic_isr_legacy(int irq, void *data)
}

if (pba & (1 << FNIC_INTX_WQ_RQ_COPYWQ)) {
work_done += fnic_wq_copy_cmpl_handler(fnic, -1);
work_done += fnic_wq_copy_cmpl_handler(fnic, io_completions);
work_done += fnic_wq_cmpl_handler(fnic, -1);
work_done += fnic_rq_cmpl_handler(fnic, -1);

@@ -72,7 +72,7 @@ static irqreturn_t fnic_isr_msi(int irq, void *data)
fnic->fnic_stats.misc_stats.last_isr_time = jiffies;
atomic64_inc(&fnic->fnic_stats.misc_stats.isr_count);

work_done += fnic_wq_copy_cmpl_handler(fnic, -1);
work_done += fnic_wq_copy_cmpl_handler(fnic, io_completions);
work_done += fnic_wq_cmpl_handler(fnic, -1);
work_done += fnic_rq_cmpl_handler(fnic, -1);

@@ -125,7 +125,7 @@ static irqreturn_t fnic_isr_msix_wq_copy(int irq, void *data)
fnic->fnic_stats.misc_stats.last_isr_time = jiffies;
atomic64_inc(&fnic->fnic_stats.misc_stats.isr_count);

wq_copy_work_done = fnic_wq_copy_cmpl_handler(fnic, -1);
wq_copy_work_done = fnic_wq_copy_cmpl_handler(fnic, io_completions);
vnic_intr_return_credits(&fnic->intr[FNIC_MSIX_WQ_COPY],
wq_copy_work_done,
1 /* unmask intr */,

@@ -69,6 +69,11 @@ unsigned int fnic_log_level;
module_param(fnic_log_level, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(fnic_log_level, "bit mask of fnic logging levels");


unsigned int io_completions = FNIC_DFLT_IO_COMPLETIONS;
module_param(io_completions, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(io_completions, "Max CQ entries to process at a time");

unsigned int fnic_trace_max_pages = 16;
module_param(fnic_trace_max_pages, uint, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(fnic_trace_max_pages, "Total allocated memory pages "
@@ -178,6 +183,9 @@ static void fnic_get_host_speed(struct Scsi_Host *shost)
case DCEM_PORTSPEED_10G:
fc_host_speed(shost) = FC_PORTSPEED_10GBIT;
break;
case DCEM_PORTSPEED_20G:
fc_host_speed(shost) = FC_PORTSPEED_20GBIT;
break;
case DCEM_PORTSPEED_25G:
fc_host_speed(shost) = FC_PORTSPEED_25GBIT;
break;
@@ -500,7 +508,7 @@ static int fnic_cleanup(struct fnic *fnic)
}

/* Clean up completed IOs and FCS frames */
fnic_wq_copy_cmpl_handler(fnic, -1);
fnic_wq_copy_cmpl_handler(fnic, io_completions);
fnic_wq_cmpl_handler(fnic, -1);
fnic_rq_cmpl_handler(fnic, -1);

@@ -578,12 +586,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)

host->transportt = fnic_fc_transport;

err = fnic_stats_debugfs_init(fnic);
if (err) {
shost_printk(KERN_ERR, fnic->lport->host,
"Failed to initialize debugfs for stats\n");
fnic_stats_debugfs_remove(fnic);
}
fnic_stats_debugfs_init(fnic);

/* Setup PCI resources */
pci_set_drvdata(pdev, fnic);
@@ -650,12 +653,20 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_out_iounmap;
}

err = vnic_dev_cmd_init(fnic->vdev);
if (err) {
shost_printk(KERN_ERR, fnic->lport->host,
"vnic_dev_cmd_init() returns %d, aborting\n",
err);
goto err_out_vnic_unregister;
}

err = fnic_dev_wait(fnic->vdev, vnic_dev_open,
vnic_dev_open_done, 0);
vnic_dev_open_done, CMD_OPENF_RQ_ENABLE_THEN_POST);
if (err) {
shost_printk(KERN_ERR, fnic->lport->host,
"vNIC dev open failed, aborting.\n");
goto err_out_vnic_unregister;
goto err_out_dev_cmd_deinit;
}

err = vnic_dev_init(fnic->vdev, 0);
@@ -796,6 +807,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)

/* allocate RQ buffers and post them to RQ*/
for (i = 0; i < fnic->rq_count; i++) {
vnic_rq_enable(&fnic->rq[i]);
err = vnic_rq_fill(&fnic->rq[i], fnic_alloc_rq_frame);
if (err) {
shost_printk(KERN_ERR, fnic->lport->host,
@@ -870,15 +882,11 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* Enable all queues */
for (i = 0; i < fnic->raw_wq_count; i++)
vnic_wq_enable(&fnic->wq[i]);
for (i = 0; i < fnic->rq_count; i++)
vnic_rq_enable(&fnic->rq[i]);
for (i = 0; i < fnic->wq_copy_count; i++)
vnic_wq_copy_enable(&fnic->wq_copy[i]);

fc_fabric_login(lp);

vnic_dev_enable(fnic->vdev);

err = fnic_request_intr(fnic);
if (err) {
shost_printk(KERN_ERR, fnic->lport->host,
@@ -886,6 +894,8 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_out_free_exch_mgr;
}

vnic_dev_enable(fnic->vdev);

for (i = 0; i < fnic->intr_count; i++)
vnic_intr_unmask(&fnic->intr[i]);

@@ -914,6 +924,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
fnic_clear_intr_mode(fnic);
err_out_dev_close:
vnic_dev_close(fnic->vdev);
err_out_dev_cmd_deinit:
err_out_vnic_unregister:
vnic_dev_unregister(fnic->vdev);
err_out_iounmap:

@@ -180,20 +180,19 @@ void
__fnic_set_state_flags(struct fnic *fnic, unsigned long st_flags,
unsigned long clearbits)
{
struct Scsi_Host *host = fnic->lport->host;
int sh_locked = spin_is_locked(host->host_lock);
unsigned long flags = 0;
unsigned long host_lock_flags = 0;

if (!sh_locked)
spin_lock_irqsave(host->host_lock, flags);
spin_lock_irqsave(&fnic->fnic_lock, flags);
spin_lock_irqsave(fnic->lport->host->host_lock, host_lock_flags);

if (clearbits)
fnic->state_flags &= ~st_flags;
else
fnic->state_flags |= st_flags;

if (!sh_locked)
spin_unlock_irqrestore(host->host_lock, flags);
spin_unlock_irqrestore(fnic->lport->host->host_lock, host_lock_flags);
spin_unlock_irqrestore(&fnic->fnic_lock, flags);

return;
}
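
The hunk above drops the conditional locking that keyed off spin_is_locked(), which is racy because the lock can change hands between the check and the acquire, and instead always takes fnic_lock followed by the host lock in a fixed order. A stripped-down userspace model of the corrected pattern, with pthread mutexes standing in for spinlocks (a sketch, not driver code):

#include <pthread.h>

static pthread_mutex_t fnic_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t host_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long state_flags;

/* Always acquire fnic_lock before host_lock; never skip a lock
 * based on whether it currently appears to be held. */
static void set_state_flags(unsigned long st_flags, int clearbits)
{
	pthread_mutex_lock(&fnic_lock);
	pthread_mutex_lock(&host_lock);
	if (clearbits)
		state_flags &= ~st_flags;
	else
		state_flags |= st_flags;
	pthread_mutex_unlock(&host_lock);
	pthread_mutex_unlock(&fnic_lock);
}

int main(void)
{
	set_state_flags(0x1, 0);	/* set a bit */
	set_state_flags(0x1, 1);	/* clear it again */
	return state_flags != 0;
}
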
@@ -1326,13 +1325,32 @@ int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do)
unsigned int wq_work_done = 0;
unsigned int i, cq_index;
unsigned int cur_work_done;
struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats;
u64 start_jiffies = 0;
u64 end_jiffies = 0;
u64 delta_jiffies = 0;
u64 delta_ms = 0;

for (i = 0; i < fnic->wq_copy_count; i++) {
cq_index = i + fnic->raw_wq_count + fnic->rq_count;

start_jiffies = jiffies;
cur_work_done = vnic_cq_copy_service(&fnic->cq[cq_index],
fnic_fcpio_cmpl_handler,
copy_work_to_do);
end_jiffies = jiffies;

wq_work_done += cur_work_done;
delta_jiffies = end_jiffies - start_jiffies;
if (delta_jiffies >
(u64) atomic64_read(&misc_stats->max_isr_jiffies)) {
atomic64_set(&misc_stats->max_isr_jiffies,
delta_jiffies);
delta_ms = jiffies_to_msecs(delta_jiffies);
atomic64_set(&misc_stats->max_isr_time_ms, delta_ms);
atomic64_set(&misc_stats->corr_work_done,
cur_work_done);
}
}
return wq_work_done;
}
@@ -1397,8 +1415,9 @@ static void fnic_cleanup_io(struct fnic *fnic, int exclude_id)
cleanup_scsi_cmd:
sc->result = DID_TRANSPORT_DISRUPTED << 16;
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"%s: sc duration = %lu DID_TRANSPORT_DISRUPTED\n",
__func__, (jiffies - start_time));
"%s: tag:0x%x : sc:0x%p duration = %lu DID_TRANSPORT_DISRUPTED\n",
__func__, sc->request->tag, sc,
(jiffies - start_time));

if (atomic64_read(&fnic->io_cmpl_skip))
atomic64_dec(&fnic->io_cmpl_skip);
@@ -1407,6 +1426,11 @@ static void fnic_cleanup_io(struct fnic *fnic, int exclude_id)

/* Complete the command to SCSI */
if (sc->scsi_done) {
if (!(CMD_FLAGS(sc) & FNIC_IO_ISSUED))
shost_printk(KERN_ERR, fnic->lport->host,
"Calling done for IO not issued to fw: tag:0x%x sc:0x%p\n",
sc->request->tag, sc);

FNIC_TRACE(fnic_cleanup_io,
sc->device->host->host_no, i, sc,
jiffies_to_msecs(jiffies - start_time),

@@ -97,6 +97,9 @@ struct vlan_stats {
struct misc_stats {
u64 last_isr_time;
u64 last_ack_time;
atomic64_t max_isr_jiffies;
atomic64_t max_isr_time_ms;
atomic64_t corr_work_done;
atomic64_t isr_count;
atomic64_t max_cq_entries;
atomic64_t ack_index_out_of_range;
@@ -113,6 +116,7 @@ struct misc_stats {
atomic64_t queue_fulls;
atomic64_t rport_not_ready;
atomic64_t frame_errors;
atomic64_t current_port_speed;
};

struct fnic_stats {
@@ -134,6 +138,6 @@ struct stats_debug_info {
};

int fnic_get_stats_data(struct stats_debug_info *, struct fnic_stats *);
int fnic_stats_debugfs_init(struct fnic *);
void fnic_stats_debugfs_init(struct fnic *);
void fnic_stats_debugfs_remove(struct fnic *);
#endif /* _FNIC_STATS_H_ */

@@ -409,6 +409,9 @@ int fnic_get_stats_data(struct stats_debug_info *debug,
len += snprintf(debug->debug_buffer + len, buf_size - len,
"Last ISR time: %llu (%8llu.%09lu)\n"
"Last ACK time: %llu (%8llu.%09lu)\n"
"Max ISR jiffies: %llu\n"
"Max ISR time (ms) (0 denotes < 1 ms): %llu\n"
"Corr. work done: %llu\n"
"Number of ISRs: %lld\n"
"Maximum CQ Entries: %lld\n"
"Number of ACK index out of range: %lld\n"
@@ -428,6 +431,9 @@ int fnic_get_stats_data(struct stats_debug_info *debug,
(s64)val1.tv_sec, val1.tv_nsec,
(u64)stats->misc_stats.last_ack_time,
(s64)val2.tv_sec, val2.tv_nsec,
(u64)atomic64_read(&stats->misc_stats.max_isr_jiffies),
(u64)atomic64_read(&stats->misc_stats.max_isr_time_ms),
(u64)atomic64_read(&stats->misc_stats.corr_work_done),
(u64)atomic64_read(&stats->misc_stats.isr_count),
(u64)atomic64_read(&stats->misc_stats.max_cq_entries),
(u64)atomic64_read(&stats->misc_stats.ack_index_out_of_range),
@@ -446,6 +452,11 @@ int fnic_get_stats_data(struct stats_debug_info *debug,
(u64)atomic64_read(&stats->misc_stats.rport_not_ready),
(u64)atomic64_read(&stats->misc_stats.frame_errors));

len += snprintf(debug->debug_buffer + len, buf_size - len,
"Firmware reported port speed: %llu\n",
(u64)atomic64_read(
&stats->misc_stats.current_port_speed));

return len;

}
@@ -503,15 +514,10 @@ int fnic_trace_buf_init(void)
fnic_trace_entries.page_offset[i] = fnic_buf_head;
fnic_buf_head += FNIC_ENTRY_SIZE_BYTES;
}
err = fnic_trace_debugfs_init();
if (err < 0) {
pr_err("fnic: Failed to initialize debugfs for tracing\n");
goto err_fnic_trace_debugfs_init;
}
fnic_trace_debugfs_init();
pr_info("fnic: Successfully Initialized Trace Buffer\n");
return err;
err_fnic_trace_debugfs_init:
fnic_trace_free();

err_fnic_trace_buf_init:
return err;
}
@@ -596,16 +602,10 @@ int fnic_fc_trace_init(void)
fc_trace_entries.page_offset[i] = fc_trace_buf_head;
fc_trace_buf_head += FC_TRC_SIZE_BYTES;
}
err = fnic_fc_trace_debugfs_init();
if (err < 0) {
pr_err("fnic: Failed to initialize FC_CTLR tracing.\n");
goto err_fnic_fc_ctlr_trace_debugfs_init;
}
fnic_fc_trace_debugfs_init();
pr_info("fnic: Successfully Initialized FC_CTLR Trace Buffer\n");
return err;

err_fnic_fc_ctlr_trace_debugfs_init:
fnic_fc_trace_free();
err_fnic_fc_ctlr_trace_buf_init:
return err;
}

@@ -111,7 +111,7 @@ int fnic_trace_buf_init(void);
void fnic_trace_free(void);
int fnic_debugfs_init(void);
void fnic_debugfs_terminate(void);
int fnic_trace_debugfs_init(void);
void fnic_trace_debugfs_init(void);
void fnic_trace_debugfs_terminate(void);

/* Fnic FC CTLR Trace related function */
@@ -123,7 +123,7 @@ int fnic_fc_trace_get_data(fnic_dbgfs_t *fnic_dbgfs_prt, u8 rdata_flag);
void copy_and_format_trace_data(struct fc_trace_hdr *tdata,
fnic_dbgfs_t *fnic_dbgfs_prt,
int *len, u8 rdata_flag);
int fnic_fc_trace_debugfs_init(void);
void fnic_fc_trace_debugfs_init(void);
void fnic_fc_trace_debugfs_terminate(void);

#endif

@@ -27,6 +27,24 @@
#include "vnic_devcmd.h"
#include "vnic_dev.h"
#include "vnic_stats.h"
#include "vnic_wq.h"

struct devcmd2_controller {
struct vnic_wq_ctrl *wq_ctrl;
struct vnic_dev_ring results_ring;
struct vnic_wq wq;
struct vnic_devcmd2 *cmd_ring;
struct devcmd2_result *result;
u16 next_result;
u16 result_size;
int color;
};

enum vnic_proxy_type {
PROXY_NONE,
PROXY_BY_BDF,
PROXY_BY_INDEX,
};

struct vnic_res {
void __iomem *vaddr;
@@ -48,6 +66,12 @@ struct vnic_dev {
dma_addr_t stats_pa;
struct vnic_devcmd_fw_info *fw_info;
dma_addr_t fw_info_pa;
enum vnic_proxy_type proxy;
u32 proxy_index;
u64 args[VNIC_DEVCMD_NARGS];
struct devcmd2_controller *devcmd2;
int (*devcmd_rtn)(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd,
int wait);
};

#define VNIC_MAX_RES_HDR_SIZE \
@@ -119,6 +143,7 @@ static int vnic_dev_discover_res(struct vnic_dev *vdev,
}
break;
case RES_TYPE_INTR_PBA_LEGACY:
case RES_TYPE_DEVCMD2:
case RES_TYPE_DEVCMD:
len = count;
break;
@@ -229,8 +254,7 @@ void vnic_dev_free_desc_ring(struct vnic_dev *vdev, struct vnic_dev_ring *ring)
}
}

int vnic_dev_cmd(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd,
u64 *a0, u64 *a1, int wait)
int vnic_dev_cmd1(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd, int wait)
{
struct vnic_devcmd __iomem *devcmd = vdev->devcmd;
int delay;
@@ -244,6 +268,8 @@ int vnic_dev_cmd(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd,
EBUSY, /* ERR_EBUSY */
};
int err;
u64 *a0 = &vdev->args[0];
u64 *a1 = &vdev->args[1];

status = ioread32(&devcmd->status);
if (status & STAT_BUSY) {
@@ -290,6 +316,223 @@ int vnic_dev_cmd(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd,
return -ETIMEDOUT;
}

int vnic_dev_cmd2(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd,
int wait)
{
struct devcmd2_controller *dc2c = vdev->devcmd2;
struct devcmd2_result *result;
u8 color;
unsigned int i;
int delay;
int err;
u32 fetch_index;
u32 posted;
u32 new_posted;

posted = ioread32(&dc2c->wq_ctrl->posted_index);
fetch_index = ioread32(&dc2c->wq_ctrl->fetch_index);

if (posted == 0xFFFFFFFF || fetch_index == 0xFFFFFFFF) {
/* Hardware surprise removal: return error */
pr_err("%s: devcmd2 invalid posted or fetch index on cmd %d\n",
pci_name(vdev->pdev), _CMD_N(cmd));
pr_err("%s: fetch index: %u, posted index: %u\n",
pci_name(vdev->pdev), fetch_index, posted);

return -ENODEV;

}

new_posted = (posted + 1) % DEVCMD2_RING_SIZE;

if (new_posted == fetch_index) {
pr_err("%s: devcmd2 wq full while issuing cmd %d\n",
pci_name(vdev->pdev), _CMD_N(cmd));
pr_err("%s: fetch index: %u, posted index: %u\n",
pci_name(vdev->pdev), fetch_index, posted);
return -EBUSY;

}
dc2c->cmd_ring[posted].cmd = cmd;
dc2c->cmd_ring[posted].flags = 0;

if ((_CMD_FLAGS(cmd) & _CMD_FLAGS_NOWAIT))
dc2c->cmd_ring[posted].flags |= DEVCMD2_FNORESULT;
if (_CMD_DIR(cmd) & _CMD_DIR_WRITE) {
for (i = 0; i < VNIC_DEVCMD_NARGS; i++)
dc2c->cmd_ring[posted].args[i] = vdev->args[i];

}

/* Adding write memory barrier prevents compiler and/or CPU
* reordering, thus avoiding descriptor posting before
* descriptor is initialized. Otherwise, hardware can read
* stale descriptor fields.
*/
wmb();
iowrite32(new_posted, &dc2c->wq_ctrl->posted_index);

if (dc2c->cmd_ring[posted].flags & DEVCMD2_FNORESULT)
return 0;

result = dc2c->result + dc2c->next_result;
color = dc2c->color;

dc2c->next_result++;
if (dc2c->next_result == dc2c->result_size) {
dc2c->next_result = 0;
dc2c->color = dc2c->color ? 0 : 1;
}

for (delay = 0; delay < wait; delay++) {
udelay(100);
if (result->color == color) {
if (result->error) {
err = -(int) result->error;
if (err != ERR_ECMDUNKNOWN ||
cmd != CMD_CAPABILITY)
pr_err("%s:Error %d devcmd %d\n",
pci_name(vdev->pdev),
err, _CMD_N(cmd));
return err;
}
if (_CMD_DIR(cmd) & _CMD_DIR_READ) {
rmb(); /*prevent reorder while reading result*/
for (i = 0; i < VNIC_DEVCMD_NARGS; i++)
vdev->args[i] = result->results[i];
}
return 0;
}
}

pr_err("%s:Timed out devcmd %d\n", pci_name(vdev->pdev), _CMD_N(cmd));

return -ETIMEDOUT;
}


int vnic_dev_init_devcmd1(struct vnic_dev *vdev)
{
vdev->devcmd = vnic_dev_get_res(vdev, RES_TYPE_DEVCMD, 0);
if (!vdev->devcmd)
return -ENODEV;

vdev->devcmd_rtn = &vnic_dev_cmd1;
return 0;
}


int vnic_dev_init_devcmd2(struct vnic_dev *vdev)
{
int err;
unsigned int fetch_index;

if (vdev->devcmd2)
return 0;

vdev->devcmd2 = kzalloc(sizeof(*vdev->devcmd2), GFP_ATOMIC);
if (!vdev->devcmd2)
return -ENOMEM;

vdev->devcmd2->color = 1;
vdev->devcmd2->result_size = DEVCMD2_RING_SIZE;
err = vnic_wq_devcmd2_alloc(vdev, &vdev->devcmd2->wq,
DEVCMD2_RING_SIZE, DEVCMD2_DESC_SIZE);
if (err)
goto err_free_devcmd2;

fetch_index = ioread32(&vdev->devcmd2->wq.ctrl->fetch_index);
if (fetch_index == 0xFFFFFFFF) { /* check for hardware gone */
pr_err("error in devcmd2 init");
return -ENODEV;
}

/*
* Don't change fetch_index ever and
* set posted_index same as fetch_index
* when setting up the WQ for devcmd2.
*/
vnic_wq_init_start(&vdev->devcmd2->wq, 0, fetch_index,
fetch_index, 0, 0);

vnic_wq_enable(&vdev->devcmd2->wq);

err = vnic_dev_alloc_desc_ring(vdev, &vdev->devcmd2->results_ring,
DEVCMD2_RING_SIZE, DEVCMD2_DESC_SIZE);
if (err)
goto err_free_wq;

vdev->devcmd2->result =
(struct devcmd2_result *) vdev->devcmd2->results_ring.descs;
vdev->devcmd2->cmd_ring =
(struct vnic_devcmd2 *) vdev->devcmd2->wq.ring.descs;
vdev->devcmd2->wq_ctrl = vdev->devcmd2->wq.ctrl;
vdev->args[0] = (u64) vdev->devcmd2->results_ring.base_addr |
VNIC_PADDR_TARGET;
vdev->args[1] = DEVCMD2_RING_SIZE;

err = vnic_dev_cmd2(vdev, CMD_INITIALIZE_DEVCMD2, 1000);
if (err)
goto err_free_desc_ring;

vdev->devcmd_rtn = &vnic_dev_cmd2;

return 0;

err_free_desc_ring:
vnic_dev_free_desc_ring(vdev, &vdev->devcmd2->results_ring);
err_free_wq:
vnic_wq_disable(&vdev->devcmd2->wq);
vnic_wq_free(&vdev->devcmd2->wq);
err_free_devcmd2:
kfree(vdev->devcmd2);
vdev->devcmd2 = NULL;

return err;
}


void vnic_dev_deinit_devcmd2(struct vnic_dev *vdev)
{
vnic_dev_free_desc_ring(vdev, &vdev->devcmd2->results_ring);
vnic_wq_disable(&vdev->devcmd2->wq);
vnic_wq_free(&vdev->devcmd2->wq);
kfree(vdev->devcmd2);
vdev->devcmd2 = NULL;
vdev->devcmd_rtn = &vnic_dev_cmd1;
}


int vnic_dev_cmd_no_proxy(struct vnic_dev *vdev,
enum vnic_devcmd_cmd cmd, u64 *a0, u64 *a1, int wait)
{
int err;

vdev->args[0] = *a0;
vdev->args[1] = *a1;

err = (*vdev->devcmd_rtn)(vdev, cmd, wait);

*a0 = vdev->args[0];
*a1 = vdev->args[1];

return err;
}


int vnic_dev_cmd(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd,
u64 *a0, u64 *a1, int wait)
{
memset(vdev->args, 0, sizeof(vdev->args));

switch (vdev->proxy) {
case PROXY_NONE:
default:
return vnic_dev_cmd_no_proxy(vdev, cmd, a0, a1, wait);
}
}


int vnic_dev_fw_info(struct vnic_dev *vdev,
struct vnic_devcmd_fw_info **fw_info)
{
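
vnic_dev_cmd2() above detects completion with a color bit rather than a consumer pointer: the driver keeps an expected color that flips each time it wraps the result ring, and a slot is ready once the device has written a matching color into it. A small userspace model of that ownership protocol (assumed names; a sketch of the idea, not the driver's API):

#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE 32

struct result_slot {
	int data;
	unsigned char color;	/* written last by the producer */
};

struct consumer {
	unsigned int next;	/* next slot to poll */
	unsigned char color;	/* expected color; flips on wrap */
};

/* Consume one entry if the producer has filled the slot, i.e. if
 * its color matches the color the consumer expects. */
static bool poll_result(struct consumer *c, struct result_slot *ring, int *out)
{
	if (ring[c->next].color != c->color)
		return false;	/* slot still owned by the producer */
	*out = ring[c->next].data;
	if (++c->next == RING_SIZE) {
		c->next = 0;
		c->color ^= 1;	/* the opposite color marks the next lap */
	}
	return true;
}

int main(void)
{
	struct result_slot ring[RING_SIZE] = { { .data = 7, .color = 1 } };
	struct consumer c = { .next = 0, .color = 1 };	/* like devcmd2->color = 1 */
	int v = 0;

	printf("%s %d\n", poll_result(&c, ring, &v) ? "got" : "empty", v);
	return 0;
}
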
@@ -664,6 +907,8 @@ void vnic_dev_unregister(struct vnic_dev *vdev)
dma_free_coherent(&vdev->pdev->dev,
sizeof(struct vnic_devcmd_fw_info),
vdev->fw_info, vdev->fw_info_pa);
if (vdev->devcmd2)
vnic_dev_deinit_devcmd2(vdev);
kfree(vdev);
}
}
@@ -683,13 +928,26 @@ struct vnic_dev *vnic_dev_register(struct vnic_dev *vdev,
if (vnic_dev_discover_res(vdev, bar))
goto err_out;

vdev->devcmd = vnic_dev_get_res(vdev, RES_TYPE_DEVCMD, 0);
if (!vdev->devcmd)
goto err_out;

return vdev;

err_out:
vnic_dev_unregister(vdev);
return NULL;
}

int vnic_dev_cmd_init(struct vnic_dev *vdev)
{
int err;
void *p;

p = vnic_dev_get_res(vdev, RES_TYPE_DEVCMD2, 0);
if (p) {
pr_err("fnic: DEVCMD2 resource found!\n");
err = vnic_dev_init_devcmd2(vdev);
} else {
pr_err("fnic: DEVCMD2 not found, fall back to Devcmd\n");
err = vnic_dev_init_devcmd1(vdev);
}

return err;
}

@@ -36,6 +36,7 @@
#define vnic_dev_fw_info fnic_dev_fw_info
#define vnic_dev_spec fnic_dev_spec
#define vnic_dev_stats_clear fnic_dev_stats_clear
#define vnic_dev_cmd_init fnic_dev_cmd_init
#define vnic_dev_stats_dump fnic_dev_stats_dump
#define vnic_dev_hang_notify fnic_dev_hang_notify
#define vnic_dev_packet_filter fnic_dev_packet_filter
@@ -128,6 +129,7 @@ int vnic_dev_fw_info(struct vnic_dev *vdev,
int vnic_dev_spec(struct vnic_dev *vdev, unsigned int offset,
unsigned int size, void *value);
int vnic_dev_stats_clear(struct vnic_dev *vdev);
int vnic_dev_cmd_init(struct vnic_dev *vdev);
int vnic_dev_stats_dump(struct vnic_dev *vdev, struct vnic_stats **stats);
int vnic_dev_hang_notify(struct vnic_dev *vdev);
void vnic_dev_packet_filter(struct vnic_dev *vdev, int directed, int multicast,

@@ -170,7 +170,8 @@ enum vnic_devcmd_cmd {

/* variant of CMD_INIT, with provisioning info
* (u64)a0=paddr of vnic_devcmd_provinfo
* (u32)a1=sizeof provision info */
* (u32)a1=sizeof provision info
*/
CMD_INIT_PROV_INFO = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 27),

/* enable virtual link */
@@ -262,12 +263,132 @@ enum vnic_devcmd_cmd {
* non-zero for resetting vlan to the default
* out: (u16)a0=old default vlan
*/
CMD_SET_DEFAULT_VLAN = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 46)
CMD_SET_DEFAULT_VLAN = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 46),

/* init_prov_info2:
* Variant of CMD_INIT_PROV_INFO, where it will not try to enable
* the vnic until CMD_ENABLE2 is issued.
* (u64)a0=paddr of vnic_devcmd_provinfo
* (u32)a1=sizeof provision info
*/
CMD_INIT_PROV_INFO2 = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 47),

/* enable2:
* (u32)a0=0 ==> standby
* =CMD_ENABLE2_ACTIVE ==> active
*/
CMD_ENABLE2 = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 48),

/*
* cmd_status:
* Returns the status of the specified command
* Input:
* a0 = command for which status is being queried.
* Possible values are:
* CMD_SOFT_RESET
* CMD_HANG_RESET
* CMD_OPEN
* CMD_INIT
* CMD_INIT_PROV_INFO
* CMD_DEINIT
* CMD_INIT_PROV_INFO2
* CMD_ENABLE2
* Output:
* if status == STAT_ERROR
* a0 = ERR_ENOTSUPPORTED - status for command in a0 is
* not supported
* if status == STAT_NONE
* a0 = status of the devcmd specified in a0 as follows.
* ERR_SUCCESS - command in a0 completed successfully
* ERR_EINPROGRESS - command in a0 is still in progress
*/
CMD_STATUS = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 49),

/*
* Returns interrupt coalescing timer conversion factors.
* After calling this devcmd, ENIC driver can convert
* interrupt coalescing timer in usec into CPU cycles as follows:
*
* intr_timer_cycles = intr_timer_usec * multiplier / divisor
*
* Interrupt coalescing timer in usecs can be converted/obtained
* from CPU cycles as follows:
*
* intr_timer_usec = intr_timer_cycles * divisor / multiplier
*
* in: none
* out: (u32)a0 = multiplier
* (u32)a1 = divisor
* (u32)a2 = maximum timer value in usec
*/
CMD_INTR_COAL_CONVERT = _CMDC(_CMD_DIR_READ, _CMD_VTYPE_ALL, 50),

/*
* ISCSI DUMP API:
* in: (u64)a0=paddr of the param or param itself
* (u32)a1=ISCSI_CMD_xxx
*/
CMD_ISCSI_DUMP_REQ = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ALL, 51),

/*
* ISCSI DUMP STATUS API:
* in: (u32)a0=cmd tag
* in: (u32)a1=ISCSI_CMD_xxx
* out: (u32)a0=cmd status
*/
CMD_ISCSI_DUMP_STATUS = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 52),

/*
* Subvnic migration from MQ <--> VF.
* Enable the LIF migration from MQ to VF and vice versa. MQ and VF
* indexes are statically bound at the time of initialization.
* Based on the
* direction of migration, the resources of either MQ or the VF shall
* be attached to the LIF.
* in: (u32)a0=Direction of Migration
* 0=> Migrate to VF
* 1=> Migrate to MQ
* (u32)a1=VF index (MQ index)
*/
CMD_MIGRATE_SUBVNIC = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 53),

/*
* Register / Deregister the notification block for MQ subvnics
* in:
* (u64)a0=paddr to notify (set paddr=0 to unset)
* (u32)a1 & 0x00000000ffffffff=sizeof(struct vnic_devcmd_notify)
* (u16)a1 & 0x0000ffff00000000=intr num (-1 for no intr)
* out:
* (u32)a1 = effective size
*/
CMD_SUBVNIC_NOTIFY = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 54),

/*
* Set the predefined mac address as default
* in:
* (u48)a0=mac addr
*/
CMD_SET_MAC_ADDR = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 55),

/* Update the provisioning info of the given VIF
* (u64)a0=paddr of vnic_devcmd_provinfo
* (u32)a1=sizeof provision info
*/
CMD_PROV_INFO_UPDATE = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 56),

/*
* Initialization for the devcmd2 interface.
* in: (u64) a0=host result buffer physical address
* in: (u16) a1=number of entries in result buffer
*/
CMD_INITIALIZE_DEVCMD2 = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ALL, 57)
};

/* flags for CMD_OPEN */
#define CMD_OPENF_OPROM 0x1 /* open coming from option rom */

#define CMD_OPENF_RQ_ENABLE_THEN_POST 0x2

/* flags for CMD_INIT */
#define CMD_INITF_DEFAULT_MAC 0x1 /* init with default mac addr */

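
The CMD_INTR_COAL_CONVERT conversion described above is plain fixed-point scaling; a quick standalone check of both directions with made-up factor values (the real multiplier and divisor come back in a0 and a1):

#include <stdio.h>

int main(void)
{
	/* Hypothetical conversion factors, as if returned in a0/a1. */
	unsigned int multiplier = 2500;
	unsigned int divisor = 10;
	unsigned int intr_timer_usec = 4;

	unsigned int cycles = intr_timer_usec * multiplier / divisor;
	unsigned int usec_back = cycles * divisor / multiplier;

	/* Prints: 4 usec -> 1000 cycles -> 4 usec */
	printf("%u usec -> %u cycles -> %u usec\n",
	       intr_timer_usec, cycles, usec_back);
	return 0;
}
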
@@ -345,4 +466,39 @@ struct vnic_devcmd {
u64 args[VNIC_DEVCMD_NARGS]; /* RW cmd args (little-endian) */
};

/*
* Version 2 of the interface.
*
* Some things are carried over, notably the vnic_devcmd_cmd enum.
*/

/*
* Flags for vnic_devcmd2.flags
*/

#define DEVCMD2_FNORESULT 0x1 /* Don't copy result to host */

#define VNIC_DEVCMD2_NARGS VNIC_DEVCMD_NARGS

struct vnic_devcmd2 {
u16 pad;
u16 flags;
u32 cmd; /* same command #defines as original */
u64 args[VNIC_DEVCMD2_NARGS];
};

#define VNIC_DEVCMD2_NRESULTS VNIC_DEVCMD_NARGS
struct devcmd2_result {
u64 results[VNIC_DEVCMD2_NRESULTS];
u32 pad;
u16 completed_index; /* into copy WQ */
u8 error; /* same error codes as original */
u8 color; /* 0 or 1 as with completion queues */
};

#define DEVCMD2_RING_SIZE 32
#define DEVCMD2_DESC_SIZE 128

#define DEVCMD2_RESULTS_SIZE_MAX ((1 << 16) - 1)

#endif /* _VNIC_DEVCMD_H_ */

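Both descriptor layouts above are meant to fill one 128-byte DEVCMD2_DESC_SIZE ring slot. Assuming VNIC_DEVCMD_NARGS is 15 (an assumption; the value is defined elsewhere in this header), each works out to exactly 128 bytes, which a standalone compile-time check can confirm:

#include <stdint.h>

#define VNIC_DEVCMD2_NARGS	15	/* assumed value of VNIC_DEVCMD_NARGS */
#define DEVCMD2_DESC_SIZE	128

struct vnic_devcmd2 {
	uint16_t pad;
	uint16_t flags;
	uint32_t cmd;
	uint64_t args[VNIC_DEVCMD2_NARGS];
};

struct devcmd2_result {
	uint64_t results[VNIC_DEVCMD2_NARGS];
	uint32_t pad;
	uint16_t completed_index;
	uint8_t error;
	uint8_t color;
};

/* 8 header bytes + 15 * 8 argument bytes = 128 for the command;
 * 15 * 8 result bytes + 8 trailing bytes = 128 for the result. */
_Static_assert(sizeof(struct vnic_devcmd2) == DEVCMD2_DESC_SIZE,
	       "command descriptor must fill one ring slot");
_Static_assert(sizeof(struct devcmd2_result) == DEVCMD2_DESC_SIZE,
	       "result descriptor must fill one ring slot");

int main(void) { return 0; }
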
@@ -41,6 +41,13 @@ enum vnic_res_type {
RES_TYPE_RSVD7,
RES_TYPE_DEVCMD, /* Device command region */
RES_TYPE_PASS_THRU_PAGE, /* Pass-thru page */
RES_TYPE_SUBVNIC, /* subvnic resource type */
RES_TYPE_MQ_WQ, /* MQ Work queues */
RES_TYPE_MQ_RQ, /* MQ Receive queues */
RES_TYPE_MQ_CQ, /* MQ Completion queues */
RES_TYPE_DEPRECATED1, /* Old version of devcmd 2 */
RES_TYPE_DEPRECATED2, /* Old version of devcmd 2 */
RES_TYPE_DEVCMD2, /* Device control region */

RES_TYPE_MAX, /* Count of resource types */
};

@@ -27,12 +27,9 @@
static int vnic_rq_alloc_bufs(struct vnic_rq *rq)
{
struct vnic_rq_buf *buf;
struct vnic_dev *vdev;
unsigned int i, j, count = rq->ring.desc_count;
unsigned int blks = VNIC_RQ_BUF_BLKS_NEEDED(count);

vdev = rq->vdev;

for (i = 0; i < blks; i++) {
rq->bufs[i] = kzalloc(VNIC_RQ_BUF_BLK_SZ, GFP_ATOMIC);
if (!rq->bufs[i]) {
@@ -171,7 +168,7 @@ void vnic_rq_clean(struct vnic_rq *rq,
struct vnic_rq_buf *buf;
u32 fetch_index;

BUG_ON(ioread32(&rq->ctrl->enable));
WARN_ON(ioread32(&rq->ctrl->enable));

buf = rq->to_clean;


@@ -24,15 +24,32 @@
#include "vnic_dev.h"
#include "vnic_wq.h"


int vnic_wq_get_ctrl(struct vnic_dev *vdev, struct vnic_wq *wq,
unsigned int index, enum vnic_res_type res_type)
{
wq->ctrl = vnic_dev_get_res(vdev, res_type, index);

if (!wq->ctrl)
return -EINVAL;

return 0;
}


int vnic_wq_alloc_ring(struct vnic_dev *vdev, struct vnic_wq *wq,
unsigned int desc_count, unsigned int desc_size)
{
return vnic_dev_alloc_desc_ring(vdev, &wq->ring, desc_count, desc_size);
}


static int vnic_wq_alloc_bufs(struct vnic_wq *wq)
{
struct vnic_wq_buf *buf;
struct vnic_dev *vdev;
unsigned int i, j, count = wq->ring.desc_count;
unsigned int blks = VNIC_WQ_BUF_BLKS_NEEDED(count);

vdev = wq->vdev;

for (i = 0; i < blks; i++) {
wq->bufs[i] = kzalloc(VNIC_WQ_BUF_BLK_SZ, GFP_ATOMIC);
if (!wq->bufs[i]) {
@@ -111,6 +128,52 @@ int vnic_wq_alloc(struct vnic_dev *vdev, struct vnic_wq *wq, unsigned int index,
return 0;
}


int vnic_wq_devcmd2_alloc(struct vnic_dev *vdev, struct vnic_wq *wq,
unsigned int desc_count, unsigned int desc_size)
{
int err;

wq->index = 0;
wq->vdev = vdev;

err = vnic_wq_get_ctrl(vdev, wq, 0, RES_TYPE_DEVCMD2);
if (err) {
pr_err("Failed to get devcmd2 resource\n");
return err;
}
vnic_wq_disable(wq);

err = vnic_wq_alloc_ring(vdev, wq, desc_count, desc_size);
if (err)
return err;
return 0;
}

void vnic_wq_init_start(struct vnic_wq *wq, unsigned int cq_index,
unsigned int fetch_index, unsigned int posted_index,
unsigned int error_interrupt_enable,
unsigned int error_interrupt_offset)
{
u64 paddr;
unsigned int count = wq->ring.desc_count;

paddr = (u64)wq->ring.base_addr | VNIC_PADDR_TARGET;
writeq(paddr, &wq->ctrl->ring_base);
iowrite32(count, &wq->ctrl->ring_size);
iowrite32(fetch_index, &wq->ctrl->fetch_index);
iowrite32(posted_index, &wq->ctrl->posted_index);
iowrite32(cq_index, &wq->ctrl->cq_index);
iowrite32(error_interrupt_enable, &wq->ctrl->error_interrupt_enable);
iowrite32(error_interrupt_offset, &wq->ctrl->error_interrupt_offset);
iowrite32(0, &wq->ctrl->error_status);

wq->to_use = wq->to_clean =
&wq->bufs[fetch_index / VNIC_WQ_BUF_BLK_ENTRIES]
[fetch_index % VNIC_WQ_BUF_BLK_ENTRIES];
}


void vnic_wq_init(struct vnic_wq *wq, unsigned int cq_index,
unsigned int error_interrupt_enable,
unsigned int error_interrupt_offset)

@@ -33,6 +33,8 @@
#define vnic_wq_service fnic_wq_service
#define vnic_wq_free fnic_wq_free
#define vnic_wq_alloc fnic_wq_alloc
#define vnic_wq_devcmd2_alloc fnic_wq_devcmd2_alloc
#define vnic_wq_init_start fnic_wq_init_start
#define vnic_wq_init fnic_wq_init
#define vnic_wq_error_status fnic_wq_error_status
#define vnic_wq_enable fnic_wq_enable
@@ -163,6 +165,12 @@ static inline void vnic_wq_service(struct vnic_wq *wq,
void vnic_wq_free(struct vnic_wq *wq);
int vnic_wq_alloc(struct vnic_dev *vdev, struct vnic_wq *wq, unsigned int index,
unsigned int desc_count, unsigned int desc_size);
int vnic_wq_devcmd2_alloc(struct vnic_dev *vdev, struct vnic_wq *wq,
unsigned int desc_count, unsigned int desc_size);
void vnic_wq_init_start(struct vnic_wq *wq, unsigned int cq_index,
unsigned int fetch_index, unsigned int posted_index,
unsigned int error_interrupt_enable,
unsigned int error_interrupt_offset);
void vnic_wq_init(struct vnic_wq *wq, unsigned int cq_index,
unsigned int error_interrupt_enable,
unsigned int error_interrupt_offset);

drivers/scsi/gdth.c (1282 changed lines): diff suppressed because it is too large

@ -38,17 +38,9 @@
|
|||
#define OEM_ID_INTEL 0x8000
|
||||
|
||||
/* controller classes */
|
||||
#define GDT_ISA 0x01 /* ISA controller */
|
||||
#define GDT_EISA 0x02 /* EISA controller */
|
||||
#define GDT_PCI 0x03 /* PCI controller */
|
||||
#define GDT_PCINEW 0x04 /* new PCI controller */
|
||||
#define GDT_PCIMPR 0x05 /* PCI MPR controller */
|
||||
/* GDT_EISA, controller subtypes EISA */
|
||||
#define GDT3_ID 0x0130941c /* GDT3000/3020 */
|
||||
#define GDT3A_ID 0x0230941c /* GDT3000A/3020A/3050A */
|
||||
#define GDT3B_ID 0x0330941c /* GDT3000B/3010A */
|
||||
/* GDT_ISA */
|
||||
#define GDT2_ID 0x0120941c /* GDT2000/2020 */
|
||||
|
||||
#ifndef PCI_DEVICE_ID_VORTEX_GDT60x0
|
||||
/* GDT_PCI */
|
||||
|
@ -281,17 +273,6 @@
|
|||
#define GDTH_DATA_IN 0x01000000L /* data from target */
|
||||
#define GDTH_DATA_OUT 0x00000000L /* data to target */
|
||||
|
||||
/* BMIC registers (EISA controllers) */
|
||||
#define ID0REG 0x0c80 /* board ID */
|
||||
#define EINTENABREG 0x0c89 /* interrupt enable */
|
||||
#define SEMA0REG 0x0c8a /* command semaphore */
|
||||
#define SEMA1REG 0x0c8b /* status semaphore */
|
||||
#define LDOORREG 0x0c8d /* local doorbell */
|
||||
#define EDENABREG 0x0c8e /* EISA system doorbell enab. */
|
||||
#define EDOORREG 0x0c8f /* EISA system doorbell */
|
||||
#define MAILBOXREG 0x0c90 /* mailbox reg. (16 bytes) */
|
||||
#define EISAREG 0x0cc0 /* EISA configuration */
|
||||
|
||||
/* other defines */
|
||||
#define LINUX_OS 8 /* used for cache optim. */
|
||||
#define SECS32 0x1f /* round capacity */
|
||||
|
@ -706,21 +687,11 @@ typedef struct {
|
|||
u8 fw_magic; /* contr. ID from firmware */
|
||||
} __attribute__((packed)) gdt_pci_sram;
|
||||
|
||||
/* SRAM structure EISA controllers (but NOT GDT3000/3020) */
|
||||
typedef struct {
|
||||
u8 os_used[16]; /* OS code per service */
|
||||
u16 need_deinit; /* switch betw. BIOS/driver */
|
||||
u8 switch_support; /* see need_deinit */
|
||||
u8 padding;
|
||||
} __attribute__((packed)) gdt_eisa_sram;
|
||||
|
||||
|
||||
/* DPRAM ISA controllers */
|
||||
typedef struct {
|
||||
union {
|
||||
struct {
|
||||
u8 bios_used[0x3c00-32]; /* 15KB - 32Bytes BIOS */
|
||||
u32 magic; /* controller (EISA) ID */
|
||||
u16 need_deinit; /* switch betw. BIOS/driver */
|
||||
u8 switch_support; /* see need_deinit */
|
||||
u8 padding[9];
|
||||
|
@ -843,7 +814,6 @@ typedef struct {
|
|||
u16 cache_feat; /* feat. cache serv. (s/g,..)*/
|
||||
u16 raw_feat; /* feat. raw service (s/g,..)*/
|
||||
u16 screen_feat; /* feat. raw service (s/g,..)*/
|
||||
u16 bmic; /* BMIC address (EISA) */
|
||||
void __iomem *brd; /* DPRAM address */
|
||||
u32 brd_phys; /* slot number/BIOS address */
|
||||
gdt6c_plx_regs *plx; /* PLX regs (new PCI contr.) */
|
||||
|
|
|
@@ -27,11 +27,7 @@
 #define GDTH_MAXSG      32                      /* max. s/g elements */
 
 #define MAX_LDRIVES     255                     /* max. log. drive count */
-#ifdef GDTH_IOCTL_PROC
-#define MAX_HDRIVES     100                     /* max. host drive count */
-#else
 #define MAX_HDRIVES     MAX_LDRIVES             /* max. host drive count */
-#endif
 
 /* scatter/gather element */
 typedef struct {
@@ -178,91 +174,6 @@ typedef struct {
     gdth_evt_data event_data;
 } __attribute__((packed)) gdth_evt_str;
 
-
-#ifdef GDTH_IOCTL_PROC
-/* IOCTL structure (write) */
-typedef struct {
-    u32 magic;                          /* IOCTL magic */
-    u16 ioctl;                          /* IOCTL */
-    u16 ionode;                         /* controller number */
-    u16 service;                        /* controller service */
-    u16 timeout;                        /* timeout */
-    union {
-        struct {
-            u8 command[512];            /* controller command */
-            u8 data[1];                 /* add. data */
-        } general;
-        struct {
-            u8 lock;                    /* lock/unlock */
-            u8 drive_cnt;               /* drive count */
-            u16 drives[MAX_HDRIVES];    /* drives */
-        } lockdrv;
-        struct {
-            u8 lock;                    /* lock/unlock */
-            u8 channel;                 /* channel */
-        } lockchn;
-        struct {
-            int erase;                  /* erase event ? */
-            int handle;
-            u8 evt[EVENT_SIZE];         /* event structure */
-        } event;
-        struct {
-            u8 bus;                     /* SCSI bus */
-            u8 target;                  /* target ID */
-            u8 lun;                     /* LUN */
-            u8 cmd_len;                 /* command length */
-            u8 cmd[12];                 /* SCSI command */
-        } scsi;
-        struct {
-            u16 hdr_no;                 /* host drive number */
-            u8 flag;                    /* old meth./add/remove */
-        } rescan;
-    } iu;
-} gdth_iowr_str;
-
-/* IOCTL structure (read) */
-typedef struct {
-    u32 size;                           /* buffer size */
-    u32 status;                         /* IOCTL error code */
-    union {
-        struct {
-            u8 data[1];                 /* data */
-        } general;
-        struct {
-            u16 version;                /* driver version */
-        } drvers;
-        struct {
-            u8 type;                    /* controller type */
-            u16 info;                   /* slot etc. */
-            u16 oem_id;                 /* OEM ID */
-            u16 bios_ver;               /* not used */
-            u16 access;                 /* not used */
-            u16 ext_type;               /* extended type */
-            u16 device_id;              /* device ID */
-            u16 sub_device_id;          /* sub device ID */
-        } ctrtype;
-        struct {
-            u8 version;                 /* OS version */
-            u8 subversion;              /* OS subversion */
-            u16 revision;               /* revision */
-        } osvers;
-        struct {
-            u16 count;                  /* controller count */
-        } ctrcnt;
-        struct {
-            int handle;
-            u8 evt[EVENT_SIZE];         /* event structure */
-        } event;
-        struct {
-            u8 bus;                     /* SCSI bus, 0xff: invalid */
-            u8 target;                  /* target ID */
-            u8 lun;                     /* LUN */
-            u8 cluster_type;            /* cluster properties */
-        } hdr_list[MAX_HDRIVES];        /* index is host drive number */
-    } iu;
-} gdth_iord_str;
-#endif
-
 /* GDTIOCTL_GENERAL */
 typedef struct {
     u16 ionode;                         /* controller number */
@@ -31,7 +31,6 @@ static int gdth_set_asc_info(struct Scsi_Host *host, char *buffer,
     int i, found;
     gdth_cmd_str gdtcmd;
     gdth_cpar_str *pcpar;
-    u64 paddr;
 
     char cmnd[MAX_COMMAND_SIZE];
     memset(cmnd, 0xff, 12);
@@ -113,13 +112,23 @@ static int gdth_set_asc_info(struct Scsi_Host *host, char *buffer,
     }
 
     if (wb_mode) {
-        if (!gdth_ioctl_alloc(ha, sizeof(gdth_cpar_str), TRUE, &paddr))
-            return(-EBUSY);
+        unsigned long flags;
+
+        BUILD_BUG_ON(sizeof(gdth_cpar_str) > GDTH_SCRATCH);
+
+        spin_lock_irqsave(&ha->smp_lock, flags);
+        if (ha->scratch_busy) {
+            spin_unlock_irqrestore(&ha->smp_lock, flags);
+            return -EBUSY;
+        }
+        ha->scratch_busy = TRUE;
+        spin_unlock_irqrestore(&ha->smp_lock, flags);
+
         pcpar = (gdth_cpar_str *)ha->pscratch;
         memcpy( pcpar, &ha->cpar, sizeof(gdth_cpar_str) );
         gdtcmd.Service = CACHESERVICE;
         gdtcmd.OpCode = GDT_IOCTL;
-        gdtcmd.u.ioctl.p_param = paddr;
+        gdtcmd.u.ioctl.p_param = ha->scratch_phys;
         gdtcmd.u.ioctl.param_size = sizeof(gdth_cpar_str);
         gdtcmd.u.ioctl.subfunc = CACHE_CONFIG;
         gdtcmd.u.ioctl.channel = INVALID_CHANNEL;
@@ -127,7 +136,10 @@ static int gdth_set_asc_info(struct Scsi_Host *host, char *buffer,
 
         gdth_execute(host, &gdtcmd, cmnd, 30, NULL);
 
-        gdth_ioctl_free(ha, GDTH_SCRATCH, ha->pscratch, paddr);
+        spin_lock_irqsave(&ha->smp_lock, flags);
+        ha->scratch_busy = FALSE;
+        spin_unlock_irqrestore(&ha->smp_lock, flags);
+
         printk("Done.\n");
         return(orig_length);
     }
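The hunk above open-codes what the driver's old gdth_ioctl_alloc() helper did for the scratch case: claim a single preallocated buffer under the adapter spinlock, or fail with -EBUSY if it is already in flight. A minimal sketch of that claim/release idiom, with hypothetical names (not the driver's own):

    #include <linux/spinlock.h>
    #include <linux/errno.h>

    struct scratch_owner {
        spinlock_t lock;
        bool busy;      /* is the one preallocated buffer in flight? */
        void *buf;      /* set up once at probe time */
    };

    /* Claim the single scratch buffer, or tell the caller to retry. */
    static int scratch_claim(struct scratch_owner *so)
    {
        unsigned long flags;

        spin_lock_irqsave(&so->lock, flags);
        if (so->busy) {
            spin_unlock_irqrestore(&so->lock, flags);
            return -EBUSY;
        }
        so->busy = true;
        spin_unlock_irqrestore(&so->lock, flags);
        return 0;
    }

    static void scratch_release(struct scratch_owner *so)
    {
        unsigned long flags;

        spin_lock_irqsave(&so->lock, flags);
        so->busy = false;
        spin_unlock_irqrestore(&so->lock, flags);
    }

The lock is only held around the flag test-and-set, never across the firmware command itself, which is why the BUILD_BUG_ON above can assert at compile time that the request fits the fixed-size buffer.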
@@ -143,7 +155,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host *host)
     int id, i, j, k, sec, flag;
     int no_mdrv = 0, drv_no, is_mirr;
     u32 cnt;
-    u64 paddr;
+    dma_addr_t paddr;
     int rc = -ENOMEM;
 
     gdth_cmd_str *gdtcmd;
@@ -217,20 +229,14 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host *host)
                " Serial No.: \t0x%8X\tCache RAM size:\t%d KB\n",
                ha->binfo.ser_no, ha->binfo.memsize / 1024);
 
-#ifdef GDTH_DMA_STATISTICS
-    /* controller statistics */
-    seq_puts(m, "\nController Statistics:\n");
-    seq_printf(m,
-               " 32-bit DMA buffer:\t%lu\t64-bit DMA buffer:\t%lu\n",
-               ha->dma32_cnt, ha->dma64_cnt);
-#endif
-
     if (ha->more_proc) {
+        size_t size = max_t(size_t, GDTH_SCRATCH, sizeof(gdth_hget_str));
+
         /* more information: 2. about physical devices */
         seq_puts(m, "\nPhysical Devices:");
         flag = FALSE;
 
-        buf = gdth_ioctl_alloc(ha, GDTH_SCRATCH, FALSE, &paddr);
+        buf = dma_alloc_coherent(&ha->pdev->dev, size, &paddr, GFP_KERNEL);
         if (!buf)
             goto stop_output;
         for (i = 0; i < ha->bus_cnt; ++i) {
@@ -323,7 +329,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host *host)
                 }
             }
         }
-        gdth_ioctl_free(ha, GDTH_SCRATCH, buf, paddr);
 
         if (!flag)
             seq_puts(m, "\n --\n");
@@ -332,9 +337,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host *host)
         seq_puts(m, "\nLogical Drives:");
         flag = FALSE;
 
-        buf = gdth_ioctl_alloc(ha, GDTH_SCRATCH, FALSE, &paddr);
-        if (!buf)
-            goto stop_output;
         for (i = 0; i < MAX_LDRIVES; ++i) {
             if (!ha->hdr[i].is_logdrv)
                 continue;
@@ -408,8 +410,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host *host)
             seq_printf(m,
                        " To Array Drv.:\t%s\n", hrec);
         }
-        gdth_ioctl_free(ha, GDTH_SCRATCH, buf, paddr);
 
         if (!flag)
             seq_puts(m, "\n --\n");
@@ -417,9 +418,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host *host)
         seq_puts(m, "\nArray Drives:");
         flag = FALSE;
 
-        buf = gdth_ioctl_alloc(ha, GDTH_SCRATCH, FALSE, &paddr);
-        if (!buf)
-            goto stop_output;
         for (i = 0; i < MAX_LDRIVES; ++i) {
             if (!(ha->hdr[i].is_arraydrv && ha->hdr[i].is_master))
                 continue;
@@ -468,8 +466,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host *host)
                            hrec);
             }
         }
-        gdth_ioctl_free(ha, GDTH_SCRATCH, buf, paddr);
 
         if (!flag)
             seq_puts(m, "\n --\n");
@@ -477,9 +474,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host *host)
         seq_puts(m, "\nHost Drives:");
         flag = FALSE;
 
-        buf = gdth_ioctl_alloc(ha, sizeof(gdth_hget_str), FALSE, &paddr);
-        if (!buf)
-            goto stop_output;
         for (i = 0; i < MAX_LDRIVES; ++i) {
             if (!ha->hdr[i].is_logdrv ||
                 (ha->hdr[i].is_arraydrv && !ha->hdr[i].is_master))
@@ -510,7 +504,7 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host *host)
             }
         }
     }
-    gdth_ioctl_free(ha, sizeof(gdth_hget_str), buf, paddr);
+    dma_free_coherent(&ha->pdev->dev, size, buf, paddr);
 
     for (i = 0; i < MAX_HDRIVES; ++i) {
         if (!(ha->hdr[i].present))
@@ -563,65 +557,6 @@ int gdth_show_info(struct seq_file *m, struct Scsi_Host *host)
     return rc;
 }
 
-static char *gdth_ioctl_alloc(gdth_ha_str *ha, int size, int scratch,
-                              u64 *paddr)
-{
-    unsigned long flags;
-    char *ret_val;
-
-    if (size == 0)
-        return NULL;
-
-    spin_lock_irqsave(&ha->smp_lock, flags);
-
-    if (!ha->scratch_busy && size <= GDTH_SCRATCH) {
-        ha->scratch_busy = TRUE;
-        ret_val = ha->pscratch;
-        *paddr = ha->scratch_phys;
-    } else if (scratch) {
-        ret_val = NULL;
-    } else {
-        dma_addr_t dma_addr;
-
-        ret_val = pci_alloc_consistent(ha->pdev, size, &dma_addr);
-        *paddr = dma_addr;
-    }
-
-    spin_unlock_irqrestore(&ha->smp_lock, flags);
-    return ret_val;
-}
-
-static void gdth_ioctl_free(gdth_ha_str *ha, int size, char *buf, u64 paddr)
-{
-    unsigned long flags;
-
-    if (buf == ha->pscratch) {
-        spin_lock_irqsave(&ha->smp_lock, flags);
-        ha->scratch_busy = FALSE;
-        spin_unlock_irqrestore(&ha->smp_lock, flags);
-    } else {
-        pci_free_consistent(ha->pdev, size, buf, paddr);
-    }
-}
-
-#ifdef GDTH_IOCTL_PROC
-static int gdth_ioctl_check_bin(gdth_ha_str *ha, u16 size)
-{
-    unsigned long flags;
-    int ret_val;
-
-    spin_lock_irqsave(&ha->smp_lock, flags);
-
-    ret_val = FALSE;
-    if (ha->scratch_busy) {
-        if (((gdth_iord_str *)ha->pscratch)->size == (u32)size)
-            ret_val = TRUE;
-    }
-    spin_unlock_irqrestore(&ha->smp_lock, flags);
-    return ret_val;
-}
-#endif
 
 static void gdth_wait_completion(gdth_ha_str *ha, int busnum, int id)
 {
     unsigned long flags;
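With the deprecated pci_alloc_consistent()/pci_free_consistent() wrappers removed, the /proc dump path calls the generic DMA API directly. A hedged sketch of that allocate/use/free round trip; the function name and size are placeholders, not driver code:

    #include <linux/dma-mapping.h>
    #include <linux/pci.h>

    static int run_fw_query(struct pci_dev *pdev, size_t size)
    {
        dma_addr_t paddr;
        void *buf;

        buf = dma_alloc_coherent(&pdev->dev, size, &paddr, GFP_KERNEL);
        if (!buf)
            return -ENOMEM;

        /*
         * Hand 'paddr' to the controller and read its reply back
         * through 'buf'; coherent memory needs no explicit sync calls.
         */

        dma_free_coherent(&pdev->dev, size, buf, paddr);
        return 0;
    }

Sizing the buffer once as max_t(size_t, GDTH_SCRATCH, sizeof(gdth_hget_str)) lets a single allocation serve every query in the function, which is why the per-section alloc/free pairs above could be deleted.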
@@ -12,9 +12,6 @@ int gdth_execute(struct Scsi_Host *shost, gdth_cmd_str *gdtcmd, char *cmnd,
 static int gdth_set_asc_info(struct Scsi_Host *host, char *buffer,
                              int length, gdth_ha_str *ha);
 
-static char *gdth_ioctl_alloc(gdth_ha_str *ha, int size, int scratch,
-                              u64 *paddr);
-static void gdth_ioctl_free(gdth_ha_str *ha, int size, char *buf, u64 paddr);
 static void gdth_wait_completion(gdth_ha_str *ha, int busnum, int id);
 
 #endif
@@ -14,6 +14,7 @@
 
 #include <linux/acpi.h>
 #include <linux/clk.h>
+#include <linux/debugfs.h>
 #include <linux/dmapool.h>
 #include <linux/iopoll.h>
 #include <linux/lcm.h>
@@ -29,7 +30,7 @@
 
 #define HISI_SAS_MAX_PHYS		9
 #define HISI_SAS_MAX_QUEUES		32
-#define HISI_SAS_QUEUE_SLOTS		512
+#define HISI_SAS_QUEUE_SLOTS		4096
 #define HISI_SAS_MAX_ITCT_ENTRIES	1024
 #define HISI_SAS_MAX_DEVICES		HISI_SAS_MAX_ITCT_ENTRIES
 #define HISI_SAS_RESET_BIT		0
@@ -40,20 +41,25 @@
 #define HISI_SAS_COMMAND_TABLE_SZ (sizeof(union hisi_sas_command_table))
 
 #define hisi_sas_status_buf_addr(buf) \
-    (buf + offsetof(struct hisi_sas_slot_buf_table, status_buffer))
-#define hisi_sas_status_buf_addr_mem(slot) hisi_sas_status_buf_addr(slot->buf)
+    ((buf) + offsetof(struct hisi_sas_slot_buf_table, status_buffer))
+#define hisi_sas_status_buf_addr_mem(slot) hisi_sas_status_buf_addr((slot)->buf)
 #define hisi_sas_status_buf_addr_dma(slot) \
-    hisi_sas_status_buf_addr(slot->buf_dma)
+    hisi_sas_status_buf_addr((slot)->buf_dma)
 
 #define hisi_sas_cmd_hdr_addr(buf) \
-    (buf + offsetof(struct hisi_sas_slot_buf_table, command_header))
-#define hisi_sas_cmd_hdr_addr_mem(slot) hisi_sas_cmd_hdr_addr(slot->buf)
-#define hisi_sas_cmd_hdr_addr_dma(slot) hisi_sas_cmd_hdr_addr(slot->buf_dma)
+    ((buf) + offsetof(struct hisi_sas_slot_buf_table, command_header))
+#define hisi_sas_cmd_hdr_addr_mem(slot) hisi_sas_cmd_hdr_addr((slot)->buf)
+#define hisi_sas_cmd_hdr_addr_dma(slot) hisi_sas_cmd_hdr_addr((slot)->buf_dma)
 
 #define hisi_sas_sge_addr(buf) \
-    (buf + offsetof(struct hisi_sas_slot_buf_table, sge_page))
-#define hisi_sas_sge_addr_mem(slot) hisi_sas_sge_addr(slot->buf)
-#define hisi_sas_sge_addr_dma(slot) hisi_sas_sge_addr(slot->buf_dma)
+    ((buf) + offsetof(struct hisi_sas_slot_buf_table, sge_page))
+#define hisi_sas_sge_addr_mem(slot) hisi_sas_sge_addr((slot)->buf)
+#define hisi_sas_sge_addr_dma(slot) hisi_sas_sge_addr((slot)->buf_dma)
+
+#define hisi_sas_sge_dif_addr(buf) \
+    ((buf) + offsetof(struct hisi_sas_slot_dif_buf_table, sge_dif_page))
+#define hisi_sas_sge_dif_addr_mem(slot) hisi_sas_sge_dif_addr((slot)->buf)
+#define hisi_sas_sge_dif_addr_dma(slot) hisi_sas_sge_dif_addr((slot)->buf_dma)
 
 #define HISI_SAS_MAX_SSP_RESP_SZ (sizeof(struct ssp_frame_hdr) + 1024)
 #define HISI_SAS_MAX_SMP_RESP_SZ 1028
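The macro hunk above only adds parentheses around the `buf` and `slot` arguments. Without them an argument built from an expression binds to the wrong operator after substitution. A standalone demonstration of the pitfall (macro names invented for illustration):

    #include <stdio.h>

    #define OFF_BAD(buf)  (buf + 8)    /* argument substituted verbatim */
    #define OFF_GOOD(buf) ((buf) + 8)

    int main(void)
    {
        char base[32];
        int flag = 1;

        /* expands to (flag ? base : base + 16 + 8): '+ 8' glues to one arm */
        char *bad = OFF_BAD(flag ? base : base + 16);
        /* expands to ((flag ? base : base + 16) + 8), as intended */
        char *good = OFF_GOOD(flag ? base : base + 16);

        printf("bad=%p good=%p\n", (void *)bad, (void *)good);
        return 0;
    }

With flag set, the unparenthesized form yields `base` while the fixed form yields `base + 8`; the same skew would silently corrupt the slot-buffer address arithmetic these macros compute.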
@@ -73,7 +79,13 @@
                          SHOST_DIF_TYPE2_PROTECTION | \
                          SHOST_DIF_TYPE3_PROTECTION)
 
-#define HISI_SAS_PROT_MASK (HISI_SAS_DIF_PROT_MASK)
+#define HISI_SAS_DIX_PROT_MASK (SHOST_DIX_TYPE1_PROTECTION | \
+                        SHOST_DIX_TYPE2_PROTECTION | \
+                        SHOST_DIX_TYPE3_PROTECTION)
+
+#define HISI_SAS_PROT_MASK (HISI_SAS_DIF_PROT_MASK | HISI_SAS_DIX_PROT_MASK)
+
+#define HISI_SAS_WAIT_PHYUP_TIMEOUT 20
 
 struct hisi_hba;
 
@@ -82,11 +94,6 @@ enum {
     PORT_TYPE_SATA = (1U << 0),
 };
 
-enum dev_status {
-    HISI_SAS_DEV_NORMAL,
-    HISI_SAS_DEV_EH,
-};
-
 enum {
     HISI_SAS_INT_ABT_CMD = 0,
     HISI_SAS_INT_ABT_DEV = 1,
@@ -145,6 +152,7 @@ struct hisi_sas_phy {
     struct asd_sas_phy sas_phy;
     struct sas_identify identify;
     struct completion *reset_completion;
+    struct timer_list timer;
     spinlock_t lock;
     u64 port_id; /* from hw */
     u64 frame_rcvd_size;
@@ -165,6 +173,7 @@ struct hisi_sas_port {
 
 struct hisi_sas_cq {
     struct hisi_hba *hisi_hba;
+    const struct cpumask *pci_irq_mask;
     struct tasklet_struct tasklet;
     int rd_point;
     int id;
@@ -187,7 +196,7 @@ struct hisi_sas_device {
     enum sas_device_type dev_type;
     int device_id;
     int sata_idx;
-    u8 dev_status;
+    spinlock_t lock; /* For protecting slots */
 };
 
 struct hisi_sas_tmf_task {
@@ -203,12 +212,14 @@ struct hisi_sas_slot {
     struct sas_task *task;
     struct hisi_sas_port *port;
     u64 n_elem;
+    u64 n_elem_dif;
     int dlvry_queue;
     int dlvry_queue_slot;
     int cmplt_queue;
     int cmplt_queue_slot;
     int abort;
     int ready;
+    int device_id;
     void *cmd_hdr;
     dma_addr_t cmd_hdr_dma;
     struct timer_list internal_abort_timer;
@@ -220,6 +231,24 @@ struct hisi_sas_slot {
     u16 idx;
 };
 
+#define HISI_SAS_DEBUGFS_REG(x) {#x, x}
+
+struct hisi_sas_debugfs_reg_lu {
+    char *name;
+    int off;
+};
+
+struct hisi_sas_debugfs_reg {
+    const struct hisi_sas_debugfs_reg_lu *lu;
+    int count;
+    int base_off;
+    union {
+        u32 (*read_global_reg)(struct hisi_hba *hisi_hba, u32 off);
+        u32 (*read_port_reg)(struct hisi_hba *hisi_hba, int port,
+                             u32 off);
+    };
+};
+
 struct hisi_sas_hw {
     int (*hw_init)(struct hisi_hba *hisi_hba);
     void (*setup_itct)(struct hisi_hba *hisi_hba,
@@ -227,7 +256,7 @@ struct hisi_sas_hw {
     int (*slot_index_alloc)(struct hisi_hba *hisi_hba,
                             struct domain_device *device);
     struct hisi_sas_device *(*alloc_dev)(struct domain_device *device);
-    void (*sl_notify)(struct hisi_hba *hisi_hba, int phy_no);
+    void (*sl_notify_ssp)(struct hisi_hba *hisi_hba, int phy_no);
     int (*get_free_slot)(struct hisi_hba *hisi_hba, struct hisi_sas_dq *dq);
     void (*start_delivery)(struct hisi_sas_dq *dq);
     void (*prep_ssp)(struct hisi_hba *hisi_hba,
@@ -259,11 +288,16 @@ struct hisi_sas_hw {
     u32 (*get_phys_state)(struct hisi_hba *hisi_hba);
     int (*write_gpio)(struct hisi_hba *hisi_hba, u8 reg_type,
                       u8 reg_index, u8 reg_count, u8 *write_data);
-    void (*wait_cmds_complete_timeout)(struct hisi_hba *hisi_hba,
-                                       int delay_ms, int timeout_ms);
+    int (*wait_cmds_complete_timeout)(struct hisi_hba *hisi_hba,
+                                      int delay_ms, int timeout_ms);
+    void (*snapshot_prepare)(struct hisi_hba *hisi_hba);
+    void (*snapshot_restore)(struct hisi_hba *hisi_hba);
     int max_command_entries;
     int complete_hdr_size;
     struct scsi_host_template *sht;
+
+    const struct hisi_sas_debugfs_reg *debugfs_reg_global;
+    const struct hisi_sas_debugfs_reg *debugfs_reg_port;
 };
 
 struct hisi_hba {
@@ -329,9 +363,25 @@ struct hisi_hba {
     const struct hisi_sas_hw *hw;	/* Low level hw interface */
     unsigned long sata_dev_bitmap[BITS_TO_LONGS(HISI_SAS_MAX_DEVICES)];
     struct work_struct rst_work;
+    struct work_struct debugfs_work;
     u32 phy_state;
     u32 intr_coal_ticks;	/* Time of interrupt coalesce in us */
     u32 intr_coal_count;	/* Interrupt count to coalesce */
+
+    int cq_nvecs;
+    unsigned int *reply_map;
+
+    /* debugfs memories */
+    u32 *debugfs_global_reg;
+    u32 *debugfs_port_reg[HISI_SAS_MAX_PHYS];
+    void *debugfs_complete_hdr[HISI_SAS_MAX_QUEUES];
+    struct hisi_sas_cmd_hdr *debugfs_cmd_hdr[HISI_SAS_MAX_QUEUES];
+    struct hisi_sas_iost *debugfs_iost;
+    struct hisi_sas_itct *debugfs_itct;
+
+    struct dentry *debugfs_dir;
+    struct dentry *debugfs_dump_dentry;
+    bool debugfs_snapshot;
 };
 
 /* Generic HW DMA host memory structures */
@@ -430,6 +480,11 @@ struct hisi_sas_sge_page {
     struct hisi_sas_sge sge[HISI_SAS_SGE_PAGE_CNT];
 }  __aligned(16);
 
+#define HISI_SAS_SGE_DIF_PAGE_CNT   SG_CHUNK_SIZE
+struct hisi_sas_sge_dif_page {
+    struct hisi_sas_sge sge[HISI_SAS_SGE_DIF_PAGE_CNT];
+}  __aligned(16);
+
 struct hisi_sas_command_table_ssp {
     struct ssp_frame_hdr hdr;
     union {
@@ -460,9 +515,18 @@ struct hisi_sas_slot_buf_table {
     struct hisi_sas_sge_page sge_page;
 };
 
+struct hisi_sas_slot_dif_buf_table {
+    struct hisi_sas_slot_buf_table slot_buf;
+    struct hisi_sas_sge_dif_page sge_dif_page;
+};
+
 extern struct scsi_transport_template *hisi_sas_stt;
+
+extern bool hisi_sas_debugfs_enable;
+extern struct dentry *hisi_sas_debugfs_dir;
+
 extern void hisi_sas_stop_phys(struct hisi_hba *hisi_hba);
-extern int hisi_sas_alloc(struct hisi_hba *hisi_hba, struct Scsi_Host *shost);
+extern int hisi_sas_alloc(struct hisi_hba *hisi_hba);
 extern void hisi_sas_free(struct hisi_hba *hisi_hba);
 extern u8 hisi_sas_get_ata_protocol(struct host_to_dev_fis *fis,
                                     int direction);
@@ -487,10 +551,14 @@ extern void hisi_sas_init_mem(struct hisi_hba *hisi_hba);
 extern void hisi_sas_rst_work_handler(struct work_struct *work);
 extern void hisi_sas_sync_rst_work_handler(struct work_struct *work);
 extern void hisi_sas_kill_tasklets(struct hisi_hba *hisi_hba);
+extern void hisi_sas_phy_oob_ready(struct hisi_hba *hisi_hba, int phy_no);
 extern bool hisi_sas_notify_phy_event(struct hisi_sas_phy *phy,
                                       enum hisi_sas_phy_event event);
 extern void hisi_sas_release_tasks(struct hisi_hba *hisi_hba);
 extern u8 hisi_sas_get_prog_phy_linkrate_mask(enum sas_linkrate max);
 extern void hisi_sas_controller_reset_prepare(struct hisi_hba *hisi_hba);
 extern void hisi_sas_controller_reset_done(struct hisi_hba *hisi_hba);
+extern void hisi_sas_debugfs_init(struct hisi_hba *hisi_hba);
+extern void hisi_sas_debugfs_exit(struct hisi_hba *hisi_hba);
+extern void hisi_sas_debugfs_work_handler(struct work_struct *work);
 #endif
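The new struct hisi_sas_debugfs_reg pairs a name/offset lookup table with a hardware-specific read callback, so one generic dump routine can serve every register bank. A sketch of how such a table would be consumed; the struct shapes mirror the header above but the dump helper itself is illustrative, not code from the driver:

    #include <linux/seq_file.h>

    struct reg_lu {
        const char *name;
        int off;
    };

    struct reg_table {
        const struct reg_lu *lu;            /* terminated by {NULL, 0} */
        u32 (*read_reg)(void *ctx, u32 off);
    };

    /* Walk the table and print each register through the accessor. */
    static void dump_regs(struct seq_file *s, void *ctx,
                          const struct reg_table *t)
    {
        const struct reg_lu *e;

        for (e = t->lu; e->name; e++)
            seq_printf(s, "%s: 0x%08x\n", e->name,
                       t->read_reg(ctx, e->off));
    }

The HISI_SAS_DEBUGFS_REG(x) macro stringizes the register define, so the table entry's name always matches the offset symbol it was built from.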
(One file's diff suppressed because it is too large.)
@@ -835,7 +835,7 @@ static void phys_init_v1_hw(struct hisi_hba *hisi_hba)
     mod_timer(timer, jiffies + HZ);
 }
 
-static void sl_notify_v1_hw(struct hisi_hba *hisi_hba, int phy_no)
+static void sl_notify_ssp_v1_hw(struct hisi_hba *hisi_hba, int phy_no)
 {
     u32 sl_control;
 
@@ -1749,6 +1749,8 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
         }
     }
 
+    hisi_hba->cq_nvecs = hisi_hba->queue_count;
+
     return 0;
 }
 
@@ -1826,7 +1828,7 @@ static struct scsi_host_template sht_v1_hw = {
 static const struct hisi_sas_hw hisi_sas_v1_hw = {
     .hw_init = hisi_sas_v1_init,
     .setup_itct = setup_itct_v1_hw,
-    .sl_notify = sl_notify_v1_hw,
+    .sl_notify_ssp = sl_notify_ssp_v1_hw,
     .clear_itct = clear_itct_v1_hw,
     .prep_smp = prep_smp_v1_hw,
     .prep_ssp = prep_ssp_v1_hw,
@@ -868,12 +868,12 @@ hisi_sas_device *alloc_dev_quirk_v2_hw(struct domain_device *device)
 
             hisi_hba->devices[i].device_id = i;
             sas_dev = &hisi_hba->devices[i];
-            sas_dev->dev_status = HISI_SAS_DEV_NORMAL;
             sas_dev->dev_type = device->dev_type;
             sas_dev->hisi_hba = hisi_hba;
             sas_dev->sas_device = device;
             sas_dev->sata_idx = sata_idx;
             sas_dev->dq = dq;
+            spin_lock_init(&sas_dev->lock);
             INIT_LIST_HEAD(&hisi_hba->devices[i].list);
             break;
         }
@@ -1589,7 +1589,7 @@ static void phys_init_v2_hw(struct hisi_hba *hisi_hba)
     }
 }
 
-static void sl_notify_v2_hw(struct hisi_hba *hisi_hba, int phy_no)
+static void sl_notify_ssp_v2_hw(struct hisi_hba *hisi_hba, int phy_no)
 {
     u32 sl_control;
 
@@ -2677,6 +2677,8 @@ static int phy_up_v2_hw(int phy_no, struct hisi_hba *hisi_hba)
     if (is_sata_phy_v2_hw(hisi_hba, phy_no))
         goto end;
 
+    del_timer(&phy->timer);
+
     if (phy_no == 8) {
         u32 port_state = hisi_sas_read32(hisi_hba, PORT_STATE);
 
@@ -2756,6 +2758,7 @@ static int phy_down_v2_hw(int phy_no, struct hisi_hba *hisi_hba)
     struct hisi_sas_port *port = phy->port;
     struct device *dev = hisi_hba->dev;
 
+    del_timer(&phy->timer);
     hisi_sas_phy_write32(hisi_hba, phy_no, PHYCTRL_NOT_RDY_MSK, 1);
 
     phy_state = hisi_sas_read32(hisi_hba, PHY_STATE);
@@ -2944,6 +2947,9 @@ static irqreturn_t int_chnl_int_v2_hw(int irq_no, void *p)
         if (irq_value0 & CHL_INT0_SL_RX_BCST_ACK_MSK)
             phy_bcast_v2_hw(phy_no, hisi_hba);
 
+        if (irq_value0 & CHL_INT0_PHY_RDY_MSK)
+            hisi_sas_phy_oob_ready(hisi_hba, phy_no);
+
         hisi_sas_phy_write32(hisi_hba, phy_no,
                     CHL_INT0, irq_value0
                     & (~CHL_INT0_HOTPLUG_TOUT_MSK)
@@ -3227,6 +3233,8 @@ static irqreturn_t sata_int_v2_hw(int irq_no, void *p)
     unsigned long flags;
     int phy_no, offset;
 
+    del_timer(&phy->timer);
+
     phy_no = sas_phy->id;
     initial_fis = &hisi_hba->initial_fis[phy_no];
     fis = &initial_fis->fis;
@@ -3393,6 +3401,8 @@ static int interrupt_init_v2_hw(struct hisi_hba *hisi_hba)
         tasklet_init(t, cq_tasklet_v2_hw, (unsigned long)cq);
     }
 
+    hisi_hba->cq_nvecs = hisi_hba->queue_count;
+
     return 0;
 
 free_cq_int_irqs:
@@ -3542,8 +3552,8 @@ static int write_gpio_v2_hw(struct hisi_hba *hisi_hba, u8 reg_type,
     return 0;
 }
 
-static void wait_cmds_complete_timeout_v2_hw(struct hisi_hba *hisi_hba,
-                         int delay_ms, int timeout_ms)
+static int wait_cmds_complete_timeout_v2_hw(struct hisi_hba *hisi_hba,
+                        int delay_ms, int timeout_ms)
 {
     struct device *dev = hisi_hba->dev;
     int entries, entries_old = 0, time;
@@ -3557,7 +3567,12 @@ static void wait_cmds_complete_timeout_v2_hw(struct hisi_hba *hisi_hba,
         msleep(delay_ms);
     }
 
+    if (time >= timeout_ms)
+        return -ETIMEDOUT;
+
     dev_dbg(dev, "wait commands complete %dms\n", time);
+
+    return 0;
 }
 
 static struct device_attribute *host_attrs_v2_hw[] = {
@@ -3590,7 +3605,7 @@ static const struct hisi_sas_hw hisi_sas_v2_hw = {
     .setup_itct = setup_itct_v2_hw,
     .slot_index_alloc = slot_index_alloc_quirk_v2_hw,
     .alloc_dev = alloc_dev_quirk_v2_hw,
-    .sl_notify = sl_notify_v2_hw,
+    .sl_notify_ssp = sl_notify_ssp_v2_hw,
     .get_wideport_bitmap = get_wideport_bitmap_v2_hw,
     .clear_itct = clear_itct_v2_hw,
     .free_device = free_device_v2_hw,
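Changing wait_cmds_complete_timeout from void to int lets callers (such as the debugfs snapshot path in the v3 driver) distinguish a drained queue from a deadline that expired. The loop shape, as a minimal sketch with a hypothetical pending-count callback:

    #include <linux/delay.h>
    #include <linux/errno.h>

    /* Poll an "outstanding commands" count until it drains or times out. */
    static int wait_drained(int (*pending)(void *ctx), void *ctx,
                            int delay_ms, int timeout_ms)
    {
        int time;

        for (time = 0; time < timeout_ms; time += delay_ms) {
            if (pending(ctx) == 0)
                return 0;
            msleep(delay_ms);
        }
        /* callers may log and carry on, as the snapshot code does */
        return -ETIMEDOUT;
    }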
@@ -11,7 +11,7 @@
 #include "hisi_sas.h"
 #define DRV_NAME "hisi_sas_v3_hw"
 
-/* global registers need init*/
+/* global registers need init */
 #define DLVRY_QUEUE_ENABLE		0x0
 #define IOST_BASE_ADDR_LO		0x8
 #define IOST_BASE_ADDR_HI		0xc
@@ -186,6 +186,7 @@
 #define CHL_INT0_MSK			(PORT_BASE + 0x1c0)
 #define CHL_INT1_MSK			(PORT_BASE + 0x1c4)
 #define CHL_INT2_MSK			(PORT_BASE + 0x1c8)
+#define SAS_EC_INT_COAL_TIME		(PORT_BASE + 0x1cc)
 #define CHL_INT_COAL_EN			(PORT_BASE + 0x1d0)
 #define SAS_RX_TRAIN_TIMER		(PORT_BASE + 0x2a4)
 #define PHY_CTRL_RDY_MSK		(PORT_BASE + 0x2b0)
@@ -205,6 +206,7 @@
 #define ERR_CNT_DWS_LOST		(PORT_BASE + 0x380)
 #define ERR_CNT_RESET_PROB		(PORT_BASE + 0x384)
 #define ERR_CNT_INVLD_DW		(PORT_BASE + 0x390)
+#define ERR_CNT_CODE_ERR		(PORT_BASE + 0x394)
 #define ERR_CNT_DISP_ERR		(PORT_BASE + 0x398)
 
 #define DEFAULT_ITCT_HW		2048 /* reset value, not reprogrammed */
@@ -397,6 +399,11 @@ struct hisi_sas_err_record_v3 {
 #define USR_DATA_BLOCK_SZ_OFF	20
 #define USR_DATA_BLOCK_SZ_MSK	(0x3 << USR_DATA_BLOCK_SZ_OFF)
 #define T10_CHK_MSK_OFF		16
+#define T10_CHK_REF_TAG_MSK	(0xf0 << T10_CHK_MSK_OFF)
+#define T10_CHK_APP_TAG_MSK	(0xc << T10_CHK_MSK_OFF)
 
 #define BASE_VECTORS_V3_HW	16
+#define MIN_AFFINE_VECTORS_V3_HW	(BASE_VECTORS_V3_HW + 1)
 
 static bool hisi_sas_intr_conv;
 MODULE_PARM_DESC(intr_conv, "interrupt converge enable (0-1)");
@@ -406,6 +413,11 @@ static int prot_mask;
 module_param(prot_mask, int, 0);
 MODULE_PARM_DESC(prot_mask, " host protection capabilities mask, def=0x0 ");
 
+static bool auto_affine_msi_experimental;
+module_param(auto_affine_msi_experimental, bool, 0444);
+MODULE_PARM_DESC(auto_affine_msi_experimental, "Enable auto-affinity of MSI IRQs as experimental:\n"
+                 "default is off");
+
 static u32 hisi_sas_read32(struct hisi_hba *hisi_hba, u32 off)
 {
     void __iomem *regs = hisi_hba->regs + off;
@@ -716,7 +728,7 @@ static void clear_itct_v3_hw(struct hisi_hba *hisi_hba,
     hisi_sas_write32(hisi_hba, ENT_INT_SRC3,
                      ENT_INT_SRC3_ITC_INT_MSK);
 
-    /* clear the itct table*/
+    /* clear the itct table */
     reg_val = ITCT_CLR_EN_MSK | (dev_id & ITCT_DEV_MSK);
     hisi_sas_write32(hisi_hba, ITCT_CLR, reg_val);
 
@@ -868,7 +880,7 @@ static void phys_init_v3_hw(struct hisi_hba *hisi_hba)
     }
 }
 
-static void sl_notify_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
+static void sl_notify_ssp_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
 {
     u32 sl_control;
 
@@ -967,19 +979,44 @@ static void prep_prd_sge_v3_hw(struct hisi_hba *hisi_hba,
 
     hdr->prd_table_addr = cpu_to_le64(hisi_sas_sge_addr_dma(slot));
 
-    hdr->sg_len = cpu_to_le32(n_elem << CMD_HDR_DATA_SGL_LEN_OFF);
+    hdr->sg_len |= cpu_to_le32(n_elem << CMD_HDR_DATA_SGL_LEN_OFF);
 }
 
+static void prep_prd_sge_dif_v3_hw(struct hisi_hba *hisi_hba,
+                                   struct hisi_sas_slot *slot,
+                                   struct hisi_sas_cmd_hdr *hdr,
+                                   struct scatterlist *scatter,
+                                   int n_elem)
+{
+    struct hisi_sas_sge_dif_page *sge_dif_page;
+    struct scatterlist *sg;
+    int i;
+
+    sge_dif_page = hisi_sas_sge_dif_addr_mem(slot);
+
+    for_each_sg(scatter, sg, n_elem, i) {
+        struct hisi_sas_sge *entry = &sge_dif_page->sge[i];
+
+        entry->addr = cpu_to_le64(sg_dma_address(sg));
+        entry->page_ctrl_0 = 0;
+        entry->page_ctrl_1 = 0;
+        entry->data_len = cpu_to_le32(sg_dma_len(sg));
+        entry->data_off = 0;
+    }
+
+    hdr->dif_prd_table_addr =
+        cpu_to_le64(hisi_sas_sge_dif_addr_dma(slot));
+
+    hdr->sg_len |= cpu_to_le32(n_elem << CMD_HDR_DIF_SGL_LEN_OFF);
+}
+
 static u32 get_prot_chk_msk_v3_hw(struct scsi_cmnd *scsi_cmnd)
 {
     unsigned char prot_flags = scsi_cmnd->prot_flags;
 
-    if (prot_flags & SCSI_PROT_TRANSFER_PI) {
-        if (prot_flags & SCSI_PROT_REF_CHECK)
-            return 0xc << 16;
-        return 0xfc << 16;
-    }
-    return 0;
+    if (prot_flags & SCSI_PROT_REF_CHECK)
+        return T10_CHK_APP_TAG_MSK;
+    return T10_CHK_REF_TAG_MSK | T10_CHK_APP_TAG_MSK;
 }
 
 static void fill_prot_v3_hw(struct scsi_cmnd *scsi_cmnd,
@@ -990,15 +1027,33 @@ static void fill_prot_v3_hw(struct scsi_cmnd *scsi_cmnd,
     u32 lbrt_chk_val = t10_pi_ref_tag(scsi_cmnd->request);
 
     switch (prot_op) {
+    case SCSI_PROT_READ_INSERT:
+        prot->dw0 |= T10_INSRT_EN_MSK;
+        prot->lbrtgv = lbrt_chk_val;
+        break;
     case SCSI_PROT_READ_STRIP:
         prot->dw0 |= (T10_RMV_EN_MSK | T10_CHK_EN_MSK);
         prot->lbrtcv = lbrt_chk_val;
         prot->dw4 |= get_prot_chk_msk_v3_hw(scsi_cmnd);
         break;
+    case SCSI_PROT_READ_PASS:
+        prot->dw0 |= T10_CHK_EN_MSK;
+        prot->lbrtcv = lbrt_chk_val;
+        prot->dw4 |= get_prot_chk_msk_v3_hw(scsi_cmnd);
+        break;
     case SCSI_PROT_WRITE_INSERT:
        prot->dw0 |= T10_INSRT_EN_MSK;
        prot->lbrtgv = lbrt_chk_val;
        break;
+    case SCSI_PROT_WRITE_STRIP:
+        prot->dw0 |= (T10_RMV_EN_MSK | T10_CHK_EN_MSK);
+        prot->lbrtcv = lbrt_chk_val;
+        break;
+    case SCSI_PROT_WRITE_PASS:
+        prot->dw0 |= T10_CHK_EN_MSK;
+        prot->lbrtcv = lbrt_chk_val;
+        prot->dw4 |= get_prot_chk_msk_v3_hw(scsi_cmnd);
+        break;
     default:
         WARN(1, "prot_op(0x%x) is not valid\n", prot_op);
         break;
@@ -1033,8 +1088,8 @@ static void prep_ssp_v3_hw(struct hisi_hba *hisi_hba,
     struct sas_ssp_task *ssp_task = &task->ssp_task;
     struct scsi_cmnd *scsi_cmnd = ssp_task->cmd;
     struct hisi_sas_tmf_task *tmf = slot->tmf;
-    unsigned char prot_op = scsi_get_prot_op(scsi_cmnd);
     int has_data = 0, priority = !!tmf;
+    unsigned char prot_op;
     u8 *buf_cmd;
     u32 dw1 = 0, dw2 = 0, len = 0;
 
@@ -1049,6 +1104,7 @@ static void prep_ssp_v3_hw(struct hisi_hba *hisi_hba,
         dw1 |= 2 << CMD_HDR_FRAME_TYPE_OFF;
         dw1 |= DIR_NO_DATA << CMD_HDR_DIR_OFF;
     } else {
+        prot_op = scsi_get_prot_op(scsi_cmnd);
         dw1 |= 1 << CMD_HDR_FRAME_TYPE_OFF;
         switch (scsi_cmnd->sc_data_direction) {
         case DMA_TO_DEVICE:
@@ -1074,9 +1130,15 @@ static void prep_ssp_v3_hw(struct hisi_hba *hisi_hba,
     hdr->dw2 = cpu_to_le32(dw2);
     hdr->transfer_tags = cpu_to_le32(slot->idx);
 
-    if (has_data)
+    if (has_data) {
         prep_prd_sge_v3_hw(hisi_hba, slot, hdr, task->scatter,
-                    slot->n_elem);
+                           slot->n_elem);
+
+        if (scsi_prot_sg_count(scsi_cmnd))
+            prep_prd_sge_dif_v3_hw(hisi_hba, slot, hdr,
+                                   scsi_prot_sglist(scsi_cmnd),
+                                   slot->n_elem_dif);
+    }
 
     hdr->cmd_table_addr = cpu_to_le64(hisi_sas_cmd_hdr_addr_dma(slot));
     hdr->sts_buffer_addr = cpu_to_le64(hisi_sas_status_buf_addr_dma(slot));
@@ -1117,18 +1179,19 @@ static void prep_ssp_v3_hw(struct hisi_hba *hisi_hba,
         fill_prot_v3_hw(scsi_cmnd, &prot);
         memcpy(buf_cmd_prot, &prot,
                sizeof(struct hisi_sas_protect_iu_v3_hw));
-
         /*
          * For READ, we need length of info read to memory, while for
          * WRITE we need length of data written to the disk.
          */
-        if (prot_op == SCSI_PROT_WRITE_INSERT) {
+        if (prot_op == SCSI_PROT_WRITE_INSERT ||
+            prot_op == SCSI_PROT_READ_INSERT ||
+            prot_op == SCSI_PROT_WRITE_PASS ||
+            prot_op == SCSI_PROT_READ_PASS) {
             unsigned int interval = scsi_prot_interval(scsi_cmnd);
             unsigned int ilog2_interval = ilog2(interval);
 
             len = (task->total_xfer_len >> ilog2_interval) * 8;
         }
-
     }
 
     hdr->dw1 = cpu_to_le32(dw1);
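The INSERT/PASS branch above sizes the protection buffer as 8 bytes of T10 PI per protection interval. The arithmetic, worked through with concrete numbers in a standalone program:

    #include <stdio.h>

    int main(void)
    {
        unsigned int total_xfer_len = 32768; /* 32 KB of data */
        unsigned int interval = 512;         /* one 8-byte PI tuple per 512-byte block */
        unsigned int ilog2_interval = 9;     /* ilog2(512) */

        unsigned int len = (total_xfer_len >> ilog2_interval) * 8;

        /* 64 blocks -> 512 bytes of protection information */
        printf("%u blocks -> %u bytes of PI\n",
               total_xfer_len >> ilog2_interval, len);
        return 0;
    }

The shift-by-ilog2 stands in for a division by the interval size, which is always a power of two for T10 PI formats.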
@@ -1288,6 +1351,7 @@ static irqreturn_t phy_up_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
     struct device *dev = hisi_hba->dev;
     unsigned long flags;
 
+    del_timer(&phy->timer);
     hisi_sas_phy_write32(hisi_hba, phy_no, PHYCTRL_PHY_ENA_MSK, 1);
 
     port_id = hisi_sas_read32(hisi_hba, PHY_PORT_NUM_MA);
@@ -1381,9 +1445,11 @@
 
 static irqreturn_t phy_down_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
 {
+    struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
     u32 phy_state, sl_ctrl, txid_auto;
     struct device *dev = hisi_hba->dev;
 
+    del_timer(&phy->timer);
     hisi_sas_phy_write32(hisi_hba, phy_no, PHYCTRL_NOT_RDY_MSK, 1);
 
     phy_state = hisi_sas_read32(hisi_hba, PHY_STATE);
@@ -1552,6 +1618,19 @@ static void handle_chl_int2_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
     hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT2, irq_value);
 }
 
+static void handle_chl_int0_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
+{
+    u32 irq_value0 = hisi_sas_phy_read32(hisi_hba, phy_no, CHL_INT0);
+
+    if (irq_value0 & CHL_INT0_PHY_RDY_MSK)
+        hisi_sas_phy_oob_ready(hisi_hba, phy_no);
+
+    hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT0,
+                         irq_value0 & (~CHL_INT0_SL_RX_BCST_ACK_MSK)
+                         & (~CHL_INT0_SL_PHY_ENABLE_MSK)
+                         & (~CHL_INT0_NOT_RDY_MSK));
+}
+
 static irqreturn_t int_chnl_int_v3_hw(int irq_no, void *p)
 {
     struct hisi_hba *hisi_hba = p;
@@ -1562,8 +1641,8 @@ static irqreturn_t int_chnl_int_v3_hw(int irq_no, void *p)
            & 0xeeeeeeee;
 
     while (irq_msk) {
-        u32 irq_value0 = hisi_sas_phy_read32(hisi_hba, phy_no,
-                                             CHL_INT0);
+        if (irq_msk & (2 << (phy_no * 4)))
+            handle_chl_int0_v3_hw(hisi_hba, phy_no);
 
         if (irq_msk & (4 << (phy_no * 4)))
             handle_chl_int1_v3_hw(hisi_hba, phy_no);
@@ -1571,13 +1650,6 @@ static irqreturn_t int_chnl_int_v3_hw(int irq_no, void *p)
         if (irq_msk & (8 << (phy_no * 4)))
             handle_chl_int2_v3_hw(hisi_hba, phy_no);
 
-        if (irq_msk & (2 << (phy_no * 4)) && irq_value0) {
-            hisi_sas_phy_write32(hisi_hba, phy_no,
-                    CHL_INT0, irq_value0
-                    & (~CHL_INT0_SL_RX_BCST_ACK_MSK)
-                    & (~CHL_INT0_SL_PHY_ENABLE_MSK)
-                    & (~CHL_INT0_NOT_RDY_MSK));
-        }
         irq_msk &= ~(0xe << (phy_no * 4));
         phy_no++;
     }
@@ -1644,6 +1716,7 @@ static irqreturn_t fatal_axi_int_v3_hw(int irq_no, void *p)
     u32 irq_value, irq_msk;
     struct hisi_hba *hisi_hba = p;
     struct device *dev = hisi_hba->dev;
+    struct pci_dev *pdev = hisi_hba->pci_dev;
     int i;
 
     irq_msk = hisi_sas_read32(hisi_hba, ENT_INT_SRC_MSK3);
@@ -1675,6 +1748,17 @@ static irqreturn_t fatal_axi_int_v3_hw(int irq_no, void *p)
                   error->msg, irq_value);
         queue_work(hisi_hba->wq, &hisi_hba->rst_work);
     }
+
+    if (pdev->revision < 0x21) {
+        u32 reg_val;
+
+        reg_val = hisi_sas_read32(hisi_hba,
+                                  AXI_MASTER_CFG_BASE +
+                                  AM_CTRL_GLOBAL);
+        reg_val |= AM_CTRL_SHUTDOWN_REQ_MSK;
+        hisi_sas_write32(hisi_hba, AXI_MASTER_CFG_BASE +
+                         AM_CTRL_GLOBAL, reg_val);
+    }
 }
 
 if (irq_value & BIT(ENT_INT_SRC3_ITC_INT_OFF)) {
|
|||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
static void setup_reply_map_v3_hw(struct hisi_hba *hisi_hba, int nvecs)
|
||||
{
|
||||
const struct cpumask *mask;
|
||||
int queue, cpu;
|
||||
|
||||
for (queue = 0; queue < nvecs; queue++) {
|
||||
struct hisi_sas_cq *cq = &hisi_hba->cq[queue];
|
||||
|
||||
mask = pci_irq_get_affinity(hisi_hba->pci_dev, queue +
|
||||
BASE_VECTORS_V3_HW);
|
||||
if (!mask)
|
||||
goto fallback;
|
||||
cq->pci_irq_mask = mask;
|
||||
for_each_cpu(cpu, mask)
|
||||
hisi_hba->reply_map[cpu] = queue;
|
||||
}
|
||||
return;
|
||||
|
||||
fallback:
|
||||
for_each_possible_cpu(cpu)
|
||||
hisi_hba->reply_map[cpu] = cpu % hisi_hba->queue_count;
|
||||
/* Don't clean all CQ masks */
|
||||
}
|
||||
|
||||
static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
|
||||
{
|
||||
struct device *dev = hisi_hba->dev;
|
||||
struct pci_dev *pdev = hisi_hba->pci_dev;
|
||||
int vectors, rc;
|
||||
int i, k;
|
||||
int max_msi = HISI_SAS_MSI_COUNT_V3_HW;
|
||||
int max_msi = HISI_SAS_MSI_COUNT_V3_HW, min_msi;
|
||||
|
||||
vectors = pci_alloc_irq_vectors(hisi_hba->pci_dev, 1,
|
||||
max_msi, PCI_IRQ_MSI);
|
||||
if (vectors < max_msi) {
|
||||
dev_err(dev, "could not allocate all msi (%d)\n", vectors);
|
||||
return -ENOENT;
|
||||
if (auto_affine_msi_experimental) {
|
||||
struct irq_affinity desc = {
|
||||
.pre_vectors = BASE_VECTORS_V3_HW,
|
||||
};
|
||||
|
||||
min_msi = MIN_AFFINE_VECTORS_V3_HW;
|
||||
|
||||
hisi_hba->reply_map = devm_kcalloc(dev, nr_cpu_ids,
|
||||
sizeof(unsigned int),
|
||||
GFP_KERNEL);
|
||||
if (!hisi_hba->reply_map)
|
||||
return -ENOMEM;
|
||||
vectors = pci_alloc_irq_vectors_affinity(hisi_hba->pci_dev,
|
||||
min_msi, max_msi,
|
||||
PCI_IRQ_MSI |
|
||||
PCI_IRQ_AFFINITY,
|
||||
&desc);
|
||||
if (vectors < 0)
|
||||
return -ENOENT;
|
||||
setup_reply_map_v3_hw(hisi_hba, vectors - BASE_VECTORS_V3_HW);
|
||||
} else {
|
||||
min_msi = max_msi;
|
||||
vectors = pci_alloc_irq_vectors(hisi_hba->pci_dev, min_msi,
|
||||
max_msi, PCI_IRQ_MSI);
|
||||
if (vectors < 0)
|
||||
return vectors;
|
||||
}
|
||||
|
||||
hisi_hba->cq_nvecs = vectors - BASE_VECTORS_V3_HW;
|
||||
|
||||
rc = devm_request_irq(dev, pci_irq_vector(pdev, 1),
|
||||
int_phy_up_down_bcast_v3_hw, 0,
|
||||
DRV_NAME " phy", hisi_hba);
|
||||
|
@ -2002,7 +2133,7 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
|
|||
}
|
||||
|
||||
/* Init tasklets for cq only */
|
||||
for (i = 0; i < hisi_hba->queue_count; i++) {
|
||||
for (i = 0; i < hisi_hba->cq_nvecs; i++) {
|
||||
struct hisi_sas_cq *cq = &hisi_hba->cq[i];
|
||||
struct tasklet_struct *t = &cq->tasklet;
|
||||
int nr = hisi_sas_intr_conv ? 16 : 16 + i;
|
||||
|
@ -2201,8 +2332,8 @@ static int write_gpio_v3_hw(struct hisi_hba *hisi_hba, u8 reg_type,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static void wait_cmds_complete_timeout_v3_hw(struct hisi_hba *hisi_hba,
|
||||
int delay_ms, int timeout_ms)
|
||||
static int wait_cmds_complete_timeout_v3_hw(struct hisi_hba *hisi_hba,
|
||||
int delay_ms, int timeout_ms)
|
||||
{
|
||||
struct device *dev = hisi_hba->dev;
|
||||
int entries, entries_old = 0, time;
|
||||
|
@ -2216,7 +2347,12 @@ static void wait_cmds_complete_timeout_v3_hw(struct hisi_hba *hisi_hba,
|
|||
msleep(delay_ms);
|
||||
}
|
||||
|
||||
if (time >= timeout_ms)
|
||||
return -ETIMEDOUT;
|
||||
|
||||
dev_dbg(dev, "wait commands complete %dms\n", time);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static ssize_t intr_conv_v3_hw_show(struct device *dev,
|
||||
|
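The experimental path above keeps the first BASE_VECTORS_V3_HW vectors for miscellaneous interrupts and lets the PCI core spread the remaining completion-queue vectors across CPUs, then records the cpu-to-queue mapping in reply_map. A hedged sketch of just the allocation call; the wrapper function is illustrative:

    #include <linux/pci.h>
    #include <linux/interrupt.h>

    /*
     * Spread the post-reservation MSI vectors across CPUs, keeping the
     * first 'reserved' vectors out of the automatic affinity set.
     */
    static int alloc_spread_vectors(struct pci_dev *pdev, int reserved,
                                    int min_msi, int max_msi)
    {
        struct irq_affinity desc = {
            .pre_vectors = reserved,
        };

        return pci_alloc_irq_vectors_affinity(pdev, min_msi, max_msi,
                                              PCI_IRQ_MSI | PCI_IRQ_AFFINITY,
                                              &desc);
    }

On success the return value is the number of vectors actually allocated; pci_irq_get_affinity() can then be queried per vector to build a cpu-indexed reply map, exactly as setup_reply_map_v3_hw() does, with a modulo fallback if any vector reports no affinity mask.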
@@ -2332,6 +2468,159 @@ static struct device_attribute *host_attrs_v3_hw[] = {
     NULL
 };
 
+static const struct hisi_sas_debugfs_reg_lu debugfs_port_reg_lu[] = {
+    HISI_SAS_DEBUGFS_REG(PHY_CFG),
+    HISI_SAS_DEBUGFS_REG(HARD_PHY_LINKRATE),
+    HISI_SAS_DEBUGFS_REG(PROG_PHY_LINK_RATE),
+    HISI_SAS_DEBUGFS_REG(PHY_CTRL),
+    HISI_SAS_DEBUGFS_REG(SL_CFG),
+    HISI_SAS_DEBUGFS_REG(AIP_LIMIT),
+    HISI_SAS_DEBUGFS_REG(SL_CONTROL),
+    HISI_SAS_DEBUGFS_REG(RX_PRIMS_STATUS),
+    HISI_SAS_DEBUGFS_REG(TX_ID_DWORD0),
+    HISI_SAS_DEBUGFS_REG(TX_ID_DWORD1),
+    HISI_SAS_DEBUGFS_REG(TX_ID_DWORD2),
+    HISI_SAS_DEBUGFS_REG(TX_ID_DWORD3),
+    HISI_SAS_DEBUGFS_REG(TX_ID_DWORD4),
+    HISI_SAS_DEBUGFS_REG(TX_ID_DWORD5),
+    HISI_SAS_DEBUGFS_REG(TX_ID_DWORD6),
+    HISI_SAS_DEBUGFS_REG(TXID_AUTO),
+    HISI_SAS_DEBUGFS_REG(RX_IDAF_DWORD0),
+    HISI_SAS_DEBUGFS_REG(RXOP_CHECK_CFG_H),
+    HISI_SAS_DEBUGFS_REG(STP_LINK_TIMER),
+    HISI_SAS_DEBUGFS_REG(STP_LINK_TIMEOUT_STATE),
+    HISI_SAS_DEBUGFS_REG(CON_CFG_DRIVER),
+    HISI_SAS_DEBUGFS_REG(SAS_SSP_CON_TIMER_CFG),
+    HISI_SAS_DEBUGFS_REG(SAS_SMP_CON_TIMER_CFG),
+    HISI_SAS_DEBUGFS_REG(SAS_STP_CON_TIMER_CFG),
+    HISI_SAS_DEBUGFS_REG(CHL_INT0),
+    HISI_SAS_DEBUGFS_REG(CHL_INT1),
+    HISI_SAS_DEBUGFS_REG(CHL_INT2),
+    HISI_SAS_DEBUGFS_REG(CHL_INT0_MSK),
+    HISI_SAS_DEBUGFS_REG(CHL_INT1_MSK),
+    HISI_SAS_DEBUGFS_REG(CHL_INT2_MSK),
+    HISI_SAS_DEBUGFS_REG(SAS_EC_INT_COAL_TIME),
+    HISI_SAS_DEBUGFS_REG(CHL_INT_COAL_EN),
+    HISI_SAS_DEBUGFS_REG(SAS_RX_TRAIN_TIMER),
+    HISI_SAS_DEBUGFS_REG(PHY_CTRL_RDY_MSK),
+    HISI_SAS_DEBUGFS_REG(PHYCTRL_NOT_RDY_MSK),
+    HISI_SAS_DEBUGFS_REG(PHYCTRL_DWS_RESET_MSK),
+    HISI_SAS_DEBUGFS_REG(PHYCTRL_PHY_ENA_MSK),
+    HISI_SAS_DEBUGFS_REG(SL_RX_BCAST_CHK_MSK),
+    HISI_SAS_DEBUGFS_REG(PHYCTRL_OOB_RESTART_MSK),
+    HISI_SAS_DEBUGFS_REG(DMA_TX_STATUS),
+    HISI_SAS_DEBUGFS_REG(DMA_RX_STATUS),
+    HISI_SAS_DEBUGFS_REG(COARSETUNE_TIME),
+    HISI_SAS_DEBUGFS_REG(ERR_CNT_DWS_LOST),
+    HISI_SAS_DEBUGFS_REG(ERR_CNT_RESET_PROB),
+    HISI_SAS_DEBUGFS_REG(ERR_CNT_INVLD_DW),
+    HISI_SAS_DEBUGFS_REG(ERR_CNT_CODE_ERR),
+    HISI_SAS_DEBUGFS_REG(ERR_CNT_DISP_ERR),
+    {}
+};
+
+static const struct hisi_sas_debugfs_reg debugfs_port_reg = {
+    .lu = debugfs_port_reg_lu,
+    .count = 0x100,
+    .base_off = PORT_BASE,
+    .read_port_reg = hisi_sas_phy_read32,
+};
+
+static const struct hisi_sas_debugfs_reg_lu debugfs_global_reg_lu[] = {
+    HISI_SAS_DEBUGFS_REG(DLVRY_QUEUE_ENABLE),
+    HISI_SAS_DEBUGFS_REG(PHY_CONTEXT),
+    HISI_SAS_DEBUGFS_REG(PHY_STATE),
+    HISI_SAS_DEBUGFS_REG(PHY_PORT_NUM_MA),
+    HISI_SAS_DEBUGFS_REG(PHY_CONN_RATE),
+    HISI_SAS_DEBUGFS_REG(ITCT_CLR),
+    HISI_SAS_DEBUGFS_REG(IO_SATA_BROKEN_MSG_ADDR_LO),
+    HISI_SAS_DEBUGFS_REG(IO_SATA_BROKEN_MSG_ADDR_HI),
+    HISI_SAS_DEBUGFS_REG(SATA_INITI_D2H_STORE_ADDR_LO),
+    HISI_SAS_DEBUGFS_REG(SATA_INITI_D2H_STORE_ADDR_HI),
+    HISI_SAS_DEBUGFS_REG(CFG_MAX_TAG),
+    HISI_SAS_DEBUGFS_REG(HGC_SAS_TX_OPEN_FAIL_RETRY_CTRL),
+    HISI_SAS_DEBUGFS_REG(HGC_SAS_TXFAIL_RETRY_CTRL),
+    HISI_SAS_DEBUGFS_REG(HGC_GET_ITV_TIME),
+    HISI_SAS_DEBUGFS_REG(DEVICE_MSG_WORK_MODE),
+    HISI_SAS_DEBUGFS_REG(OPENA_WT_CONTI_TIME),
+    HISI_SAS_DEBUGFS_REG(I_T_NEXUS_LOSS_TIME),
+    HISI_SAS_DEBUGFS_REG(MAX_CON_TIME_LIMIT_TIME),
+    HISI_SAS_DEBUGFS_REG(BUS_INACTIVE_LIMIT_TIME),
+    HISI_SAS_DEBUGFS_REG(REJECT_TO_OPEN_LIMIT_TIME),
+    HISI_SAS_DEBUGFS_REG(CQ_INT_CONVERGE_EN),
+    HISI_SAS_DEBUGFS_REG(CFG_AGING_TIME),
+    HISI_SAS_DEBUGFS_REG(HGC_DFX_CFG2),
+    HISI_SAS_DEBUGFS_REG(CFG_ABT_SET_QUERY_IPTT),
+    HISI_SAS_DEBUGFS_REG(CFG_ABT_SET_IPTT_DONE),
+    HISI_SAS_DEBUGFS_REG(HGC_IOMB_PROC1_STATUS),
+    HISI_SAS_DEBUGFS_REG(CHNL_INT_STATUS),
+    HISI_SAS_DEBUGFS_REG(HGC_AXI_FIFO_ERR_INFO),
+    HISI_SAS_DEBUGFS_REG(INT_COAL_EN),
+    HISI_SAS_DEBUGFS_REG(OQ_INT_COAL_TIME),
+    HISI_SAS_DEBUGFS_REG(OQ_INT_COAL_CNT),
+    HISI_SAS_DEBUGFS_REG(ENT_INT_COAL_TIME),
+    HISI_SAS_DEBUGFS_REG(ENT_INT_COAL_CNT),
+    HISI_SAS_DEBUGFS_REG(OQ_INT_SRC),
+    HISI_SAS_DEBUGFS_REG(OQ_INT_SRC_MSK),
+    HISI_SAS_DEBUGFS_REG(ENT_INT_SRC1),
+    HISI_SAS_DEBUGFS_REG(ENT_INT_SRC2),
+    HISI_SAS_DEBUGFS_REG(ENT_INT_SRC3),
+    HISI_SAS_DEBUGFS_REG(ENT_INT_SRC_MSK1),
+    HISI_SAS_DEBUGFS_REG(ENT_INT_SRC_MSK2),
+    HISI_SAS_DEBUGFS_REG(ENT_INT_SRC_MSK3),
+    HISI_SAS_DEBUGFS_REG(CHNL_PHYUPDOWN_INT_MSK),
+    HISI_SAS_DEBUGFS_REG(CHNL_ENT_INT_MSK),
+    HISI_SAS_DEBUGFS_REG(HGC_COM_INT_MSK),
+    HISI_SAS_DEBUGFS_REG(SAS_ECC_INTR),
+    HISI_SAS_DEBUGFS_REG(SAS_ECC_INTR_MSK),
+    HISI_SAS_DEBUGFS_REG(HGC_ERR_STAT_EN),
+    HISI_SAS_DEBUGFS_REG(CQE_SEND_CNT),
+    HISI_SAS_DEBUGFS_REG(DLVRY_Q_0_DEPTH),
+    HISI_SAS_DEBUGFS_REG(DLVRY_Q_0_WR_PTR),
+    HISI_SAS_DEBUGFS_REG(DLVRY_Q_0_RD_PTR),
+    HISI_SAS_DEBUGFS_REG(HYPER_STREAM_ID_EN_CFG),
+    HISI_SAS_DEBUGFS_REG(OQ0_INT_SRC_MSK),
+    HISI_SAS_DEBUGFS_REG(COMPL_Q_0_DEPTH),
+    HISI_SAS_DEBUGFS_REG(COMPL_Q_0_WR_PTR),
+    HISI_SAS_DEBUGFS_REG(COMPL_Q_0_RD_PTR),
+    HISI_SAS_DEBUGFS_REG(AWQOS_AWCACHE_CFG),
+    HISI_SAS_DEBUGFS_REG(ARQOS_ARCACHE_CFG),
+    HISI_SAS_DEBUGFS_REG(HILINK_ERR_DFX),
+    HISI_SAS_DEBUGFS_REG(SAS_GPIO_CFG_0),
+    HISI_SAS_DEBUGFS_REG(SAS_GPIO_CFG_1),
+    HISI_SAS_DEBUGFS_REG(SAS_GPIO_TX_0_1),
+    HISI_SAS_DEBUGFS_REG(SAS_CFG_DRIVE_VLD),
+    {}
+};
+
+static const struct hisi_sas_debugfs_reg debugfs_global_reg = {
+    .lu = debugfs_global_reg_lu,
+    .count = 0x800,
+    .read_global_reg = hisi_sas_read32,
+};
+
+static void debugfs_snapshot_prepare_v3_hw(struct hisi_hba *hisi_hba)
+{
+    struct device *dev = hisi_hba->dev;
+
+    set_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags);
+
+    hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE, 0);
+
+    if (wait_cmds_complete_timeout_v3_hw(hisi_hba, 100, 5000) == -ETIMEDOUT)
+        dev_dbg(dev, "Wait commands complete timeout!\n");
+
+    hisi_sas_kill_tasklets(hisi_hba);
+}
+
+static void debugfs_snapshot_restore_v3_hw(struct hisi_hba *hisi_hba)
+{
+    hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE,
+                     (u32)((1ULL << hisi_hba->queue_count) - 1));
+
+    clear_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags);
+}
+
 static struct scsi_host_template sht_v3_hw = {
     .name = DRV_NAME,
     .module = THIS_MODULE,
@@ -2344,6 +2633,7 @@ static struct scsi_host_template sht_v3_hw = {
     .bios_param = sas_bios_param,
     .this_id = -1,
     .sg_tablesize = HISI_SAS_SGE_PAGE_CNT,
+    .sg_prot_tablesize = HISI_SAS_SGE_PAGE_CNT,
     .max_sectors = SCSI_DEFAULT_MAX_SECTORS,
     .eh_device_reset_handler = sas_eh_device_reset_handler,
     .eh_target_reset_handler = sas_eh_target_reset_handler,
@@ -2360,7 +2650,7 @@ static const struct hisi_sas_hw hisi_sas_v3_hw = {
     .get_wideport_bitmap = get_wideport_bitmap_v3_hw,
     .complete_hdr_size = sizeof(struct hisi_sas_complete_v3_hdr),
     .clear_itct = clear_itct_v3_hw,
-    .sl_notify = sl_notify_v3_hw,
+    .sl_notify_ssp = sl_notify_ssp_v3_hw,
     .prep_ssp = prep_ssp_v3_hw,
     .prep_smp = prep_smp_v3_hw,
     .prep_stp = prep_ata_v3_hw,
@@ -2380,6 +2670,10 @@ static const struct hisi_sas_hw hisi_sas_v3_hw = {
     .get_events = phy_get_events_v3_hw,
     .write_gpio = write_gpio_v3_hw,
     .wait_cmds_complete_timeout = wait_cmds_complete_timeout_v3_hw,
+    .debugfs_reg_global = &debugfs_global_reg,
+    .debugfs_reg_port = &debugfs_port_reg,
+    .snapshot_prepare = debugfs_snapshot_prepare_v3_hw,
+    .snapshot_restore = debugfs_snapshot_restore_v3_hw,
 };
 
 static struct Scsi_Host *
@@ -2397,6 +2691,7 @@ hisi_sas_shost_alloc_pci(struct pci_dev *pdev)
     hisi_hba = shost_priv(shost);
 
     INIT_WORK(&hisi_hba->rst_work, hisi_sas_rst_work_handler);
+    INIT_WORK(&hisi_hba->debugfs_work, hisi_sas_debugfs_work_handler);
     hisi_hba->hw = &hisi_sas_v3_hw;
     hisi_hba->pci_dev = pdev;
     hisi_hba->dev = dev;
@@ -2414,7 +2709,7 @@ hisi_sas_shost_alloc_pci(struct pci_dev *pdev)
     if (hisi_sas_get_fw_info(hisi_hba) < 0)
         goto err_out;
 
-    if (hisi_sas_alloc(hisi_hba, shost)) {
+    if (hisi_sas_alloc(hisi_hba)) {
         hisi_sas_free(hisi_hba);
         goto err_out;
     }
@@ -2513,8 +2808,14 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
         dev_info(dev, "Registering for DIF/DIX prot_mask=0x%x\n",
                  prot_mask);
         scsi_host_set_prot(hisi_hba->shost, prot_mask);
+        if (hisi_hba->prot_mask & HISI_SAS_DIX_PROT_MASK)
+            scsi_host_set_guard(hisi_hba->shost,
+                                SHOST_DIX_GUARD_CRC);
     }
 
+    if (hisi_sas_debugfs_enable)
+        hisi_sas_debugfs_init(hisi_hba);
+
     rc = scsi_add_host(shost, dev);
     if (rc)
         goto err_out_ha;
@@ -2551,7 +2852,7 @@ hisi_sas_v3_destroy_irqs(struct pci_dev *pdev, struct hisi_hba *hisi_hba)
     free_irq(pci_irq_vector(pdev, 1), hisi_hba);
     free_irq(pci_irq_vector(pdev, 2), hisi_hba);
     free_irq(pci_irq_vector(pdev, 11), hisi_hba);
-    for (i = 0; i < hisi_hba->queue_count; i++) {
+    for (i = 0; i < hisi_hba->cq_nvecs; i++) {
         struct hisi_sas_cq *cq = &hisi_hba->cq[i];
         int nr = hisi_sas_intr_conv ? 16 : 16 + i;
 
@@ -2567,6 +2868,8 @@ static void hisi_sas_v3_remove(struct pci_dev *pdev)
     struct hisi_hba *hisi_hba = sha->lldd_ha;
     struct Scsi_Host *shost = sha->core.shost;
 
+    hisi_sas_debugfs_exit(hisi_hba);
+
     if (timer_pending(&hisi_hba->timer))
         del_timer(&hisi_hba->timer);
 
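Registering protection capability happens before scsi_add_host(), as in the probe hunk above. A minimal sketch of that registration step, assuming a prot_mask sourced from a module parameter as the driver does; the wrapper itself is illustrative:

    #include <scsi/scsi_host.h>

    /* Advertise T10 PI support before adding the host. */
    static void register_prot(struct Scsi_Host *shost, unsigned int prot_mask,
                              bool dix_capable)
    {
        scsi_host_set_prot(shost, prot_mask);
        if (dix_capable)
            scsi_host_set_guard(shost, SHOST_DIX_GUARD_CRC);
    }

DIX (host-side) protection additionally needs a guard-tag format, hence the extra scsi_host_set_guard() call only when the DIX bits are present in the mask.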
@@ -251,10 +251,11 @@ static int number_of_controllers;
 
 static irqreturn_t do_hpsa_intr_intx(int irq, void *dev_id);
 static irqreturn_t do_hpsa_intr_msi(int irq, void *dev_id);
-static int hpsa_ioctl(struct scsi_device *dev, int cmd, void __user *arg);
+static int hpsa_ioctl(struct scsi_device *dev, unsigned int cmd,
+                      void __user *arg);
 
 #ifdef CONFIG_COMPAT
-static int hpsa_compat_ioctl(struct scsi_device *dev, int cmd,
+static int hpsa_compat_ioctl(struct scsi_device *dev, unsigned int cmd,
     void __user *arg);
 #endif
 
@@ -1327,7 +1328,7 @@ static int hpsa_scsi_add_entry(struct ctlr_info *h,
         dev_warn(&h->pdev->dev, "physical device with no LUN=0,"
             " suspect firmware bug or unsupported hardware "
             "configuration.\n");
-            return -1;
+        return -1;
     }
 
 lun_assigned:
@@ -4110,7 +4111,7 @@ static int hpsa_gather_lun_info(struct ctlr_info *h,
             "maximum logical LUNs (%d) exceeded.  "
             "%d LUNs ignored.\n", HPSA_MAX_LUN,
             *nlogicals - HPSA_MAX_LUN);
-            *nlogicals = HPSA_MAX_LUN;
+        *nlogicals = HPSA_MAX_LUN;
     }
     if (*nlogicals + *nphysicals > HPSA_MAX_PHYS_LUN) {
         dev_warn(&h->pdev->dev,
@@ -6127,7 +6128,7 @@ static void cmd_free(struct ctlr_info *h, struct CommandList *c)
 
 #ifdef CONFIG_COMPAT
 
-static int hpsa_ioctl32_passthru(struct scsi_device *dev, int cmd,
+static int hpsa_ioctl32_passthru(struct scsi_device *dev, unsigned int cmd,
     void __user *arg)
 {
     IOCTL32_Command_struct __user *arg32 =
@@ -6164,7 +6165,7 @@ static int hpsa_ioctl32_passthru(struct scsi_device *dev, unsigned int cmd,
 }
 
 static int hpsa_ioctl32_big_passthru(struct scsi_device *dev,
-    int cmd, void __user *arg)
+    unsigned int cmd, void __user *arg)
 {
     BIG_IOCTL32_Command_struct __user *arg32 =
         (BIG_IOCTL32_Command_struct __user *) arg;
@@ -6201,7 +6202,8 @@ static int hpsa_ioctl32_big_passthru(struct scsi_device *dev,
     return err;
 }
 
-static int hpsa_compat_ioctl(struct scsi_device *dev, int cmd, void __user *arg)
+static int hpsa_compat_ioctl(struct scsi_device *dev, unsigned int cmd,
+                             void __user *arg)
 {
     switch (cmd) {
     case CCISS_GETPCIINFO:
@@ -6521,7 +6523,8 @@ static void check_ioctl_unit_attention(struct ctlr_info *h,
 /*
  * ioctl
  */
-static int hpsa_ioctl(struct scsi_device *dev, int cmd, void __user *arg)
+static int hpsa_ioctl(struct scsi_device *dev, unsigned int cmd,
+                      void __user *arg)
 {
     struct ctlr_info *h;
     void __user *argp = (void __user *)arg;
@@ -3788,11 +3788,6 @@ static int ibmvscsis_write_pending(struct se_cmd *se_cmd)
     return 0;
 }
 
-static int ibmvscsis_write_pending_status(struct se_cmd *se_cmd)
-{
-    return 0;
-}
-
 static void ibmvscsis_set_default_node_attrs(struct se_node_acl *nacl)
 {
 }
@@ -4053,7 +4048,6 @@ static const struct target_core_fabric_ops ibmvscsis_ops = {
     .release_cmd = ibmvscsis_release_cmd,
     .sess_get_index = ibmvscsis_sess_get_index,
     .write_pending = ibmvscsis_write_pending,
-    .write_pending_status = ibmvscsis_write_pending_status,
     .set_default_node_attributes = ibmvscsis_set_default_node_attrs,
     .get_cmd_state = ibmvscsis_get_cmd_state,
     .queue_data_in = ibmvscsis_queue_data_in,
@@ -6696,7 +6696,8 @@ static int ipr_queuecommand(struct Scsi_Host *shost,
  * Return value:
  * 	0 on success / other on failure
  **/
-static int ipr_ioctl(struct scsi_device *sdev, int cmd, void __user *arg)
+static int ipr_ioctl(struct scsi_device *sdev, unsigned int cmd,
+                     void __user *arg)
 {
     struct ipr_resource_entry *res;
 
@@ -518,7 +518,7 @@ static int iscsi_sw_tcp_pdu_init(struct iscsi_task *task,
     if (!task->sc)
         iscsi_sw_tcp_send_linear_data_prep(conn, task->data, count);
     else {
-        struct scsi_data_buffer *sdb = scsi_out(task->sc);
+        struct scsi_data_buffer *sdb = &task->sc->sdb;
 
         err = iscsi_sw_tcp_send_data_prep(conn, sdb->table.sgl,
                                           sdb->table.nents, offset,
@@ -952,12 +952,6 @@ static umode_t iscsi_sw_tcp_attr_is_visible(int param_type, int param)
     return 0;
 }
 
-static int iscsi_sw_tcp_slave_alloc(struct scsi_device *sdev)
-{
-    blk_queue_flag_set(QUEUE_FLAG_BIDI, sdev->request_queue);
-    return 0;
-}
-
 static int iscsi_sw_tcp_slave_configure(struct scsi_device *sdev)
 {
     struct iscsi_sw_tcp_host *tcp_sw_host = iscsi_host_priv(sdev->host);
@@ -985,7 +979,6 @@ static struct scsi_host_template iscsi_sw_tcp_sht = {
     .eh_device_reset_handler= iscsi_eh_device_reset,
     .eh_target_reset_handler = iscsi_eh_recover_target,
     .dma_boundary		= PAGE_SIZE - 1,
-    .slave_alloc            = iscsi_sw_tcp_slave_alloc,
     .slave_configure        = iscsi_sw_tcp_slave_configure,
     .target_alloc		= iscsi_target_alloc,
     .proc_name		= "iscsi_tcp",
@ -228,32 +228,6 @@ static int iscsi_prep_ecdb_ahs(struct iscsi_task *task)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int iscsi_prep_bidi_ahs(struct iscsi_task *task)
|
||||
{
|
||||
struct scsi_cmnd *sc = task->sc;
|
||||
struct iscsi_rlength_ahdr *rlen_ahdr;
|
||||
int rc;
|
||||
|
||||
rlen_ahdr = iscsi_next_hdr(task);
|
||||
rc = iscsi_add_hdr(task, sizeof(*rlen_ahdr));
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
rlen_ahdr->ahslength =
|
||||
cpu_to_be16(sizeof(rlen_ahdr->read_length) +
|
||||
sizeof(rlen_ahdr->reserved));
|
||||
rlen_ahdr->ahstype = ISCSI_AHSTYPE_RLENGTH;
|
||||
rlen_ahdr->reserved = 0;
|
||||
-	rlen_ahdr->read_length = cpu_to_be32(scsi_in(sc)->length);
-
-	ISCSI_DBG_SESSION(task->conn->session,
-			  "bidi-in rlen_ahdr->read_length(%d) "
-			  "rlen_ahdr->ahslength(%d)\n",
-			  be32_to_cpu(rlen_ahdr->read_length),
-			  be16_to_cpu(rlen_ahdr->ahslength));
-	return 0;
-}
-
 /**
  * iscsi_check_tmf_restrictions - check if a task is affected by TMF
  * @task: iscsi task
@@ -392,13 +366,6 @@ static int iscsi_prep_scsi_cmd_pdu(struct iscsi_task *task)
 	memcpy(hdr->cdb, sc->cmnd, cmd_len);
 
 	task->imm_count = 0;
-	if (scsi_bidi_cmnd(sc)) {
-		hdr->flags |= ISCSI_FLAG_CMD_READ;
-		rc = iscsi_prep_bidi_ahs(task);
-		if (rc)
-			return rc;
-	}
-
 	if (scsi_get_prot_op(sc) != SCSI_PROT_NORMAL)
 		task->protected = true;
 
@@ -473,12 +440,10 @@ static int iscsi_prep_scsi_cmd_pdu(struct iscsi_task *task)
 
 	conn->scsicmd_pdus_cnt++;
 	ISCSI_DBG_SESSION(session, "iscsi prep [%s cid %d sc %p cdb 0x%x "
-			  "itt 0x%x len %d bidi_len %d cmdsn %d win %d]\n",
-			  scsi_bidi_cmnd(sc) ? "bidirectional" :
+			  "itt 0x%x len %d cmdsn %d win %d]\n",
 			  sc->sc_data_direction == DMA_TO_DEVICE ?
 			  "write" : "read", conn->id, sc, sc->cmnd[0],
 			  task->itt, transfer_length,
-			  scsi_bidi_cmnd(sc) ? scsi_in(sc)->length : 0,
 			  session->cmdsn,
 			  session->max_cmdsn - session->exp_cmdsn + 1);
 	return 0;
@@ -647,12 +612,7 @@ static void fail_scsi_task(struct iscsi_task *task, int err)
 		state = ISCSI_TASK_ABRT_TMF;
 
 	sc->result = err << 16;
-	if (!scsi_bidi_cmnd(sc))
-		scsi_set_resid(sc, scsi_bufflen(sc));
-	else {
-		scsi_out(sc)->resid = scsi_out(sc)->length;
-		scsi_in(sc)->resid = scsi_in(sc)->length;
-	}
+	scsi_set_resid(sc, scsi_bufflen(sc));
 
 	/* regular RX path uses back_lock */
 	spin_lock_bh(&conn->session->back_lock);
@@ -907,14 +867,7 @@ static void iscsi_scsi_cmd_rsp(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
 
 	if (rhdr->flags & (ISCSI_FLAG_CMD_BIDI_UNDERFLOW |
 			   ISCSI_FLAG_CMD_BIDI_OVERFLOW)) {
-		int res_count = be32_to_cpu(rhdr->bi_residual_count);
-
-		if (scsi_bidi_cmnd(sc) && res_count > 0 &&
-		    (rhdr->flags & ISCSI_FLAG_CMD_BIDI_OVERFLOW ||
-		     res_count <= scsi_in(sc)->length))
-			scsi_in(sc)->resid = res_count;
-		else
-			sc->result = (DID_BAD_TARGET << 16) | rhdr->cmd_status;
+		sc->result = (DID_BAD_TARGET << 16) | rhdr->cmd_status;
 	}
 
 	if (rhdr->flags & (ISCSI_FLAG_CMD_UNDERFLOW |
@@ -961,8 +914,8 @@ iscsi_data_in_rsp(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
 
 		if (res_count > 0 &&
 		    (rhdr->flags & ISCSI_FLAG_CMD_OVERFLOW ||
-		     res_count <= scsi_in(sc)->length))
-			scsi_in(sc)->resid = res_count;
+		     res_count <= sc->sdb.length))
+			scsi_set_resid(sc, res_count);
 		else
 			sc->result = (DID_BAD_TARGET << 16) | rhdr->cmd_status;
 	}
@@ -1810,12 +1763,7 @@ int iscsi_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *sc)
 	spin_unlock_bh(&session->frwd_lock);
 	ISCSI_DBG_SESSION(session, "iscsi: cmd 0x%x is not queued (%d)\n",
 			  sc->cmnd[0], reason);
-	if (!scsi_bidi_cmnd(sc))
-		scsi_set_resid(sc, scsi_bufflen(sc));
-	else {
-		scsi_out(sc)->resid = scsi_out(sc)->length;
-		scsi_in(sc)->resid = scsi_in(sc)->length;
-	}
+	scsi_set_resid(sc, scsi_bufflen(sc));
 	sc->scsi_done(sc);
 	return 0;
 }
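
The hunks above collapse the old bidirectional residual bookkeeping into the single-buffer helpers. A minimal sketch of the resulting failure path, using a hypothetical driver function (not taken from this diff):

	/* Hypothetical sketch: with bidi support gone, a failed command
	 * reports its entire buffer as residual via the single-buffer
	 * helpers, as fail_scsi_task() and iscsi_queuecommand() now do.
	 */
	static void fail_cmd_sketch(struct scsi_cmnd *sc, int err)
	{
		sc->result = err << 16;			/* DID_* code in the host byte */
		scsi_set_resid(sc, scsi_bufflen(sc));	/* nothing was transferred */
		sc->scsi_done(sc);			/* complete back to the midlayer */
	}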
@@ -495,7 +495,7 @@ static int iscsi_tcp_data_in(struct iscsi_conn *conn, struct iscsi_task *task)
 	struct iscsi_tcp_task *tcp_task = task->dd_data;
 	struct iscsi_data_rsp *rhdr = (struct iscsi_data_rsp *)tcp_conn->in.hdr;
 	int datasn = be32_to_cpu(rhdr->datasn);
-	unsigned total_in_length = scsi_in(task->sc)->length;
+	unsigned total_in_length = task->sc->sdb.length;
 
 	/*
 	 * lib iscsi will update this in the completion handling if there
@@ -580,11 +580,11 @@ static int iscsi_tcp_r2t_rsp(struct iscsi_conn *conn, struct iscsi_task *task)
 			   data_length, session->max_burst);
 
 	data_offset = be32_to_cpu(rhdr->data_offset);
-	if (data_offset + data_length > scsi_out(task->sc)->length) {
+	if (data_offset + data_length > task->sc->sdb.length) {
 		iscsi_conn_printk(KERN_ERR, conn,
 				  "invalid R2T with data len %u at offset %u "
 				  "and total length %d\n", data_length,
-				  data_offset, scsi_out(task->sc)->length);
+				  data_offset, task->sc->sdb.length);
 		return ISCSI_ERR_DATALEN;
 	}
 
@@ -696,7 +696,7 @@ iscsi_tcp_hdr_dissect(struct iscsi_conn *conn, struct iscsi_hdr *hdr)
 	if (tcp_conn->in.datalen) {
 		struct iscsi_tcp_task *tcp_task = task->dd_data;
 		struct ahash_request *rx_hash = NULL;
-		struct scsi_data_buffer *sdb = scsi_in(task->sc);
+		struct scsi_data_buffer *sdb = &task->sc->sdb;
 
 		/*
 		 * Setup copy of Data-In into the struct scsi_cmnd
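
These hunks read the transfer length through the command's single embedded data buffer instead of scsi_in()/scsi_out(). A one-line sketch, assuming the post-removal struct scsi_cmnd layout:

	/* Sketch only: both directions now share sc->sdb, so length checks
	 * no longer depend on the data direction.
	 */
	static unsigned int total_xfer_len(struct scsi_cmnd *sc)
	{
		return sc->sdb.length;	/* equivalent to scsi_bufflen(sc) */
	}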
@@ -25,6 +25,7 @@
 #include <linux/scatterlist.h>
 #include <linux/blkdev.h>
 #include <linux/slab.h>
+#include <asm/unaligned.h>
 
 #include "sas_internal.h"
 
@@ -614,7 +615,14 @@ int sas_smp_phy_control(struct domain_device *dev, int phy_id,
 	}
 
 	res = smp_execute_task(dev, pc_req, PC_REQ_SIZE, pc_resp,PC_RESP_SIZE);
-
+	if (res) {
+		pr_err("ex %016llx phy%02d PHY control failed: %d\n",
+		       SAS_ADDR(dev->sas_addr), phy_id, res);
+	} else if (pc_resp[2] != SMP_RESP_FUNC_ACC) {
+		pr_err("ex %016llx phy%02d PHY control failed: function result 0x%x\n",
+		       SAS_ADDR(dev->sas_addr), phy_id, pc_resp[2]);
+		res = pc_resp[2];
+	}
 	kfree(pc_resp);
 	kfree(pc_req);
 	return res;
@@ -689,10 +697,10 @@ int sas_smp_get_phy_events(struct sas_phy *phy)
 	if (res)
 		goto out;
 
-	phy->invalid_dword_count = scsi_to_u32(&resp[12]);
-	phy->running_disparity_error_count = scsi_to_u32(&resp[16]);
-	phy->loss_of_dword_sync_count = scsi_to_u32(&resp[20]);
-	phy->phy_reset_problem_count = scsi_to_u32(&resp[24]);
+	phy->invalid_dword_count = get_unaligned_be32(&resp[12]);
+	phy->running_disparity_error_count = get_unaligned_be32(&resp[16]);
+	phy->loss_of_dword_sync_count = get_unaligned_be32(&resp[20]);
+	phy->phy_reset_problem_count = get_unaligned_be32(&resp[24]);
 
  out:
 	kfree(req);
@@ -817,6 +825,26 @@ static struct domain_device *sas_ex_discover_end_dev(
 
 #ifdef CONFIG_SCSI_SAS_ATA
 	if ((phy->attached_tproto & SAS_PROTOCOL_STP) || phy->attached_sata_dev) {
+		if (child->linkrate > parent->min_linkrate) {
+			struct sas_phy_linkrates rates = {
+				.maximum_linkrate = parent->min_linkrate,
+				.minimum_linkrate = parent->min_linkrate,
+			};
+			int ret;
+
+			pr_notice("ex %016llx phy%02d SATA device linkrate > min pathway connection rate, attempting to lower device linkrate\n",
+				  SAS_ADDR(child->sas_addr), phy_id);
+			ret = sas_smp_phy_control(parent, phy_id,
+						  PHY_FUNC_LINK_RESET, &rates);
+			if (ret) {
+				pr_err("ex %016llx phy%02d SATA device could not set linkrate (%d)\n",
+				       SAS_ADDR(child->sas_addr), phy_id, ret);
+				goto out_free;
+			}
+			pr_notice("ex %016llx phy%02d SATA device set linkrate successfully\n",
+				  SAS_ADDR(child->sas_addr), phy_id);
+			child->linkrate = child->min_linkrate;
+		}
 		res = sas_get_ata_info(child, phy);
 		if (res)
 			goto out_free;
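
The phy-events hunk swaps the SAS-private scsi_to_u32() for the generic unaligned big-endian accessor pulled in by the new include. A self-contained sketch of the same access pattern (the helper name here is hypothetical):

	#include <asm/unaligned.h>

	/* Sketch: read a big-endian 32-bit counter from an SMP response
	 * buffer at a byte offset that need not be naturally aligned.
	 */
	static u32 read_be32_field(const u8 *resp, int offset)
	{
		return get_unaligned_be32(&resp[offset]);
	}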
@@ -799,7 +799,7 @@ void sas_scsi_recover_host(struct Scsi_Host *shost)
 		    shost->host_failed, tries);
 }
 
-int sas_ioctl(struct scsi_device *sdev, int cmd, void __user *arg)
+int sas_ioctl(struct scsi_device *sdev, unsigned int cmd, void __user *arg)
 {
 	struct domain_device *dev = sdev_to_domain_dev(sdev);
 
@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for *
  * Fibre Channel Host Bus Adapters. *
- * Copyright (C) 2017-2018 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
  * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. *
  * Copyright (C) 2004-2016 Emulex. All rights reserved. *
 * EMULEX and SLI are trademarks of Emulex. *
@@ -84,8 +84,6 @@ struct lpfc_sli2_slim;
 #define LPFC_HB_MBOX_INTERVAL 5 /* Heart beat interval in seconds. */
 #define LPFC_HB_MBOX_TIMEOUT 30 /* Heart beat timeout in seconds. */
 
-#define LPFC_LOOK_AHEAD_OFF	0 /* Look ahead logic is turned off */
-
 /* Error Attention event polling interval */
 #define LPFC_ERATT_POLL_INTERVAL 5 /* EATT poll interval in seconds */
 
@@ -146,6 +144,7 @@ struct lpfc_nvmet_ctxbuf {
 	struct lpfc_nvmet_rcv_ctx *context;
 	struct lpfc_iocbq *iocbq;
 	struct lpfc_sglq *sglq;
+	struct work_struct defer_work;
 };
 
 struct lpfc_dma_pool {
@@ -235,8 +234,6 @@ typedef struct lpfc_vpd {
 	} sli3Feat;
 } lpfc_vpd_t;
 
-struct lpfc_scsi_buf;
-
 
 /*
  * lpfc stat counters
@@ -466,6 +463,7 @@ struct lpfc_vport {
 	uint32_t cfg_use_adisc;
 	uint32_t cfg_discovery_threads;
 	uint32_t cfg_log_verbose;
+	uint32_t cfg_enable_fc4_type;
 	uint32_t cfg_max_luns;
 	uint32_t cfg_enable_da_id;
 	uint32_t cfg_max_scsicmpl_time;
@@ -479,6 +477,7 @@ struct lpfc_vport {
 	struct dentry *debug_disc_trc;
 	struct dentry *debug_nodelist;
 	struct dentry *debug_nvmestat;
+	struct dentry *debug_scsistat;
 	struct dentry *debug_nvmektime;
 	struct dentry *debug_cpucheck;
 	struct dentry *vport_debugfs_root;
@@ -596,6 +595,13 @@ struct lpfc_mbox_ext_buf_ctx {
 	struct list_head ext_dmabuf_list;
 };
 
+struct lpfc_epd_pool {
+	/* Expedite pool */
+	struct list_head list;
+	u32 count;
+	spinlock_t lock;	/* lock for expedite pool */
+};
+
 struct lpfc_ras_fwlog {
 	uint8_t *fwlog_buff;
 	uint32_t fw_buffcount; /* Buffer size posted to FW */
@@ -617,20 +623,19 @@ struct lpfc_ras_fwlog {
 
 struct lpfc_hba {
 	/* SCSI interface function jump table entries */
-	int (*lpfc_new_scsi_buf)
-		(struct lpfc_vport *, int);
-	struct lpfc_scsi_buf * (*lpfc_get_scsi_buf)
-		(struct lpfc_hba *, struct lpfc_nodelist *);
+	struct lpfc_io_buf * (*lpfc_get_scsi_buf)
+		(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
+		struct scsi_cmnd *cmnd);
 	int (*lpfc_scsi_prep_dma_buf)
-		(struct lpfc_hba *, struct lpfc_scsi_buf *);
+		(struct lpfc_hba *, struct lpfc_io_buf *);
 	void (*lpfc_scsi_unprep_dma_buf)
-		(struct lpfc_hba *, struct lpfc_scsi_buf *);
+		(struct lpfc_hba *, struct lpfc_io_buf *);
 	void (*lpfc_release_scsi_buf)
-		(struct lpfc_hba *, struct lpfc_scsi_buf *);
+		(struct lpfc_hba *, struct lpfc_io_buf *);
 	void (*lpfc_rampdown_queue_depth)
 		(struct lpfc_hba *);
 	void (*lpfc_scsi_prep_cmnd)
-		(struct lpfc_vport *, struct lpfc_scsi_buf *,
+		(struct lpfc_vport *, struct lpfc_io_buf *,
 		 struct lpfc_nodelist *);
 
 	/* IOCB interface function jump table entries */
@@ -673,13 +678,17 @@ struct lpfc_hba {
 		(struct lpfc_hba *);
 
 	int (*lpfc_bg_scsi_prep_dma_buf)
-		(struct lpfc_hba *, struct lpfc_scsi_buf *);
+		(struct lpfc_hba *, struct lpfc_io_buf *);
 	/* Add new entries here */
 
+	/* expedite pool */
+	struct lpfc_epd_pool epd_pool;
+
 	/* SLI4 specific HBA data structure */
 	struct lpfc_sli4_hba sli4_hba;
 
 	struct workqueue_struct *wq;
+	struct delayed_work eq_delay_work;
 
 	struct lpfc_sli sli;
 	uint8_t pci_dev_grp;	/* lpfc PCI dev group: 0x0, 0x1, 0x2,... */
@@ -713,7 +722,6 @@ struct lpfc_hba {
 #define HBA_FCOE_MODE		0x4 /* HBA function in FCoE Mode */
 #define HBA_SP_QUEUE_EVT	0x8 /* Slow-path qevt posted to worker thread*/
 #define HBA_POST_RECEIVE_BUFFER 0x10 /* Rcv buffers need to be posted */
-#define FCP_XRI_ABORT_EVENT	0x20
 #define ELS_XRI_ABORT_EVENT	0x40
 #define ASYNC_EVENT		0x80
 #define LINK_DISABLED		0x100 /* Link disabled by user */
@@ -784,12 +792,12 @@ struct lpfc_hba {
 	uint8_t nvmet_support;	/* driver supports NVMET */
 #define LPFC_NVMET_MAX_PORTS	32
 	uint8_t mds_diags_support;
-	uint32_t initial_imax;
 	uint8_t bbcredit_support;
 	uint8_t enab_exp_wqcq_pages;
 
 	/* HBA Config Parameters */
 	uint32_t cfg_ack0;
+	uint32_t cfg_xri_rebalancing;
 	uint32_t cfg_enable_npiv;
 	uint32_t cfg_enable_rrq;
 	uint32_t cfg_topology;
@@ -811,12 +819,14 @@ struct lpfc_hba {
 	uint32_t cfg_use_msi;
 	uint32_t cfg_auto_imax;
 	uint32_t cfg_fcp_imax;
+	uint32_t cfg_cq_poll_threshold;
+	uint32_t cfg_cq_max_proc_limit;
 	uint32_t cfg_fcp_cpu_map;
-	uint32_t cfg_fcp_io_channel;
+	uint32_t cfg_hdw_queue;
+	uint32_t cfg_irq_chann;
 	uint32_t cfg_suppress_rsp;
 	uint32_t cfg_nvme_oas;
 	uint32_t cfg_nvme_embed_cmd;
-	uint32_t cfg_nvme_io_channel;
 	uint32_t cfg_nvmet_mrq_post;
 	uint32_t cfg_nvmet_mrq;
 	uint32_t cfg_enable_nvmet;
@@ -852,6 +862,7 @@ struct lpfc_hba {
 	uint32_t cfg_prot_guard;
 	uint32_t cfg_hostmem_hgp;
 	uint32_t cfg_log_verbose;
+	uint32_t cfg_enable_fc4_type;
 	uint32_t cfg_aer_support;
 	uint32_t cfg_sriov_nr_virtfn;
 	uint32_t cfg_request_firmware_upgrade;
@@ -872,15 +883,12 @@ struct lpfc_hba {
 	uint32_t cfg_ras_fwlog_level;
 	uint32_t cfg_ras_fwlog_buffsize;
 	uint32_t cfg_ras_fwlog_func;
-	uint32_t cfg_enable_fc4_type;
 	uint32_t cfg_enable_bbcr;	/* Enable BB Credit Recovery */
 	uint32_t cfg_enable_dpp;	/* Enable Direct Packet Push */
-	uint32_t cfg_xri_split;
 #define LPFC_ENABLE_FCP  1
 #define LPFC_ENABLE_NVME 2
 #define LPFC_ENABLE_BOTH 3
 	uint32_t cfg_enable_pbde;
-	uint32_t io_channel_irqs;	/* number of irqs for io channels */
 	struct nvmet_fc_target_port *targetport;
 	lpfc_vpd_t vpd;	/* vital product data */
 
@@ -952,14 +960,6 @@ struct lpfc_hba {
 	struct timer_list eratt_poll;
 	uint32_t eratt_poll_interval;
 
-	/*
-	 * stat counters
-	 */
-	atomic_t fc4ScsiInputRequests;
-	atomic_t fc4ScsiOutputRequests;
-	atomic_t fc4ScsiControlRequests;
-	atomic_t fc4ScsiIoCmpls;
-
 	uint64_t bg_guard_err_cnt;
 	uint64_t bg_apptag_err_cnt;
 	uint64_t bg_reftag_err_cnt;
@@ -970,13 +970,6 @@ struct lpfc_hba {
 	struct list_head lpfc_scsi_buf_list_get;
 	struct list_head lpfc_scsi_buf_list_put;
 	uint32_t total_scsi_bufs;
-	spinlock_t nvme_buf_list_get_lock;  /* NVME buf alloc list lock */
-	spinlock_t nvme_buf_list_put_lock;  /* NVME buf free list lock */
-	struct list_head lpfc_nvme_buf_list_get;
-	struct list_head lpfc_nvme_buf_list_put;
-	uint32_t total_nvme_bufs;
-	uint32_t get_nvme_bufs;
-	uint32_t put_nvme_bufs;
 	struct list_head lpfc_iocb_list;
 	uint32_t total_iocbq_bufs;
 	struct list_head active_rrq_list;
@@ -1033,6 +1026,7 @@ struct lpfc_hba {
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
 	struct dentry *hba_debugfs_root;
 	atomic_t debugfs_vport_count;
+	struct dentry *debug_multixri_pools;
 	struct dentry *debug_hbqinfo;
 	struct dentry *debug_dumpHostSlim;
 	struct dentry *debug_dumpHBASlim;
@@ -1050,6 +1044,10 @@ struct lpfc_hba {
 
 	struct dentry *debug_nvmeio_trc;
 	struct lpfc_debugfs_nvmeio_trc *nvmeio_trc;
+	struct dentry *debug_hdwqinfo;
+#ifdef LPFC_HDWQ_LOCK_STAT
+	struct dentry *debug_lockstat;
+#endif
 	atomic_t nvmeio_trc_cnt;
 	uint32_t nvmeio_trc_size;
 	uint32_t nvmeio_trc_output_idx;
@@ -1090,7 +1088,6 @@ struct lpfc_hba {
 
 	uint8_t temp_sensor_support;
 	/* Fields used for heart beat. */
-	unsigned long last_eqdelay_time;
 	unsigned long last_completion_time;
 	unsigned long skipped_hb;
 	struct timer_list hb_tmofunc;
@@ -1164,16 +1161,12 @@ struct lpfc_hba {
 	uint16_t sfp_warning;
 
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
-#define LPFC_CHECK_CPU_CNT	32
-	uint32_t cpucheck_rcv_io[LPFC_CHECK_CPU_CNT];
-	uint32_t cpucheck_xmt_io[LPFC_CHECK_CPU_CNT];
-	uint32_t cpucheck_cmpl_io[LPFC_CHECK_CPU_CNT];
-	uint32_t cpucheck_ccmpl_io[LPFC_CHECK_CPU_CNT];
 	uint16_t cpucheck_on;
 #define LPFC_CHECK_OFF		0
 #define LPFC_CHECK_NVME_IO	1
 #define LPFC_CHECK_NVMET_RCV	2
 #define LPFC_CHECK_NVMET_IO	4
+#define LPFC_CHECK_SCSI_IO	8
 	uint16_t ktime_on;
 	uint64_t ktime_data_samples;
 	uint64_t ktime_status_samples;
@@ -1297,3 +1290,23 @@ lpfc_phba_elsring(struct lpfc_hba *phba)
 	}
 	return &phba->sli.sli3_ring[LPFC_ELS_RING];
 }
+
+/**
+ * lpfc_sli4_mod_hba_eq_delay - update EQ delay
+ * @phba: Pointer to HBA context object.
+ * @q: The Event Queue to update.
+ * @delay: The delay value (in us) to be written.
+ *
+ **/
+static inline void
+lpfc_sli4_mod_hba_eq_delay(struct lpfc_hba *phba, struct lpfc_queue *eq,
+			   u32 delay)
+{
+	struct lpfc_register reg_data;
+
+	reg_data.word0 = 0;
+	bf_set(lpfc_sliport_eqdelay_id, &reg_data, eq->queue_id);
+	bf_set(lpfc_sliport_eqdelay_delay, &reg_data, delay);
+	writel(reg_data.word0, phba->sli4_hba.u.if_type2.EQDregaddr);
+	eq->q_mode = delay;
+}
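
lpfc_sli4_mod_hba_eq_delay() takes its delay in microseconds; the fcp_imax store routine later in this diff derives that delay from an interrupts-per-second cap. A hedged sketch of that conversion (the wrapper name is illustrative; LPFC_SEC_TO_USEC is assumed from the driver headers):

	/* Sketch: convert a per-second interrupt cap into the microsecond
	 * EQ delay written by lpfc_sli4_mod_hba_eq_delay(); 0 disables
	 * coalescing, mirroring lpfc_fcp_imax_store() below.
	 */
	static inline u32 imax_to_usdelay(u32 fcp_imax)
	{
		return fcp_imax ? LPFC_SEC_TO_USEC / fcp_imax : 0;
	}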
@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for *
  * Fibre Channel Host Bus Adapters. *
- * Copyright (C) 2017-2018 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
  * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. *
  * Copyright (C) 2004-2016 Emulex. All rights reserved. *
 * EMULEX and SLI are trademarks of Emulex. *
@@ -64,9 +64,6 @@
-#define LPFC_MIN_MRQ_POST	512
-#define LPFC_MAX_MRQ_POST	2048
-
 #define LPFC_MAX_NVME_INFO_TMP_LEN	100
 #define LPFC_NVME_INFO_MORE_STR	"\nCould be more info...\n"
 
 /*
  * Write key size should be multiple of 4. If write key is changed
  * make sure that library write key is also changed.
@@ -155,7 +152,7 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
 	struct lpfc_nvme_rport *rport;
 	struct lpfc_nodelist *ndlp;
 	struct nvme_fc_remote_port *nrport;
-	struct lpfc_nvme_ctrl_stat *cstat;
+	struct lpfc_fc4_ctrl_stat *cstat;
 	uint64_t data1, data2, data3;
 	uint64_t totin, totout, tot;
 	char *statep;
@@ -163,7 +160,7 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
 	int len = 0;
 	char tmp[LPFC_MAX_NVME_INFO_TMP_LEN] = {0};
 
-	if (!(phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME)) {
+	if (!(vport->cfg_enable_fc4_type & LPFC_ENABLE_NVME)) {
 		len = scnprintf(buf, PAGE_SIZE, "NVME Disabled\n");
 		return len;
 	}
@@ -334,11 +331,10 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
 
 	rcu_read_lock();
 	scnprintf(tmp, sizeof(tmp),
-		  "XRI Dist lpfc%d Total %d NVME %d SCSI %d ELS %d\n",
+		  "XRI Dist lpfc%d Total %d IO %d ELS %d\n",
 		  phba->brd_no,
 		  phba->sli4_hba.max_cfg_param.max_xri,
-		  phba->sli4_hba.nvme_xri_max,
-		  phba->sli4_hba.scsi_xri_max,
+		  phba->sli4_hba.io_xri_max,
 		  lpfc_sli4_get_els_iocb_cnt(phba));
 	if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE)
 		goto buffer_done;
@@ -457,13 +453,13 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
 
 	totin = 0;
 	totout = 0;
-	for (i = 0; i < phba->cfg_nvme_io_channel; i++) {
-		cstat = &lport->cstat[i];
-		tot = atomic_read(&cstat->fc4NvmeIoCmpls);
+	for (i = 0; i < phba->cfg_hdw_queue; i++) {
+		cstat = &phba->sli4_hba.hdwq[i].nvme_cstat;
+		tot = cstat->io_cmpls;
 		totin += tot;
-		data1 = atomic_read(&cstat->fc4NvmeInputRequests);
-		data2 = atomic_read(&cstat->fc4NvmeOutputRequests);
-		data3 = atomic_read(&cstat->fc4NvmeControlRequests);
+		data1 = cstat->input_requests;
+		data2 = cstat->output_requests;
+		data3 = cstat->control_requests;
 		totout += (data1 + data2 + data3);
 	}
 	scnprintf(tmp, sizeof(tmp),
@@ -509,6 +505,57 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
 	return len;
 }
 
+static ssize_t
+lpfc_scsi_stat_show(struct device *dev, struct device_attribute *attr,
+		    char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(dev);
+	struct lpfc_vport *vport = shost_priv(shost);
+	struct lpfc_hba *phba = vport->phba;
+	int len;
+	struct lpfc_fc4_ctrl_stat *cstat;
+	u64 data1, data2, data3;
+	u64 tot, totin, totout;
+	int i;
+	char tmp[LPFC_MAX_SCSI_INFO_TMP_LEN] = {0};
+
+	if (!(vport->cfg_enable_fc4_type & LPFC_ENABLE_FCP) ||
+	    (phba->sli_rev != LPFC_SLI_REV4))
+		return 0;
+
+	scnprintf(buf, PAGE_SIZE, "SCSI HDWQ Statistics\n");
+
+	totin = 0;
+	totout = 0;
+	for (i = 0; i < phba->cfg_hdw_queue; i++) {
+		cstat = &phba->sli4_hba.hdwq[i].scsi_cstat;
+		tot = cstat->io_cmpls;
+		totin += tot;
+		data1 = cstat->input_requests;
+		data2 = cstat->output_requests;
+		data3 = cstat->control_requests;
+		totout += (data1 + data2 + data3);
+
+		scnprintf(tmp, sizeof(tmp), "HDWQ (%d): Rd %016llx Wr %016llx "
+			  "IO %016llx ", i, data1, data2, data3);
+		if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE)
+			goto buffer_done;
+
+		scnprintf(tmp, sizeof(tmp), "Cmpl %016llx OutIO %016llx\n",
+			  tot, ((data1 + data2 + data3) - tot));
+		if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE)
+			goto buffer_done;
+	}
+	scnprintf(tmp, sizeof(tmp), "Total FCP Cmpl %016llx Issue %016llx "
+		  "OutIO %016llx\n", totin, totout, totout - totin);
+	strlcat(buf, tmp, PAGE_SIZE);
+
+buffer_done:
+	len = strnlen(buf, PAGE_SIZE);
+
+	return len;
+}
+
 static ssize_t
 lpfc_bg_info_show(struct device *dev, struct device_attribute *attr,
 		  char *buf)
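
lpfc_scsi_stat_show() above uses the usual sysfs accumulation idiom. A condensed sketch of just that idiom, with hypothetical names and a trimmed format string:

	/* Sketch: format each chunk into a small stack buffer, append with
	 * strlcat(), and stop once the PAGE_SIZE sysfs buffer would fill.
	 */
	static ssize_t stats_show_sketch(char *buf, int nqueues)
	{
		char tmp[64];
		int i;

		scnprintf(buf, PAGE_SIZE, "SCSI HDWQ Statistics\n");
		for (i = 0; i < nqueues; i++) {
			scnprintf(tmp, sizeof(tmp), "HDWQ (%d): ...\n", i);
			if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE)
				break;
		}
		return strnlen(buf, PAGE_SIZE);
	}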
@@ -2574,6 +2621,7 @@ lpfc_##attr##_store(struct device *dev, struct device_attribute *attr, \
 
 
 static DEVICE_ATTR(nvme_info, 0444, lpfc_nvme_info_show, NULL);
+static DEVICE_ATTR(scsi_stat, 0444, lpfc_scsi_stat_show, NULL);
 static DEVICE_ATTR(bg_info, S_IRUGO, lpfc_bg_info_show, NULL);
 static DEVICE_ATTR(bg_guard_err, S_IRUGO, lpfc_bg_guard_err_show, NULL);
 static DEVICE_ATTR(bg_apptag_err, S_IRUGO, lpfc_bg_apptag_err_show, NULL);
@@ -3724,28 +3772,12 @@ LPFC_ATTR_R(nvmet_mrq_post,
  * lpfc_enable_fc4_type: Defines what FC4 types are supported.
  * Supported Values: 1 - register just FCP
  *                   3 - register both FCP and NVME
- * Supported values are [1,3]. Default value is 1
+ * Supported values are [1,3]. Default value is 3
  */
-LPFC_ATTR_R(enable_fc4_type, LPFC_ENABLE_FCP,
+LPFC_ATTR_R(enable_fc4_type, LPFC_ENABLE_BOTH,
 	    LPFC_ENABLE_FCP, LPFC_ENABLE_BOTH,
 	    "Enable FC4 Protocol support - FCP / NVME");
 
-/*
- * lpfc_xri_split: Defines the division of XRI resources between SCSI and NVME
- * This parameter is only used if:
- *     lpfc_enable_fc4_type is 3 - register both FCP and NVME and
- *     port is not configured for NVMET.
- *
- * ELS/CT always get 10% of XRIs, up to a maximum of 250
- * The remaining XRIs get split up based on lpfc_xri_split per port:
- *
- * Supported Values are in percentages
- * the xri_split value is the percentage the SCSI port will get. The remaining
- * percentage will go to NVME.
- */
-LPFC_ATTR_R(xri_split, 50, 10, 90,
-	    "Percentage of FCP XRI resources versus NVME");
-
 /*
 # lpfc_log_verbose: Only turn this flag on if you are willing to risk being
 # deluged with LOTS of information.
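
The default switch to LPFC_ENABLE_BOTH matters because lpfc_enable_fc4_type is a bitmask (1 = FCP, 2 = NVME, 3 = both), as the per-bit tests elsewhere in this diff show. A trivial sketch under that assumption (helper name is hypothetical):

	/* Sketch: protocol checks test individual bits of the mask. */
	static bool nvme_enabled_sketch(u32 cfg_enable_fc4_type)
	{
		return (cfg_enable_fc4_type & LPFC_ENABLE_NVME) != 0;
	}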
@@ -4903,6 +4935,8 @@ lpfc_fcp_imax_store(struct device *dev, struct device_attribute *attr,
 	struct Scsi_Host *shost = class_to_shost(dev);
 	struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
 	struct lpfc_hba *phba = vport->phba;
+	struct lpfc_eq_intr_info *eqi;
+	uint32_t usdelay;
 	int val = 0, i;
 
 	/* fcp_imax is only valid for SLI4 */
@@ -4923,12 +4957,27 @@ lpfc_fcp_imax_store(struct device *dev, struct device_attribute *attr,
 	if (val && (val < LPFC_MIN_IMAX || val > LPFC_MAX_IMAX))
 		return -EINVAL;
 
-	phba->cfg_fcp_imax = (uint32_t)val;
-	phba->initial_imax = phba->cfg_fcp_imax;
+	phba->cfg_auto_imax = (val) ? 0 : 1;
+	if (phba->cfg_fcp_imax && !val) {
+		queue_delayed_work(phba->wq, &phba->eq_delay_work,
+				   msecs_to_jiffies(LPFC_EQ_DELAY_MSECS));
+
+		for_each_present_cpu(i) {
+			eqi = per_cpu_ptr(phba->sli4_hba.eq_info, i);
+			eqi->icnt = 0;
+		}
+	}
+
+	phba->cfg_fcp_imax = (uint32_t)val;
+
+	if (phba->cfg_fcp_imax)
+		usdelay = LPFC_SEC_TO_USEC / phba->cfg_fcp_imax;
+	else
+		usdelay = 0;
 
-	for (i = 0; i < phba->io_channel_irqs; i += LPFC_MAX_EQ_DELAY_EQID_CNT)
+	for (i = 0; i < phba->cfg_irq_chann; i += LPFC_MAX_EQ_DELAY_EQID_CNT)
 		lpfc_modify_hba_eq_delay(phba, i, LPFC_MAX_EQ_DELAY_EQID_CNT,
-					 val);
+					 usdelay);
 
 	return strlen(buf);
 }
@@ -4982,15 +5031,119 @@ lpfc_fcp_imax_init(struct lpfc_hba *phba, int val)
 
 static DEVICE_ATTR_RW(lpfc_fcp_imax);
 
+/**
+ * lpfc_cq_max_proc_limit_store
+ *
+ * @dev: class device that is converted into a Scsi_host.
+ * @attr: device attribute, not used.
+ * @buf: string with the cq max processing limit of cqes
+ * @count: unused variable.
+ *
+ * Description:
+ * If val is in a valid range, then set value on each cq
+ *
+ * Returns:
+ * The length of the buf: if successful
+ * -ERANGE: if val is not in the valid range
+ * -EINVAL: if bad value format or intended mode is not supported.
+ **/
+static ssize_t
+lpfc_cq_max_proc_limit_store(struct device *dev, struct device_attribute *attr,
+			     const char *buf, size_t count)
+{
+	struct Scsi_Host *shost = class_to_shost(dev);
+	struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
+	struct lpfc_hba *phba = vport->phba;
+	struct lpfc_queue *eq, *cq;
+	unsigned long val;
+	int i;
+
+	/* cq_max_proc_limit is only valid for SLI4 */
+	if (phba->sli_rev != LPFC_SLI_REV4)
+		return -EINVAL;
+
+	/* Sanity check on user data */
+	if (!isdigit(buf[0]))
+		return -EINVAL;
+	if (kstrtoul(buf, 0, &val))
+		return -EINVAL;
+
+	if (val < LPFC_CQ_MIN_PROC_LIMIT || val > LPFC_CQ_MAX_PROC_LIMIT)
+		return -ERANGE;
+
+	phba->cfg_cq_max_proc_limit = (uint32_t)val;
+
+	/* set the values on the cq's */
+	for (i = 0; i < phba->cfg_irq_chann; i++) {
+		eq = phba->sli4_hba.hdwq[i].hba_eq;
+		if (!eq)
+			continue;
+
+		list_for_each_entry(cq, &eq->child_list, list)
+			cq->max_proc_limit = min(phba->cfg_cq_max_proc_limit,
+						 cq->entry_count);
+	}
+
+	return strlen(buf);
+}
+
 /*
- * lpfc_auto_imax: Controls Auto-interrupt coalescing values support.
- *       0       No auto_imax support
- *       1       auto imax on
- * Auto imax will change the value of fcp_imax on a per EQ basis, using
- * the EQ Delay Multiplier, depending on the activity for that EQ.
- * Value range [0,1]. Default value is 1.
+ * lpfc_cq_max_proc_limit: The maximum number CQE entries processed in an
+ *   itteration of CQ processing.
  */
-LPFC_ATTR_RW(auto_imax, 1, 0, 1, "Enable Auto imax");
+static int lpfc_cq_max_proc_limit = LPFC_CQ_DEF_MAX_PROC_LIMIT;
+module_param(lpfc_cq_max_proc_limit, int, 0644);
+MODULE_PARM_DESC(lpfc_cq_max_proc_limit,
+	    "Set the maximum number CQEs processed in an iteration of "
+	    "CQ processing");
+lpfc_param_show(cq_max_proc_limit)
+
+/*
+ * lpfc_cq_poll_threshold: Set the threshold of CQE completions in a
+ * single handler call which should request a polled completion rather
+ * than re-enabling interrupts.
+ */
+LPFC_ATTR_RW(cq_poll_threshold, LPFC_CQ_DEF_THRESHOLD_TO_POLL,
+	     LPFC_CQ_MIN_THRESHOLD_TO_POLL,
+	     LPFC_CQ_MAX_THRESHOLD_TO_POLL,
+	     "CQE Processing Threshold to enable Polling");
+
+/**
+ * lpfc_cq_max_proc_limit_init - Set the initial cq max_proc_limit
+ * @phba: lpfc_hba pointer.
+ * @val: entry limit
+ *
+ * Description:
+ * If val is in a valid range, then initialize the adapter's maximum
+ * value.
+ *
+ * Returns:
+ * Always returns 0 for success, even if value not always set to
+ * requested value. If value out of range or not supported, will fall
+ * back to default.
+ **/
+static int
+lpfc_cq_max_proc_limit_init(struct lpfc_hba *phba, int val)
+{
+	phba->cfg_cq_max_proc_limit = LPFC_CQ_DEF_MAX_PROC_LIMIT;
+
+	if (phba->sli_rev != LPFC_SLI_REV4)
+		return 0;
+
+	if (val >= LPFC_CQ_MIN_PROC_LIMIT && val <= LPFC_CQ_MAX_PROC_LIMIT) {
+		phba->cfg_cq_max_proc_limit = val;
+		return 0;
+	}
+
+	lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+			"0371 "LPFC_DRIVER_NAME"_cq_max_proc_limit: "
+			"%d out of range, using default\n",
+			phba->cfg_cq_max_proc_limit);
+
+	return 0;
+}
+
+static DEVICE_ATTR_RW(lpfc_cq_max_proc_limit);
 
 /**
  * lpfc_state_show - Display current driver CPU affinity
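
The store/init pair above follows the driver's usual contract: the sysfs store rejects bad input outright, while the boot-time init silently falls back to the default. A compressed sketch of that split (hypothetical helper, same bounds macros):

	/* Sketch: runtime writes fail hard, module-load values degrade
	 * gracefully to the default.
	 */
	static int clamp_proc_limit_sketch(int val)
	{
		if (val >= LPFC_CQ_MIN_PROC_LIMIT && val <= LPFC_CQ_MAX_PROC_LIMIT)
			return val;
		return LPFC_CQ_DEF_MAX_PROC_LIMIT;	/* init-time fallback */
	}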
@@ -5023,50 +5176,70 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
 	case 1:
 		len += snprintf(buf + len, PAGE_SIZE-len,
 				"fcp_cpu_map: HBA centric mapping (%d): "
-				"%d online CPUs\n",
-				phba->cfg_fcp_cpu_map,
-				phba->sli4_hba.num_online_cpu);
-		break;
-	case 2:
-		len += snprintf(buf + len, PAGE_SIZE-len,
-				"fcp_cpu_map: Driver centric mapping (%d): "
-				"%d online CPUs\n",
-				phba->cfg_fcp_cpu_map,
-				phba->sli4_hba.num_online_cpu);
+				"%d of %d CPUs online from %d possible CPUs\n",
+				phba->cfg_fcp_cpu_map, num_online_cpus(),
+				num_present_cpus(),
+				phba->sli4_hba.num_possible_cpu);
 		break;
 	}
 
-	while (phba->sli4_hba.curr_disp_cpu < phba->sli4_hba.num_present_cpu) {
+	while (phba->sli4_hba.curr_disp_cpu <
+	       phba->sli4_hba.num_possible_cpu) {
 		cpup = &phba->sli4_hba.cpu_map[phba->sli4_hba.curr_disp_cpu];
 
-		/* margin should fit in this and the truncated message */
-		if (cpup->irq == LPFC_VECTOR_MAP_EMPTY)
-			len += snprintf(buf + len, PAGE_SIZE-len,
-					"CPU %02d io_chan %02d "
-					"physid %d coreid %d\n",
+		if (!cpu_present(phba->sli4_hba.curr_disp_cpu))
+			len += snprintf(buf + len, PAGE_SIZE - len,
+					"CPU %02d not present\n",
+					phba->sli4_hba.curr_disp_cpu);
+		else if (cpup->irq == LPFC_VECTOR_MAP_EMPTY) {
+			if (cpup->hdwq == LPFC_VECTOR_MAP_EMPTY)
+				len += snprintf(
+					buf + len, PAGE_SIZE - len,
+					"CPU %02d hdwq None "
+					"physid %d coreid %d ht %d\n",
 					phba->sli4_hba.curr_disp_cpu,
-					cpup->channel_id, cpup->phys_id,
-					cpup->core_id);
-		else
-			len += snprintf(buf + len, PAGE_SIZE-len,
-					"CPU %02d io_chan %02d "
-					"physid %d coreid %d IRQ %d\n",
+					cpup->phys_id,
+					cpup->core_id, cpup->hyper);
+			else
+				len += snprintf(
+					buf + len, PAGE_SIZE - len,
+					"CPU %02d EQ %04d hdwq %04d "
+					"physid %d coreid %d ht %d\n",
 					phba->sli4_hba.curr_disp_cpu,
-					cpup->channel_id, cpup->phys_id,
-					cpup->core_id, cpup->irq);
+					cpup->eq, cpup->hdwq, cpup->phys_id,
+					cpup->core_id, cpup->hyper);
+		} else {
+			if (cpup->hdwq == LPFC_VECTOR_MAP_EMPTY)
+				len += snprintf(
+					buf + len, PAGE_SIZE - len,
+					"CPU %02d hdwq None "
+					"physid %d coreid %d ht %d IRQ %d\n",
+					phba->sli4_hba.curr_disp_cpu,
+					cpup->phys_id,
+					cpup->core_id, cpup->hyper, cpup->irq);
+			else
+				len += snprintf(
+					buf + len, PAGE_SIZE - len,
+					"CPU %02d EQ %04d hdwq %04d "
+					"physid %d coreid %d ht %d IRQ %d\n",
+					phba->sli4_hba.curr_disp_cpu,
+					cpup->eq, cpup->hdwq, cpup->phys_id,
+					cpup->core_id, cpup->hyper, cpup->irq);
+		}
 
 		phba->sli4_hba.curr_disp_cpu++;
 
 		/* display max number of CPUs keeping some margin */
 		if (phba->sli4_hba.curr_disp_cpu <
-		    phba->sli4_hba.num_present_cpu &&
+		    phba->sli4_hba.num_possible_cpu &&
 		    (len >= (PAGE_SIZE - 64))) {
-			len += snprintf(buf + len, PAGE_SIZE-len, "more...\n");
+			len += snprintf(buf + len,
+					PAGE_SIZE - len, "more...\n");
 			break;
 		}
 	}
 
-	if (phba->sli4_hba.curr_disp_cpu == phba->sli4_hba.num_present_cpu)
+	if (phba->sli4_hba.curr_disp_cpu == phba->sli4_hba.num_possible_cpu)
 		phba->sli4_hba.curr_disp_cpu = 0;
 
 	return len;
@@ -5094,14 +5267,13 @@ lpfc_fcp_cpu_map_store(struct device *dev, struct device_attribute *attr,
 # lpfc_fcp_cpu_map: Defines how to map CPUs to IRQ vectors
 # for the HBA.
 #
-# Value range is [0 to 2]. Default value is LPFC_DRIVER_CPU_MAP (2).
+# Value range is [0 to 1]. Default value is LPFC_HBA_CPU_MAP (1).
 #	0 - Do not affinitze IRQ vectors
 #	1 - Affintize HBA vectors with respect to each HBA
 #	    (start with CPU0 for each HBA)
-#	2 - Affintize HBA vectors with respect to the entire driver
-#	    (round robin thru all CPUs across all HBAs)
+# This also defines how Hardware Queues are mapped to specific CPUs.
 */
-static int lpfc_fcp_cpu_map = LPFC_DRIVER_CPU_MAP;
+static int lpfc_fcp_cpu_map = LPFC_HBA_CPU_MAP;
 module_param(lpfc_fcp_cpu_map, int, S_IRUGO|S_IWUSR);
 MODULE_PARM_DESC(lpfc_fcp_cpu_map,
 	    "Defines how to map CPUs to IRQ vectors per HBA");
@@ -5135,7 +5307,7 @@ lpfc_fcp_cpu_map_init(struct lpfc_hba *phba, int val)
 	lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 			"3326 lpfc_fcp_cpu_map: %d out of range, using "
 			"default\n", val);
-	phba->cfg_fcp_cpu_map = LPFC_DRIVER_CPU_MAP;
+	phba->cfg_fcp_cpu_map = LPFC_HBA_CPU_MAP;
 
 	return 0;
 }
@@ -5234,14 +5406,21 @@ static DEVICE_ATTR_RW(lpfc_max_scsicmpl_time);
 */
 LPFC_ATTR_R(ack0, 0, 0, 1, "Enable ACK0 support");
 
+/*
+# lpfc_xri_rebalancing: enable or disable XRI rebalancing feature
+# range is [0,1]. Default value is 1.
+*/
+LPFC_ATTR_R(xri_rebalancing, 1, 0, 1, "Enable/Disable XRI rebalancing");
+
 /*
  * lpfc_io_sched: Determine scheduling algrithmn for issuing FCP cmds
  * range is [0,1]. Default value is 0.
- *    For [0], FCP commands are issued to Work Queues ina round robin fashion.
+ *    For [0], FCP commands are issued to Work Queues based on upper layer
+ *    hardware queue index.
  *    For [1], FCP commands are issued to a Work Queue associated with the
  *             current CPU.
 *
- * LPFC_FCP_SCHED_ROUND_ROBIN == 0
+ * LPFC_FCP_SCHED_BY_HDWQ == 0
 * LPFC_FCP_SCHED_BY_CPU == 1
 *
 * The driver dynamically sets this to 1 (BY_CPU) if it's able to set up cpu
@@ -5249,11 +5428,11 @@ LPFC_ATTR_R(ack0, 0, 0, 1, "Enable ACK0 support");
 * CPU. Otherwise, the default 0 (Round Robin) scheduling of FCP/NVME I/Os
 * through WQs will be used.
 */
-LPFC_ATTR_RW(fcp_io_sched, LPFC_FCP_SCHED_ROUND_ROBIN,
-	     LPFC_FCP_SCHED_ROUND_ROBIN,
+LPFC_ATTR_RW(fcp_io_sched, LPFC_FCP_SCHED_BY_CPU,
+	     LPFC_FCP_SCHED_BY_HDWQ,
 	     LPFC_FCP_SCHED_BY_CPU,
 	     "Determine scheduling algorithm for "
-	     "issuing commands [0] - Round Robin, [1] - Current CPU");
+	     "issuing commands [0] - Hardware Queue, [1] - Current CPU");
 
 /*
 * lpfc_ns_query: Determine algrithmn for NameServer queries after RSCN
@@ -5415,41 +5594,39 @@ LPFC_ATTR_RW(nvme_embed_cmd, 1, 0, 2,
 	     "Embed NVME Command in WQE");
 
 /*
- * lpfc_fcp_io_channel: Set the number of FCP IO channels the driver
- * will advertise it supports to the SCSI layer. This also will map to
- * the number of WQs the driver will create.
- *
- * 0 = Configure the number of io channels to the number of active CPUs.
- * 1,32 = Manually specify how many io channels to use.
- *
- * Value range is [0,32]. Default value is 4.
- */
-LPFC_ATTR_R(fcp_io_channel,
-	    LPFC_FCP_IO_CHAN_DEF,
-	    LPFC_HBA_IO_CHAN_MIN, LPFC_HBA_IO_CHAN_MAX,
-	    "Set the number of FCP I/O channels");
-
-/*
- * lpfc_nvme_io_channel: Set the number of IO hardware queues the driver
- * will advertise it supports to the NVME layer. This also will map to
- * the number of WQs the driver will create.
- *
- * This module parameter is valid when lpfc_enable_fc4_type is set
- * to support NVME.
+ * lpfc_hdw_queue: Set the number of Hardware Queues the driver
+ * will advertise it supports to the NVME and SCSI layers. This also
+ * will map to the number of CQ/WQ pairs the driver will create.
 *
 * The NVME Layer will try to create this many, plus 1 administrative
 * hardware queue. The administrative queue will always map to WQ 0
- * A hardware IO queue maps (qidx) to a specific driver WQ.
+ * A hardware IO queue maps (qidx) to a specific driver CQ/WQ.
 *
- * 0 = Configure the number of io channels to the number of active CPUs.
- * 1,32 = Manually specify how many io channels to use.
+ * 0 = Configure the number of hdw queues to the number of active CPUs.
+ * 1,128 = Manually specify how many hdw queues to use.
 *
- * Value range is [0,32]. Default value is 0.
+ * Value range is [0,128]. Default value is 0.
 */
-LPFC_ATTR_R(nvme_io_channel,
-	    LPFC_NVME_IO_CHAN_DEF,
-	    LPFC_HBA_IO_CHAN_MIN, LPFC_HBA_IO_CHAN_MAX,
-	    "Set the number of NVME I/O channels");
+LPFC_ATTR_R(hdw_queue,
+	    LPFC_HBA_HDWQ_DEF,
+	    LPFC_HBA_HDWQ_MIN, LPFC_HBA_HDWQ_MAX,
+	    "Set the number of I/O Hardware Queues");
+
+/*
+ * lpfc_irq_chann: Set the number of IRQ vectors that are available
+ * for Hardware Queues to utilize. This also will map to the number
+ * of EQ / MSI-X vectors the driver will create. This should never be
+ * more than the number of Hardware Queues
+ *
+ * 0 = Configure number of IRQ Channels to the number of active CPUs.
+ * 1,128 = Manually specify how many IRQ Channels to use.
+ *
+ * Value range is [0,128]. Default value is 0.
+ */
+LPFC_ATTR_R(irq_chann,
+	    LPFC_HBA_HDWQ_DEF,
+	    LPFC_HBA_HDWQ_MIN, LPFC_HBA_HDWQ_MAX,
+	    "Set the number of I/O IRQ Channels");
 
 /*
 # lpfc_enable_hba_reset: Allow or prevent HBA resets to the hardware.
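
For both new parameters, 0 delegates sizing to the CPU count discovered at attach time, as lpfc_get_cfgparam() below implements. A sketch of that defaulting rule (standalone helper, illustrative only):

	/* Sketch: a configured value of 0 means "one per present CPU". */
	static u32 default_queue_count_sketch(u32 cfg, u32 num_present_cpu)
	{
		return cfg ? cfg : num_present_cpu;
	}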
@@ -5491,16 +5668,6 @@ LPFC_ATTR_RW(XLanePriority, 0, 0x0, 0x7f, "CS_CTL for Express Lane Feature.");
 */
 LPFC_ATTR_R(enable_bg, 0, 0, 1, "Enable BlockGuard Support");
 
-/*
-# lpfc_fcp_look_ahead: Look ahead for completions in FCP start routine
-# 0  = disabled (default)
-# 1  = enabled
-# Value range is [0,1]. Default value is 0.
-#
-# This feature in under investigation and may be supported in the future.
-*/
-unsigned int lpfc_fcp_look_ahead = LPFC_LOOK_AHEAD_OFF;
-
 /*
 # lpfc_prot_mask: i
 #	- Bit mask of host protection capabilities used to register with the
@@ -5677,6 +5844,7 @@ LPFC_ATTR_RW(enable_dpp, 1, 0, 1, "Enable Direct Packet Push");
 
 struct device_attribute *lpfc_hba_attrs[] = {
 	&dev_attr_nvme_info,
+	&dev_attr_scsi_stat,
 	&dev_attr_bg_info,
 	&dev_attr_bg_guard_err,
 	&dev_attr_bg_apptag_err,
@@ -5704,11 +5872,11 @@ struct device_attribute *lpfc_hba_attrs[] = {
 	&dev_attr_lpfc_nodev_tmo,
 	&dev_attr_lpfc_devloss_tmo,
 	&dev_attr_lpfc_enable_fc4_type,
-	&dev_attr_lpfc_xri_split,
 	&dev_attr_lpfc_fcp_class,
 	&dev_attr_lpfc_use_adisc,
 	&dev_attr_lpfc_first_burst_size,
 	&dev_attr_lpfc_ack0,
+	&dev_attr_lpfc_xri_rebalancing,
 	&dev_attr_lpfc_topology,
 	&dev_attr_lpfc_scan_down,
 	&dev_attr_lpfc_link_speed,
@@ -5742,12 +5910,13 @@ struct device_attribute *lpfc_hba_attrs[] = {
 	&dev_attr_lpfc_use_msi,
 	&dev_attr_lpfc_nvme_oas,
 	&dev_attr_lpfc_nvme_embed_cmd,
-	&dev_attr_lpfc_auto_imax,
 	&dev_attr_lpfc_fcp_imax,
+	&dev_attr_lpfc_cq_poll_threshold,
+	&dev_attr_lpfc_cq_max_proc_limit,
 	&dev_attr_lpfc_fcp_cpu_map,
-	&dev_attr_lpfc_fcp_io_channel,
+	&dev_attr_lpfc_hdw_queue,
+	&dev_attr_lpfc_irq_chann,
 	&dev_attr_lpfc_suppress_rsp,
-	&dev_attr_lpfc_nvme_io_channel,
 	&dev_attr_lpfc_nvmet_mrq,
 	&dev_attr_lpfc_nvmet_mrq_post,
 	&dev_attr_lpfc_nvme_enable_fb,
@@ -6775,6 +6944,7 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
 	lpfc_multi_ring_rctl_init(phba, lpfc_multi_ring_rctl);
 	lpfc_multi_ring_type_init(phba, lpfc_multi_ring_type);
 	lpfc_ack0_init(phba, lpfc_ack0);
+	lpfc_xri_rebalancing_init(phba, lpfc_xri_rebalancing);
 	lpfc_topology_init(phba, lpfc_topology);
 	lpfc_link_speed_init(phba, lpfc_link_speed);
 	lpfc_poll_tmo_init(phba, lpfc_poll_tmo);
@@ -6787,8 +6957,9 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
 	lpfc_use_msi_init(phba, lpfc_use_msi);
 	lpfc_nvme_oas_init(phba, lpfc_nvme_oas);
 	lpfc_nvme_embed_cmd_init(phba, lpfc_nvme_embed_cmd);
-	lpfc_auto_imax_init(phba, lpfc_auto_imax);
 	lpfc_fcp_imax_init(phba, lpfc_fcp_imax);
+	lpfc_cq_poll_threshold_init(phba, lpfc_cq_poll_threshold);
+	lpfc_cq_max_proc_limit_init(phba, lpfc_cq_max_proc_limit);
 	lpfc_fcp_cpu_map_init(phba, lpfc_fcp_cpu_map);
 	lpfc_enable_hba_reset_init(phba, lpfc_enable_hba_reset);
 	lpfc_enable_hba_heartbeat_init(phba, lpfc_enable_hba_heartbeat);
@@ -6824,8 +6995,8 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
 	/* Initialize first burst. Target vs Initiator are different. */
 	lpfc_nvme_enable_fb_init(phba, lpfc_nvme_enable_fb);
 	lpfc_nvmet_fb_size_init(phba, lpfc_nvmet_fb_size);
-	lpfc_fcp_io_channel_init(phba, lpfc_fcp_io_channel);
-	lpfc_nvme_io_channel_init(phba, lpfc_nvme_io_channel);
+	lpfc_hdw_queue_init(phba, lpfc_hdw_queue);
+	lpfc_irq_chann_init(phba, lpfc_irq_chann);
 	lpfc_enable_bbcr_init(phba, lpfc_enable_bbcr);
 	lpfc_enable_dpp_init(phba, lpfc_enable_dpp);
 
@@ -6834,38 +7005,27 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
 		phba->nvmet_support = 0;
 		phba->cfg_enable_fc4_type = LPFC_ENABLE_FCP;
 		phba->cfg_enable_bbcr = 0;
+		phba->cfg_xri_rebalancing = 0;
 	} else {
 		/* We MUST have FCP support */
 		if (!(phba->cfg_enable_fc4_type & LPFC_ENABLE_FCP))
 			phba->cfg_enable_fc4_type |= LPFC_ENABLE_FCP;
 	}
 
-	if (phba->cfg_auto_imax && !phba->cfg_fcp_imax)
-		phba->cfg_auto_imax = 0;
-	phba->initial_imax = phba->cfg_fcp_imax;
+	phba->cfg_auto_imax = (phba->cfg_fcp_imax) ? 0 : 1;
 
 	phba->cfg_enable_pbde = 0;
 
 	/* A value of 0 means use the number of CPUs found in the system */
-	if (phba->cfg_fcp_io_channel == 0)
-		phba->cfg_fcp_io_channel = phba->sli4_hba.num_present_cpu;
-	if (phba->cfg_nvme_io_channel == 0)
-		phba->cfg_nvme_io_channel = phba->sli4_hba.num_present_cpu;
-
-	if (phba->cfg_enable_fc4_type == LPFC_ENABLE_NVME)
-		phba->cfg_fcp_io_channel = 0;
-
-	if (phba->cfg_enable_fc4_type == LPFC_ENABLE_FCP)
-		phba->cfg_nvme_io_channel = 0;
-
-	if (phba->cfg_fcp_io_channel > phba->cfg_nvme_io_channel)
-		phba->io_channel_irqs = phba->cfg_fcp_io_channel;
-	else
-		phba->io_channel_irqs = phba->cfg_nvme_io_channel;
+	if (phba->cfg_hdw_queue == 0)
+		phba->cfg_hdw_queue = phba->sli4_hba.num_present_cpu;
+	if (phba->cfg_irq_chann == 0)
+		phba->cfg_irq_chann = phba->sli4_hba.num_present_cpu;
+	if (phba->cfg_irq_chann > phba->cfg_hdw_queue)
+		phba->cfg_irq_chann = phba->cfg_hdw_queue;
 
 	phba->cfg_soft_wwnn = 0L;
 	phba->cfg_soft_wwpn = 0L;
-	lpfc_xri_split_init(phba, lpfc_xri_split);
 	lpfc_sg_seg_cnt_init(phba, lpfc_sg_seg_cnt);
 	lpfc_hba_queue_depth_init(phba, lpfc_hba_queue_depth);
 	lpfc_hba_log_verbose_init(phba, lpfc_log_verbose);
@@ -6903,16 +7063,16 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
 void
 lpfc_nvme_mod_param_dep(struct lpfc_hba *phba)
 {
-	if (phba->cfg_nvme_io_channel > phba->sli4_hba.num_present_cpu)
-		phba->cfg_nvme_io_channel = phba->sli4_hba.num_present_cpu;
-
-	if (phba->cfg_fcp_io_channel > phba->sli4_hba.num_present_cpu)
-		phba->cfg_fcp_io_channel = phba->sli4_hba.num_present_cpu;
+	if (phba->cfg_hdw_queue > phba->sli4_hba.num_present_cpu)
+		phba->cfg_hdw_queue = phba->sli4_hba.num_present_cpu;
+	if (phba->cfg_irq_chann > phba->sli4_hba.num_present_cpu)
+		phba->cfg_irq_chann = phba->sli4_hba.num_present_cpu;
+	if (phba->cfg_irq_chann > phba->cfg_hdw_queue)
+		phba->cfg_irq_chann = phba->cfg_hdw_queue;
 
 	if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME &&
 	    phba->nvmet_support) {
 		phba->cfg_enable_fc4_type &= ~LPFC_ENABLE_FCP;
-		phba->cfg_fcp_io_channel = 0;
 
 		lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC,
 				"6013 %s x%x fb_size x%x, fb_max x%x\n",
@@ -6929,11 +7089,11 @@ lpfc_nvme_mod_param_dep(struct lpfc_hba *phba)
 	}
 
 	if (!phba->cfg_nvmet_mrq)
-		phba->cfg_nvmet_mrq = phba->cfg_nvme_io_channel;
+		phba->cfg_nvmet_mrq = phba->cfg_irq_chann;
 
 	/* Adjust lpfc_nvmet_mrq to avoid running out of WQE slots */
-	if (phba->cfg_nvmet_mrq > phba->cfg_nvme_io_channel) {
-		phba->cfg_nvmet_mrq = phba->cfg_nvme_io_channel;
+	if (phba->cfg_nvmet_mrq > phba->cfg_irq_chann) {
+		phba->cfg_nvmet_mrq = phba->cfg_irq_chann;
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_DISC,
 				"6018 Adjust lpfc_nvmet_mrq to %d\n",
 				phba->cfg_nvmet_mrq);
@@ -6947,11 +7107,6 @@ lpfc_nvme_mod_param_dep(struct lpfc_hba *phba)
 		phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_OFF;
 		phba->cfg_nvmet_fb_size = 0;
 	}
-
-	if (phba->cfg_fcp_io_channel > phba->cfg_nvme_io_channel)
-		phba->io_channel_irqs = phba->cfg_fcp_io_channel;
-	else
-		phba->io_channel_irqs = phba->cfg_nvme_io_channel;
 }
 
 /**
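
lpfc_nvme_mod_param_dep() above enforces the documented invariant that the IRQ channel count never exceeds the hardware queue count. A sketch of the same clamp in isolation (hypothetical helper):

	/* Sketch: EQ/MSI-X vectors are capped by the number of hardware
	 * queues they can service.
	 */
	static u32 clamp_irq_chann_sketch(u32 irq_chann, u32 hdw_queue)
	{
		return irq_chann > hdw_queue ? hdw_queue : irq_chann;
	}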
@@ -2947,7 +2947,7 @@ static int lpfcdiag_loop_post_rxbufs(struct lpfc_hba *phba, uint16_t rxxri,
 			cmd->un.cont64[i].addrLow = putPaddrLow(mp[i]->phys);
 			cmd->un.cont64[i].tus.f.bdeSize =
 				((struct lpfc_dmabufext *)mp[i])->size;
-					cmd->ulpBdeCount = ++i;
+			cmd->ulpBdeCount = ++i;
 
 			if ((--num_bde > 0) && (i < 2))
 				continue;
@@ -4682,7 +4682,7 @@ lpfc_bsg_issue_mbox(struct lpfc_hba *phba, struct bsg_job *job,
 	 * Don't allow mailbox commands to be sent when blocked or when in
 	 * the middle of discovery
 	 */
-	 if (phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO) {
+	if (phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO) {
 		rc = -EAGAIN;
 		goto job_done;
 	}
@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for *
  * Fibre Channel Host Bus Adapters. *
- * Copyright (C) 2017-2018 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
  * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. *
  * Copyright (C) 2004-2016 Emulex. All rights reserved. *
 * EMULEX and SLI are trademarks of Emulex. *
@@ -199,11 +199,6 @@ void lpfc_reset_hba(struct lpfc_hba *);
 int lpfc_emptyq_wait(struct lpfc_hba *phba, struct list_head *hd,
 			spinlock_t *slock);
 
-int lpfc_fof_queue_create(struct lpfc_hba *);
-int lpfc_fof_queue_setup(struct lpfc_hba *);
-int lpfc_fof_queue_destroy(struct lpfc_hba *);
-irqreturn_t lpfc_sli4_fof_intr_handler(int, void *);
-
 int lpfc_sli_setup(struct lpfc_hba *);
 int lpfc_sli4_setup(struct lpfc_hba *phba);
 void lpfc_sli_queue_init(struct lpfc_hba *phba);
@@ -320,8 +315,8 @@ void lpfc_sli_def_mbox_cmpl(struct lpfc_hba *, LPFC_MBOXQ_t *);
 void lpfc_sli4_unreg_rpi_cmpl_clr(struct lpfc_hba *, LPFC_MBOXQ_t *);
 int lpfc_sli_issue_iocb(struct lpfc_hba *, uint32_t,
 			struct lpfc_iocbq *, uint32_t);
-int lpfc_sli4_issue_wqe(struct lpfc_hba *phba, uint32_t rnum,
-			struct lpfc_iocbq *iocbq);
+int lpfc_sli4_issue_wqe(struct lpfc_hba *phba, struct lpfc_sli4_hdw_queue *qp,
+			struct lpfc_iocbq *pwqe);
 struct lpfc_sglq *__lpfc_clear_active_sglq(struct lpfc_hba *phba, uint16_t xri);
 struct lpfc_sglq *__lpfc_sli_get_nvmet_sglq(struct lpfc_hba *phba,
 					    struct lpfc_iocbq *piocbq);
@@ -445,7 +440,6 @@ extern spinlock_t _dump_buf_lock;
 extern int _dump_buf_done;
 extern spinlock_t pgcnt_lock;
 extern unsigned int pgcnt;
-extern unsigned int lpfc_fcp_look_ahead;
 
 /* Interface exported by fabric iocb scheduler */
 void lpfc_fabric_abort_nport(struct lpfc_nodelist *);
@@ -520,8 +514,13 @@ int lpfc_sli4_read_config(struct lpfc_hba *);
 void lpfc_sli4_node_prep(struct lpfc_hba *);
 int lpfc_sli4_els_sgl_update(struct lpfc_hba *phba);
 int lpfc_sli4_nvmet_sgl_update(struct lpfc_hba *phba);
-int lpfc_sli4_scsi_sgl_update(struct lpfc_hba *phba);
-int lpfc_sli4_nvme_sgl_update(struct lpfc_hba *phba);
+int lpfc_io_buf_flush(struct lpfc_hba *phba, struct list_head *sglist);
+int lpfc_io_buf_replenish(struct lpfc_hba *phba, struct list_head *cbuf);
+int lpfc_sli4_io_sgl_update(struct lpfc_hba *phba);
+int lpfc_sli4_post_io_sgl_list(struct lpfc_hba *phba,
+		struct list_head *blist, int xricnt);
+int lpfc_new_io_buf(struct lpfc_hba *phba, int num_to_alloc);
+void lpfc_io_free(struct lpfc_hba *phba);
 void lpfc_free_sgl_list(struct lpfc_hba *, struct list_head *);
 uint32_t lpfc_sli_port_speed_get(struct lpfc_hba *);
 int lpfc_sli4_request_firmware_update(struct lpfc_hba *, uint8_t);
@@ -574,6 +573,21 @@ void lpfc_nvme_mod_param_dep(struct lpfc_hba *phba);
 void lpfc_nvme_abort_fcreq_cmpl(struct lpfc_hba *phba,
 				struct lpfc_iocbq *cmdiocb,
 				struct lpfc_wcqe_complete *abts_cmpl);
+void lpfc_create_multixri_pools(struct lpfc_hba *phba);
+void lpfc_create_destroy_pools(struct lpfc_hba *phba);
+void lpfc_move_xri_pvt_to_pbl(struct lpfc_hba *phba, u32 hwqid);
+void lpfc_move_xri_pbl_to_pvt(struct lpfc_hba *phba, u32 hwqid, u32 cnt);
+void lpfc_adjust_high_watermark(struct lpfc_hba *phba, u32 hwqid);
+void lpfc_keep_pvt_pool_above_lowwm(struct lpfc_hba *phba, u32 hwqid);
+void lpfc_adjust_pvt_pool_count(struct lpfc_hba *phba, u32 hwqid);
+#ifdef LPFC_MXP_STAT
+void lpfc_snapshot_mxp(struct lpfc_hba *, u32);
+#endif
+struct lpfc_io_buf *lpfc_get_io_buf(struct lpfc_hba *phba,
+				struct lpfc_nodelist *ndlp, u32 hwqid,
+				int);
+void lpfc_release_io_buf(struct lpfc_hba *phba, struct lpfc_io_buf *ncmd,
+			 struct lpfc_sli4_hdw_queue *qp);
 void lpfc_nvme_cmd_template(void);
 void lpfc_nvmet_cmd_template(void);
 extern int lpfc_enable_nvmet_cnt;
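
The new prototypes centralize I/O buffer handling for SCSI and NVME. A hedged usage sketch of the get/release pair (the buf->hdwq back-pointer is an assumption from the hdwq rework; error handling is elided):

	/* Sketch: allocate a common I/O buffer from a hardware queue's
	 * pool, use it, and return it to the same queue's pool.
	 */
	static void io_buf_cycle_sketch(struct lpfc_hba *phba,
					struct lpfc_nodelist *ndlp, u32 hwqid)
	{
		struct lpfc_io_buf *buf;

		buf = lpfc_get_io_buf(phba, ndlp, hwqid, 0);
		if (!buf)
			return;
		/* ... build and issue the command here ... */
		lpfc_release_io_buf(phba, buf, buf->hdwq);	/* assumed member */
	}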
@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for *
  * Fibre Channel Host Bus Adapters. *
- * Copyright (C) 2017-2018 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
  * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. *
  * Copyright (C) 2004-2016 Emulex. All rights reserved. *
 * EMULEX and SLI are trademarks of Emulex. *
@@ -1656,16 +1656,16 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode,
 		CtReq->un.rft.PortId = cpu_to_be32(vport->fc_myDID);
 
 		/* Register FC4 FCP type if enabled. */
-		if ((phba->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
-		    (phba->cfg_enable_fc4_type == LPFC_ENABLE_FCP))
+		if (vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH ||
+		    vport->cfg_enable_fc4_type == LPFC_ENABLE_FCP)
 			CtReq->un.rft.fcpReg = 1;
 
 		/* Register NVME type if enabled. Defined LE and swapped.
 		 * rsvd[0] is used as word1 because of the hard-coded
 		 * word0 usage in the ct_request data structure.
 		 */
-		if ((phba->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
-		    (phba->cfg_enable_fc4_type == LPFC_ENABLE_NVME))
+		if (vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH ||
+		    vport->cfg_enable_fc4_type == LPFC_ENABLE_NVME)
 			CtReq->un.rft.rsvd[0] =
 				cpu_to_be32(LPFC_FC4_TYPE_BITMASK);
 
@@ -1732,8 +1732,8 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode,
 		 * caller can specify NVME (type x28) as well. But only
 		 * these that FC4 type is supported.
 		 */
-		if (((phba->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
-		     (phba->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) &&
+		if (((vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
+		     (vport->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) &&
 		    (context == FC_TYPE_NVME)) {
 			if ((vport == phba->pport) && phba->nvmet_support) {
 				CtReq->un.rff.fbits = (FC4_FEATURE_TARGET |
@@ -1744,8 +1744,8 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode,
 			}
 			CtReq->un.rff.type_code = context;
 
-		} else if (((phba->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
-			    (phba->cfg_enable_fc4_type == LPFC_ENABLE_FCP)) &&
+		} else if (((vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
+			    (vport->cfg_enable_fc4_type == LPFC_ENABLE_FCP)) &&
 			   (context == FC_TYPE_FCP))
 			CtReq->un.rff.type_code = context;
 
File diff suppressed because it is too large
@ -1,7 +1,7 @@
|
|||
/*******************************************************************
|
||||
* This file is part of the Emulex Linux Device Driver for *
|
||||
* Fibre Channel Host Bus Adapters. *
|
||||
* Copyright (C) 2017-2018 Broadcom. All Rights Reserved. The term *
|
||||
* Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
|
||||
* “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. *
|
||||
* Copyright (C) 2007-2011 Emulex. All rights reserved. *
|
||||
* EMULEX and SLI are trademarks of Emulex. *
|
||||
|
@ -50,6 +50,9 @@
|
|||
#define LPFC_CPUCHECK_SIZE 8192
|
||||
 #define LPFC_NVMEIO_TRC_SIZE 8192
 
+/* scsistat output buffer size */
+#define LPFC_SCSISTAT_SIZE 8192
+
 #define LPFC_DEBUG_OUT_LINE_SZ	80
 
 /*

@@ -284,6 +287,9 @@ struct lpfc_idiag {
 #endif
 
+/* multixripool output buffer size */
+#define LPFC_DUMP_MULTIXRIPOOL_SIZE 8192
+
 enum {
 	DUMP_FCP,
 	DUMP_NVME,

@@ -410,10 +416,10 @@ lpfc_debug_dump_wq(struct lpfc_hba *phba, int qtype, int wqidx)
 	char *qtypestr;
 
 	if (qtype == DUMP_FCP) {
-		wq = phba->sli4_hba.fcp_wq[wqidx];
+		wq = phba->sli4_hba.hdwq[wqidx].fcp_wq;
 		qtypestr = "FCP";
 	} else if (qtype == DUMP_NVME) {
-		wq = phba->sli4_hba.nvme_wq[wqidx];
+		wq = phba->sli4_hba.hdwq[wqidx].nvme_wq;
 		qtypestr = "NVME";
 	} else if (qtype == DUMP_MBX) {
 		wq = phba->sli4_hba.mbx_wq;

@@ -454,14 +460,15 @@ lpfc_debug_dump_cq(struct lpfc_hba *phba, int qtype, int wqidx)
 	int eqidx;
 
 	/* fcp/nvme wq and cq are 1:1, thus same indexes */
+	eq = NULL;
 
 	if (qtype == DUMP_FCP) {
-		wq = phba->sli4_hba.fcp_wq[wqidx];
-		cq = phba->sli4_hba.fcp_cq[wqidx];
+		wq = phba->sli4_hba.hdwq[wqidx].fcp_wq;
+		cq = phba->sli4_hba.hdwq[wqidx].fcp_cq;
 		qtypestr = "FCP";
 	} else if (qtype == DUMP_NVME) {
-		wq = phba->sli4_hba.nvme_wq[wqidx];
-		cq = phba->sli4_hba.nvme_cq[wqidx];
+		wq = phba->sli4_hba.hdwq[wqidx].nvme_wq;
+		cq = phba->sli4_hba.hdwq[wqidx].nvme_cq;
 		qtypestr = "NVME";
 	} else if (qtype == DUMP_MBX) {
 		wq = phba->sli4_hba.mbx_wq;

@@ -478,17 +485,17 @@ lpfc_debug_dump_cq(struct lpfc_hba *phba, int qtype, int wqidx)
 	} else
 		return;
 
-	for (eqidx = 0; eqidx < phba->io_channel_irqs; eqidx++) {
-		if (cq->assoc_qid == phba->sli4_hba.hba_eq[eqidx]->queue_id)
+	for (eqidx = 0; eqidx < phba->cfg_hdw_queue; eqidx++) {
+		eq = phba->sli4_hba.hdwq[eqidx].hba_eq;
+		if (cq->assoc_qid == eq->queue_id)
 			break;
 	}
-	if (eqidx == phba->io_channel_irqs) {
+	if (eqidx == phba->cfg_hdw_queue) {
 		pr_err("Couldn't find EQ for CQ. Using EQ[0]\n");
 		eqidx = 0;
+		eq = phba->sli4_hba.hdwq[0].hba_eq;
 	}
 
-	eq = phba->sli4_hba.hba_eq[eqidx];
-
 	if (qtype == DUMP_FCP || qtype == DUMP_NVME)
 		pr_err("%s CQ: WQ[Idx:%d|Qid%d]->CQ[Idx%d|Qid%d]"
 			"->EQ[Idx:%d|Qid:%d]:\n",

@@ -516,7 +523,7 @@ lpfc_debug_dump_hba_eq(struct lpfc_hba *phba, int qidx)
 {
 	struct lpfc_queue *qp;
 
-	qp = phba->sli4_hba.hba_eq[qidx];
+	qp = phba->sli4_hba.hdwq[qidx].hba_eq;
 
 	pr_err("EQ[Idx:%d|Qid:%d]\n", qidx, qp->queue_id);

@@ -564,21 +571,21 @@ lpfc_debug_dump_wq_by_id(struct lpfc_hba *phba, int qid)
 {
 	int wq_idx;
 
-	for (wq_idx = 0; wq_idx < phba->cfg_fcp_io_channel; wq_idx++)
-		if (phba->sli4_hba.fcp_wq[wq_idx]->queue_id == qid)
+	for (wq_idx = 0; wq_idx < phba->cfg_hdw_queue; wq_idx++)
+		if (phba->sli4_hba.hdwq[wq_idx].fcp_wq->queue_id == qid)
 			break;
-	if (wq_idx < phba->cfg_fcp_io_channel) {
+	if (wq_idx < phba->cfg_hdw_queue) {
 		pr_err("FCP WQ[Idx:%d|Qid:%d]\n", wq_idx, qid);
-		lpfc_debug_dump_q(phba->sli4_hba.fcp_wq[wq_idx]);
+		lpfc_debug_dump_q(phba->sli4_hba.hdwq[wq_idx].fcp_wq);
 		return;
 	}
 
-	for (wq_idx = 0; wq_idx < phba->cfg_nvme_io_channel; wq_idx++)
-		if (phba->sli4_hba.nvme_wq[wq_idx]->queue_id == qid)
+	for (wq_idx = 0; wq_idx < phba->cfg_hdw_queue; wq_idx++)
+		if (phba->sli4_hba.hdwq[wq_idx].nvme_wq->queue_id == qid)
 			break;
-	if (wq_idx < phba->cfg_nvme_io_channel) {
+	if (wq_idx < phba->cfg_hdw_queue) {
 		pr_err("NVME WQ[Idx:%d|Qid:%d]\n", wq_idx, qid);
-		lpfc_debug_dump_q(phba->sli4_hba.nvme_wq[wq_idx]);
+		lpfc_debug_dump_q(phba->sli4_hba.hdwq[wq_idx].nvme_wq);
 		return;
 	}

@@ -646,23 +653,23 @@ lpfc_debug_dump_cq_by_id(struct lpfc_hba *phba, int qid)
 {
 	int cq_idx;
 
-	for (cq_idx = 0; cq_idx < phba->cfg_fcp_io_channel; cq_idx++)
-		if (phba->sli4_hba.fcp_cq[cq_idx]->queue_id == qid)
+	for (cq_idx = 0; cq_idx < phba->cfg_hdw_queue; cq_idx++)
+		if (phba->sli4_hba.hdwq[cq_idx].fcp_cq->queue_id == qid)
 			break;
 
-	if (cq_idx < phba->cfg_fcp_io_channel) {
+	if (cq_idx < phba->cfg_hdw_queue) {
 		pr_err("FCP CQ[Idx:%d|Qid:%d]\n", cq_idx, qid);
-		lpfc_debug_dump_q(phba->sli4_hba.fcp_cq[cq_idx]);
+		lpfc_debug_dump_q(phba->sli4_hba.hdwq[cq_idx].fcp_cq);
 		return;
 	}
 
-	for (cq_idx = 0; cq_idx < phba->cfg_nvme_io_channel; cq_idx++)
-		if (phba->sli4_hba.nvme_cq[cq_idx]->queue_id == qid)
+	for (cq_idx = 0; cq_idx < phba->cfg_hdw_queue; cq_idx++)
+		if (phba->sli4_hba.hdwq[cq_idx].nvme_cq->queue_id == qid)
 			break;
 
-	if (cq_idx < phba->cfg_nvme_io_channel) {
+	if (cq_idx < phba->cfg_hdw_queue) {
 		pr_err("NVME CQ[Idx:%d|Qid:%d]\n", cq_idx, qid);
-		lpfc_debug_dump_q(phba->sli4_hba.nvme_cq[cq_idx]);
+		lpfc_debug_dump_q(phba->sli4_hba.hdwq[cq_idx].nvme_cq);
 		return;
 	}

@@ -697,13 +704,13 @@ lpfc_debug_dump_eq_by_id(struct lpfc_hba *phba, int qid)
 {
 	int eq_idx;
 
-	for (eq_idx = 0; eq_idx < phba->io_channel_irqs; eq_idx++)
-		if (phba->sli4_hba.hba_eq[eq_idx]->queue_id == qid)
+	for (eq_idx = 0; eq_idx < phba->cfg_hdw_queue; eq_idx++)
+		if (phba->sli4_hba.hdwq[eq_idx].hba_eq->queue_id == qid)
 			break;
 
-	if (eq_idx < phba->io_channel_irqs) {
+	if (eq_idx < phba->cfg_hdw_queue) {
 		printk(KERN_ERR "FCP EQ[Idx:%d|Qid:%d]\n", eq_idx, qid);
-		lpfc_debug_dump_q(phba->sli4_hba.hba_eq[eq_idx]);
+		lpfc_debug_dump_q(phba->sli4_hba.hdwq[eq_idx].hba_eq);
 		return;
 	}
 }
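Every hunk above follows the same refactor: the separate per-type queue arrays (fcp_wq[], nvme_cq[], hba_eq[]), sized by the old io_channel counts, collapse into one hdwq[] array of per-hardware-queue structures sized by cfg_hdw_queue. A minimal userspace sketch of the new lookup shape; the structs here are simplified stand-ins, not the real lpfc definitions:

/* Toy model of the per-hardware-queue EQ lookup introduced above.
 * Struct layouts are illustrative stand-ins for the lpfc structures. */
#include <stdio.h>

struct toy_queue { int queue_id; };
struct toy_hdwq  { struct toy_queue *hba_eq; };	/* one EQ per hardware queue */

struct toy_hba {
	int cfg_hdw_queue;		/* replaces io_channel_irqs */
	struct toy_hdwq *hdwq;		/* replaces the flat hba_eq[] array */
};

/* Find the EQ owning a CQ by association id; fall back to EQ[0]. */
static struct toy_queue *find_eq(struct toy_hba *phba, int assoc_qid)
{
	int eqidx;
	struct toy_queue *eq = NULL;

	for (eqidx = 0; eqidx < phba->cfg_hdw_queue; eqidx++) {
		eq = phba->hdwq[eqidx].hba_eq;
		if (assoc_qid == eq->queue_id)
			break;
	}
	if (eqidx == phba->cfg_hdw_queue) {
		fprintf(stderr, "Couldn't find EQ for CQ. Using EQ[0]\n");
		eq = phba->hdwq[0].hba_eq;
	}
	return eq;
}

int main(void)
{
	struct toy_queue eqs[2] = { { .queue_id = 10 }, { .queue_id = 11 } };
	struct toy_hdwq hdwq[2] = { { &eqs[0] }, { &eqs[1] } };
	struct toy_hba hba = { .cfg_hdw_queue = 2, .hdwq = hdwq };

	printf("EQ qid=%d\n", find_eq(&hba, 11)->queue_id);	/* EQ qid=11 */
	return 0;
}

Assigning eq inside the loop is also why the hunk adds eq = NULL before the lookup and sets hdwq[0].hba_eq on the fallback path: the later dereference must always see a valid pointer.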
@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2017-2018 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
  * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.     *
  * Copyright (C) 2004-2016 Emulex.  All rights reserved.           *
  * EMULEX and SLI are trademarks of Emulex.                        *

@@ -2827,8 +2827,8 @@ lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 	    !(vport->fc_flag & FC_PT2PT_PLOGI)) {
 		phba->pport->fc_myDID = 0;
 
-		if ((phba->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
-		    (phba->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) {
+		if ((vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
+		    (vport->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) {
 			if (phba->nvmet_support)
 				lpfc_nvmet_update_targetport(phba);
 			else
@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2017-2018 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
  * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.     *
  * Copyright (C) 2004-2016 Emulex.  All rights reserved.           *
  * EMULEX and SLI are trademarks of Emulex.                        *

@@ -638,8 +638,6 @@ lpfc_work_done(struct lpfc_hba *phba)
 	if (phba->pci_dev_grp == LPFC_PCI_DEV_OC) {
 		if (phba->hba_flag & HBA_RRQ_ACTIVE)
 			lpfc_handle_rrq_active(phba);
-		if (phba->hba_flag & FCP_XRI_ABORT_EVENT)
-			lpfc_sli4_fcp_xri_abort_event_proc(phba);
 		if (phba->hba_flag & ELS_XRI_ABORT_EVENT)
 			lpfc_sli4_els_xri_abort_event_proc(phba);
 		if (phba->hba_flag & ASYNC_EVENT)

@@ -859,10 +857,9 @@ lpfc_port_link_failure(struct lpfc_vport *vport)
 void
 lpfc_linkdown_port(struct lpfc_vport *vport)
 {
-	struct lpfc_hba *phba = vport->phba;
 	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
 
-	if (phba->cfg_enable_fc4_type != LPFC_ENABLE_NVME)
+	if (vport->cfg_enable_fc4_type != LPFC_ENABLE_NVME)
 		fc_host_post_event(shost, fc_get_event_number(),
 				   FCH_EVT_LINKDOWN, 0);

@@ -925,8 +922,8 @@ lpfc_linkdown(struct lpfc_hba *phba)
 			vports[i]->fc_myDID = 0;
 
-			if ((phba->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
-			    (phba->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) {
+			if ((vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
+			    (vport->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) {
 				if (phba->nvmet_support)
 					lpfc_nvmet_update_targetport(phba);
 				else

@@ -1012,7 +1009,7 @@ lpfc_linkup_port(struct lpfc_vport *vport)
 	    (vport != phba->pport))
 		return;
 
-	if (phba->cfg_enable_fc4_type != LPFC_ENABLE_NVME)
+	if (vport->cfg_enable_fc4_type != LPFC_ENABLE_NVME)
 		fc_host_post_event(shost, fc_get_event_number(),
 				   FCH_EVT_LINKUP, 0);

@@ -3660,8 +3657,8 @@ lpfc_mbx_cmpl_reg_vpi(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 	spin_unlock_irq(shost->host_lock);
 	vport->fc_myDID = 0;
 
-	if ((phba->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
-	    (phba->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) {
+	if ((vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
+	    (vport->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) {
 		if (phba->nvmet_support)
 			lpfc_nvmet_update_targetport(phba);
 		else

@@ -3923,11 +3920,9 @@ lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 int
 lpfc_issue_gidft(struct lpfc_vport *vport)
 {
-	struct lpfc_hba *phba = vport->phba;
-
 	/* Good status, issue CT Request to NameServer */
-	if ((phba->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
-	    (phba->cfg_enable_fc4_type == LPFC_ENABLE_FCP)) {
+	if ((vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
+	    (vport->cfg_enable_fc4_type == LPFC_ENABLE_FCP)) {
 		if (lpfc_ns_cmd(vport, SLI_CTNS_GID_FT, 0, SLI_CTPT_FCP)) {
 			/* Cannot issue NameServer FCP Query, so finish up
 			 * discovery

@@ -3942,8 +3937,8 @@ lpfc_issue_gidft(struct lpfc_vport *vport)
 		vport->gidft_inp++;
 	}
 
-	if ((phba->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
-	    (phba->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) {
+	if ((vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
+	    (vport->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) {
 		if (lpfc_ns_cmd(vport, SLI_CTNS_GID_FT, 0, SLI_CTPT_NVME)) {
 			/* Cannot issue NameServer NVME Query, so finish up
 			 * discovery

@@ -4059,12 +4054,12 @@ lpfc_mbx_cmpl_ns_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 	lpfc_ns_cmd(vport, SLI_CTNS_RSPN_ID, 0, 0);
 	lpfc_ns_cmd(vport, SLI_CTNS_RFT_ID, 0, 0);
 
-	if ((phba->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
-	    (phba->cfg_enable_fc4_type == LPFC_ENABLE_FCP))
+	if ((vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
+	    (vport->cfg_enable_fc4_type == LPFC_ENABLE_FCP))
 		lpfc_ns_cmd(vport, SLI_CTNS_RFF_ID, 0, FC_TYPE_FCP);
 
-	if ((phba->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
-	    (phba->cfg_enable_fc4_type == LPFC_ENABLE_NVME))
+	if ((vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
+	    (vport->cfg_enable_fc4_type == LPFC_ENABLE_NVME))
 		lpfc_ns_cmd(vport, SLI_CTNS_RFF_ID, 0,
 			    FC_TYPE_NVME);

@@ -4100,7 +4095,7 @@ lpfc_register_remote_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 	struct fc_rport_identifiers rport_ids;
 	struct lpfc_hba *phba = vport->phba;
 
-	if (phba->cfg_enable_fc4_type == LPFC_ENABLE_NVME)
+	if (vport->cfg_enable_fc4_type == LPFC_ENABLE_NVME)
 		return;
 
 	/* Remote port has reappeared. Re-register w/ FC transport */

@@ -4175,9 +4170,8 @@ lpfc_unregister_remote_port(struct lpfc_nodelist *ndlp)
 {
 	struct fc_rport *rport = ndlp->rport;
 	struct lpfc_vport *vport = ndlp->vport;
-	struct lpfc_hba *phba = vport->phba;
 
-	if (phba->cfg_enable_fc4_type == LPFC_ENABLE_NVME)
+	if (vport->cfg_enable_fc4_type == LPFC_ENABLE_NVME)
 		return;
 
 	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_RPORT,
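The pattern repeated through this file is a data-ownership move: the FCP/NVME enablement check now reads the per-vport cfg_enable_fc4_type instead of the adapter-wide phba copy, so each virtual port can carry its own FC4-type policy, and several local phba variables that existed only for that check go away. A hedged sketch of the predicate with stand-in types and enum values, not the lpfc headers:

/* Toy model of the per-vport FC4-type check. Names and values are
 * illustrative assumptions, not the driver's definitions. */
#include <stdbool.h>
#include <stdio.h>

enum { ENABLE_FCP = 1, ENABLE_NVME = 2, ENABLE_BOTH = 3 };

struct toy_vport { int cfg_enable_fc4_type; };

static bool vport_wants_nvme(const struct toy_vport *vport)
{
	return vport->cfg_enable_fc4_type == ENABLE_BOTH ||
	       vport->cfg_enable_fc4_type == ENABLE_NVME;
}

int main(void)
{
	struct toy_vport v = { .cfg_enable_fc4_type = ENABLE_FCP };

	/* Only issue the NVME NameServer query if this vport enables NVME. */
	printf("issue NVME GID_FT: %s\n", vport_wants_nvme(&v) ? "yes" : "no");
	return 0;
}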
@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2017-2018 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
  * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.     *
  * Copyright (C) 2009-2016 Emulex.  All rights reserved.           *
  * EMULEX and SLI are trademarks of Emulex.                        *

@@ -194,7 +194,7 @@ struct lpfc_sli_intf {
 #define LPFC_ACT_INTR_CNT	4
 
 /* Algrithmns for scheduling FCP commands to WQs */
-#define	LPFC_FCP_SCHED_ROUND_ROBIN	0
+#define	LPFC_FCP_SCHED_BY_HDWQ		0
 #define	LPFC_FCP_SCHED_BY_CPU		1
 
 /* Algrithmns for NameServer Query after RSCN */

@@ -208,12 +208,18 @@ struct lpfc_sli_intf {
 /* Configuration of Interrupts / sec for entire HBA port */
 #define LPFC_MIN_IMAX		5000
 #define LPFC_MAX_IMAX		5000000
-#define LPFC_DEF_IMAX		150000
+#define LPFC_DEF_IMAX		0
+
+#define LPFC_IMAX_THRESHOLD	1000
+#define LPFC_MAX_AUTO_EQ_DELAY	120
+#define LPFC_EQ_DELAY_STEP	15
+#define LPFC_EQD_ISR_TRIGGER	20000
+/* 1s intervals */
+#define LPFC_EQ_DELAY_MSECS	1000
 
 #define LPFC_MIN_CPU_MAP	0
-#define LPFC_MAX_CPU_MAP	2
+#define LPFC_MAX_CPU_MAP	1
 #define LPFC_HBA_CPU_MAP	1
-#define LPFC_DRIVER_CPU_MAP	2	/* Default */
 
 /* PORT_CAPABILITIES constants. */
 #define LPFC_MAX_SUPPORTED_PAGES	8
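LPFC_DEF_IMAX dropping from 150000 to 0 suggests the interrupt rate now defaults to an automatic mode, and the new constants read like its tuning knobs. The sketch below is an assumption about how such stepping could combine them, not the driver's actual algorithm: each LPFC_EQ_DELAY_MSECS interval, an EQ that fielded more than LPFC_EQD_ISR_TRIGGER interrupts has its coalescing delay raised by LPFC_EQ_DELAY_STEP up to LPFC_MAX_AUTO_EQ_DELAY, and a quiet EQ steps back down.

/* Hedged sketch (not the lpfc code) of automatic EQ-delay stepping. */
#include <stdio.h>

#define LPFC_MAX_AUTO_EQ_DELAY	120
#define LPFC_EQ_DELAY_STEP	15
#define LPFC_EQD_ISR_TRIGGER	20000

static int adjust_eq_delay(int cur_delay, unsigned int isr_cnt)
{
	if (isr_cnt > LPFC_EQD_ISR_TRIGGER) {
		cur_delay += LPFC_EQ_DELAY_STEP;	/* busy: coalesce harder */
		if (cur_delay > LPFC_MAX_AUTO_EQ_DELAY)
			cur_delay = LPFC_MAX_AUTO_EQ_DELAY;
	} else if (cur_delay >= LPFC_EQ_DELAY_STEP) {
		cur_delay -= LPFC_EQ_DELAY_STEP;	/* quiet: back off */
	} else {
		cur_delay = 0;
	}
	return cur_delay;
}

int main(void)
{
	int d = 0;
	d = adjust_eq_delay(d, 25000);	/* busy interval -> 15 */
	d = adjust_eq_delay(d, 30000);	/* still busy    -> 30 */
	d = adjust_eq_delay(d, 100);	/* quiet         -> 15 */
	printf("delay=%d\n", d);
	return 0;
}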
[File diff suppressed because it is too large.]
@@ -2095,8 +2095,8 @@ lpfc_request_features(struct lpfc_hba *phba, struct lpfcMboxq *mboxq)
 	if (phba->nvmet_support) {
 		bf_set(lpfc_mbx_rq_ftr_rq_mrqp, &mboxq->u.mqe.un.req_ftrs, 1);
 		/* iaab/iaar NOT set for now */
-		 bf_set(lpfc_mbx_rq_ftr_rq_iaab, &mboxq->u.mqe.un.req_ftrs, 0);
-		 bf_set(lpfc_mbx_rq_ftr_rq_iaar, &mboxq->u.mqe.un.req_ftrs, 0);
+		bf_set(lpfc_mbx_rq_ftr_rq_iaab, &mboxq->u.mqe.un.req_ftrs, 0);
+		bf_set(lpfc_mbx_rq_ftr_rq_iaar, &mboxq->u.mqe.un.req_ftrs, 0);
 	}
 	return;
 }
@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2017-2018 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
  * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.     *
  * Copyright (C) 2004-2016 Emulex.  All rights reserved.           *
  * EMULEX and SLI are trademarks of Emulex.                        *

@@ -825,7 +825,7 @@ lpfc_rcv_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 			"rport rolechg: role:x%x did:x%x flg:x%x",
 			roles, ndlp->nlp_DID, ndlp->nlp_flag);
 
-		if (phba->cfg_enable_fc4_type != LPFC_ENABLE_NVME)
+		if (vport->cfg_enable_fc4_type != LPFC_ENABLE_NVME)
 			fc_remote_port_rolechg(rport, roles);
 	}
 }

@@ -1789,8 +1789,8 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport,
 		 * is configured try it.
 		 */
 		ndlp->nlp_fc4_type |= NLP_FC4_FCP;
-		if ((phba->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
-		    (phba->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) {
+		if ((vport->cfg_enable_fc4_type == LPFC_ENABLE_BOTH) ||
+		    (vport->cfg_enable_fc4_type == LPFC_ENABLE_NVME)) {
 			ndlp->nlp_fc4_type |= NLP_FC4_NVME;
 			/* We need to update the localport also */
 			lpfc_nvme_update_localport(vport);

@@ -1804,7 +1804,7 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport,
 		 * should just issue PRLI for FCP. Otherwise issue
 		 * GFT_ID to determine if remote port supports NVME.
 		 */
-		if (phba->cfg_enable_fc4_type != LPFC_ENABLE_FCP) {
+		if (vport->cfg_enable_fc4_type != LPFC_ENABLE_FCP) {
 			rc = lpfc_ns_cmd(vport, SLI_CTNS_GFT_ID,
 					 0, ndlp->nlp_DID);
 			return ndlp->nlp_state;
[Some files were not shown because too many files have changed in this diff.]