Commit Graph

53 Commits

Author SHA1 Message Date
Keith Busch c30341dc3c NVMe: Abort timed out commands
Send an nvme abort command for io requests that have timed out on an
initialized device. If the command is not returned after another timeout,
schedule the controller for reset.

Signed-off-by: Keith Busch <keith.busch@intel.com>
[fix endianness issues]
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2014-01-27 19:27:53 -05:00
Keith Busch d4b4ff8e28 NVMe: Schedule reset for failed controllers
Schedules a controller reset when the controller reports a failed status. If
the device does not become ready after a reset, the pci device will be
scheduled for removal.

Signed-off-by: Keith Busch <keith.busch@intel.com>
[fixed checkpatch issue]
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2014-01-27 19:20:02 -05:00
Keith Busch 9a6b94584d NVMe: Device resume error handling
Adds controller error handling on resume power management. If the device
fails to initialize, the device is queued for a reset. If the reset fails,
a thread is spawned to remove the pci device.

If the device resumes as "busy", the device is responding to admin
commands but will not create IO queues. In this case, we need to remove
the gendisks and free the IO queues since they can't be used and may be
holding bios in their lists.

From testing, the dma pools require a pci device, so the pci driver's
'remove' now releases the dma resources in line with that call instead
of after all references to the device are released.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-12-16 15:54:39 -05:00
Matthew Wilcox 68608c268b NVMe: Cache dev->pci_dev in a local pointer
Helps with line-length issues

Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-12-16 15:54:35 -05:00
Matthew Wilcox 0a8d44cb33 NVMe: Fix lockdep warnings
During the initialisation path, the queue lock is taken without interrupt
protection.  It's perfectly safe to do so, because the interrupt handler
can't run at this point, but it confuses lockdep.

Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-12-16 15:54:34 -05:00
Keith Busch 320a382746 NVMe: compat SG_IO ioctl
For 32-bit versions of sg3-utils running on a 64-bit system. This is
mostly a copy from the relevant portions of fs/compat_ioctl.c, with
slight modifications for going through block_device_operations.
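
For illustration, the hookup through block_device_operations looks roughly
like this; the handler body is a sketch, not copied from the patch, and
nvme_ioctl stands for the driver's native handler:

	#ifdef CONFIG_COMPAT
	static int nvme_compat_ioctl(struct block_device *bdev, fmode_t mode,
				     unsigned int cmd, unsigned long arg)
	{
		/* translate the 32-bit sg_io_hdr here for SG_IO, then fall
		 * through to the native ioctl path for everything else */
		return nvme_ioctl(bdev, mode, cmd, (unsigned long)compat_ptr(arg));
	}
	#else
	#define nvme_compat_ioctl NULL
	#endif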

Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Vishal Verma <vishal.l.verma@linux.intel.com>
[fixed up CONFIG_COMPAT=n build problems]
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-12-16 15:49:40 -05:00
Michael Opdenacker 481e5bad84 NVMe: remove deprecated IRQF_DISABLED
This patch removes the use of the IRQF_DISABLED flag.

It has been a no-op since 2.6.35 and will be removed one day.

Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-11-18 17:13:54 -05:00
Haiyan Hu b80d5ccca3 NVMe: Avoid shift operation when writing cq head doorbell
Changes the type of dev->db_stride to unsigned and changes the value
stored there to be 1 << the current value. Then there is less
calculation to be done at completion time.

Signed-off-by: Haiyan Hu <huhaiyan@huawei.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-11-18 17:10:51 -05:00
Russell King 052d0efada DMA-API: block: nvme-core: replace dma_set_mask()+dma_set_coherent_mask() with new helper
Replace the following sequence:

	dma_set_mask(dev, mask);
	dma_set_coherent_mask(dev, mask);

with a call to the new helper dma_set_mask_and_coherent().
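
The pair above then collapses to a single call (illustrative, not quoted
from the patch):

	dma_set_mask_and_coherent(dev, mask);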

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21 21:02:24 +01:00
Keith Busch d82e8bfdef NVMe: Merge issue on character device bring-up
A recent patch made it possible to bring up the character handle when the
device is responsive but not accepting a set-features command. Another
recent patch moved the initialization, which requires moving where the
checks for this condition occur. This patch merges the two ideas so it
works much as before.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-09-06 16:26:58 -04:00
Keith Busch 9d713c2bfb NVMe: Handle ioremap failure
Decrement the number of queues required for doorbell remapping until
the memory is successfully mapped for that size.

Additional checks are done so that we don't call free_irq if it has
already been freed.
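
A sketch of the retry-with-fewer-queues idea (db_bar_size is an
illustrative helper for the doorbell-region size, not necessarily the
driver's name for it):

	do {
		size = db_bar_size(dev, nr_io_queues);
		dev->bar = ioremap(pci_resource_start(pdev, 0), size);
		if (dev->bar)
			break;
		if (!--nr_io_queues)
			return -ENOMEM;
	} while (1);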

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-09-03 16:44:25 -04:00
Keith Busch cd63894630 NVMe: Add pci suspend/resume driver callbacks
Used for going in and out of low power states. Resuming reuses the IO
queues from the previous initialization, freeing any allocated queues
that are no longer usable.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-09-03 16:44:16 -04:00
Keith Busch 1894d8f16a NVMe: Use normal shutdown
The NVMe spec recommends using the normal shutdown sequence when safely
taking the controller offline instead of hitting CC.EN on the next
start-up to reset the controller. The spec recommends a minimum of 1
second for the shutdown complete. This patch waits 2 seconds to be on
the safe side.
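
The shutdown request itself is a write to CC.SHN, along these lines
(constant and register names assumed from include/linux/nvme.h of that
era; the completion poll on CSTS.SHST is elided):

	u32 cc = readl(&dev->bar->cc) & ~NVME_CC_SHN_MASK;

	writel(cc | NVME_CC_SHN_NORMAL, &dev->bar->cc);
	/* then poll CSTS until SHST reads NVME_CSTS_SHST_CMPLT, ~2s max */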

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-09-03 16:40:32 -04:00
Keith Busch f0b50732a9 NVMe: Separate controller init from disk discovery
This combines the controller initialization into one function, removing
IO queue setup from namespace discovery, and creates symmetric functions
for device removal. The controller start and shutdown functions can now
be called from resume/suspend context as well as probe/remove.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-09-03 16:40:28 -04:00
Keith Busch 2240427425 NVMe: Separate queue alloc/free from create/delete
This separates nvme queue allocation from creation, and queue deletion
from freeing. This is so that we may in the future temporarily disable
queues and reuse the same memory when bringing them back online, like
coming back from suspend state.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-09-03 16:39:28 -04:00
Keith Busch 0877cb0d28 NVMe: Group pci related actions in functions
This will make it easier to reuse these outside probe/remove.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-09-03 16:39:25 -04:00
Keith Busch 9e59d091b0 NVMe: Disk stats for read/write commands only
Flush and discard requests would previously mess up the accounting.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-09-03 16:33:35 -04:00
Keith Busch 7e03b12406 NVMe: Bring up cdev on set feature failure
This patch creates the character device as long as a device's admin queues
are usable so a user has an opportunity to perform administration tasks.
A device may be in a state that does not allow IO and setting the queue
count feature in such a state returns an error. Previously the driver
would bail and the controller would be unusable.

Signed-off-by: Keith Busch <keith.busch@intel.com>
2013-09-03 16:32:26 -04:00
Keith Busch 1b56749e54 NVMe: Fix checkpatch issues
Signed-off-by: Keith Busch <keith.busch@intel.com>
2013-09-03 16:32:26 -04:00
Matthew Wilcox c3bfe7176c NVMe: Namespace IDs are unsigned
The 'Number of Namespaces' read from the device was being treated as
signed, which would cause us to not scan any namespaces for a device
with more than 2 billion namespaces.  That led to noticing that the
namespace ID was also being treated as signed, which could lead to the
result from NVME_IOCTL_ID being treated as an error code.
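
The scan loop then amounts to something like this (per-namespace identify
and attach details elided; nn is the Number of Namespaces field from the
Identify Controller data):

	unsigned i, nn = le32_to_cpu(id_ctrl->nn);	/* was a signed int */

	for (i = 1; i <= nn; i++) {
		/* identify namespace i and, if valid, attach a gendisk */
	}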

Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-09-03 16:32:26 -04:00
Matthew Wilcox 7d8224574c NVMe: Call nvme_process_cq from submission path
Since we have the queue locked, it makes sense to check if there are
any completion queue entries on the queue before we release the lock.
If there are, it may save an interrupt and reduce latency for the I/Os
that happened to complete.  This happens fairly often for some workloads.

Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-06-24 13:57:27 -04:00
Matthew Wilcox bc57a0f7a4 NVMe: Remove "process_cq did something" message
I was originally intending to log the fact that the kthread had done
some work since it might help us find interrupt handling problems, but
that hasn't been done yet, and spamming the logs with this message is
just rude.

Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-06-24 13:57:08 -04:00
Matthew Wilcox e9539f4752 NVMe: Return correct value from interrupt handler
The interrupt handler currently reports whether it found any new
completion queue entries.  If the completion queue is primarily being
processed by a method other than the interrupt handler, it may return
IRQ_NONE so often that Linux thinks that the interrupt is being falsely
triggered.

To solve this problem, report whether any completion queue entries have
been seen since the last interrupt was received for this queue.
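
A minimal sketch of the resulting handler (the cqe_seen flag is assumed
to be set by nvme_process_cq whenever it consumes an entry and cleared
here):

	static irqreturn_t nvme_irq(int irq, void *data)
	{
		struct nvme_queue *nvmeq = data;
		irqreturn_t result;

		spin_lock(&nvmeq->q_lock);
		nvme_process_cq(nvmeq);
		result = nvmeq->cqe_seen ? IRQ_HANDLED : IRQ_NONE;
		nvmeq->cqe_seen = 0;
		spin_unlock(&nvmeq->q_lock);
		return result;
	}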

Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-06-24 11:54:20 -04:00
Keith Busch 6198221fa0 NVMe: Disk IO statistics
Add io stats accounting for bio requests so nvme block devices show
useful disk stats.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-06-20 12:06:35 -04:00
Matthew Wilcox 063a8096f3 NVMe: Restructure MSI / MSI-X setup
The current code copies 'nr_io_queues' into 'q_count', modifies
'nr_io_queues' during MSI-X setup, then resets 'nr_io_queues' for
MSI setup.  Instead, copy 'nr_io_queues' into 'vecs' and modify 'vecs'
during both MSI-X and MSI setup.

This lets us simplify the for-loops that set up MSI-X and MSI, and opens
the possibility of using more I/O queues than we have interrupt vectors,
should future benchmarking prove that to be a useful feature.

Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-06-20 11:09:23 -04:00
Ramachandra Rao Gajula fa08a39664 NVMe: Add MSI support
Some devices only have support for MSI, not MSI-X.  While MSI is more
limited, it still provides better performance than line-based interrupts.

Signed-off-by: Ramachandra Gajula <rama@fastorsystems.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-31 11:45:52 -04:00
Matthew Wilcox cf9f123b38 NVMe: Use dma_set_mask() correctly
In some circumstances setting a 64-bit DMA mask can fail, as explained
in Documentation/DMA-API-HOWTO.txt.  Use the recommended code sequence
to set a 32-bit DMA mask if setting a 64-bit DMA mask fails.
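
The recommended sequence from Documentation/DMA-API-HOWTO.txt is roughly:

	if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) ||
	    dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) {
		if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)) ||
		    dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)))
			goto disable;	/* no usable DMA configuration */
	}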

Reported-by: Chayan Biswas <Chayan.Biswas@sandisk.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-28 16:46:46 -04:00
Chayan Biswas cf90bc4830 Return the result from user admin command IOCTL even in case of failure
We copy the result to the user if the controller completed the command,
even if it completed with a failure (non-zero) status.
A return status of < 0 indicates the command was not completed
by the controller. The user application may expect the error code
in the result field in case of failure.
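
The distinction works out to roughly the following (helper and field
names are illustrative):

	status = nvme_submit_admin_cmd(dev, &c, &result);
	if (status >= 0 && put_user(result, &ucmd->result))
		return -EFAULT;
	return status;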

Signed-off-by: Chayan Biswas <Chayan.Biswas@sandisk.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-23 13:38:59 -04:00
Keith Busch 053ab702cc NVMe: Do not cancel command multiple times
Cancelling an already cancelled command does not do anything, so check
the command context before cancelling it, continuing if it had already been
cancelled so we do not log the same problem every second if a device
stops responding.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-17 09:18:38 -04:00
Wei Yongjun 1287dabd34 NVMe: fix error return code in nvme_submit_bio_queue()
nvme_submit_flush_data() might overwrite the initialisation of the
return value with 0, so move the -ENOMEM setting close to the usage.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-17 09:13:18 -04:00
Dan Carpenter 5460fc0310 NVMe: check for integer overflow in nvme_map_user_pages()
You need to have CAP_SYS_ADMIN to trigger this overflow but it makes the
static checkers complain so we should fix it.  The worry is that
"length" comes from copy_from_user() so we need to check that "length +
offset" can't overflow.

I also changed the min_t() cast to be unsigned instead of signed.  Now
that we cap "length" to INT_MAX it doesn't make a difference, but it's a
little easier for reviewers to know that large values aren't cast to
negative.
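
The guard is of this general shape (the exact bound used in the patch
may differ):

	if (!length || length > INT_MAX - PAGE_SIZE)
		return ERR_PTR(-EINVAL);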

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-17 09:11:03 -04:00
Keith Busch 94f370cab6 NVMe: Use user defined admin ioctl timeout
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-09 16:03:50 -04:00
Matthew Wilcox 44af146a84 NVMe: Only clear the enable bit when disabling controller
Many of the bits in the Controller Configuration register may only be
modified when the Enable bit is clear.  Clearing them at the same time
as the Enable bit might be OK, but let's play it safe and only touch the
Enable bit.
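
In sketch form, disable becomes a read-modify-write that touches only
that bit:

	u32 cc = readl(&dev->bar->cc);

	writel(cc & ~NVME_CC_ENABLE, &dev->bar->cc);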

Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
2013-05-08 09:54:31 -04:00
Matthew Wilcox ba47e3865e NVMe: Wait for device to acknowledge shutdown
A recent update to the specification makes it clear that the host
is expected to wait for the device to acknowledge the Enable bit
transitioning to 0 as well as waiting for the device to acknowledge a
transition to 1.

Reported-by: Khosrow Panah <Khosrow.Panah@idt.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
2013-05-08 09:53:49 -04:00
Keith Busch 78f8d2577b NVMe: Schedule timeout for sync commands
Schedule a timeout on sync commands in case the command times out and
the device is not being polled for timeouts. This prevents device removal
from hanging forever if the device has stopped responding.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-02 15:36:02 -04:00
Keith Busch f410c680b5 NVMe: Meta-data support in NVME_IOCTL_SUBMIT_IO
This adds support for namespaces with separate meta-data formats in the
submit io ioctl. The meta-data buffer has to be contiguous, so such
a buffer is allocated and the mapped user pages are copied to/from this
buffer for write/read commands.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-02 15:35:09 -04:00
Keith Busch 159b67d7ae NVMe: Device specific stripe size handling
We have an nvme device that has a concept of a stripe size. IO requests
that do not transfer data crossing a stripe boundary have greater
performance compared to IO that does cross it. This patch sets the
stripe size for the device if the device and vendor ids match one with
this feature and splits IO requests that cross the stripe boundary.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-02 14:41:05 -04:00
Keith Busch 427e970801 NVMe: Split non-mergeable bio requests
It is possible a bio request cannot be submitted as a single NVMe IO
command if the bio_vec is not mergeable with the NVMe PRP alignment
constraints. This condition was handled by submitting an IO for the
mergeable portion then submitting a follow on IO for the remaining data
after the previous IO completes. The remainder to be sent was tracked
by manipulating the bio->bi_idx and bio->bi_sector. This patch splits
the request as many times as necessary and submits the bios together.

Since submitting the bio may cause it to be requeued on a split,
nvme_resubmit_bios had to be modified to remove the wait queue when the
bio list is empty prior to submitting the bio; otherwise a split would
add the wait queue a second time, corrupting the wait queue head's task
list.

There are a few other benefits from doing this: it fixes a potential
issue with the previous handling of a non-mergeable bio as the requeuing
method could use an unlocked nvme_queue if the callback isn't
invoked on the queue's associated cpu; it will be possible to retry a
failed bio if desired at some later time since it does not manipulate
the original bio; the bio integrity extensions require the bio to be in
its original condition for the checks to work correctly if we implement
the end-to-end data protection in the future.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-02 14:38:59 -04:00
Keith Busch cbb6218fd4 NVMe: Remove dead code in nvme_dev_add
There is no situation where we could error out of this function and
need to clean up allocated namespaces.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-02 14:36:45 -04:00
Keith Busch a9ef4343af NVMe: Check for NULL memory in nvme_dev_add
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-02 14:35:44 -04:00
Keith Busch 68b8eca5f8 NVMe: Fix error clean-up on nvme_alloc_queue
The nvme_queue's depth is not set if we fail to allocate submission queue
entries, yet it was being used to determine how much coherent memory to
free on error. Use the depth variable instead.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-02 14:34:35 -04:00
Keith Busch 025c557a71 NVMe: Free admin queue on request_irq error
Fixes a potential memory leak if requesting the admin queue irq fails.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-02 14:33:53 -04:00
Arjan van de Ven 564a232c05 NVMe: Set TASK_INTERRUPTIBLE before processing queues
The kthread has two tasks; handling timeouts (for which it runs once per
second), and submitting queued BIOs.  If a BIO happens to be queued after
the thread has processed the queue but before it calls schedule_timeout(),
the thread will sleep for a second before submitting it, which can cause
performance problems in some rare cases (that will become more common in
a subsequent patch).
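
A sketch of the reordered loop; nvme_process_queues stands in for the
per-queue timeout and BIO-resubmit work:

	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		/* a wake_up_process() from here on cancels the sleep below */
		nvme_process_queues(dev);
		schedule_timeout(round_jiffies_relative(HZ));
	}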

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Tested-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-05-01 16:40:27 -04:00
Keith Busch 5e82e952f0 NVMe: Add a character device for each nvme device
Registers a miscellaneous device for each nvme controller probed. This
creates character device files as /dev/nvmeN, where N is the device
instance, and supports nvme admin ioctl commands so devices without
namespaces can be managed.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-04-16 15:43:55 -04:00
Matthew Wilcox 1c9b52651d NVMe: Fix endian-related problems in user I/O submission path
When constructing the command, dsmgmt needs to be treated as a 32-bit
value, not a 16-bit value.  reftag, apptag and appmask all need to be
converted from native-endian to little-endian.  Again, sparse's bitwise
warnings caught this problem.  Thanks to Keith for pointing out the
correct way to fix the reftag.

Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Acked-by: Keith Busch <keith.busch@intel.com>
2013-04-16 15:21:06 -04:00
Matthew Wilcox af2d9ca744 NVMe: Fix I/O cancellation status on big-endian machines
The sparse bitwise checks pointed out that I needed to shift the status
before changing its endianness, not after.

Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-04-16 15:18:30 -04:00
Matthew Wilcox 422ef0c7c8 NVMe: Don't fail initialisation unnecessarily
The nvme_dev_add() function currently returns the last error code that it
saw, which (if everything else succeeds) happens to be the result of an
optional command, so it can legitimately fail.  Looking at the error path
more closely reveals that we should return success from this function,
even if no device namespaces are added.  So once the queues are created
and the device has responded to Identify, make sure that this function
succeeds.

Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Acked-by: Keith Busch <keith.busch@intel.com>
2013-04-16 15:13:41 -04:00
Matthew Wilcox 063cc6d559 NVMe: Abstract out sector to block number conversion
Introduce nvme_block_nr() to help convert sectors to block numbers.
This fixes an integer overflow in the SCSI conversion layer, and it's
slightly less typing than opencoding it.
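
Assuming the namespace stores its LBA size as a shift, the helper is
presumably along these lines:

	static inline u64 nvme_block_nr(struct nvme_ns *ns, sector_t sector)
	{
		return sector >> (ns->lba_shift - 9);
	}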

Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Acked-by: Keith Busch <keith.busch@intel.com>
2013-04-16 15:05:22 -04:00
Arjan van de Ven acb7aa0db0 NVMe: Use round_jiffies_relative() for the periodic, once-per-second timer
The nvme driver has a "once per second" event where the management kthread
wakes up the system and then reschedules itself for 1 second later.
For power efficiency reasons, I'd like this timer to happen together
with other wakeups in the system.

This patch makes the schedule_timeout() call in the kthread use
round_jiffies_relative(), causing the wakeup to at least align with other
"once per X seconds" events in the kernel.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Tested-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-04-16 15:05:16 -04:00
Vishal Verma 5d0f6131a7 NVMe: Add nvme-scsi.c
Translates SCSI commands in SG_IO ioctl to NVMe commands.
Uses the scsi-nvme translation spec from nvmexpress.org as a reference.

Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
2013-03-28 14:50:49 -04:00