Merge tag 'omap-for-v3.13/cm-scm-cleanup-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into next/cleanup
From Paul Walmsley <paul@pwsan.com> via Tony Lindgren:
Move some of the OMAP2+ CM and System Control Module direct
register accesses into CM- and System Control
Module-specific "drivers" underneath arch/arm/mach-omap2/. This
is a prerequisite for moving this code out of arch/arm/mach-omap2/ into
drivers/.
Basic test logs are available here:
http://www.pwsan.com/omap/testlogs/cm_scm_cleanup_a_v3.13/20131019101809/
* tag 'omap-for-v3.13/cm-scm-cleanup-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
ARM: OMAP3: control: add API for setting IVA bootmode
ARM: OMAP3: CM/control: move CM scratchpad save to CM driver
ARM: OMAP3: McBSP: do not access CM register directly
ARM: OMAP3: clock: add API to enable/disable autoidle for a single clock
ARM: OMAP2: CM/PM: remove direct register accesses outside CM code
+ Linux 3.12-rc4
Signed-off-by: Olof Johansson <olof@lixom.net>
A driver that simply completes the IO it receives; it does no data
transfers. Written to facilitate testing of the blk-mq code.
It supports various module options to use either bio queueing,
rq queueing, or mq mode.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
We have officially run out of flags in a 32-bit space. Extend it
to 64-bit even on 32-bit archs.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
This lets the transport do endian conversion if necessary, and insulates
the drivers from the difference.
Most drivers can use the simple helpers virtio_cread() and virtio_cwrite().
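As a rough illustration (using virtio-blk's capacity field as the example,
not code from this patch), a driver read looks like:

    #include <linux/virtio_config.h>

    u64 cap;

    /* read the 64-bit capacity field, byte-swapped by the transport if needed */
    virtio_cread(vdev, struct virtio_blk_config, capacity, &cap);

A write is symmetric, e.g. virtio_cwrite(vdev, struct foo_config, field, &val).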
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Merge tag 'del-shark-for-v3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-stericsson into next/cleanup
From Linus Walleij:
This deletes the Shark SA110-based sub-architecture from
the kernel.
* tag 'del-shark-for-v3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-stericsson:
video: drop code for ARCH_SHARK in cyber2000fb
input: i8042: drop dependency on ARCH_SHARK
ide: drop dependency on ARCH_SHARK
block: drop dependency on ARCH_SHARK
MAINTAINERS: delete Shark machine entry
ARM: delete mach-shark
Signed-off-by: Olof Johansson <olof@lixom.net>
With this machine deleted, there is no need to maintain the
MFM block driver for its hard disk either.
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
The arg64 struct has a hole after ->buf_size which isn't cleared. Also, if
any of the calls to copy_from_user() fail, that would cause an
information leak as well.
This was assigned CVE-2013-2147.
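A minimal sketch of the usual fix for this class of leak, with hypothetical
struct and variable names rather than the actual cciss ones: zero the
on-stack structure before filling it, so padding holes never carry old
stack contents.

    struct big_ioctl_cmd karg;                  /* hypothetical arg64-style struct */

    memset(&karg, 0, sizeof(karg));             /* clear the padding holes */
    if (copy_from_user(&karg, uarg, sizeof(karg)))
            return -EFAULT;                     /* bail out rather than use partial data */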
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Mike Miller <mike.miller@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The pciinfo struct has a two byte hole after ->dev_fn so stack
information could be leaked to the user.
This was assigned CVE-2013-2147.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Mike Miller <mike.miller@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The freeze and restore functions defined in virtio drivers are used
for suspend and hibernate, so CONFIG_PM_SLEEP is more appropriate than
CONFIG_PM. This patch replaces all uses of CONFIG_PM with CONFIG_PM_SLEEP in
virtio drivers that implement freeze and restore callbacks.
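A generic sketch of the resulting guards (driver and callback names are
placeholders, not from a particular driver):

    #ifdef CONFIG_PM_SLEEP
    static int virtfoo_freeze(struct virtio_device *vdev)
    {
            /* quiesce the device before suspend or hibernation */
            return 0;
    }

    static int virtfoo_restore(struct virtio_device *vdev)
    {
            /* bring the device back up on resume */
            return 0;
    }
    #endif

The matching guard around the .freeze/.restore initializers in the
virtio_driver structure changes the same way.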
Signed-off-by: Aaron Lu <aaron.lu@intel.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Replace the following sequence:
dma_set_mask(dev, mask);
dma_set_coherent_mask(dev, mask);
with a call to the new helper dma_set_mask_and_coherent().
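A short sketch of the conversion, with the error check the single call makes
easy to add (the mask and warning text are illustrative):

    /* before */
    dma_set_mask(dev, DMA_BIT_MASK(32));
    dma_set_coherent_mask(dev, DMA_BIT_MASK(32));

    /* after: one call sets both masks and reports failure in one place */
    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
            dev_warn(dev, "no suitable DMA mask available\n");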
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Pull ceph fixes from Sage Weil:
"These fix several bugs with RBD from 3.11 that didn't get tested in
time for the merge window: some error handling, a use-after-free, and
a sequencing issue when unmapping and image races with a notify
operation.
There is also a patch fixing a problem with the new ceph + fscache
code that just went in"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
fscache: check consistency does not decrement refcount
rbd: fix error handling from rbd_snap_name()
rbd: ignore unmapped snapshots that no longer exist
rbd: fix use-after free of rbd_dev->disk
rbd: make rbd_obj_notify_ack() synchronous
rbd: complete notifies before cleaning up osd_client and rbd_dev
libceph: add function to ensure notifies are complete
After the last architecture switched to generic hard irqs the config
options HAVE_GENERIC_HARDIRQS & GENERIC_HARDIRQS and the related code
for !CONFIG_GENERIC_HARDIRQS can be removed.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Fix thinkos where pkt_<level> needs a valid pktcdvd_device * and the
pointer is known to be NULL.
Signed-off-by: Joe Perches <joe@perches.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com> (go smatch!)
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Allow the device name to be emitted with pkt_err when logging the sense
data.
Signed-off-by: Joe Perches <joe@perches.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add a new pkt_info macro to prefix the name to the logging output.
Signed-off-by: Joe Perches <joe@perches.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add a new pkt_notice macro to prefix the name to the logging output.
Signed-off-by: Joe Perches <joe@perches.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add a new pkt_err macro to prefix the name to the logging output. Convert
pr_err where there is a non-null struct pktcdvd_device.
Includes improvements from Andy Shevchenko.
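A minimal sketch of what such a wrapper can look like (the macro actually
added to pktcdvd may differ in detail):

    #define pkt_err(pd, fmt, ...) \
            pr_err("%s: " fmt, (pd)->name, ##__VA_ARGS__)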
Signed-off-by: Joe Perches <joe@perches.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add pd->name to output for these debugging messages.
Remove normally compiled out pkt_dbg(2, ...) function entry tracing
equivalents as it's better done via the function tracer.
Signed-off-by: Joe Perches <joe@perches.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Use the more common pkt_dbg(level, fmt, ...) form.
These messages are emitted at KERN_NOTICE.
Always emit function name with pkt_dbg(2, ...) uses and remove the
sometimes abbreviated embedded function name.
This form always verifies the format and arguments.
Signed-off-by: Joe Perches <joe@perches.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Use a more current logging style and add messages levels to the logging
messages.
Simplify pkt_dump_sense by using %*ph and adding a simple function to emit
the sense string.
Includes improvements from Andy Shevchenko and Dan Carpenter.
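For reference, %*ph is the printk extension that dumps a small buffer as hex
bytes; a hedged example of its use here, with illustrative variable names:

    /* print 'len' sense bytes as a single hex line */
    pkt_err(pd, "sense: %*ph\n", (int)len, sense_buf);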
Signed-off-by: Joe Perches <joe@perches.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Macros should be converted to functions where feasible to
verify arguments and the like.
Signed-off-by: Joe Perches <joe@perches.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If the system has trouble allocating memory for the creation of the aoe
debugfs directory or of a file inside it, the debugfs member of an aoedev
can be NULL.
Do not treat a NULL debugfs pointer as a BUG on aoedev shutdown, avoiding
the user impact of an unnecessary panic.
Signed-off-by: Ed Cashin <ecashin@coraid.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch fixes following compiler warnings:
drivers/block/aoe/aoecmd.c: In function `aoecmd_ata_rw':
drivers/block/aoe/aoecmd.c:383:17: warning: variable `t' set but not used [-Wunused-but-set-variable]
struct aoetgt *t;
^
drivers/block/aoe/aoecmd.c: In function `resend':
drivers/block/aoe/aoecmd.c:488:21: warning: variable `ah' set but not used [-Wunused-but-set-variable]
struct aoe_atahdr *ah;
^
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Ed Cashin <ecashin@coraid.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In the kernel we have a nice helper that may be used here. This patch
substitutes the custom implementation with the native function call.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Ed Cashin <ecashin@coraid.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This information is presented in a compact format that has evolved for
easy routine scanning by expert humans, mostly developers and support
technicians helping to troubleshoot or test AoE-based systems.
Signed-off-by: Ed Cashin <ecashin@coraid.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The place holder in the file contents is filled out in the following
patch.
Signed-off-by: Ed Cashin <ecashin@coraid.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This series adds the debugging information that the coraid.com-distributed
aoe driver exports via sysfs, but instead of sysfs, it uses debugfs.
With these patches applied, even without AoE targets on the network, KEDR
reports new possible memory leaks. These come from callers outside the
aoe driver that have used aoe_devnode to get the names of the character
devices through the aoe_class->devnode callback, and I believe those
callers are responsible for freeing that memory.
This patch:
Create and destroy the debugfs directory.
Signed-off-by: Ed Cashin <ecashin@coraid.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The driver core clears the driver data to NULL after device_release or
on probe failure. Thus, it is not needed to manually clear the device
driver data to NULL.
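In practice this means calls like the following can simply be dropped from
remove() callbacks and probe error paths (a generic illustration, not taken
from a specific driver):

    platform_set_drvdata(pdev, NULL);   /* redundant: the driver core clears it */
    dev_set_drvdata(dev, NULL);         /* likewise redundant */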
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Cc: Jean Delvare <khali@linux-fr.org>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
At one time we used to set the maximum number of scatter gather elements
on all Smart Array controllers to 32. At some point in time the
firmware began to write the "appropriate" value for each controller into
the config table. The cciss driver would then read that and set
h->maxsgentries.
h->maxsgentries = readl(&(h->cfgtable->MaxSGElements));
On the P600 that value is 544. Under some workloads a significant
performance reduction may result. This patch forces the P600 to use
only 32 scatter gather elements. Other controllers are not affected.
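A hedged sketch of the shape of the quirk (the board check below is a
hypothetical helper, not the actual cciss code):

    h->maxsgentries = readl(&(h->cfgtable->MaxSGElements));
    if (board_is_p600(h))               /* hypothetical board-identification helper */
            h->maxsgentries = 32;       /* the firmware-reported 544 hurts the P600 */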
Signed-off-by: Mike Miller <mike.miller@hp.com>
Signed-off-by: Dwight (Bud) Brown <bubrown@redhat.com>
Signed-off-by: Tomas Henzl <thenzl@redhat.com>
Acked-by: Stephen M. Cameron <steve.cameron@hp.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mg_times_out() is used only in this file. Fix the following sparse
warning:
drivers/block/mg_disk.c:639:6: warning: symbol 'mg_times_out' was not declared. Should it be static?
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The use of strict_strtoul() is discouraged because it is obsolete. Thus,
kstrtoul() should be used instead.
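A minimal example of the replacement pattern, e.g. in a sysfs store handler:

    unsigned long val;
    int err;

    err = kstrtoul(buf, 10, &val);      /* was: strict_strtoul(buf, 10, &val) */
    if (err)
            return err;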
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
rbd_snap_name() calls rbd_dev_v{1,2}_snap_name() depending on the
format of the image. The format 1 version returns NULL on error, which
is handled by the caller. The format 2 version returns an ERR_PTR,
which the caller of rbd_snap_name() does not expect.
Fortunately this is unlikely to occur in practice because
rbd_snap_id_by_name() is called before rbd_snap_name(). This would hit
similar errors to rbd_snap_name() (like the snapshot not existing) and
return early, so rbd_snap_name() would not hit an error unless the
snapshot was removed between the two calls or memory was exhausted.
Use an ERR_PTR in rbd_dev_v1_snap_name() so that the specific error
can be propagated, and it is consistent with rbd_dev_v2_snap_name().
Handle the ERR_PTR in the only rbd_snap_name() caller.
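A short sketch of the ERR_PTR convention the caller now follows (simplified;
the real call site differs in detail):

    snap_name = rbd_snap_name(rbd_dev, snap_id);
    if (IS_ERR(snap_name)) {
            ret = PTR_ERR(snap_name);   /* propagate the specific error */
            goto out;
    }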
Suggested-by: Alex Elder <alex.elder@linaro.org>
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Alex Elder <elder@linaro.org>
This prevents erroring out while adding a device when a snapshot
unrelated to the current mapping is deleted between reading the
snapshot context and reading the snapshot names. If the mapped
snapshot name is not found an error still occurs as usual.
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Alex Elder <elder@linaro.org>
Removing a device deallocates the disk, unschedules the watch, and
finally cleans up the rbd_dev structure. rbd_dev_refresh(), called
from the watch callback, updates the disk size and rbd_dev
structure. With no locking between them, rbd_dev_refresh() may use the
device or rbd_dev after they've been freed.
To fix this, check whether RBD_DEV_FLAG_REMOVING is set before
updating the disk size in rbd_dev_refresh(). In order to prevent a
race where rbd_dev_refresh() is already revalidating the disk when
rbd_remove() is called, move the call to rbd_bus_del_dev() after the
watch is unregistered and all notifies are complete. It's safe to
defer deleting this structure because no new requests can be submitted
once the RBD_DEV_FLAG_REMOVING is set, since the device cannot be
opened.
Fixes: http://tracker.ceph.com/issues/5636
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Alex Elder <elder@linaro.org>
The only user of rbd_obj_notify_ack() is rbd_watch_cb(). It is used
asynchronously with no tracking of when the notify ack completes, so
it may still be in progress when the osd_client is shut down. This
results in a BUG() since the osd client assumes no requests are in
flight when it stops. Since all notifies are flushed before the
osd_client is stopped, waiting for the notify ack to complete before
returning from the watch callback ensures there are no notify acks in
flight during shutdown.
Rename rbd_obj_notify_ack() to rbd_obj_notify_ack_sync() to reflect
its new synchronous nature.
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Alex Elder <elder@linaro.org>
To ensure rbd_dev is not used after it's released, flush all pending
notify callbacks before calling rbd_dev_image_release(). No new
notifies can be added to the queue at this point because the watch has
already been unregistered with the osd_client.
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Alex Elder <elder@linaro.org>
Pull ceph updates from Sage Weil:
"This includes both the first pile of Ceph patches (which I sent to
torvalds@vger, sigh) and a few new patches that add support for
fscache for Ceph. That includes a few fscache core fixes that David
Howells asked go through the Ceph tree. (Thanks go to Milosz Tanski
for putting this feature together)
This first batch of patches (included here) had (has) several
important RBD bug fixes, hole punch support, several different
cleanups in the page cache interactions, improvements in the truncate
code (new truncate mutex to avoid shenanigans with i_mutex), and a
series of fixes in the synchronous striping read/write code.
On top of that is a random collection of small fixes all across the
tree (error code checks and error path cleanup, obsolete wq flags,
etc)"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client: (43 commits)
ceph: use d_invalidate() to invalidate aliases
ceph: remove ceph_lookup_inode()
ceph: trivial buildbot warnings fix
ceph: Do not do invalidate if the filesystem is mounted nofsc
ceph: page still marked private_2
ceph: ceph_readpage_to_fscache didn't check if marked
ceph: clean PgPrivate2 on returning from readpages
ceph: use fscache as a local presisent cache
fscache: Netfs function for cleanup post readpages
FS-Cache: Fix heading in documentation
CacheFiles: Implement interface to check cache consistency
FS-Cache: Add interface to check consistency of a cached object
rbd: fix null dereference in dout
rbd: fix buffer size for writes to images with snapshots
libceph: use pg_num_mask instead of pgp_num_mask for pg.seed calc
rbd: fix I/O error propagation for reads
ceph: use vfs __set_page_dirty_nobuffers interface instead of doing it inside filesystem
ceph: allow sync_read/write return partial successed size of read/write.
ceph: fix bugs about handling short-read for sync read mode.
ceph: remove useless variable revoked_rdcache
...
Pull NVM Express driver update from Matthew Wilcox.
* git://git.infradead.org/users/willy/linux-nvme:
NVMe: Merge issue on character device bring-up
NVMe: Handle ioremap failure
NVMe: Add pci suspend/resume driver callbacks
NVMe: Use normal shutdown
NVMe: Separate controller init from disk discovery
NVMe: Separate queue alloc/free from create/delete
NVMe: Group pci related actions in functions
NVMe: Disk stats for read/write commands only
NVMe: Bring up cdev on set feature failure
NVMe: Fix checkpatch issues
NVMe: Namespace IDs are unsigned
NVMe: Update nvme_id_power_state with latest spec
NVMe: Split header file into user-visible and kernel-visible pieces
NVMe: Call nvme_process_cq from submission path
NVMe: Remove "process_cq did something" message
NVMe: Return correct value from interrupt handler
NVMe: Disk IO statistics
NVMe: Restructure MSI / MSI-X setup
NVMe: Use kzalloc instead of kmalloc+memset
A recent patch made it possible to bring up the character handle when the
device is responsive but not accepting a set-features command. Another
recent patch moved the initialization, which requires moving where the
checks for this condition occur. This patch merges these two ideas so
it works much as before.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Pull trivial tree from Jiri Kosina:
"The usual trivial updates all over the tree -- mostly typo fixes and
documentation updates"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (52 commits)
doc: Documentation/cputopology.txt fix typo
treewide: Convert retrun typos to return
Fix comment typo for init_cma_reserved_pageblock
Documentation/trace: Correcting and extending tracepoint documentation
mm/hotplug: fix a typo in Documentation/memory-hotplug.txt
power: Documentation: Update s2ram link
doc: fix a typo in Documentation/00-INDEX
Documentation/printk-formats.txt: No casts needed for u64/s64
doc: Fix typo "is is" in Documentations
treewide: Fix printks with 0x%#
zram: doc fixes
Documentation/kmemcheck: update kmemcheck documentation
doc: documentation/hwspinlock.txt fix typo
PM / Hibernate: add section for resume options
doc: filesystems : Fix typo in Documentations/filesystems
scsi/megaraid fixed several typos in comments
ppc: init_32: Fix error typo "CONFIG_START_KERNEL"
treewide: Add __GFP_NOWARN to k.alloc calls with v.alloc fallbacks
page_isolation: Fix a comment typo in test_pages_isolated()
doc: fix a typo about irq affinity
...
The order parameter is sometimes NULL in _rbd_dev_v2_snap_size(), but
the dout() always dereferences it. Move this to another dout() protected
by a check that order is non-NULL.
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <alex.elder@linaro.org>
rbd_osd_req_create() needs to know the snapshot context size to create
a buffer large enough to send it with the message front. It gets this
from the img_request, which was not set for the obj_request yet. This
resulted in trying to write past the end of the front payload, hitting
this BUG:
libceph: BUG_ON(p > msg->front.iov_base + msg->front.iov_len);
Fix this by associating the obj_request with its img_request
immediately after it's created, before the osd request is created.
Fixes: http://tracker.ceph.com/issues/5760
Suggested-by: Alex Elder <alex.elder@linaro.org>
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Alex Elder <alex.elder@linaro.org>
When a request returns an error, the driver needs to report the entire
extent of the request as completed. Writes already did this, since
they always set xferred = length, but reads were skipping that step if
an error other than -ENOENT occurred. Instead, rbd would end up
passing 0 xferred to blk_end_request(), which would always report
needing more data. This resulted in an assert failing when more data
was required by the block layer, but all the object requests were
done:
[ 1868.719077] rbd: obj_request read result -108 xferred 0
[ 1868.719077]
[ 1868.719518] end_request: I/O error, dev rbd1, sector 0
[ 1868.719739]
[ 1868.719739] Assertion failure in rbd_img_obj_callback() at line 1736:
[ 1868.719739]
[ 1868.719739] rbd_assert(more ^ (which == img_request->obj_request_count));
Without this assert, reads that hit errors would hang forever, since
the block layer considered them incomplete.
Fixes: http://tracker.ceph.com/issues/5647
CC: stable@vger.kernel.org # v3.10
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Alex Elder <alex.elder@linaro.org>
Decrement the number of queues required for doorbell remapping until
the memory is successfully mapped for that size.
Additional checks are done so that we don't call free_irq if it has
already been freed.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Used for going in and out of low power states. Resuming reuses the IO
queues from the previous initialization, freeing any allocated queues
that are no longer usable.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
The NVMe spec recommends using the shutdown normal sequence when safely
taking the controller offline instead of hitting CC.EN on the next
start-up to reset the controller. The spec recommends a minimum of 1
second for the shutdown complete. This patch waits 2 seconds to be on
the safe side.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
This combines the controller initialization into one function, removing
IO queue setup from namespace discovery, and creates symmetric functions
for device removal. The controller start and shutdown functions can now
be called from resume/suspend context as well as probe/remove.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
This separates nvme queue allocation from creation, and queue deletion
from freeing. This is so that we may in the future temporarily disable
queues and reuse the same memory when bringing them back online, like
coming back from suspend state.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
This will make it easier to reuse these outside probe/remove.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Flush and discard requests would previously mess up the accounting.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
This patch creates the character device as long as a device's admin queues
are usable so a user has an opportunity to perform administration tasks.
A device may be in a state that does not allow IO and setting the queue
count feature in such a state returns an error. Previously the driver
would bail and the controller would be unusable.
Signed-off-by: Keith Busch <keith.busch@intel.com>
The 'Number of Namespaces' read from the device was being treated as
signed, which would cause us to not scan any namespaces for a device
with more than 2 billion namespaces. That led to noticing that the
namespace ID was also being treated as signed, which could lead to the
result from NVME_IOCTL_ID being treated as an error code.
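A hedged illustration of the signedness problem (variable and helper names
are illustrative):

    /* before: a value above 2^31 goes negative and the scan loop never runs */
    int nn = le32_to_cpup(&ctrl->nn);

    /* after */
    unsigned int nn = le32_to_cpup(&ctrl->nn);
    unsigned int nsid;

    for (nsid = 1; nsid <= nn; nsid++)
            nvme_scan_namespace(nsid);  /* placeholder for the per-namespace work */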
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Merge tag 'driver-core-3.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core
Pull driver core patches from Greg KH:
"Here's the big driver core pull request for 3.12-rc1.
Lots of tiny changes here fixing up the way sysfs attributes are
created, to try to make drivers simpler, and fix a whole class of race
conditions with creations of device attributes after the device was
announced to userspace.
All the various pieces are acked by the different subsystem
maintainers"
* tag 'driver-core-3.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (119 commits)
firmware loader: fix pending_fw_head list corruption
drivers/base/memory.c: introduce help macro to_memory_block
dynamic debug: line queries failing due to uninitialized local variable
sysfs: sysfs_create_groups returns a value.
debugfs: provide debugfs_create_x64() when disabled
rbd: convert bus code to use bus_groups
firmware: dcdbas: use binary attribute groups
sysfs: add sysfs_create/remove_groups for when SYSFS is not enabled
driver core: add #include <linux/sysfs.h> to core files.
HID: convert bus code to use dev_groups
Input: serio: convert bus code to use drv_groups
Input: gameport: convert bus code to use drv_groups
driver core: firmware: use __ATTR_RW()
driver core: core: use DEVICE_ATTR_RO
driver core: bus: use DRIVER_ATTR_WO()
driver core: create write-only attribute macros for devices and drivers
sysfs: create __ATTR_WO()
driver-core: platform: convert bus code to use dev_groups
workqueue: convert bus code to use dev_groups
MEI: convert bus code to use dev_groups
...
The bus_attrs field of struct bus_type is going away soon, dev_groups
should be used instead. This converts the RBD bus code to use the
correct field.
Cc: Yehuda Sadeh <yehuda@inktank.com>
Cc: Sage Weil <sage@inktank.com>
Acked-by: Alex Elder <elder@linaro.org>
Cc: <ceph-devel@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Don't emit OOM warnings when k.alloc calls fail if there is a v.alloc
fallback immediately afterwards.
Converted a kmalloc/vmalloc with memset to kzalloc/vzalloc.
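The pattern being converted, sketched generically:

    /* the vzalloc() fallback makes a kmalloc failure warning pure noise */
    ptr = kzalloc(size, GFP_KERNEL | __GFP_NOWARN);
    if (!ptr)
            ptr = vzalloc(size);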
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Fix a BUG which can trigger when direct-IO is used with AOE.
As discussed previously, the fact that some users of the block layer
provide bios that point to pages with a zero _count means that it is not
OK for the network layer to do a put_page on the skb frags during an
skb_linearize, so the aoe driver gets a reference to pages in bios and
puts the reference before ending the bio. And because it cannot use
get_page on a page with a zero _count, it manipulates the value
directly.
It is not OK to increment the _count of a compound page tail, though,
since the VM layer will VM_BUG_ON a non-zero _count. Block users that
do direct I/O can result in the aoe driver seeing compound page tails in
bios. In that case, the same logic works as long as the head of the
compound page is used instead of the tails. This patch handles compound
pages and does not BUG.
It relies on the block layer user leaving the relationship between the
page tail and its head alone for the duration between the submission of
the bio and its completion, whether successful or not.
Signed-off-by: Ed Cashin <ecashin@coraid.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The local variables such as 'bio_list', and 'pages' are pointers;
thus, use NULL instead of 0 to fix the following sparse warnings.
drivers/block/rbd.c:2166:32: warning: Using plain integer as NULL pointer
drivers/block/rbd.c:2168:31: warning: Using plain integer as NULL pointer
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Pull block IO driver bits from Jens Axboe:
"As I mentioned in the core block pull request, due to real life
circumstances the driver pull request would be late. Now it looks
like -rc2 late... On the plus side, apart from the rsxx update, these
are all things that I could argue could go in later in the cycle as
they are fixes and not features. So even though things are late, it's
not ALL bad.
The pull request contains:
- Updates to bcache, all bug fixes, from Kent.
- A pile of drbd bug fixes (no big features this time!).
- xen blk front/back fixes.
- rsxx driver updates, some of them deferred from 3.10. So should be
well cooked by now"
* 'for-3.11/drivers' of git://git.kernel.dk/linux-block: (63 commits)
bcache: Allocation kthread fixes
bcache: Fix GC_SECTORS_USED() calculation
bcache: Journal replay fix
bcache: Shutdown fix
bcache: Fix a sysfs splat on shutdown
bcache: Advertise that flushes are supported
bcache: check for allocation failures
bcache: Fix a dumb race
bcache: Use standard utility code
bcache: Update email address
bcache: Delete fuzz tester
bcache: Document shrinker reserve better
bcache: FUA fixes
drbd: Allow online change of al-stripes and al-stripe-size
drbd: Constants should be UPPERCASE
drbd: Ignore the exit code of a fence-peer handler if it returns too late
drbd: Fix rcu_read_lock balance on error path
drbd: fix error return code in drbd_init()
drbd: Do not sleep inside rcu
bcache: Refresh usage docs
...
Pull microblaze update from Michal Simek:
"This Microblaze merge window is quite minimal.
I have also added to my branch one xilinx systemace sparse fix because I
haven't got any reply from the block maintainer."
* 'next' of git://git.monstr.eu/linux-2.6-microblaze:
xilinx systemace: Fix sparse warnings
microblaze: Move __NR_syscalls from uapi
microblaze: Enable KGDB in defconfig
microblaze: Don't mark arch_kgdb_ops as const.
Pull Ceph updates from Sage Weil:
"There is some follow-on RBD cleanup after the last window's code drop,
a series from Yan fixing multi-mds behavior in cephfs, and then a
sprinkling of bug fixes all around. Some warnings, sleeping while
atomic, a null dereference, and cleanups"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client: (36 commits)
libceph: fix invalid unsigned->signed conversion for timespec encoding
libceph: call r_unsafe_callback when unsafe reply is received
ceph: fix race between cap issue and revoke
ceph: fix cap revoke race
ceph: fix pending vmtruncate race
ceph: avoid accessing invalid memory
libceph: Fix NULL pointer dereference in auth client code
ceph: Reconstruct the func ceph_reserve_caps.
ceph: Free mdsc if alloc mdsc->mdsmap failed.
ceph: remove sb_start/end_write in ceph_aio_write.
ceph: avoid meaningless calling ceph_caps_revoking if sync_mode == WB_SYNC_ALL.
ceph: fix sleeping function called from invalid context.
ceph: move inode to proper flushing list when auth MDS changes
rbd: fix a couple warnings
ceph: clear migrate seq when MDS restarts
ceph: check migrate seq before changing auth cap
ceph: fix race between page writeback and truncate
ceph: reset iov_len when discarding cap release messages
ceph: fix cap release race
libceph: fix truncate size calculation
...
Merge first patch-bomb from Andrew Morton:
- various misc bits
- I've been patchmonkeying ocfs2 for a while, as Joel and Mark have been
distracted. There has been quite a bit of activity.
- About half the MM queue
- Some backlight bits
- Various lib/ updates
- checkpatch updates
- zillions more little rtc patches
- ptrace
- signals
- exec
- procfs
- rapidio
- nbd
- aoe
- pps
- memstick
- tools/testing/selftests updates
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (445 commits)
tools/testing/selftests: don't assume the x bit is set on scripts
selftests: add .gitignore for kcmp
selftests: fix clean target in kcmp Makefile
selftests: add .gitignore for vm
selftests: add hugetlbfstest
self-test: fix make clean
selftests: exit 1 on failure
kernel/resource.c: remove the unneeded assignment in function __find_resource
aio: fix wrong comment in aio_complete()
drivers/w1/slaves/w1_ds2408.c: add magic sequence to disable P0 test mode
drivers/memstick/host/r592.c: convert to module_pci_driver
drivers/memstick/host/jmb38x_ms: convert to module_pci_driver
pps-gpio: add device-tree binding and support
drivers/pps/clients/pps-gpio.c: convert to module_platform_driver
drivers/pps/clients/pps-gpio.c: convert to devm_* helpers
drivers/parport/share.c: use kzalloc
Documentation/accounting/getdelays.c: avoid strncpy in accounting tool
aoe: update internal version number to v83
aoe: update copyright date
aoe: perform I/O completions in parallel
...
Some users have a large AoE target while others like to use many AoE
targets at the same time. In the latter case, there is an opportunity to
greatly improve aggregate throughput by allowing different threads to
complete the I/O associated with each target. For 36 targets, 4 KiB read
throughput roughly doubles, for example, with these changes in place.
Signed-off-by: Ed Cashin <ecashin@coraid.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Currently, when a disconnect is requested by the user (via NBD_DISCONNECT
ioctl) the return from NBD_DO_IT is undefined (it is usually one of
several error codes). This means that nbd-client does not know if a
manual disconnect was performed or whether a network error occurred.
Because of this, nbd-client's persist mode (which tries to reconnect after
error, but not after manual disconnect) does not always work correctly.
This change fixes this by causing NBD_DO_IT to always return 0 if a user
requests a disconnect. This means that nbd-client can correctly either
persist the connection (if an error occurred) or disconnect (if the user
requested it).
Signed-off-by: Paul Clements <paul.clements@steeleye.com>
Acked-by: Rob Landley <rob@landley.net>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The NBD_CLEAR_QUE ioctl has been deprecated for quite some time (its job
is now done by two other ioctls). We should stop trying to make bogus
assertions in it. Also, user-level code should remove calls to
NBD_CLEAR_QUE, ASAP.
Signed-off-by: Michal Belczyk <belczyk@bsd.krakow.pl>
Signed-off-by: Paul Clements <paul.clements@steeleye.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Calling kthread_run with a single name parameter causes it to be handled
as a format string. Many callers are passing potentially dynamic string
content, so use "%s" in those cases to avoid any potential accidents.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Disk names may contain arbitrary strings, so they must not be
interpreted as format strings. It seems that only md allows arbitrary
strings to be used for disk names, but this could allow for a local
memory corruption from uid 0 into ring 0.
CVE-2013-2851
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
gcc isn't quite smart enough and generates these warnings:
drivers/block/rbd.c: In function 'rbd_img_request_fill':
drivers/block/rbd.c:1266:22: warning: 'bio_list' may be used uninitialized in this function [-Wmaybe-uninitialized]
drivers/block/rbd.c:2186:14: note: 'bio_list' was declared here
drivers/block/rbd.c:2247:10: warning: 'pages' may be used uninitialized in this function [-Wmaybe-uninitialized]
even though they are initialized for their respective code paths.
Signed-off-by: Sage Weil <sage@inktank.com>
Updating an image header needs to be protected to ensure it's
done consistently. However distinct headers can be updated
concurrently without a problem. Instead of using the global
control lock to serialize header updates, just rely on the header
semaphore. (It's already used, this just moves it out to cover
a broader section of the code.)
That leaves the control mutex protecting only the creation of rbd
clients, so rename it.
This resolves:
http://tracker.ceph.com/issues/5222
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
When an rbd device is first getting mapped, its device registration
is protected by the control mutex. There is no need to do that though,
because the device has already been assigned an id that's guaranteed
to be unique.
An unmap of an rbd device won't proceed if the device has a non-zero
open count or is already being unmapped. So there's no need to hold
the control mutex in that case either.
Finally, an rbd device can't be opened if it is being removed, and
it won't go away if there is a non-zero open count. So here too
there's no need to hold the control mutex while getting or putting a
reference to an rbd device's Linux device structure.
Drop the mutex calls in these cases.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Make sure two concurrent unmap operations on the same rbd device
won't collide, by only proceeding with the removal and cleanup of a
device if it is not already underway.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
When unmapping a device, its id is supplied, and that is used to
look up which rbd device should be unmapped. Looking up the
device involves searching the rbd device list while holding
a spinlock that protects access to that list.
Currently all of this is done under protection of the control lock,
but that protection is going away soon. To ensure the rbd_dev is
still valid (still on the list) while setting its REMOVING flag, do
so while still holding the list lock. To do so, get rid of
__rbd_get_dev(), and open code what it did in the one place it
was used.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
If more than one rbd image has the same ceph cluster configuration
(same options, same set of monitors, same keys) they normally share
a single rbd client.
When an image is getting mapped, rbd looks to see if an existing
client can be used, and creates a new one if not.
The lookup and creation are not done under a common lock though, so
mapping two images concurrently could lead to duplicate clients
getting set up needlessly. This isn't a major problem, but it's
wasteful and different from what's intended.
This patch fixes that by using the control mutex to protect
both the lookup and (if needed) creation of the client. It
was previously used just when creating.
This resolves:
http://tracker.ceph.com/issues/3094
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
This includes a few relatively small fixes I found while examining
the code that refreshes image information.
This resolves:
http://tracker.ceph.com/issues/5040
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Neither zero_bio_chain() nor zero_pages() contains a call to flush
caches after zeroing a portion of a page. This can cause problems
on architectures that have caches that allow virtual address
aliasing.
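A generic sketch of the pattern the fix needs (offsets and lengths are
illustrative):

    void *kaddr = kmap_atomic(page);

    memset(kaddr + offset, 0, length);
    flush_dcache_page(page);            /* keep virtually aliased caches coherent */
    kunmap_atomic(kaddr);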
This resolves:
http://tracker.ceph.com/issues/4777
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Was away, but it's all trivial and been sitting in linux-next. So if you don't
pull, no electrons will be harmed.
Thanks,
Rusty.
Merge tags 'modules-next-for-linus' and 'virtio-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux
Pull trivial module and virtio fixes from Rusty Russell.
Apparently these were meant for 3.10, but came in after the release.
* tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux:
modpost.c: Add .text.unlikely to TEXT_SECTIONS
* tag 'virtio-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux:
virtio: remove virtqueue_add_buf().
lguest: rename i386_head.S
virtio_blk: Add missing 'static' qualifiers
virtio: console: Add emergency writeonly register to config space
virtio_pci: better macro exported in uapi
Pull m68k updates from Geert Uytterhoeven.
* 'for-3.11' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k:
m68k/q40: Enable PC parallel port in defconfig
m68k/q40: Undefine insl/outsl before redefining them
m68k/uaccess: Fix asm constraints for userspace access
swim: Release memory region after incorrect return/goto
m68k/irq: Vector ints need a valid interrupt handler
m68k/math-emu: unsigned issue, 'unsigned long' will never be less than zero
m68k: remove CONFIG_EARLY_PRINTK dependency on CONFIG_EMBEDDED, default to n
m68k/sun3: remove inline marking of EXPORT_SYMBOL functions
[SCSI] a3000: use module_platform_driver_probe()
[SCSI] a4000t: use module_platform_driver_probe()
m68k: Remove inline strcpy() and strcat() implementations
Pull VFS patches (part 1) from Al Viro:
"The major change in this pile is ->readdir() replacement with
->iterate(), dealing with ->f_pos races in ->readdir() instances for
good.
There's a lot more, but I'd prefer to split the pull request into
several stages and this is the first obvious cutoff point."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (67 commits)
[readdir] constify ->actor
[readdir] ->readdir() is gone
[readdir] convert ecryptfs
[readdir] convert coda
[readdir] convert ocfs2
[readdir] convert fatfs
[readdir] convert xfs
[readdir] convert btrfs
[readdir] convert hostfs
[readdir] convert afs
[readdir] convert ncpfs
[readdir] convert hfsplus
[readdir] convert hfs
[readdir] convert befs
[readdir] convert cifs
[readdir] convert freevxfs
[readdir] convert fuse
[readdir] convert hpfs
reiserfs: switch reiserfs_readdir_dentry to inode
reiserfs: is_privroot_deh() needs only directory inode, actually
...
Merge tag 'v3.10-rc7' into for-3.11/drivers
Linux 3.10-rc7
Pull this in early to avoid doing it with the bcache merge,
since there are a number of changes to bcache between my old
base (3.10-rc1) and the new pull request.
The reference to the original request dropped at the end of
rbd_img_obj_exists_callback() corresponds to the reference taken
in rbd_img_obj_exists_submit() to account for the stat request
referring to it. Move the put of that reference up right after
clearing that pointer to make its purpose more obvious.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
drivers/block/rbd.c: In function ‘zero_pages’:
drivers/block/rbd.c:1102: warning: comparison of distinct pointer types lacks a cast
Remove the hackish casts and use min_t() to fix this.
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Alex Elder <elder@inktank.com>
Pull Ceph fix from Sage Weil:
"This is a recently spotted regression in the snapshot behavior...
It turns out several tests weren't being run in the nightlies so this
took a while to spot"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
rbd: send snapshot context with writes
Allow changing the AL layout with a resize operation. For that,
the resize command gets two new fields: al_stripes and al_stripe_size.
In order to make the operation crash safe:
1) Lock out all IO and MD-IO
2) Write the super block with MDF_PRIMARY_IND clear
3) Write the bitmap to the new location (all zeros, since
we allow this only while connected)
4) Initialize the new AL-area
5) Write the super block with the restored MDF_PRIMARY_IND.
6) Unfreeze all IO
Since the AL-layout has no influence on the protocol, this operation
needs to be performed on both sides of a resource (if intended).
Signed-off-by: Andreas Gruenbacher <agruen@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
In case the connection was established and lost again before
a fence-peer handler returns, ignore the exit code of this
instance. (And use the exit code of the later started instance)
Signed-off-by: Andreas Gruenbacher <agruen@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Fix to return a negative error code from the error handling
case instead of 0, as returned elsewhere in this function.
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Andreas Gruenbacher <agruen@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Konrad writes:
It has the 'feature-max-indirect-segments' implemented in both backend
and frontend. The current problem with the backend and frontend is that the
segment size is limited to 11 pages. It means we can at most squeeze in 44kB per
request. The ring can hold 32 (next power of two below 36) requests, meaning we
can have roughly 1.4MB of I/O outstanding. Nowadays that is not enough.
The problem in the past was addressed in two ways - but neither one went upstream.
The first solution to this proposed by Justin from Spectralogic was to negotiate
the segment size. This means that the ‘struct blkif_sring_entry’ is now a variable size.
It can expand from 112 bytes (cover 11 pages of data - 44kB) to 1580 bytes
(256 pages of data - so 1MB). It is a simple extension by just making the array in the
request expand from 11 to a variable size negotiated. But it had limits: this extension
still limits the number of segments per request to 255 (as the total number must be
specified in the request, which only has an 8-bit field for that purpose).
The other solution (from Intel - Ronghui) was to create one extra ring that only has the
‘struct blkif_request_segment’ in them. The ‘struct blkif_request’ would be changed to have
an index in said ‘segment ring’. There is only one segment ring. This means that the size of
the initial ring is still the same. The requests would point to the segment and enumerate out
how many of the indexes it wants to use. The limit is of course the size of the segment.
If one assumes a one-page segment this means we can in one request cover ~4MB.
Those patches were posted as RFC and the author never followed up on the ideas on changing
it to be a bit more flexible.
There is yet another mechanism that could be employed (which these patches implement) - and it
borrows from the VirtIO protocol. And that is the ‘indirect descriptors’. This is very similar to
what Intel suggests, but with a twist. The twist is to negotiate how many of these
'segment' pages (aka indirect descriptor pages) we want to support (in reality we negotiate
how many entries in the segment we want to cover, and we cap the number if it is
bigger than the segment size).
This means that with the existing 36 slots in the ring (single page) we can cover:
32 slots * each blkif_request_indirect covers: 512 * 4096 ~= 64M. Since we have ample space
in the blkif_request_indirect to span more than one indirect page, that number (64M)
can also be multiplied by eight = 512MB.
Roger Pau Monne took the idea and implemented it in these patches. They work
great and the corner cases (migration between backends with and without this extension)
work nicely. The backend has a limit right now of how many indirect entries
it can handle: one indirect page, and at maximum 256 entries (out of 512 - so 50% of the page
is used). That comes out to 32 slots * 256 entries in an indirect page * 1 indirect page
per request * 4096 = 32MB.
This is a conservative number that can change in the future. Right now it strikes
a good balance between excellent performance, memory usage in the backend, and
the needs of many guests.
In the patchset there is also the split of the blkback structure to be per-VBD.
This means that the spinlock contention we had with many guests trying to do I/O and
all the blkback threads hitting the same lock has been eliminated.
Also there are bug-fixes to deal with oddly sized sectors, insane amounts on
the ring, and also a security fix (posted earlier).
Sending the right snapshot context with each write is required for
snapshots to work. Due to the ordering of calls, the snapshot context
is never set for any requests. This causes writes to the current
version of the image to be reflected in all snapshots, which are
supposed to be read-only.
This happens because rbd_osd_req_format_write() sets the snapshot
context based on obj_request->img_request. At this point, however,
obj_request->img_request has not been set yet, so the snapshot context
is set to NULL. Fix this by moving rbd_img_obj_request_add(), which
sets obj_request->img_request, before the osd request formatting
calls.
This resolves:
http://tracker.ceph.com/issues/5465
Reported-by: Karol Jurak <karol.jurak@gmail.com>
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <elder@linaro.org>
Pull Ceph fix from Sage Weil:
"This fixes another problem with using v2 images on 3.10 due to the
order in which fields are read from the image header.
Hopefully this is the last one"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
rbd: fetch object order before using it
rbd_dev_v2_header_onetime() fetches striping information, and
checks whether the image can be read by comparing the stripe unit
to the object size. It determines the object size by shifting
the object order, which is 0 at this point since it has not been
read yet. Move the call to get the image size and object order
before rbd_dev_v2_header_onetime() so it is set before use.
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
With the introduction of indirect segments we can receive requests
with a number of segments bigger than the maximum number of allowed
iovecs in a bio, so make sure that blkback doesn't try to allocate a
bio with more iovecs than BIO_MAX_PAGES.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Since we have the queue locked, it makes sense to check if there are
any completion queue entries on the queue before we release the lock.
If there are, it may save an interrupt and reduce latency for the I/Os
that happened to complete. This happens fairly often for some workloads.
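In the bio submission path that looks roughly like this (a sketch; the helper
names follow the driver's existing nvme_* convention but the exact lines are
not guaranteed verbatim):
    spin_lock_irq(&nvmeq->q_lock);
    result = nvme_submit_bio_queue(nvmeq, ns, bio);
    /* reap any completions already posted while we still hold the lock */
    nvme_process_cq(nvmeq);
    spin_unlock_irq(&nvmeq->q_lock);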
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
I was originally intending to log the fact that the kthread had done
some work since it might help us find interrupt handling problems, but
that hasn't been done yet, and spamming the logs with this message is
just rude.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
The code uses
return foo;
goto err_type;
when instead the form should have been
ret = foo;
goto err_type;
Here this causes a useful release_mem_region to be skipped.
Signed-off-by: Joe Perches <joe@perches.com>
Reviewed-by: Laurent Vivier <Laurent@Vivier.EU>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
The interrupt handler currently reports whether it found any new
completion queue entries. If the completion queue is primarily being
processed by a method other than the interrupt handler, it may return
IRQ_NONE so often that Linux thinks that the interrupt is being falsely
triggered.
To solve this problem, report whether any completion queue entries have
been seen since the last interrupt was received for this queue.
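A sketch of the resulting handler, assuming a per-queue cqe_seen flag that the
completion-processing code sets whenever it retires an entry:
    static irqreturn_t nvme_irq(int irq, void *data)
    {
        struct nvme_queue *nvmeq = data;
        irqreturn_t result;

        spin_lock(&nvmeq->q_lock);
        nvme_process_cq(nvmeq);
        /* handled if anything completed since the last interrupt */
        result = nvmeq->cqe_seen ? IRQ_HANDLED : IRQ_NONE;
        nvmeq->cqe_seen = 0;
        spin_unlock(&nvmeq->q_lock);
        return result;
    }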
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Now that indirect segments are enabled blk_queue_max_hw_sectors must
be set to match the maximum number of sectors we can handle in a
request.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reported-by: Felipe Franciosi <felipe.franciosi@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
The code generated with gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-54)
creates an unbounded loop for the second foreach_grant_safe loop in
purge_persistent_gnt.
The workaround is to avoid having this second loop and instead
perform all the work inside the first loop by adding a new variable,
clean_used, that will be set when all the desired persistent grants
have been removed and we need to iterate over the remaining ones to
remove the WAS_ACTIVE flag.
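Roughly, the reworked single loop looks like this (a sketch only; the purge
selection logic is elided and the flag helper shown is illustrative):
    bool clean_used = false;

    foreach_grant_safe(persistent_gnt, n, root, node) {
        if (clean_used) {
            /* former second pass folded in: only drop the WAS_ACTIVE flag */
            clear_bit(PERSISTENT_GNT_WAS_ACTIVE, persistent_gnt->flags);
            continue;
        }
        /* ...decide whether to purge this grant, as before... */
        if (--num_clean == 0)
            clean_used = true;
    }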
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reported-by: Tom O'Neill <toneill@vmem.com>
Reported-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Pull Ceph fix from Sage Weil:
"This fixes a problem preventing the kernel and userland librbd
libraries from sharing data with the new format 2 images"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
rbd: use the correct length for format 2 object names
Add io stats accounting for bio requests so nvme block devices show
useful disk stats.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
The current code copies 'nr_io_queues' into 'q_count', modifies
'nr_io_queues' during MSI-X setup, then resets 'nr_io_queues' for
MSI setup. Instead, copy 'nr_io_queues' into 'vecs' and modify 'vecs'
during both MSI-X and MSI setup.
This lets us simplify the for-loops that set up MSI-X and MSI, and opens
the possibility of using more I/O queues than we have interrupt vectors,
should future benchmarking prove that to be a useful feature.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Use kzalloc instead of kmalloc and a subsequent memset.
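I.e. the usual transformation (sketch, with hypothetical mem/size names):
    /* before */
    mem = kmalloc(size, GFP_KERNEL);
    memset(mem, 0, size);

    /* after */
    mem = kzalloc(size, GFP_KERNEL);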
Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
Signed-off-by: Vishal Verma <vishal.l.verma@linux.intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Adding debugfs entries to help with debugging and testing code.
pci_regs:
This entry will spit out all of the data stored on the BAR.
stats:
This entry will display all of the driver stats for each
DMA channel.
cram:
This will allow read/write ability to the CRAM address space
on our adapter's CPU.
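Such nodes are typically created along these lines (a sketch; the directory
name and fops symbols below are assumptions, not the driver's actual names):
    struct dentry *dir = debugfs_create_dir("rsxx", NULL);

    debugfs_create_file("pci_regs", S_IRUGO, dir, card, &pci_regs_fops);
    debugfs_create_file("stats", S_IRUGO, dir, card, &stats_fops);
    debugfs_create_file("cram", S_IRUGO | S_IWUSR, dir, card, &cram_fops);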
Signed-off-by: Philip J Kelleher <pjk1939@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Unfortunately, our CPU register path does not do any kind of
EEH error checking. So to fix this issue, an ioread32 was
added to the CPU register timeout code. This way, the
driver can check to see if the timeout was caused by an EEH
error or not. This is a dummy read.
Signed-off-by: Philip J Kelleher <pjk1939@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Adding a sanity check to guarantee that DMAs outside of the device's
address space will be errored out right away.
Signed-off-by: Philip J Kelleher <pjk1939@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
A kernel panic would occur on a DLPAR add if there was a partition
still mounted during the DLPAR remove. This bug fix will allow the
user to unmount the partition and bring the driver back into a
good state after the DLPAR add.
Signed-off-by: Philip J Kelleher <pjk1939@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Changing the adapter name from FlashSystem-80 to the official
name: Flash Adapter 900GB Full Height.
Signed-off-by: Philip J Kelleher <pjk1939@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Before, the partition table would have to be reread because our
card was attached before it transitioned out of its 'starting'
state.
This change will cause the driver to wait to attach the device
until the adapter is ready.
Signed-off-by: Philip J Kelleher <pjk1939@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Previously, the block size was determined by whether or not
our Hardware could handle 512 byte accesses. Now, all of our
Hardware can handle 512 and 4096 block sizes.
This fix allows it to be user configurable.
Signed-off-by: Philip J Kelleher <pjk1939@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
The workqueue mechanism has been reworked to prevent soft
lockup issues from occurring by adding in mutex synchronization.
Signed-off-by: Philip J Kelleher <pjk1939@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Before, DMAs would never be cancelled if there was a data stall
or an EEH Permanent failure which would cause an unrecoverable
I/O hang.
The DMA cancellation mechanism has been modified to fix
these issues and allows DMAs to be cancelled during the
above mentioned events.
Signed-off-by: Philip J Kelleher <pjk1939@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Giving all interrupt based events their own workqueue to complete
tasks on. This fixes a bug that would cause creg commands to timeout
if too many are issued at once.
Signed-off-by: Philip J Kelleher <pjk1939@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Check that the ring does not have an insane amount of requests
(more than there could fit on the ring).
If we detect this case we will stop processing the requests
and wait until the XenBus disconnects the ring.
The existing check RING_REQUEST_CONS_OVERFLOW which checks for how
many responses we have created in the past (rsp_prod_pvt) vs
requests consumed (req_cons) and whether said difference is greater or
equal to the size of the ring, does not catch this case.
What the condition does check is whether there is a need to process more,
as we still have a backlog of responses to finish. Note that both
of those values (rsp_prod_pvt and req_cons) are not exposed on the
shared ring.
To understand this problem a mini crash course in ring protocol
response/request updates is in order.
There are four entries: req_prod and rsp_prod; req_event and rsp_event
to track the ring entries. We are only concerned about the first two -
which set the tone of this bug.
The req_prod is a value incremented by frontend for each request put
on the ring. Conversely the rsp_prod is a value incremented by the backend
for each response put on the ring (rsp_prod gets set by rsp_prod_pvt when
pushing the responses on the ring). Both values can
wrap and are modulo the size of the ring (in block case that is 32).
Please see RING_GET_REQUEST and RING_GET_RESPONSE for the more details.
The culprit here is that if the difference between the
req_prod and req_cons is greater than the ring size we have a problem.
Fortunately for us, the '__do_block_io_op' loop:
rc = blk_rings->common.req_cons;
rp = blk_rings->common.sring->req_prod;
while (rc != rp) {
..
blk_rings->common.req_cons = ++rc; /* before make_response() */
}
will loop up to the point when rc == rp. The macro inside of the
loop (RING_GET_REQUEST) is smart and indexes based on the modulo
of the ring size. If the frontend has provided a bogus req_prod value
we will loop until the 'rc == rp' - which means we could be processing
already processed requests (or responses) often.
The reason the RING_REQUEST_CONS_OVERFLOW is not helping here is
because it only tracks how many responses we have internally produced
and whether we should process more. The astute reader will
notice that the macro RING_REQUEST_CONS_OVERFLOW provides two
arguments - more on this later.
For example, if we were to enter this function with these values:
blk_rings->common.sring->req_prod = X+31415 (X is the value from
the last time __do_block_io_op was called).
blk_rings->common.req_cons = X
blk_rings->common.rsp_prod_pvt = X
The RING_REQUEST_CONS_OVERFLOW(&blk_rings->common, blk_rings->common.req_cons)
is doing:
req_cons - rsp_prod_pvt >= 32
Which is,
X - X >= 32 or 0 >= 32
And that is false, so we continue on looping (this bug).
If we re-use said macro RING_REQUEST_CONS_OVERFLOW and pass in rp
(sring->req_prod) instead of rc, then this macro can do the check:
req_prod - rsp_prod_pvt >= 32
Which is,
X + 31415 - X >= 32 , or 31415 >= 32
which is true, so we can error out and break out of the function.
Unfortunately the difference between rsp_prod_pvt and req_prod can be
exactly 32 (which would error out in the macro). This condition exists when
the backend is lagging behind with the responses and still has not finished
responding to all of them (so make_response has not been called), and
rsp_prod_pvt + 32 == req_cons. This ends up with us not being able
to use said macro.
Hence introducing a new macro called RING_REQUEST_PROD_OVERFLOW which does
a simple check of:
req_prod - rsp_prod_pvt > RING_SIZE
And with the X values from above:
X + 31415 - X > 32
Returns true. Also note that if the ring is full (which is where
the RING_REQUEST_CONS_OVERFLOW triggered), we would not hit the
same condition:
X + 32 - X > 32
Which is false.
Let's use that macro.
Note that in v5 of this patchset the macro was different - we used an
earlier version.
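For reference, the final check is essentially the following (a sketch; the
warning text and return value shown are illustrative, see
include/xen/interface/io/ring.h for the real macro definition):
    #define RING_REQUEST_PROD_OVERFLOW(_r, _prod)           \
        (((_prod) - (_r)->rsp_prod_pvt) > RING_SIZE(_r))

    rp = blk_rings->common.sring->req_prod;
    if (RING_REQUEST_PROD_OVERFLOW(&blk_rings->common, rp)) {
        pr_warn("frontend provided bogus ring requests, halting\n");
        return -EACCES;
    }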
Cc: stable@vger.kernel.org
[v1: Move the check outside the loop]
[v2: Add a pr_warn as suggested by David]
[v3: Use RING_REQUEST_CONS_OVERFLOW as suggested by Jan]
[v4: Move wake_up after kthread_stop as suggested by Jan]
[v5: Use RING_REQUEST_PROD_OVERFLOW instead]
[v6: Use RING_REQUEST_PROD_OVERFLOW - Jan's version]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Format 2 objects use 16 characters for the object name suffix to be
able to express the full 64-bit range of object numbers. Format 1
images only use 12 characters for this. Using 12-character names for
format 2 caused userspace and kernel rbd clients to read differently
named objects, which made an image written by one client look empty to
the other client.
CC: stable@vger.kernel.org # 3.9+
Reported-by: Chris Dunlop <chris@onthe.net.au>
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Pull block layer fixes from Jens Axboe:
"Outside of bcache (which really isn't super big), these are all
few-liners. There are a few important fixes in here:
- Fix blk pm sleeping when holding the queue lock
- A small collection of bcache fixes that have been done and tested
since bcache was included in this merge window.
- A fix for a raid5 regression introduced with the bio changes.
- Two important fixes for mtip32xx, fixing an oops and potential data
corruption (or hang) due to wrong bio iteration on stacked devices."
* 'for-linus' of git://git.kernel.dk/linux-block:
scatterlist: sg_set_buf() argument must be in linear mapping
raid5: Initialize bi_vcnt
pktcdvd: silence static checker warning
block: remove refs to XD disks from documentation
blkpm: avoid sleep when holding queue lock
mtip32xx: Correctly handle bio->bi_idx != 0 conditions
mtip32xx: Fix NULL pointer dereference during module unload
bcache: Fix error handling in init code
bcache: clarify free/available/unused space
bcache: drop "select CLOSURES"
bcache: Fix incompatible pointer type warning
Merge misc fixes from Andrew Morton:
"Bunch of fixes and one little addition to math64.h"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (27 commits)
include/linux/math64.h: add div64_ul()
mm: memcontrol: fix lockless reclaim hierarchy iterator
frontswap: fix incorrect zeroing and allocation size for frontswap_map
kernel/audit_tree.c:audit_add_tree_rule(): protect `rule' from kill_rules()
mm: migration: add migrate_entry_wait_huge()
ocfs2: add missing lockres put in dlm_mig_lockres_handler
mm/page_alloc.c: fix watermark check in __zone_watermark_ok()
drivers/misc/sgi-gru/grufile.c: fix info leak in gru_get_config_info()
aio: fix io_destroy() regression by using call_rcu()
rtc-at91rm9200: use shadow IMR on at91sam9x5
rtc-at91rm9200: add shadow interrupt mask
rtc-at91rm9200: refactor interrupt-register handling
rtc-at91rm9200: add configuration support
rtc-at91rm9200: add match-table compile guard
fs/ocfs2/namei.c: remove unecessary ERROR when removing non-empty directory
swap: avoid read_swap_cache_async() race to deadlock while waiting on discard I/O completion
drivers/rtc/rtc-twl.c: fix missing device_init_wakeup() when booted with device tree
cciss: fix broken mutex usage in ioctl
audit: wait_for_auditd() should use TASK_UNINTERRUPTIBLE
drivers/rtc/rtc-cmos.c: fix accidentally enabling rtc channel
...
If a new logical drive is added and the CCISS_REGNEWD ioctl is invoked
(as is normal with the Array Configuration Utility) the process will
hang as below. It attempts to acquire the same mutex twice, once in
do_ioctl() and once in cciss_unlocked_open(). The BKL was recursive,
the mutex isn't.
Linux version 3.10.0-rc2 (scameron@localhost.localdomain) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Fri May 24 14:32:12 CDT 2013
[...]
acu D 0000000000000001 0 3246 3191 0x00000080
Call Trace:
schedule+0x29/0x70
schedule_preempt_disabled+0xe/0x10
__mutex_lock_slowpath+0x17b/0x220
mutex_lock+0x2b/0x50
cciss_unlocked_open+0x2f/0x110 [cciss]
__blkdev_get+0xd3/0x470
blkdev_get+0x5c/0x1e0
register_disk+0x182/0x1a0
add_disk+0x17c/0x310
cciss_add_disk+0x13a/0x170 [cciss]
cciss_update_drive_info+0x39b/0x480 [cciss]
rebuild_lun_table+0x258/0x370 [cciss]
cciss_ioctl+0x34f/0x470 [cciss]
do_ioctl+0x49/0x70 [cciss]
__blkdev_driver_ioctl+0x28/0x30
blkdev_ioctl+0x200/0x7b0
block_ioctl+0x3c/0x40
do_vfs_ioctl+0x89/0x350
SyS_ioctl+0xa1/0xb0
system_call_fastpath+0x16/0x1b
This mutex usage was added into the ioctl path when the big kernel lock
was removed. As it turns out, these paths are all thread safe anyway
(or can easily be made so) and we don't want ioctl() to be single
threaded in any case.
Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Mike Miller <mike.miller@hp.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull ceph fixes from Sage Weil:
"There is a pair of fixes for double-frees in the recent bundle for
3.10, a couple of fixes for long-standing bugs (sleep while atomic and
an endianness fix), and a locking fix that can be triggered when osds
are going down"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
rbd: fix cleanup in rbd_add()
rbd: don't destroy ceph_opts in rbd_add()
ceph: ceph_pagelist_append might sleep while atomic
ceph: add cpu_to_le32() calls when encoding a reconnect capability
libceph: must hold mutex for reset_changed_osds()
Pull NVMe fixes from Matthew Wilcox.
* 'fixes-3.10' of git://git.infradead.org/users/willy/linux-nvme:
NVMe: Add MSI support
NVMe: Use dma_set_mask() correctly
Return the result from user admin command IOCTL even in case of failure
NVMe: Do not cancel command multiple times
NVMe: fix error return code in nvme_submit_bio_queue()
NVMe: check for integer overflow in nvme_map_user_pages()
MAINTAINERS: update NVM EXPRESS DRIVER file list
NVMe: Fix a signedness bug in nvme_trans_modesel_get_mp
NVMe: Remove redundant version.h header include
We need to make sure that the device is not RO or that
the request is not past the number of sectors we want to
issue the DISCARD operation for.
This fixes CVE-2013-2140.
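Conceptually the added validation is the following (a sketch with illustrative
field and helper names, not the literal patch):
    /* refuse a discard on a read-only VBD or one that runs past its end */
    if (blkif->vbd.readonly)
        goto fail_response;
    if (req->u.discard.sector_number + req->u.discard.nr_sectors >
        vbd_sz(&blkif->vbd))
        goto fail_response;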
Cc: stable@vger.kernel.org
Acked-by: Jan Beulich <JBeulich@suse.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
[v1: Made it pr_warn instead of pr_debug]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Currently xen-blkback passes the logical sector size over xenbus and
xen-blkfront sets up the paravirt disk with that logical block size.
But newer drives usually have the logical sector size set to 512 for
compatibility reasons and would show the actual sector size only in
physical sector size.
This results in the device being partitioned and accessed in dom0 with
the correct sector size, but the guest thinks 512 bytes is the correct
block size. And that results in poor performance.
To fix this, blkback gets modified to also pass the physical-sector-size
over xenbus, and blkfront to use both values to set up the paravirt
disk. I did not just change the passed in sector-size because I am
not sure having a bigger logical sector size than the physical one
is valid (and that would happen if a newer dom0 kernel hits an older
domU kernel). Also this way a domU set up before should still be
accessible (just some tools might detect the unaligned setup).
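On the frontend side, reading the new key can be done along these lines (a
sketch that falls back to the logical size when the backend does not provide
the key; variable names are assumptions):
    unsigned int physical_sector_size;
    int err;

    err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
                       "physical-sector-size", "%u", &physical_sector_size);
    if (err != 1)
        physical_sector_size = sector_size;

    blk_queue_logical_block_size(rq, sector_size);
    blk_queue_physical_block_size(rq, physical_sector_size);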
[v2: Make xenbus write failure non-fatal]
[v3: Use xenbus_scanf instead of xenbus_gather]
[v4: Rebased against segment changes]
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
The max module parameter (by default 32) is the maximum number of
segments that the frontend will negotiate with the backend for indirect
descriptors. Higher value means more potential throughput but more
memory usage. The backend picks the minimum of the frontend and its
default backend value.
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Some devices only have support for MSI, not MSI-X. While MSI is more
limited, it still provides better performance than line-based interrupts.
Signed-off-by: Ramachandra Gajula <rama@fastorsystems.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Jiri writes:
please pull from
git://git.kernel.org/pub/scm/linux/kernel/git/jikos/linux-block.git for-jens
to receive one pktcdvd fix. It fixes a highly theoretical issue with using
pktcdvd to work with media that'd be larger than 2TB :) But it's a correct
fix and makes static checkers shut up about improperly cleaning the upper
32 bits.
Static checkers complain about widening the binary "not" operations here
because sectors are u64 and "(pd)->settings.size" is unsigned int.
It unintentionally clears the high 32 bits of the sector. This means
that the driver won't work for devices with over 2TB of space. Since
this is a DVD drive, we're unlikely to reach that limit, but we may as
well silence the warning.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
In some circumstances setting a 64-bit DMA mask can fail, as explained
in Documentation/DMA-API-HOWTO.txt. Use the recommended code sequence
to set a 32-bit DMA mask if setting a 64-bit DMA mask fails.
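The sequence described there boils down to roughly this (a sketch; the error
label is illustrative):
    if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) ||
        dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) {
        /* 64-bit DMA is not usable, fall back to a 32-bit mask */
        if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)) ||
            dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)))
            goto disable;
    }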
Reported-by: Chayan Biswas <Chayan.Biswas@sandisk.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
The index on the page must be set before it is inserted in the radix
tree. Otherwise there is a small race which can occur during lookup
where the page can be found with the incorrect index. This will trigger
the BUG_ON() in brd_lookup_page().
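The fix boils down to setting the index before the insertion, along these
lines (a simplified sketch of the insert path, not the literal patch):
    page->index = idx;
    if (radix_tree_insert(&brd->brd_pages, idx, page)) {
        /* someone else raced us; use the page already in the tree */
        __free_page(page);
        page = radix_tree_lookup(&brd->brd_pages, idx);
        BUG_ON(!page);
        BUG_ON(page->index != idx);
    }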
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reported-by: Chris Wedgwood <cw@f00f.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If the port number is missing from the device-tree the device gets named
xs` instead of xsa. This fixes the check for missing ids.
Tested on ml507 board.
Signed-off-by: Gernot Vormayr <gvormayr@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We copy the result to user if the command is completed from the
controller even if it completes with failure (non-zero) status.
A return status of < 0 indicates the command was not completed
by the controller. The user application may expect the error code
in the result field in case of failure.
Signed-off-by: Chayan Biswas <Chayan.Biswas@sandisk.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Bjorn Helgaas pointed out that a recent commit introduced a
use-after-free condition in an error path for rbd_add().
He correctly stated:
I think b536f69a3a "rbd: set up devices only for mapped images"
introduced a use-after-free error in rbd_add():
...
If rbd_dev_device_setup() returns an error, we call
rbd_dev_image_release(), which ultimately kfrees rbd_dev.
Then we call rbd_dev_destroy(), which references fields in
the already-freed rbd_dev struct before kfreeing it again.
The simple fix is to return the error code after the call to
rbd_dev_image_release().
Closer examination revealed that there's no need to clean up
rbd_opts in that function, so fix that too.
Update some other comments that have also become out of date.
Reported-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Whether rbd_client_create() successfully creates a new client or
not, it takes responsibility for getting the ceph_opts structure
it's passed destroyed. If successful, the structure becomes
associated with the created client; if not, rbd_client_create()
will destroy it.
Previously, rbd_get_client() would call ceph_destroy_options()
if rbd_get_client() failed, and that meant it got called twice.
That led to freeing various pointers more than once, which is never a
good idea.
This resolves:
http://tracker.ceph.com/issues/4559
Cc: stable@vger.kernel.org # 3.8+
Reported-by: Dan van der Ster <dan@vanderster.com>
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Cancelling an already cancelled command does not do anything, so check
the command context before cancelling it, continuing if it had already been
cancelled so we do not log the same problem every second if a device
stops responding.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
nvme_submit_flush_data() might overwrite the initialisation of the
return value with 0, so move the -ENOMEM setting close to the usage.
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
You need to have CAP_SYS_ADMIN to trigger this overflow but it makes the
static checkers complain so we should fix it. The worry is that
"length" comes from copy_from_user() so we need to check that "length +
offset" can't overflow.
I also changed the min_t() cast to be unsigned instead of signed. Now
that we cap "length" to INT_MAX it doesn't make a difference, but it's a
little easier for reviewers to know that large values aren't cast to
negative.
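The check amounts to something like this (a sketch of the idea, using the
names a typical map-user-pages helper would have):
    if (addr & 3)
        return ERR_PTR(-EINVAL);
    /* offset is at most PAGE_SIZE - 1, so this keeps offset + length in range */
    if (!length || length > INT_MAX - PAGE_SIZE)
        return ERR_PTR(-EINVAL);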
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
nvme_trans_modesel_get_mp() was defined with a unsigned return
type, but can return signed values.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vishal Verma <vishal.l.verma@linux.intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
version.h header inclusion is not necessary as detected by
checkversion.pl.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Acked-by: Vishal Verma <vishal.l.verma@linux.intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Pull Ceph fixes from Sage Weil:
"Yes, this is a much larger pull than I would like after -rc1. There
are a few things included:
- a few fixes for leaks and incorrect assertions
- a few patches fixing behavior when mapped images are resized
- handling for cloned/layered images that are flattened out from
underneath the client
The last bit was non-trivial, and there is some code movement and
associated cleanup mixed in. This was ready and was meant to go in
last week but I missed the boat on Friday. My only excuse is that I
was waiting for an all clear from the testing and there were many
other shiny things to distract me.
Strictly speaking, handling the flatten case isn't a regression and
could wait, so if you like we can try to pull the series apart, but
Alex and I would much prefer to have it all in as it is a case real
users will hit with 3.10."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client: (33 commits)
rbd: re-submit flattened write request (part 2)
rbd: re-submit write request for flattened clone
rbd: re-submit read request for flattened clone
rbd: detect when clone image is flattened
rbd: reference count parent requests
rbd: define parent image request routines
rbd: define rbd_dev_unparent()
rbd: don't release write request until necessary
rbd: get parent info on refresh
rbd: ignore zero-overlap parent
rbd: support reading parent page data for writes
rbd: fix parent request size assumption
libceph: init sent and completed when starting
rbd: kill rbd_img_request_get()
rbd: only set up watch for mapped images
rbd: set mapping read-only flag in rbd_add()
rbd: support reading parent page data
rbd: fix an incorrect assertion condition
rbd: define rbd_dev_v2_header_info()
rbd: get rid of trivial v1 header wrappers
...
Stacking drivers may append bvecs to existing bio's, resulting
in non-zero bi_idx conditions. This patch counts the loops of
bio_for_each_segment() rather than inheriting the bi_idx value
to pass as a segment count to the hardware submission routine.
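I.e. count the segments as they are walked instead of trusting bi_idx (a
sketch; the scatterlist setup shown is illustrative):
    struct bio_vec *bvec;
    unsigned int nents = 0;
    int i;

    bio_for_each_segment(bvec, bio, i) {
        sg_set_page(&sg[nents], bvec->bv_page,
                    bvec->bv_len, bvec->bv_offset);
        nents++;
    }
    /* pass nents, not a bi_idx-derived count, to the submission routine */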
Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
An open file-handle to one or more of the driver exported debugfs
nodes causes raciness in recursive removal during module unload;
sometimes a stale parent dentry is dereferenced when more than 1
pci device is present.
Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Add code to rbd_img_obj_exists_callback() to detect when a clone's
parent image has disappeared, and re-submit the original write
request in that case.
Kill off some redundant assertions.
This completes the resolution for:
http://tracker.ceph.com/issues/3763
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Add code to rbd_img_parent_read_full_callback() to detect when a
clone's parent image has disappeared, and re-submit the original
write request in that case. (See the previous commit for more
reasoning about why this is appropriate.)
Rename some variables in rbd_img_obj_parent_read_full_callback()
to match the convention used in the previous patch.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
If a clone image gets flattened while a parent read request is
underway, the original rbd object request needs to be resubmitted.
The reason is that by the time we get the response to the parent
read request, the data read from the parent may be out of date.
In other words, we could see this sequence of events:
rbd client parent image/osd
---------- ----------------
original object ENOENT;
issue parent read
respond to parent read
child image flattened
original image header refresh
<--- original object written independently here
parent read response received
Add code to rbd_img_parent_read_callback() to detect when a clone's
parent image has disappeared (as evidenced by its parent overlap
becoming 0), and re-submit the original read request in that case.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
A format 2 clone image can be the subject of a "flatten" operation,
during which all of its data gets "copied up" from its parent image,
leaving the image fully populated. Once this is complete, the
clone's association with the parent is abolished.
Since this can occur when a clone is mapped, we need to detect when
it has occurred and handle it accordingly. We know an image has
been flattened when we know it at one time had a parent, but we have
learned (via a "get_parent" object class method call) it no longer
has one.
There might be in-flight requests at the point we learn an image has
been flattened, so we can't simply clean up parent data structures
right away. Instead, we'll drop the initial parent reference when
the parent has disappeared (rather than when the image gets
destroyed), which will allow the last in-flight reference to clean
things up when it's complete.
We leverage the fact that a zero parent overlap renders an image
effectively unlayered. We set the overlap to 0 at the point we
detect the clone image has flattened, which allows the unlayered
behavior to take effect immediately, while keeping other parent
structures in place until in-flight requests complete.
This and the next few patches resolve:
http://tracker.ceph.com/issues/3763
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Keep a reference count for uses of the parent information for an rbd
device.
An initial reference is set in rbd_img_request_create() if the
target image has a parent (with non-zero overlap). Each image
request for an image with a non-zero parent overlap gets another
reference when it's created, and that reference is dropped when the
request is destroyed.
The initial reference is dropped when the image gets torn down.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Define rbd_parent_request_create() and rbd_parent_request_destroy()
to handle the creation of parent image requests submitted for
layered image objects. For simplicity, let rbd_img_request_put()
handle dropping the reference to any image request (parent or not),
and call whichever destructor is appropriate on the last put.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Define rbd_dev_unparent() to encapsulate cleaning up parent data
structures from a layered rbd image.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Previously when a layered write was going to involve a copyup
request, the original osd request was released before submitting the
parent full-object read. The osd request for the copyup would then
be allocated in rbd_img_obj_parent_read_full_callback().
Shortly we will be handling the event of mapped layered images
getting flattened, and when that occurs we need to resubmit the
original request. We therefore don't want to release the osd
request until we really know we're going to replace it--in the
callback function.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Get parent info for format 2 images on every refresh (rather than
just during the initial probe). This will be needed to detect the
disappearance of the parent image in the event a mapped image
becomes unlayered (i.e., flattened). Avoid leaking the previous
parent spec on the second and subsequent times this information is
requested by dropping the previous one (if any) before updating it.
(Also, extract the pool id into a local variable before assigning
it into the parent spec.)
Switch to using a non-zero parent overlap value rather than the
existence of a parent (a non-null parent_spec pointer) to determine
whether to mark a request layered. It will soon be possible for
a layered image to become unlayered while a request is in flight.
This means that the layered flag for an image request indicates that
there was a non-zero parent overlap at the time the image request
was created. The parent overlap can change thereafter, which may
lead to special handling at request submission or completion time.
This and the next several patches are related to:
http://tracker.ceph.com/issues/3763
NOTE:
If an error occurs while refreshing the parent info (i.e.,
requesting it after initial probe), the old parent info will
persist. This is not really correct, and is a scenario that needs
to be addressed. For now we'll assert that the failure mode is
unlikely, but the issue has been documented in tracker issue 5040.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
An rbd clone image that has an overlap with its parent of 0 is
effectively not a layered image at all. Detect this case and treat
such an image as non-layered. Issue a warning to be sure the user
knows what's going on.
This resolves:
http://tracker.ceph.com/issues/5028
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Currently, rbd_img_obj_parent_read_full() assumes the incoming
object request contains bio data. But if a layered image is part of
a multi-layer stack of images it will result in read requests of
page data to parent images.
This is handling the same kind of issue as was resolved by this
commit:
5b2ab72d rbd: support reading parent page data
This resolves:
http://tracker.ceph.com/issues/5027
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
The code that reads object data from the parent for a copyup on
write request currently assumes that the size of that request is the
size of a "full" object from the original target image.
That is not necessarily the case. The parent overlap could reduce
the request size below that. To fix that assumption we need to
record the number of pages in the copyup_pages array, for both an
image request and an object request. Rename a local variable in
rbd_img_obj_parent_read_full_callback() to reflect we're recording
the length of the parent read request, not the size of the target
object.
This resolves:
http://tracker.ceph.com/issues/5038
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Pull NVMe driver update from Matthew Wilcox:
"Lots of exciting new features in the NVM Express driver this time,
including support for emulating SCSI commands, discard support and the
ability to submit per-sector metadata with I/Os.
It's still mostly bugfixes though!"
* git://git.infradead.org/users/willy/linux-nvme: (27 commits)
NVMe: Use user defined admin ioctl timeout
NVMe: Simplify Firmware Activate code slightly
NVMe: Only clear the enable bit when disabling controller
NVMe: Wait for device to acknowledge shutdown
NVMe: Schedule timeout for sync commands
NVMe: Meta-data support in NVME_IOCTL_SUBMIT_IO
NVMe: Device specific stripe size handling
NVMe: Split non-mergeable bio requests
NVMe: Remove dead code in nvme_dev_add
NVMe: Check for NULL memory in nvme_dev_add
NVMe: Fix error clean-up on nvme_alloc_queue
NVMe: Free admin queue on request_irq error
NVMe: Add scsi unmap to SG_IO
NVMe: queue usage fixes in nvme-scsi
NVMe: Set TASK_INTERRUPTIBLE before processing queues
NVMe: Add a character device for each nvme device
NVMe: Fix endian-related problems in user I/O submission path
NVMe: Fix I/O cancellation status on big-endian machines
NVMe: Fix sparse warnings in scsi emulation
NVMe: Don't fail initialisation unnecessarily
...
Get rid of rbd_img_request_get(), because it isn't used, and maybe
won't ever be needed.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Any changes to parent images are immaterial to any mapped clone.
So there is no need to have a watch event registered on header
objects except for the header object of an image that is mapped.
In fact, a watch request is a write operation, and we may only
have read access to a parent image.
We can't set up the watch request until we know the name of the
header object though. So pass a flag to rbd_dev_image_probe() to
indicate whether this probe is for a mapping or for a parent image.
Change the second parameter to rbd_dev_header_watch_sync() to be
Boolean while we're at it.
This resolves:
http://tracker.ceph.com/issues/4941
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
The rbd_dev->mapping field for a parent image is not meaningful.
Since rbd_image_probe() is used both for images being mapped and
their parents, it doesn't make sense to set that flag in that
function.
So move the setting of the mapping.read_only flag out of
rbd_dev_image_probe() and into rbd_add() instead.
This resolves:
http://tracker.ceph.com/issues/4940
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Currently, rbd_img_parent_read() assumes the incoming object request
contains bio data. But if a layered image is part of a multi-layer
stack of images it will result in read requests of page data to parent
images.
Fortunately, it's not hard to add support for page data.
This resolves:
http://tracker.ceph.com/issues/4939
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
In rbd_img_obj_parent_read_full_callback() there is an assertion
intended to verify the size of the image request for a full parent
read was the size of the original request's target object. But the
assertion was looking at the parent image order rather than the
original one, and these values can differ.
Fix that.
This resolves:
http://tracker.ceph.com/issues/4938
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
This rearranges rbd_dev_v2_refresh() so it works more like
rbd_dev_v1_header_info(). While format 1 images need to read the
whole header object to get any information, format 2 can collect
almost all information selectively. So the one-time initialization
will remain in a separate function--based on rbd_dev_v2_probe().
Rename rbd_dev_v2_refresh() to be rbd_dev_v2_header_info(), and have
it call rbd_dev_v2_header_onetime() if it's being called for the
first time for the given rbd device.
Rename rbd_dev_v2_probe() to be rbd_dev_v2_header_onetime() and
remove the image size and snapshot context calls it held in
common with the refresh function.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Get rid of the trivial wrapper functions rbd_dev_v1_refresh() and
rbd_dev_v1_probe(), substituting rbd_dev_v1_header_read() calls
in their place.
Rename rbd_dev_v1_header_read() to be rbd_dev_v1_header_info(), to
be more generic (it will better reflect what happens with format 2
images).
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
An rbd_dev structure's fields are all zero-filled for an initial
probe, so there's no need to explicitly zero the parent_spec
and parent_overlap fields in rbd_dev_v1_probe(). Removing these
assignments makes rbd_dev_v1_probe() *almost* trivial.
Move the dout() message that announces discovery of an image into
rbd_dev_image_probe(), generalize to support images in either format
and only show it if an image is fully discovered.
This highlights that there are some unnecessary cleanups in the error
path for rbd_dev_v1_probe(), so they can be removed.
Now rbd_dev_v1_probe() *is* a trivial wrapper function.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Now that rbd_header_from_disk() only fills in one-time fields once,
we can extend it slightly so it releases the other fields before
replacing their values. This way there's no need to pass a
temporary buffer and then copy all the results in. Just use the rbd
device header structure in rbd_header_from_disk() so its values get
updated directly.
Note that this means we need to take the header semaphore at the
point we update things. So pass the rbd_dev rather than the address
of its header as its first argument to rbd_header_from_disk(), and
have it return an error code.
As a result, rbd_dev_v1_header_read() does all the work,
rbd_read_header() becomes unnecessary, and rbd_dev_v1_refresh()
becomes a very simple wrapper.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
This rearranges rbd_header_from_disk so that it:
- allocates the snapshot context right away
- keeps results in local variables, not changing the passed-in
header until it's known we'll succeed
- does initialization of set-once fields in a header only if
they have not already been set
The last point is moot at the moment, because rbd_read_header()
(the only caller) always supplies a zero-filled header buffer.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
The passed-in header structure is zeroed in rbd_header_from_disk().
Instead, have the caller do it. Note that there are two callers,
rbd_dev_v1_refresh() and rbd_dev_v1_probe(). The latter already has
a zeroed header structure so zeroing it isn't necessary there.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Defer setting the size and features fields of a mapped image until
after the Linux disk structure is set up. Set the capacity of the
disk after that.
Rearrange the definition of rbd_image_header, separating the fields
that are set only once from those that can be updated.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Pull block driver updates from Jens Axboe:
"It might look big in volume, but when categorized, not a lot of
drivers are touched. The pull request contains:
- mtip32xx fixes from Micron.
- A slew of drbd updates, this time in a nicer series.
- bcache, a flash/ssd caching framework from Kent.
- Fixes for cciss"
* 'for-3.10/drivers' of git://git.kernel.dk/linux-block: (66 commits)
bcache: Use bd_link_disk_holder()
bcache: Allocator cleanup/fixes
cciss: bug fix to prevent cciss from loading in kdump crash kernel
cciss: add cciss_allow_hpsa module parameter
drivers/block/mg_disk.c: add CONFIG_PM_SLEEP to suspend/resume functions
mtip32xx: Workaround for unaligned writes
bcache: Make sure blocksize isn't smaller than device blocksize
bcache: Fix merge_bvec_fn usage for when it modifies the bvm
bcache: Correctly check against BIO_MAX_PAGES
bcache: Hack around stuff that clones up to bi_max_vecs
bcache: Set ra_pages based on backing device's ra_pages
bcache: Take data offset from the bdev superblock.
mtip32xx: mtip32xx: Disable TRIM support
mtip32xx: fix a smatch warning
bcache: Disable broken btree fuzz tester
bcache: Fix a format string overflow
bcache: Fix a minor memory leak on device teardown
bcache: Documentation updates
bcache: Use WARN_ONCE() instead of __WARN()
bcache: Add missing #include <linux/prefetch.h>
...
Pull block core updates from Jens Axboe:
- Major bit is Kents prep work for immutable bio vecs.
- Stable candidate fix for a scheduling-while-atomic in the queue
bypass operation.
- Fix for the hang on exceeded rq->datalen 32-bit unsigned when merging
discard bios.
- Tejuns changes to convert the writeback thread pool to the generic
workqueue mechanism.
- Runtime PM framework, SCSI patches exists on top of these in James'
tree.
- A few random fixes.
* 'for-3.10/core' of git://git.kernel.dk/linux-block: (40 commits)
relay: move remove_buf_file inside relay_close_buf
partitions/efi.c: replace useless kzalloc's by kmalloc's
fs/block_dev.c: fix iov_shorten() criteria in blkdev_aio_read()
block: fix max discard sectors limit
blkcg: fix "scheduling while atomic" in blk_queue_bypass_start
Documentation: cfq-iosched: update documentation help for cfq tunables
writeback: expose the bdi_wq workqueue
writeback: replace custom worker pool implementation with unbound workqueue
writeback: remove unused bdi_pending_list
aoe: Fix unitialized var usage
bio-integrity: Add explicit field for owner of bip_buf
block: Add an explicit bio flag for bios that own their bvec
block: Add bio_alloc_pages()
block: Convert some code to bio_for_each_segment_all()
block: Add bio_for_each_segment_all()
bounce: Refactor __blk_queue_bounce to not use bi_io_vec
raid1: use bio_copy_data()
pktcdvd: Use bio_reset() in disabled code to kill bi_idx usage
pktcdvd: use bio_copy_data()
block: Add bio_copy_data()
...
Add definitions for the three Firmware Activate actions, and change the
SCSI translation code to construct the command into a temporary variable
instead of translating the endianness back-and-forth.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Reviewed-by: Vishal Verma <vishal.l.verma@linux.intel.com>
Many of the bits in the Controller Configuration register may only be
modified when the Enable bit is clear. Clearing them at the same time
as the Enable bit might be OK, but let's play it safe and only touch the
Enable bit.
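In other words, roughly the following (a sketch; register and bit names follow
the ones used elsewhere in the driver):
    u32 cc = readl(&dev->bar->cc);

    /* leave the other Controller Configuration fields untouched */
    writel(cc & ~NVME_CC_ENABLE, &dev->bar->cc);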
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
A recent update to the specification makes it clear that the host
is expected to wait for the device to acknowledge the Enable bit
transitioning to 0 as well as waiting for the device to acknowledge a
transition to 1.
Reported-by: Khosrow Panah <Khosrow.Panah@idt.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Hold off setting the read-only flag in rbd_add() for an image being
mapped until we have successfully probed the image. At that point
we know whether it's a snapshot mapping or not, so we can set the
read-only flag in that one place rather than doing so (for
snapshots) in rbd_dev_mapping_set(). To do this, pass a flag to the
image probe routine indicating whether we want a read-only mapping.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
This function is a duplicate of rbd_dev_mapping_clear(), and was
added by mistake.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Currently rbd_dev_mapping_set() looks up the snapshot id for the
snapshot whose name is found in the rbd device's spec structure.
That function gets called by rbd_dev_device_setup(), which is
called by rbd_add() *after* rbd_dev_image_probe(). If the
image probe succeeds, the rbd device's spec will already have
been updated to include names and ids for all fields.
Therefore there's no need to look up the snapshot id in
rbd_dev_mapping_set().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
The presence of the LAYERING bit in an rbd image's feature mask does
not guarantee the image actually has a parent image. Currently that
bit is set only when a clone (i.e., image with a parent) is created,
but it is (currently) not cleared if that clone gets flattened back
into a "normal" image. A "parent_id" query will leave the
parent_spec for the image being mapped a null pointer, but will not
return an error.
Currently, whenever an image with the LAYERED feature gets mapped, a
warning about the use of layered images gets printed. But we don't
want to do this for a flattened image, so print the warning only
if we find there is a parent spec after the probe.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
In blkif_queue_request blkfront iterates over the scatterlist in order
to set the segments of the request, and in blkif_completion blkfront
iterates over the raw request, which makes it hard to know the exact
source and destination memory positions.
This can be solved by allocating a scatterlist for each request, that
will be kept until the request is finished, allowing us to copy the
data back to the original memory without having to iterate over the
raw request.
Oracle-Bug: 16660413 - LARGE ASYNCHRONOUS READS APPEAR BROKEN ON 2.6.39-400
CC: stable@vger.kernel.org
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reported-and-Tested-by: Anne Milicia <anne.milicia@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Since rbd_update_mapping_size() is now a trivial wrapper, just open
code it in its two callers.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
When a mapped image changes size, we change the capacity recorded
for the Linux disk associated with it, in rbd_update_mapping_size().
That function is called in two places--the format 1 and format 2
refresh routines.
There is no need to set the capacity while holding the header
semaphore. Instead, do it in the common rbd_dev_refresh(), using
the logic that's already there to initiate disk revalidation.
Add handling in the request function, just in case a request
that exceeds the capacity of the device comes in (perhaps one
that was started before a refresh shrunk the device).
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
This commit:
d98df63e rbd: revalidate_disk upon rbd resize
instituted a call to revalidate_disk() to notify interested parties
that a mapped image has changed size. This works well, as long as
the rbd device doesn't map a snapshot.
A snapshot will never change size. However, the base image the
snapshot is associated with can, and it can do so while the snapshot
is mapped.
The problem is that the test for the size is looking at the size of
the base image, not the size of the mapped snapshot. This patch
corrects that.
Update the warning message shown in the event of error, and move
it into the callers.
This resolves:
http://tracker.ceph.com/issues/4911
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
When rbd_dev_v2_refresh() is called, the rbd device already has a
snapshot context associated with it. But that never gets freed,
the pointer just gets overwritten.
Fix this by dropping the rbd device's reference to the snapshot
context before overwriting the pointer.
Because ceph_put_snap_context() already handles a null pointer,
we don't need to check for that (for the probe case, where no
context has yet been assigned).
This resolves:
http://tracker.ceph.com/issues/4912
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Pull more vfs updates from Al Viro:
"A couple of fixes + getting rid of __blkdev_put() return value"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
proc: Use PDE attribute setting accessor functions
make blkdev_put() return void
block_device_operations->release() should return void
mtd_blktrans_ops->release() should return void
hfs: SMP race on directory close()
Allocate pending requests in smaller chunks instead of allocating them
all at the same time.
This change also removes the global array of pending_reqs, it is no
longer necessary.
Variables related to the grant mapping have been grouped into a struct
called "grant_page", this allows to allocate them in smaller chunks,
and also improves memory locality.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Tested-by: Sander Eikelenboom <linux@eikelenboom.it>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
The value passed is 0 in all but "it can never happen" cases (and those
only in a couple of drivers) *and* it would've been lost on the way
out anyway, even if something tried to pass something meaningful.
Just don't bother.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Pull Ceph changes from Alex Elder:
"This is a big pull.
Most of it is culmination of Alex's work to implement RBD image
layering, which is now complete (yay!).
There is also some work from Yan to fix i_mutex behavior surrounding
writes in cephfs, a sync write fix, a fix for RBD images that get
resized while they are mapped, and a few patches from me that resolve
annoying auth warnings and fix several bugs in the ceph auth code."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client: (254 commits)
rbd: fix image request leak on parent read
libceph: use slab cache for osd client requests
libceph: allocate ceph message data with a slab allocator
libceph: allocate ceph messages with a slab allocator
rbd: allocate image object names with a slab allocator
rbd: allocate object requests with a slab allocator
rbd: allocate name separate from obj_request
rbd: allocate image requests with a slab allocator
rbd: use binary search for snapshot lookup
rbd: clear EXISTS flag if mapped snapshot disappears
rbd: kill off the snapshot list
rbd: define rbd_snap_size() and rbd_snap_features()
rbd: use snap_id not index to look up snap info
rbd: look up snapshot name in names buffer
rbd: drop obj_request->version
rbd: drop rbd_obj_method_sync() version parameter
rbd: more version parameter removal
rbd: get rid of some version parameters
rbd: stop tracking header object version
rbd: snap names are pointer to constant data
...
I dived into lguest again, reworking the pagetable code so we can move
the switcher page: our fixmaps sometimes take more than 2MB now...
Cheers,
Rusty.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
iQIcBAABAgAGBQJRga7lAAoJENkgDmzRrbjx/yIQAKpqIBtxOJeYH3SY+Uoe7Cfp
toNYcpJEldvb0UcWN8M2cSZpHoxl1SUoq9djwcM29tcKa7EZAjHaGtb/Q1qMTDgv
+B3WAfiGU2pmXFxLAkbrlLNGnysy24JspqJQ5hcYV84EiBxQdZp+nCYgOphd+GMK
ww16vo9ya8jFjzt3GeRp/Heb3vEzV4Cp6BC3i0m8A3WNpEpbRb66pqXNk5o8ggJO
SxQOKSXmUM+0m+jKSul5xn3e2Ls2LOrZZ8/DIHA+gW66N4Zab7n2/j1Q9VRxb4lh
FqnR7KwgBX8OCh9IsBDqQYS7MohvMYge6eUdLtFrq84jvMleMEhrC8q9v2tucFUb
5t18CLwvyK7Gdg6UCKiZ7YSPcuURAILO16al9bh5IseeBDsuX+43VsvQoBmFn9k6
cLOVTZ6BlOmahK5PyRYFSvLa9Rxzr/05Mr7oYq9UgshD9io78dnqczFYIORF53rW
zD7C4HuTZfYJFfNd0wAJ0RfVXnf8QvDlMdo7zPC26DSXNWqj8OexCY0qqSWUB+2F
vcfJP6NkV4fZB8aawWIFUVwc64yqtt2uPVLa7ATZWqk16PgKrchGewmw3tiEwOgu
1l7xgffTRRUIJsqaCZoXdgw3yezcKRjuUBcOxL09lDAAhc+NxWNvzZBsKp66DwDk
yZQKn0OdXnuf0CeEOfFf
=1tYL
-----END PGP SIGNATURE-----
Merge tag 'virtio-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux
Pull virtio & lguest updates from Rusty Russell:
"Lots of virtio work which wasn't quite ready for last merge window.
Plus I dived into lguest again, reworking the pagetable code so we can
move the switcher page: our fixmaps sometimes take more than 2MB now..."
Ugh. Annoying conflicts with the tcm_vhost -> vhost_scsi rename.
Hopefully correctly resolved.
* tag 'virtio-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux: (57 commits)
caif_virtio: Remove bouncing email addresses
lguest: improve code readability in lg_cpu_start.
virtio-net: fill only rx queues which are being used
lguest: map Switcher below fixmap.
lguest: cache last cpu we ran on.
lguest: map Switcher text whenever we allocate a new pagetable.
lguest: don't share Switcher PTE pages between guests.
lguest: expost switcher_pages array (as lg_switcher_pages).
lguest: extract shadow PTE walking / allocating.
lguest: make check_gpte et. al return bool.
lguest: assume Switcher text is a single page.
lguest: rename switcher_page to switcher_pages.
lguest: remove RESERVE_MEM constant.
lguest: check vaddr not pgd for Switcher protection.
lguest: prepare to make SWITCHER_ADDR a variable.
virtio: console: replace EMFILE with EBUSY for already-open port
virtio-scsi: reset virtqueue affinity when doing cpu hotplug
virtio-scsi: introduce multiqueue support
virtio-scsi: push vq lock/unlock into virtscsi_vq_done
virtio-scsi: pass struct virtio_scsi to virtqueue completion function
...
Schedule a timeout on sync commands in case the command times out and
the device is not being polled for timeouts. This prevents device removal
from hanging forever if the device has stopped responding.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
This adds support for namespaces with separate meta-data formats in the
submit io ioctl. The meta-data buffer has to be contiguous, so such
a buffer is allocated and the mapped user pages are copied to/from this
buffer for write/read commands.
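To illustrate the bounce-buffer idea, a minimal standalone sketch (not
the driver code; in the driver the pages come from mapping the user
buffer and the copy direction depends on read vs. write):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    /* Gather scattered per-page data into one contiguous buffer, as
     * needed when the hardware requires a single contiguous region. */
    static void *gather_pages(char *pages[], int npages, size_t len)
    {
        char *buf = malloc(len);
        size_t copied = 0;
        int i;

        if (!buf)
            return NULL;
        for (i = 0; i < npages && copied < len; i++) {
            size_t n = len - copied < PAGE_SIZE ? len - copied : PAGE_SIZE;

            memcpy(buf + copied, pages[i], n);
            copied += n;
        }
        return buf;
    }

    int main(void)
    {
        char p0[PAGE_SIZE] = "first page ", p1[PAGE_SIZE] = "second page";
        char *pages[] = { p0, p1 };
        char *meta = gather_pages(pages, 2, sizeof(p0) + sizeof(p1));

        if (meta)
            printf("meta starts with: %.11s\n", meta);
        free(meta);
        return 0;
    }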
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
We have an nvme device that has a concept of a stripe size. IO requests
that do not transfer data crossing a stripe boundary has greater
performance compared to IO that does cross it. This patch sets the
stripe size for the device if the device and vendor ids match one with
this feature and splits IO requests that cross the stripe boundary.
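To illustrate the boundary math, a minimal standalone sketch (chunk_len()
is a hypothetical helper, not the driver code): each piece of a request
is limited to the distance from its starting offset to the next stripe
boundary.

    #include <stdint.h>
    #include <stdio.h>

    /* Bytes that can be issued from 'offset' without crossing a stripe
     * boundary of size 'stripe_size'. */
    static uint64_t chunk_len(uint64_t offset, uint64_t len, uint64_t stripe_size)
    {
        uint64_t to_boundary = stripe_size - (offset % stripe_size);

        return len < to_boundary ? len : to_boundary;
    }

    int main(void)
    {
        uint64_t off = 120 * 1024, len = 32 * 1024, stripe = 128 * 1024;

        /* Issue the request in pieces, none crossing a stripe boundary. */
        while (len) {
            uint64_t n = chunk_len(off, len, stripe);

            printf("submit %llu bytes at offset %llu\n",
                   (unsigned long long)n, (unsigned long long)off);
            off += n;
            len -= n;
        }
        return 0;
    }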
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
It is possible a bio request can not be submitted as a single NVMe IO
command if the bio_vec is not mergeable with the NVMe PRP alignment
constraints. This condition was handled by submitting an IO for the
mergeable portion then submitting a follow on IO for the remaining data
after the previous IO completes. The remainder to be sent was tracked
by manipulating the bio->bi_idx and bio->bi_sector. This patch splits
the request as many times as necessary and submits the bios together.
Since submitting the bio may cause it to be requeued on split,
nvme_resubmit_bios had to be modified to remove the wait queue when
the bio list is empty prior to submitting the bio since a split would
have added the wait queue a second time, corrupting the wait queue head
task list.
There are a few other benefits from doing this: it fixes a potential
issue with the previous handling of a non-mergeable bio as the requeuing
method could use an unlocked nvme_queue if the callback isn't
invoked on the queue's associated cpu; it will be possible to retry a
failed bio if desired at some later time since it does not manipulate
the original bio; the bio integrity extensions require the bio to be in
its original condition for the checks to work correctly if we implement
the end-to-end data protection in the future.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
There is no situation that could occur where we could error out of this
function and require cleaning up allocated namespaces.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
The nvme_queue's depth is not set if we fail to allocate submission queue
entries, which was being used to determine how much coherent memory to
free on error. Use the depth variable instead.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Fixes a potential memory leak if requesting the admin queue irq fails.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Translates a scsi unmap request from SG_IO ioctl to NVMe
data-set-management deallocate.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Acked-by: Vishal Verma <vishal.l.verma@linux.intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Fixes nvme queue usages in scsi-to-nvme translation code to not get
a queue more often than it is being put, and not use the queue in an
unsafe way without it being locked.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Acked-by: Vishal Verma <vishal.l.verma@linux.intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
When a read for a layered image object finds the target object
doesn't exist, a read image request for the parent image is created
and submitted. When that completes, the callback routine was
not releasing that parent image request. Fix that.
The slab allocation stuff just added has greatly simplified the
search for the source of this memory leak.
This resolves:
http://tracker.ceph.com/issues/4803
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
The names of objects used for image object requests are always fixed
size. So create a slab cache to manage them. Define a new function
rbd_segment_name_free() to match rbd_segment_name() (which is what
supplies the dynamically-allocated name buffer).
This is part of:
http://tracker.ceph.com/issues/3926
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Create a slab cache to manage rbd_obj_request allocation. We aren't
using a constructor, and we'll zero-fill object request structures
when they're allocated.
This is part of:
http://tracker.ceph.com/issues/3926
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
The next patch will define a slab allocator for object requests.
To use that we'll need to allocate the name of an object separate
from the request structure itself.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Create a slab cache to manage rbd_img_request allocation. Nothing
too fancy at this point--we'll still initialize everything at
allocation time (no constructor).
This is part of:
http://tracker.ceph.com/issues/3926
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Use bsearch(3) to make snapshot lookup by id more efficient. (There
could be thousands of snapshots, and conceivably many more.)
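A standalone sketch of the technique (for illustration the ids are kept
sorted in ascending order; the actual layout and ordering of the
snapshot context in the driver may differ):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Compare a snapshot id key against one element of a sorted array. */
    static int snapid_cmp(const void *key, const void *elt)
    {
        uint64_t k = *(const uint64_t *)key;
        uint64_t e = *(const uint64_t *)elt;

        return (k > e) - (k < e);
    }

    int main(void)
    {
        uint64_t snap_ids[] = { 4, 9, 17, 130, 131, 245 };    /* sorted */
        uint64_t want = 130;
        uint64_t *found;

        found = bsearch(&want, snap_ids,
                        sizeof(snap_ids) / sizeof(snap_ids[0]),
                        sizeof(snap_ids[0]), snapid_cmp);
        printf("snap id %llu %s\n", (unsigned long long)want,
               found ? "found" : "not found");
        return 0;
    }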
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
This functionality inadvertently disappeared in the last patch.
Image snapshots can get removed at just about any time. In
particular it can disappear even if it is in use by an rbd
client as a mapped image.
The rbd client deals with such a disappearance by responding to new
requests with ENXIO. This is implemented by each rbd device
maintaining an EXISTS flag, which is normally set but cleared if a
snapshot disappears.
This patch (re-)implements the clearing of that flag.
Whenever mapped image header information is refreshed, if the
mapping is for a snapshot, verify the mapped snapshot is still
present in the updated snapshot context. If it is not, clear the
flag.
It is not necessary to check this in the initial probe, because the
probe will not succeed if the snapshot doesn't exist.
This resolves:
http://tracker.ceph.com/issues/4880
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
We no longer use the snapshot list for anything. When we need to
look up a snapshot name, id, size, or feature mask, we just do it
directly rather than relying on this list being updated with every
refresh. The main reason it existed was for the benefit of the
device/sysfs entries that previously were associated with snapshots.
So get rid of the snapshot list, and struct rbd_snap, and the
hundreds of lines of code that supported them.
This resolves:
http://tracker.ceph.com/issues/4868
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
This patch defines a handful of new functions that will allow
us to get rid of the rbd device structure's list of snapshots.
Define rbd_snap_id_by_name() to look up a snapshot id given its
name. This is efficient for format 1 images but not for format 2.
Fortunately it only gets called at mapping time so it's not that
critical.
Use rbd_snap_id_by_name() to find out the id for a snapshot getting
mapped, and pass that id to new functions rbd_snap_size() and
rbd_snap_features() to look up information about a given snapshot's
size and feature mask given its snapshot id. All this gets done
in rbd_dev_mapping_set().
As a result, snap_by_name() is no longer needed, so get rid of it.
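The shape of the lookups can be sketched in isolation (hypothetical
arrays and helpers, not the driver code; for format 2 images the name
and size actually come from osd method calls rather than a local table):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define NO_SNAP UINT64_MAX

    static const char *snap_names[] = { "mon", "tue", "wed" };
    static const uint64_t snap_ids[]   = { 11, 12, 13 };
    static const uint64_t snap_sizes[] = { 1 << 20, 2 << 20, 4 << 20 };
    #define NUM_SNAPS (sizeof(snap_ids) / sizeof(snap_ids[0]))

    /* Map a snapshot name to its id (linear scan, as for format 1). */
    static uint64_t snap_id_by_name(const char *name)
    {
        size_t i;

        for (i = 0; i < NUM_SNAPS; i++)
            if (!strcmp(snap_names[i], name))
                return snap_ids[i];
        return NO_SNAP;
    }

    /* Map a snapshot id to its size. */
    static int snap_size(uint64_t snap_id, uint64_t *size)
    {
        size_t i;

        for (i = 0; i < NUM_SNAPS; i++)
            if (snap_ids[i] == snap_id) {
                *size = snap_sizes[i];
                return 0;
            }
        return -1;
    }

    int main(void)
    {
        uint64_t id = snap_id_by_name("tue");
        uint64_t size;

        if (id != NO_SNAP && !snap_size(id, &size))
            printf("snap id %llu, size %llu\n",
                   (unsigned long long)id, (unsigned long long)size);
        return 0;
    }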
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
In order to align with what was needed for format 1 rbd images,
rbd_dev_v2_snap_info() was set up to take as argument an index into
the array of snapshot ids in a rbd device's snapshot context.
This switches that around, so we pass the snapshot id instead.
In doing this, rbd_snap_name() now returns a dynamically-allocated
string rather than a fixed one, so there's no need to make a
duplicate in its caller, rbd_dev_spec_update().
This means the following functions take a snapshot id where they
previously used an index value:
rbd_dev_snap_info()
rbd_dev_v1_snap_info()
rbd_dev_v2_snap_info()
A new function, rbd_dev_snap_index(), determines the snap index for
format 1 images and uses it to look up the name.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Rather than scanning the list of snapshot structures for it, scan
the snapshot context buffer containing snapshot names in order to
determine for a format 1 image the name associated with a given
snapshot id.
Pull out the part of rbd_dev_v1_snap_info() that does this scan into
a new function, _rbd_dev_v1_snap_name(). Have that function return
a dynamically-allocated copy of the name, and don't duplicate it in
rbd_dev_v1_snap_info().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Nothing ever uses the version field maintained in the object request
structure any more, so get rid of it.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Only NULL is passed as the version argument to rbd_obj_method_sync(),
so get rid of it.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Continued from the last patch, more parameters that can go away
because we no longer have a need to track object versions.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Several functions in rbd have parameters meant to allow the version
of an object to be passed in or out. The purpose of those was to
allow the version of a header object to be maintained, but we no
longer do that. As a result, these parameters are never actually
needed or used, so get rid of them.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
The rbd code takes care to maintain the version of the header
object. This was done in hopes of using it to detect a change in
the object between reading it and setting up a watch request to
be notified of changes.
The mechanism was never fully implemented, however. And we now
avoid the original problem by setting up the watch request before
ever reading the content of the header.
The osd doesn't interpret the object version supplied with a WATCH
osd op, nor does it use the version supplied with a NOTIFY_ACK op
(we can just supply 0 for both). There is therefore no need to
maintain the header's object version any more, so stop doing so.
We'll be able to simplify some more rbd code in the next few patches
as a result of this.
This resolves:
http://tracker.ceph.com/issues/3952
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Make explicit that snapshot names don't change by making functions
return and take parameters that point to const-qualified data.
This resolves:
http://tracker.ceph.com/issues/4867
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Whenever a header object event causes a mapped rbd image to refresh
its header information, revalidate_disk() is being called. This was
done in rbd_dev_refresh() outside the control mutex in order to
avoid a lock inversion. Although an event like this *might*
indicate the image has changed size, most of the time it does not.
Record the image size before and after the refresh, and only
call revalidate_disk() if it changes.
This resolves:
http://tracker.ceph.com/issues/4867
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
A warning gets spewed for any image being probed, including parent
images. Set up a condition such that the warning message only gets
printed for the image being mapped, not any of its parents.
Also, I didn't like the way the warning ended up being so long.
Make it a terse warning instead. People experimenting with layering
will know what the message means.
This is part of:
http://tracker.ceph.com/issues/4867
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Now that we have a library routine to create snap contexts, use it.
This is part of:
http://tracker.ceph.com/issues/4857
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Stop setting up Linux devices during the image probe operation.
Instead, set up the devices as a separate step after the image
probe, in rbd_add().
A consequence of this is that only mapped images get devices
assigned to them, which is pretty sweet.
This resolves:
http://tracker.ceph.com/issues/4774
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Currently an rbd_device structure gets destroyed from the release
routine for the device embedded within it. Stop doing that, instead
calling rbd_dev_image_release() right after rbd_bus_del_dev()
wherever the latter is called.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Define a new function rbd_dev_unprobe() which undoes state changes
that occur from calling rbd_dev_v1_probe() or rbd_dev_v2_probe().
Note that this is a superset of rbd_header_free(), which is now
getting removed (it seems to have been used improperly anyway).
Flesh out rbd_dev_image_release() so it undoes exactly what
rbd_dev_image_probe() does.
This means that:
- rbd_dev_device_release() gets called when the last device
reference gets dropped;
- that undoes everything done by the rbd_dev_device_setup() call
at the end of rbd_dev_image_probe() (and nothing more), ending
by calling rbd_dev_image_release(); and
- rbd_dev_image_release() undoes everything else done by
rbd_dev_image_probe() (and this includes a call to
rbd_dev_unprobe()).
This means the image and device portions of an rbd device are fairly
cleanly separated now, so error paths should be a little easier to
verify than they used to be.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Rename rbd_dev_probe_finish() to be rbd_dev_device_setup(). Its
purpose is to set up the Linux side of an rbd device mapping.
Rename rbd_dev_release() to be rbd_dev_device_release(), making
it more obvious it serves as the inverse of the setup function
(or it will).
Encapsulate some of what was done in rbd_dev_release() into a new
function rbd_dev_image_release(), which serves as the inverse of
setting up the ceph side of the mapped rbd image.
Define a new helper rbd_dev_clear_mapping() to simply zero out the
fields of a mapping structure--the inverse of rbd_dev_set_mapping().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Drop the module reference at the end of rbd_remove() for symmetry
with adding a reference at the top of rbd_add().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Move setting up the watch request for an image so it's done in
rbd_dev_image_probe() rather than rbd_dev_probe_finish(). Move
it all the way up to before doing the initial probe. This avoids
a potential race condition, in which we get (and use) the initial
snapshot context for an image, and it gets changed between that
time and the time we get the watch set up.
This resolves:
http://tracker.ceph.com/issues/3871
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
When a format 2 image is refreshed, code is in place to verify that
the object order never changes from what it was originally. This
relies on the fact that the refresh will occur *after* an initial
load of information about the image.
An upcoming patch makes it possible for the refresh to occur first,
so we can no longer make this order check. The order really can't
ever change anyway--this was just a sanity check. So get rid of it.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Currently, a watch on an rbd device header object gets torn down
when its final Linux device reference gets dropped. Instead, tear
it down when removing the device. If an error occurs cleaning up
the watch event when unmapping, abort the unmap request.
All images (including parents) still get watch requests set up, so
tear these down also, in rbd_dev_remove_parent(). For now, ignore
any errors that occur in this case.
Get rid of local variable "rc" in rbd_remove(); use "ret" instead
(they both somehow ended up defined in the function and only one is
needed).
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Define a new function rbd_header_name(), which allocates and formats
the name of the header object for the rbd device.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Move a block of initialization related to the "ceph-side" of an rbd
image out of rbd_dev_probe_finish() and into rbd_dev_image_probe().
Add appropriate error handling to clean things up in the event any
of these new functions return an error.
We know that rbd_dev_snaps_update(), rbd_dev_spec_update(), and
rbd_dev_probe_parent() all clean up after themselves before they
return an error, so no special cleanup is required except when an
earlier call succeeds. Since rbd_dev_spec_update() only updates the
spec field (whose cleanup will be handled by dropping the last
reference to the spec) there is no cleanup action associated with
that.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Probe for a parent device earlier in rbd_dev_probe_finish(), before
starting to set up the Linux side of the rbd device.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
When an error occurs while finishing probing a device it is assumed
that parent devices get cleaned up when deleting a device. They
don't. Add a call to clean them up. Note that this means the
parent spec will already be cleaned up so it doesn't have to be
in one of the rbd_add() error paths.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
In certain error paths, it is possible for an rbd device to have a
parent spec but no parent rbd_dev. In rbd_dev_remove_parent() use
the parent field rather than parent_spec in determining whether to
try to remove any parent devices. Use assertions to indicate that
any non-null parent pointer has parent_spec associated with it.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
The function __rbd_remove() is used in two spots, and it's fairly
simple. It combines cleanup of part of the ceph-side state as well
as cleaning up the Linux-side state. Just open code it in the two
callers and eliminate the function.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Set the mapping size and features earlier in rbd_dev_probe_finish().
Define rbd_dev_mapping_clear() as an inverse for setting those
fields, and use it both in error handling in rbd_dev_image_probe()
and in the final cleanup in rbd_dev_release(). Change the name
of rbd_dev_set_mapping() to rbd_dev_mapping_set().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Encapsulate the code that removes an rbd device's parent images into
a new function, rbd_dev_remove_parent().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Encapsulate the code that probes for an rbd device's parent images
into a new function, rbd_dev_probe_parent().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Don't set the disk capacity until right before we announce the
device as available for use.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Hold off setting the EXISTS rbd device flag until just before we
announce the disk as available for use. There's no point in doing
so any earlier than that, and at that point the device truly is
fully set up and ready to use.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
This just tweaks a few things in the routines that implement
rbd sysfs files.
All of the entries for an rbd device in /sys/bus/rbd/devices/<id>/
will represent information whose valid values are known by the time
they are accessible.
Right now we get the size of the mapped image by a call to
get_capacity(). There's no need to do this, because that will
return what we last set the capacity to, which is just the size
recorded for the mapping. So just show that value instead.
We also get this under protection of the header semaphore, in order
to provide a precisely correct value. This isn't really necessary;
these files are really informational only and it's not necessary to
be so careful.
Finally, print a special value in case the major device number is
not recorded. Right now that won't matter much but soon the parent
images won't have devices associated with them.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
When a snapshot context update occurs, rbd_update_mapping_size() is
called to set the capacity of the disk to record the updated
size of the image in case it has changed.
There's a bug though. The mapping size is in units of *bytes*. The
code that updates the mapping size field is assigning a value that
has been scaled down to *sectors*.
Fix that. Also, check to see if the size has actually changed, and
don't bother updating things (specifically, calling set_capacity())
if it has not.
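The unit mismatch is easy to see in a standalone sketch (illustrative
only; in the kernel, set_capacity() takes a sector count while the rbd
mapping size is kept in bytes):

    #include <stdint.h>
    #include <stdio.h>

    #define SECTOR_SHIFT 9    /* 512-byte sectors */

    static uint64_t mapping_size;        /* bytes */
    static uint64_t capacity_sectors;    /* what set_capacity() records */

    /* Keep the mapping size in bytes, and only touch the sector-based
     * capacity when the size actually changed. */
    static void update_mapping_size(uint64_t new_size)
    {
        if (new_size == mapping_size)
            return;                                     /* nothing to do */

        mapping_size = new_size;                        /* bytes */
        capacity_sectors = new_size >> SECTOR_SHIFT;    /* sectors */
    }

    int main(void)
    {
        update_mapping_size(10ULL << 30);    /* 10 GiB image */
        printf("mapping %llu bytes, capacity %llu sectors\n",
               (unsigned long long)mapping_size,
               (unsigned long long)capacity_sectors);
        return 0;
    }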
This resolves:
http://tracker.ceph.com/issues/4833
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Fairly straightforward refactoring of rbd_dev_probe_update_spec().
The name is changed to rbd_dev_spec_update().
Rearrange it so nothing gets assigned to the spec until all of the
names have been successfully acquired.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Rename rbd_dev_probe() to be rbd_dev_image_probe(). Its purpose
will eventually be to probe for the existence of a valid rbd image
for the rbd device--focusing only on the ceph side and not the Linux
device side of initialization.
For now the two "sides" are not fully separated, and this function
is still the entry point for initializing the full rbd device.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Currently, rbd_dev_destroy() does more than just the inverse of what
rbd_dev_create() does. Stop doing that, and move the two extra
things it does into the three call sites.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Encapsulate the creation of a snapshot context for rbd in a new
function rbd_snap_context_create(). Define rbd wrappers for getting
and dropping references to them once they're created.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Change some calls to WARN_ON() so they use rbd_warn() instead, so we
get consistent messaging. A few remain but they can probably just
go away eventually.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
This commit added fetching of fancy striping parameters:
09186ddb rbd: get and check striping parameters
They are almost unused, but the two fields storing the information
really belonged in the rbd_image_header structure.
This patch moves them there.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Make the names and image id in an rbd_spec be pointers to constant
data. This required the use of a local variable to hold the
snapshot name in rbd_add_parse_args() to avoid a warning.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Set the rbd spec's snapshot id for an image getting mapped in
rbd_dev_probe_update_spec() rather than rbd_dev_set_mapping().
This is the more logical place for that to happen (even though
it means we might look up the snapshot by name twice).
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
A function called snap_by_name() ought to just look up a snapshot by
name. It does that, but then it assigns some stuff to the rbd
device structure as well.
Change the function to do just the lookup, and have the caller do
the assignments that follow.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
If a format 2 image id is found for an image being mapped, but the
subsequent probe of the image fails, rbd_dev_probe() quits without
freeing the image id. Fix that.
Also drop a redundant hunk of code in rbd_dev_image_id().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Currently, rbd_dev_probe() assumes that any error returned by
rbd_dev_image_id() is most likely -ENOENT, and responds by
calling the format 1 probe routine, rbd_dev_v1_probe(). Then,
at the top of rbd_dev_v1_probe(), an empty string is allocated
for the image id.
This is sort of unbalanced. Fix this by having rbd_dev_image_id()
look for -ENOENT from its "get_id" method call. If that is seen,
have it allocate the empty string there rather than depending on
rbd_dev_v1_probe() to do it.
Given that this is effectively defining the format of the image,
set rbd_dev->image_format inside rbd_dev_image_id() rather than in
the format-specific probe routines.
Also drop a redundant hunk of code in rbd_dev_image_id().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
I found during some failure injection testing that the call to
rbd_free_disk() in the error path of rbd_dev_probe_finish() was
dropping an extra reference to the disk queue. The problem
occurred when put_disk tried to drop a reference to the disk's
queue. A call to blk_cleanup_queue() just prior to that will have
also dropped a reference to the queue.
The problem is that the reference dropped by put_disk() is assumed
to have been taken by add_disk(). Our code has error paths that can
occur after the disk and its queue are initialized, but before the
call to add_disk(), and in those paths we won't have that extra
reference.
The fix is easy though. In rbd_free_disk() we're already checking
the disk's GENHD_FL_UP flag. That flag is an indication that
add_disk() has been called, so just call blk_cleanup_queue()
conditional on that flag being set.
This resolves:
http://tracker.ceph.com/issues/4800
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Now that rbd_obj_method_sync() returns the number of bytes
returned by the method call, that value should be used by
callers to ensure we don't overrun the valid portion of the
buffer.
Fix the two spots that remained that weren't doing that,
rbd_dev_image_name() and rbd_dev_v2_snap_name().
Rearrange the error path slightly in rbd_dev_v2_snap_name().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
When the snapshot context for an rbd device gets updated (or the
initial one is recorded) a list of snapshot structures is created
to represent them, one entry per snapshot. Each entry includes a
dynamically-allocated copy of the snapshot name.
Currently the name is allocated in rbd_snap_create(), as a duplicate
of the passed-in name.
For format 1 images, the snapshot name provided is just a pointer to
an existing name. But for format 2 images, the passed-in name is
already dynamically allocated, and in the process of duplicating
it here we are leaking the passed-in name.
Fix this by dynamically allocating the name for format 1 snapshots
also, and then stop allocating a duplicate in rbd_snap_create().
Change rbd_dev_v1_snap_info() so none of its parameters is
side-effected unless it's going to return success.
This is part of:
http://tracker.ceph.com/issues/4803
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Rename __rbd_add_snap_dev() to be rbd_snap_create(). We no longer
have devices for non-mapped snapshots, and we're not actually
"adding" it to the list in this function, just creating it.
Rename rbd_remove_snap_dev() to be rbd_snap_destroy() for reasons
similar to the above. Stop having this function delete the snapshot
from its list (to be symmetrical with its create counterpart) and do
that in the caller instead.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Change rbd_dev_v2_snap_info() so it only ever sets values of the
size and features parameters if looking up the snapshot name was
successful.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Only one of the two callers of _rbd_dev_v2_snap_size() needs the
order value returned. So make that an optional argument--a null
pointer if the caller doesn't need it.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
When an rbd image is initially mapped, its snapshot context is
collected, and then a list of snapshot entries representing the
snapshots in that context is created. The list is created using
rbd_dev_snaps_update(). (This function also supports updating an
existing snapshot list based on a new snapshot context.)
If an error occurs, updating the list is aborted, and the list is
currently left as-is, in an inconsistent state. At that point,
there may be a partially-constructed list, but the calling functions
(rbd_dev_probe_finish() from rbd_dev_probe() from rbd_add()) never
clean them up. So this constitutes a leak.
A snapshot list that is inconsistent with the current snapshot
context is of no use, and might even be actively bad. So rather
than just having the caller clean it up, have rbd_dev_snaps_update()
just clear out the entire snapshot list in the event an error
occurs.
The other place rbd_dev_snaps_update() is used is when a refresh is
triggered, either because of a watch callback or via a write to the
/sys/bus/rbd/devices/<id>/refresh interface. An error while
updating the snapshots has no substantive effect in either of those
cases, but one of them issues a warning. Move that warning to the
common rbd_dev_refresh() function so it gets issued regardless of
how it got initiated.
This is part of:
http://tracker.ceph.com/issues/4803
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
When an rbd image gets mapped a device entry gets created for it
under /sys/bus/rbd/devices/<id>/. Inside that directory there are
sysfs files that contain information about the image: its size,
feature bits, major device number, and so on.
Additionally, if that image has any snapshots, a device entry gets
created for each of those as a "child" of the mapped device. Each
of these is a subdirectory of the mapped device, and each directory
contains a few files with information about the snapshot (its
snapshot id, size, and feature mask).
There is no clear benefit to having those device entries for the
snapshots. The information provided via sysfs is of little real
value--and all of it is available via rbd CLI commands. If we
still wanted to see the kernel's view of this information it could
be done much more simply by including it in a single sysfs file for
the mapped image.
But there *is* a clear cost to supporting them. Every time a snapshot
context changes, these entries need to be updated (deleted snapshots
removed, new snapshots created). The rbd driver is notified of
changes to the snapshot context via callbacks from an osd, and care
must be taken to coordinate removal of snapshot data structures
with the possibility of one of these notifications occurring.
Things would be considerably simpler if we just didn't have to
maintain device entries for the snapshots.
So get rid of them.
The ability to map a snapshot of an rbd image will remain; the only
thing lost will be the ability to query these sysfs directories for
information about snapshots of mapped images.
This resolves:
http://tracker.ceph.com/issues/4796
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Now that we have most everything in place to support layered rbd
images, enable support for them in the kernel client. Issue a
warning to the log that the support is considered experimental
whenever a format 2 layered image is mapped.
Note that we also have to claim to support the STRIPINGV2 feature,
due to a mistake in the way the rbd CLI set up those flags. This
feature can work if it has the right parameters, and safeguards
have been put in place to reject those images that do not have
compatible parameters.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
If an rbd format 2 image indicates it supports the STRIPINGV2
feature we need to find out its stripe unit and stripe count in
order to know whether we can use it. We don't yet support fancy
striping fully, but if the default parameters are used the behavior
is indistinguishable from non-fancy striping.
This is necessary because some images require the STRIPINGV2 feature
even if they use the default parameters. (Which is to say the feature
bit was erroneously set even if the feature was not used.)
This resolves:
http://tracker.ceph.com/issues/4709
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Callers of rbd_obj_method_sync() don't know how many bytes of data
got returned by the class method call. As a result, they have been
assuming enough got returned to decode whatever was expected.
This isn't safe. We know how many bytes got transferred, so have
rbd_obj_method_sync() return that amount (rather than just 0) if
the call is successful.
Change all callers to use this return value to ensure decoding of
the results is done safely.
On the other hand, most callers of rbd_obj_method_sync() only
indicate success or failure, so all of *their* callers can simply
test for non-zero result.
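The pattern the callers follow can be sketched in isolation (a
hypothetical decode of a little-endian u64; the real code uses the ceph
decode helpers): check that the returned byte count covers what is
about to be decoded before touching the buffer.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Decode a u64 from a reply buffer, but only if the number of
     * bytes actually returned (ret) is large enough. */
    static int decode_u64(const void *buf, long ret, uint64_t *val)
    {
        uint64_t le;

        if (ret < 0)
            return (int)ret;        /* the method call itself failed */
        if ((size_t)ret < sizeof(le))
            return -1;              /* short reply: don't overrun it */

        memcpy(&le, buf, sizeof(le));
        *val = le;                  /* little-endian host assumed here */
        return 0;
    }

    int main(void)
    {
        unsigned char reply[8] = { 0x2a, 0, 0, 0, 0, 0, 0, 0 };
        uint64_t val;

        if (!decode_u64(reply, sizeof(reply), &val))
            printf("decoded %llu\n", (unsigned long long)val);
        if (decode_u64(reply, 4, &val))
            printf("short reply rejected\n");
        return 0;
    }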
This resolves:
http://tracker.ceph.com/issues/4773
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Make the inbound and outbound data parameters have void rather than
character type for rbd_obj_method_sync(). This makes it more clear
they don't expect typed data, and eliminates the need for some silly
type casts.
One more unrelated change: define the features buffer used in
_rbd_dev_v2_snap_features() to be a packed data structure.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Make the buf parameter into which the data is to be read have type
void pointer.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
A clone image has a defined overlap point with its parent image.
That is the byte offset beyond which the parent image has no
defined data to back the clone, and anything thereafter can be
viewed as being zero-filled by the clone image.
This is needed because a clone image can be resized. If it gets
resized larger than the snapshot it is based on, the overlap defines
the original size. If the clone gets resized downward below the
original size the new clone size defines the overlap. If the clone
is subsequently resized to be larger, the overlap won't be increased
because the previous resize invalidated any parent data beyond that
point.
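A standalone sketch of how the overlap bounds a request (hypothetical
helper, byte units assumed): anything beyond the overlap is not backed
by the parent and reads as zeroes.

    #include <stdint.h>
    #include <stdio.h>

    /* Bytes of [offset, offset + len) backed by the parent image, given
     * the clone's parent overlap (all values in bytes). */
    static uint64_t parent_backed_len(uint64_t offset, uint64_t len,
                                      uint64_t overlap)
    {
        if (offset >= overlap)
            return 0;                    /* entirely beyond the overlap */
        if (offset + len > overlap)
            return overlap - offset;     /* clamp at the overlap */
        return len;
    }

    int main(void)
    {
        uint64_t overlap = 4ULL << 20;    /* 4 MiB overlap with parent */

        printf("%llu\n", (unsigned long long)
               parent_backed_len(3 << 20, 2 << 20, overlap));    /* 1 MiB */
        printf("%llu\n", (unsigned long long)
               parent_backed_len(5 << 20, 1 << 20, overlap));    /* 0 */
        return 0;
    }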
This resolves:
http://tracker.ceph.com/issues/4724
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
This implements the main copyup functionality for layered writes.
Here we add a copyup_pages field to the object request, which is
used only for copyup requests to keep track of the page array
containing data read from the parent image.
A copyup request is currently the only request rbd has that requires
two osd operations. Because of this we handle copyup specially.
All image object requests get an osd request allocated when they are
created. For a write request, if a copyup is required, the osd
request originally allocated is released, and a new one (with room
for two osd ops) is allocated to replace it. A new function
rbd_osd_req_create_copyup() allocates an osd request suitable for
a copyup request.
The first op is then filled with a copyup object class method call,
supplying the array of pages containing data read from the parent.
The second op is filled in with the original write request.
The original request otherwise remains intact, and it describes the
original write request (found in the second osd op). The presence
of the copyup op is sort of implicit; a non-null copyup_pages field
could be used to distinguish between a "normal" write request and a
request containing both a copyup call and a write.
This resolves:
http://tracker.ceph.com/issues/3419
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
As a step toward implementing layered writes, implement reading the
data for a target object from the parent image for a write request
whose target object is known to not exist. Add a copyup_pages field
to an image request to track the page array used (only) for such a
request.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
If the rbd disk is open when an rbd resize is done, the new size is not
visible to the filesystem. As is done in the virtio-blk and dm drivers,
calling revalidate_disk() updates the bd_inode size.
Signed-off-by: Laurent Barbe <laurent@ksperis.com>
Reviewed-by: Alex Elder <elder@inktank.com>
This patch adds the ability to build an image request whose data
will be written from or read into memory described by a page array.
(Previously only bio lists were supported.)
Originally this was going to define a new function for this purpose
but it was largely identical to rbd_img_request_fill_bio(). So
instead, rbd_img_request_fill_bio() has been generalized to handle
both types of image request.
For the moment we still only fill image requests with bio data.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Define a new function zero_pages() that zeroes a range of memory
defined by a page array, along the lines of zero_bio_chain(). It
saves and restores the irq flags like bvec_kmap_irq() does, though I'm not
sure at this point that it's necessary.
Update rbd_img_obj_request_read_callback() to use the new function
if the object request contains page rather than bio data.
For the moment, only bio data is used for osd READ ops.
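A standalone sketch of the zeroing logic (userspace illustration; the
kernel version maps each page and deals with the irq flags mentioned
above):

    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    /* Zero bytes [start, end) of data spread across an array of pages. */
    static void zero_pages(char *pages[], size_t start, size_t end)
    {
        while (start < end) {
            size_t page = start / PAGE_SIZE;
            size_t off = start % PAGE_SIZE;
            size_t n = PAGE_SIZE - off;

            if (n > end - start)
                n = end - start;
            memset(pages[page] + off, 0, n);
            start += n;
        }
    }

    int main(void)
    {
        char p0[PAGE_SIZE], p1[PAGE_SIZE];
        char *pages[] = { p0, p1 };

        memset(p0, 'x', sizeof(p0));
        memset(p1, 'x', sizeof(p1));
        zero_pages(pages, 4000, 4200);    /* range crosses the page boundary */
        printf("%d %d %d\n", p0[3999], p0[4095], p1[200]);    /* 120 0 120 */
        return 0;
    }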
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Object requests that are part of an image request are subject to
some additional handling. Define rbd_img_obj_request_submit() to
encapsulate that, and use it when initially submitting an image
object request, and when re-submitting it during callback of
an object existence check.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Separate rbd_osd_req_format() into two functions, one for read
requests and the other for write requests.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
This is a step toward fully implementing layered writes.
Add checks before request submission for the object(s) associated
with an image request. For write requests, if we don't know that
the target object exists, issue a STAT request to find out. When
that request completes, mark the known and exists flags for the
original object request accordingly and re-submit the object
request. (Note that this still does the existence check only; the
copyup operation is not yet done.)
A new object request is created to perform the existence check. A
pointer to the original request is added to that object request to
allow the stat request to re-issue the original request after
updating its flags. If there is a failure with the stat request
the error code is stored with the original request, which is then
completed.
This resolves:
http://tracker.ceph.com/issues/3418
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
This creates two new flags for object requests to indicate what is
known about the existence of the object to which a request is to be
sent. The KNOWN flag will be true if the EXISTS flag is
meaningful. That is:
KNOWN EXISTS
----- ------
  0     0    don't know whether the object exists
  0     1    (not used/invalid)
  1     0    object is known to not exist
  1     1    object is known to exist
This will be used in determining how to handle write requests for
data objects for layered rbd images.
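A minimal sketch of the flag encoding (illustrative names and values;
the driver manipulates these bits with atomic bit operations):

    #include <stdio.h>

    enum {
        OBJ_EXISTS = 1 << 0,    /* object known to exist */
        OBJ_KNOWN  = 1 << 1,    /* the EXISTS bit is meaningful */
    };

    static const char *existence(unsigned long flags)
    {
        if (!(flags & OBJ_KNOWN))
            return "unknown";
        return (flags & OBJ_EXISTS) ? "exists" : "does not exist";
    }

    int main(void)
    {
        printf("%s\n", existence(0));                      /* unknown */
        printf("%s\n", existence(OBJ_KNOWN));              /* does not exist */
        printf("%s\n", existence(OBJ_KNOWN | OBJ_EXISTS)); /* exists */
        return 0;
    }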
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
In a few spots, whether an object request's img_request pointer
is null is used to determine whether an object request is being done
as part of an image data request.
Stop doing that, and instead always use the object request IMG_DATA
flag for that purpose. Swap the order of the definition of the
IMG_DATA and DONE flag helpers, because obj_request_done_set() now
refers to obj_request_img_data_set() to get its rbd_dev value.
This will become important because the img_request pointer is
about to become part of a union.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
An extra reference is taken when an object request is added as one
of the requests making up an image object. A reference is dropped
again when the image's object requests get submitted.
The original reference for the object request will remain throughout
this period, so we don't need to add and then take away an extra
one.
This can be interpreted as the image request inheriting the original
object request's reference.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
In the incremental move toward supporting distinct data items in an
osd request some of the functions had "write_request" parameters to
indicate, basically, whether the data belonged to in_data or the
out_data. Now that we maintain the data fields in the op structure
there is no need to indicate the direction, so get rid of the
"write_request" parameters.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Implement layered read requests for format 2 rbd images.
If an rbd image is a clone of a snapshot, the snapshot will be the
clone's "parent" image. When an object read request on a clone
comes back with ENOENT it indicates that the clone is not yet
populated with that portion of the image's data, and the parent
image should be consulted to satisfy the read.
When this occurs, a new image request is created, directed to the
parent image. The offset and length of the image are the same as
the image-relative offset and length of the object request that
produced ENOENT. Data from the parent image therefore satisfies the
object read request for the original image request.
While this code works, it will not be active until we enable the
layering feature (by adding RBD_FEATURE_LAYERING to the value of
RBD_FEATURES_SUPPORTED).
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Call the probe function for the parent device if one is present.
Since we don't formally support the layering feature we won't
be using this functionality just yet.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Add a flag to distinguish between object requests being done on
standalone objects and requests being sent for objects representing
rbd image data (i.e., object requests that are the result of an image
request).
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
We're going to need some more Boolean values for object requests,
so create a flags bit field and use it to record whether the request
is done.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Encapsulate the code that completes processing of an object request
that's part of an image request.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Define a flag indicating whether an image request is for a layered
image (one with a parent image to which requests will be redirected
if the target object of a request does not exist). The code that
checks this flag will be added shortly.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Define a flag indicating whether an image request originated from
the Linux block layer (from blk_fetch_request()) or whether it was
initiated in order to satisfy an object request for a child image
of a layered rbd device. For image requests initiated by objects of
child images we'll save a pointer to the object request rather than
the Linux block request.
For now, only block requests are used.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
There are several Boolean values we'll be maintaining for image
requests. Switch from the single write_request field to a
general-purpose flags field, and use one of its bits to represent
the direction of I/O for the image request. Define helper functions
for setting and testing that flag.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
For an image object request we will need to know what offset within
the rbd image the request covers. Record that when the object
request gets created.
Update the I/O error warnings to use this offset so what's reported
is more informative.
Rename a local variable to fit the convention used everywhere else.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Compute the total number of bytes transferred for an image
request--the sum across each of the request's object requests.
To avoid contention do it only when all object requests are
complete, in rbd_img_request_complete().
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
If any image object request produces a non-zero result, preserve
that as the result of the overall image request. If multiple
objects have non-zero results, save only the first one.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
There is a new rbd feature bit defined for "fancy striping." Add
it to the ones defined in the kernel client.
Change RBD_FEATURES_ALL so it represents the set of all feature
bits (rather than just the ones we support). Define a new symbol
RBD_FEATURES_SUPPORTED to indicate the supported ones.
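A sketch of the resulting check (the bit values here are illustrative,
not the on-disk definitions): an image is refused if it requires any
feature bit outside the supported mask.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative feature bits -- not the actual rbd definitions. */
    #define FEATURE_LAYERING      (1ULL << 0)
    #define FEATURE_STRIPINGV2    (1ULL << 1)

    #define FEATURES_ALL          (FEATURE_LAYERING | FEATURE_STRIPINGV2)
    #define FEATURES_SUPPORTED    (FEATURE_LAYERING)

    static int features_ok(uint64_t image_features)
    {
        uint64_t unsupported = image_features & ~FEATURES_SUPPORTED;

        if (unsupported) {
            printf("image requires unsupported features 0x%llx\n",
                   (unsigned long long)unsupported);
            return 0;
        }
        return 1;
    }

    int main(void)
    {
        printf("%d\n", features_ok(FEATURE_LAYERING));      /* 1 */
        printf("%d\n", features_ok(FEATURE_STRIPINGV2));    /* 0 */
        return 0;
    }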
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Right now the data for a method call is specified via a pointer and
length, and it's copied--along with the class and method name--into
a pagelist data item to be sent to the osd. Instead, encode the
data in a data item separate from the class and method names.
This will allow large amounts of data to be supplied to methods
without copying. Only rbd uses the class functionality right now,
and when it really needs this it will probably need to use a page
array rather than a page list. But this simple implementation
demonstrates the functionality on the osd client, and that's enough
for now.
This resolves:
http://tracker.ceph.com/issues/4104
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
This ends up being a rather large patch but what it's doing is
somewhat straightforward.
Basically, this is replacing two calls with one. The first of the
two calls is initializing a struct ceph_osd_data with data (either a
page array, a page list, or a bio list); the second is setting an
osd request op so it associates that data with one of the op's
parameters. In place of those two will be a single function that
initializes the op directly.
That means we sort of fan out a set of the needed functions:
- extent ops with pages data
- extent ops with pagelist data
- extent ops with bio list data
and
- class ops with page data for receiving a response
We also have to define another one, but it's only used internally:
- class ops with pagelist data for request parameters
Note that we *still* haven't gotten rid of the osd request's
r_data_in and r_data_out fields. All the osd ops refer to them for
their data. For now, these data fields are pointers assigned to the
appropriate r_data_* field when these new functions are called.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
This patch just trivially moves around some code for consistency.
In preparation for initializing osd request data fields in
ceph_osdc_build_request(), I wanted to verify that rbd did in fact
call that immediately before it called ceph_osdc_start_request().
It was true (although image requests are built in a group and then
started as a group). But I made the changes here just to make
it more obvious, by making all of the calls follow a common
sequence:
osd_req_op_<optype>_init();
ceph_osd_data_<type>_init()
osd_req_op_<optype>_<datafield>()
rbd_osd_req_format()
...
ret = rbd_obj_request_submit()
I moved the initialization of the callback for image object requests
into rbd_img_request_fill_bio(), again, for consistency. To avoid
a forward reference, I moved the definition of rbd_img_obj_callback()
up in the file.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
The osd data for a request is currently initialized inside
rbd_osd_req_create(), but that assumes an object request's data
belongs in the osd request's data in or data out field.
There are only three places where requests with data are set up, and
it turns out it's easier to call just the osd data init routines
directly there rather than handling it in rbd_osd_req_create().
(The real motivation here is moving toward getting rid of the
osd request in and out data fields.)
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Currently an object request has its osd request's data field set in
rbd_osd_req_format_op(). That assumes a single osd op per object
request, and that won't be the case for long.
Move the code that sets this out and into the caller.
Rename rbd_osd_req_format_op() to be just rbd_osd_req_format(),
removing the notion that it's doing anything op-specific.
This and the next patch resolve:
http://tracker.ceph.com/issues/4658
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
An osd request now holds all of its source op structures, and every
place that initializes one of these is in fact initializing one
of the entries in the osd request's array.
So rather than supplying the address of the op to initialize, have
caller specify the osd request and an indication of which op it
would like to initialize. This better hides the details of the
op structure (and facilitates moving the data pointers they use).
Since osd_req_op_init() is a common routine, and it's not used
outside the osd client code, give it static scope. Also make
it return the address of the specified op (so all the other
init routines don't have to repeat that code).
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
An extent type osd operation currently implies that there will
be corresponding data supplied in the data portion of the request
(for write) or response (for read) message. Similarly, an osd class
method operation implies a data item will be supplied to receive
the response data from the operation.
Add a ceph_osd_data pointer to each of those structures, and assign
it to point to either the incoming or the outgoing data structure in
the osd message. The data is not always available when an op is
initially set up, so add two new functions to allow setting them
after the op has been initialized.
Begin to make use of the data item pointer available in the osd
operation rather than the request data in or out structure in
places where it's convenient. Add some assertions to verify
pointers are always set the way they're expected to be.
This is a sort of stepping stone toward really moving the data
into the osd request ops, to allow for some validation before
making that jump.
This is the first in a series of patches that resolve:
http://tracker.ceph.com/issues/4657
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
An osd request keeps a pointer to the osd operations (ops) array
that it builds in its request message.
In order to allow each op in the array to have its own distinct
data, we will need to keep track of each op's data, and that
information does not go over the wire.
As long as we're tracking the data we might as well just track the
entire (source) op definition for each of the ops. And if we're
doing that, we'll have no more need to keep a pointer to the
wire-encoded version.
This patch makes the array of source ops be kept with the osd
request structure, and uses that instead of the version encoded in
the message in places where that was previously used. The array
will be embedded in the request structure, and the maximum number of
ops we ever actually use is currently 2. So reduce CEPH_OSD_MAX_OP
to 2 to reduce the size of the structure.
The result of doing this sort of ripples back up, and as a result
various function parameters and local variables become unnecessary.
Make r_num_ops be unsigned, and move the definition of struct
ceph_osd_req_op earlier to ensure it's defined where needed.
It does not yet add per-op data, that's coming soon.
This resolves:
http://tracker.ceph.com/issues/4656
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Define rbd_osd_req_format_op(), which encapsulates formatting
an osd op into an object request's osd request message. Only
one op is supported right now.
Stop calling ceph_osdc_build_request() in rbd_osd_req_create().
Instead, call rbd_osd_req_format_op() in each of the callers of
rbd_osd_req_create().
This is to prepare for the next patch, in which the source ops for
an osd request will be held in the osd request itself. Because of
that, we won't have the source op to work with until after the
request is created, so we can't format the op until then.
This and the next patch resolve:
http://tracker.ceph.com/issues/4656
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Define and use functions that encapsulate the initialization of a
ceph_osd_data structure.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
When rbd creates an object request containing an object method call
operation it is passing 0 for the size. I originally thought this
was because the length was not needed for method calls, but I think
it really should be supplied, to describe how much space is
available to receive response data. So provide the supplied length.
This resolves:
http://tracker.ceph.com/issues/4659
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
When assigning a bio pointer to an osd request, we don't have an
efficient way of knowing the total length in bytes of the bio list.
That information is available at the point it's set up by the rbd
code, so record it with the osd data when it's set.
This and the next patch are related to maintaining the length of a
message's data independent of the message header, as described here:
http://tracker.ceph.com/issues/4589
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
The rbd code has a function that allocates and populates a
ceph_osd_req_op structure (the in-core version of an osd request
operation). When reviewed, Josh suggested two things: that the
big varargs function might be better split into type-specific
functions; and that this functionality really belongs in the osd
client rather than rbd.
This patch implements both of Josh's suggestions. It breaks
up the rbd function into separate functions and defines them
in the osd client module as exported interfaces. Unlike the
rbd version, however, the functions don't allocate an osd_req_op
structure; they are provided the address of one and that is
initialized instead.
The rbd function has been eliminated and calls to it have been
replaced by calls to the new routines. The rbd code now uses a
stack (struct) variable to hold the op rather than allocating and
freeing it each time.
For now only the capabilities used by rbd are implemented.
Implementing all the other osd op types, and making the rest of the
code use it will be done separately, in the next few patches.
Note that only the extent, cls, and watch portions of the
ceph_osd_req_op structure are currently used. Delete the others
(xattr, pgls, and snap) from its definition so nobody thinks it's
actually implemented or needed. We can add them back again later
if needed, once we know they have been tested.
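As a hedged sketch, one of the type-specific initializers described here
might look like the following; the signature is inferred from the text
(the caller supplies the op storage, nothing is allocated) rather than
quoted from the patch:

	void osd_req_op_extent_init(struct ceph_osd_req_op *op, u16 opcode,
				    u64 offset, u64 length,
				    u64 truncate_size, u32 truncate_seq)
	{
		memset(op, 0, sizeof(*op));	/* caller owns the storage */
		op->op = opcode;
		op->extent.offset = offset;
		op->extent.length = length;
		op->extent.truncate_size = truncate_size;
		op->extent.truncate_seq = truncate_seq;
	}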
This (and a few follow-on patches) resolves:
http://tracker.ceph.com/issues/3861
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Move some definitions for max integer values out of the rbd code and
into the more central "decode.h" header file. These really belong
in a Linux (or libc) header somewhere, but I haven't gotten around
to proposing that yet.
This is in preparation for moving some code out of rbd.c and into
the osd client.
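For reference, the limits in question are the usual unsigned maxima, along
the lines of the following sketch (the exact spellings in decode.h may
differ):

	#define U8_MAX	((u8)(~0U))
	#define U16_MAX	((u16)(~0U))
	#define U32_MAX	((u32)(~0U))
	#define U64_MAX	((u64)(~0ULL))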
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
The length of outgoing data in an osd request is dependent on the
osd ops that are embedded in that request. Each op is encoded into
a request message using osd_req_encode_op(), so that should be used
to determine the amount of outgoing data implied by the op as it
is encoded.
Have osd_req_encode_op() return the number of bytes of outgoing data
implied by the op being encoded, and accumulate and use that in
ceph_osdc_build_request().
As a result, ceph_osdc_build_request() no longer requires its "len"
parameter, so get rid of it.
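A hedged sketch of the resulting shape (illustrative bodies, not the
upstream hunk):

	static u64 osd_req_encode_op(struct ceph_osd_request *req,
				     struct ceph_osd_op *dst,
				     struct ceph_osd_req_op *src)
	{
		u64 out_data_len = 0;

		dst->op = cpu_to_le16(src->op);
		switch (src->op) {
		case CEPH_OSD_OP_WRITE:
			out_data_len = src->extent.length;	/* outgoing payload */
			/* fall through */
		case CEPH_OSD_OP_READ:
			dst->extent.offset = cpu_to_le64(src->extent.offset);
			dst->extent.length = cpu_to_le64(src->extent.length);
			break;
		/* ... other op types elided ... */
		}

		return out_data_len;	/* accumulated by ceph_osdc_build_request() */
	}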
Using the sum of the op lengths rather than the length provided is
a valid change because:
- The only callers of ceph_osdc_build_request() are
  rbd and the osd client (in ceph_osdc_new_request() on
  behalf of the file system).
- When rbd calls it, the length provided is only non-zero for
  write requests, and in that case the single op has the
  same length value as what was passed here.
- When called from ceph_osdc_new_request(), (it's not all that
  easy to see, but) the length passed is also always the same
  as the extent length encoded in its (single) write op if
  present.
This resolves:
http://tracker.ceph.com/issues/4406
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Record the byte count for an osd request rather than the page count.
The number of pages can always be derived from the byte count (and
alignment/offset) but the reverse is not true.
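The derivation is the usual offset-aware page count, roughly mirroring the
libceph calc_pages_for() helper, shown here as a sketch:

	static int calc_pages_for(u64 off, u64 len)
	{
		return ((off + len + PAGE_SIZE - 1) >> PAGE_SHIFT) -
		       (off >> PAGE_SHIFT);
	}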
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
An osd request defines information about where data to be read
should be placed as well as where data to write comes from.
Currently these are represented by common fields.
Keep information about data for writing separate from data to be
read by splitting these into data_in and data_out fields.
This is the key patch in this whole series, in that it actually
identifies which osd requests generate outgoing data and which
generate incoming data. It's less obvious (currently) that an osd
CALL op generates both outgoing and incoming data; that's the focus
of some upcoming work.
This resolves:
http://tracker.ceph.com/issues/4127
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
An osd request uses either pages or a bio list for its data. Use a
union to record information about the two, and add a data type
tag to select between them.
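A hedged sketch of the tagged union this describes (member names assumed
from the surrounding patches):

	enum ceph_osd_data_type {
		CEPH_OSD_DATA_TYPE_NONE,
		CEPH_OSD_DATA_TYPE_PAGES,
		CEPH_OSD_DATA_TYPE_BIO,
	};

	struct ceph_osd_data {
		enum ceph_osd_data_type type;
		union {
			struct {
				struct page	**pages;
				u64		length;
				u32		alignment;
			};
			struct {
				struct bio	*bio;
				u64		bio_length;
			};
		};
	};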
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Pull the fields in an osd request structure that define the data for
the request out into a separate structure.
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Pull VFS updates from Al Viro:
Misc cleanups all over the place, mainly wrt /proc interfaces (switch
create_proc_entry to proc_create(), get rid of the deprecated
create_proc_read_entry() in favor of using proc_create_data() and
seq_file etc).
7kloc removed.
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (204 commits)
don't bother with deferred freeing of fdtables
proc: Move non-public stuff from linux/proc_fs.h to fs/proc/internal.h
proc: Make the PROC_I() and PDE() macros internal to procfs
proc: Supply a function to remove a proc entry by PDE
take cgroup_open() and cpuset_open() to fs/proc/base.c
ppc: Clean up scanlog
ppc: Clean up rtas_flash driver somewhat
hostap: proc: Use remove_proc_subtree()
drm: proc: Use remove_proc_subtree()
drm: proc: Use minor->index to label things, not PDE->name
drm: Constify drm_proc_list[]
zoran: Don't print proc_dir_entry data in debug
reiserfs: Don't access the proc_dir_entry in r_open(), r_start() r_show()
proc: Supply an accessor for getting the data from a PDE's parent
airo: Use remove_proc_subtree()
rtl8192u: Don't need to save device proc dir PDE
rtl8187se: Use a dir under /proc/net/r8180/
proc: Add proc_mkdir_data()
proc: Move some bits from linux/proc_fs.h to linux/{of.h,signal.h,tty.h}
proc: Move PDE_NET() to fs/proc/proc_net.c
...
The kthread has two tasks: handling timeouts (for which it runs once per
second) and submitting queued BIOs. If a BIO happens to be queued after
the thread has processed the queue but before it calls schedule_timeout(),
the thread will sleep for a second before submitting it, which can cause
performance problems in some rare cases (that will become more common in
a subsequent patch).
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Tested-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Raise the default max request size for nbd to 128KB (from 127KB) to get it
4KB aligned. This patch also allows the max request size to be increased
(via /sys/block/nbd<x>/queue/max_sectors_kb) to 32MB.
The patch makes nbd network traffic more efficient by:
- reducing request fragmentation (4KB alignment)
- reducing the number of requests (fewer round trips, less network overhead)
Especially in high latency networks, a larger request size can make a
dramatic difference in performance.
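A hedged sketch of the queue-limit setup this implies; the helper is the
standard block-layer one, but the exact call site in the patch may differ:

	/* 256 * 512B = 128KB default; 65536 * 512B = 32MB ceiling,
	 * raisable via /sys/block/nbd<x>/queue/max_sectors_kb */
	blk_queue_max_hw_sectors(disk->queue, 65536);
	disk->queue->limits.max_sectors = 256;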
Signed-off-by: Paul Clements <paul.clements@steeleye.com>
Signed-off-by: Michal Belczyk <belczyk@bsd.krakow.pl>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Use preferable function name which implies using a pseudo-random
number generator.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
By default the cciss driver supports all "older" HP Smart Array
controllers and hpsa supports all controllers starting with the G6 family.
There are module parameters that allow a user to override those defaults
and use hpsa for any HP Smart Array controller.
If the user does override the default behavior and uses hpsa for older
controllers it is possible that cciss may try to load in a kdump crash
kernel. This may happen if cciss is loaded first from the kdump initrd
image. If cciss does load rather than hpsa and reset_devices is true, we
immediately call cciss_hard_reset_controller. This will result in a
kernel panic and the core file cannot be created. This patch prevents
cciss from trying to load in this scenario.
Signed-off-by: Mike Miller <mike.miller@hp.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Add the cciss_allow_hpsa modules parameter. This allows users to use the
hpsa driver instead of cciss for older controllers.
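A hedged sketch of the parameter definition (the description string is
illustrative):

	static int cciss_allow_hpsa;
	module_param(cciss_allow_hpsa, int, S_IRUGO|S_IWUSR);
	MODULE_PARM_DESC(cciss_allow_hpsa,
		"Prevent cciss driver from accessing hardware known to be "
		"supported by the hpsa driver");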
Signed-off-by: Mike Miller <mike.miller@hp.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Add CONFIG_PM_SLEEP to suspend/resume functions to fix the following build
warning when CONFIG_PM_SLEEP is not selected. This is because sleep PM
callbacks defined by SIMPLE_DEV_PM_OPS are only used when the
CONFIG_PM_SLEEP is enabled.
drivers/block/mg_disk.c:783:12: warning: 'mg_suspend' defined but not used [-Wunused-function]
drivers/block/mg_disk.c:807:12: warning: 'mg_resume' defined but not used [-Wunused-function]
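The standard shape of the fix, sketched with the function bodies elided:

	#ifdef CONFIG_PM_SLEEP
	static int mg_suspend(struct device *dev)
	{
		/* ... save controller state ... */
		return 0;
	}

	static int mg_resume(struct device *dev)
	{
		/* ... restore controller state ... */
		return 0;
	}
	#endif

	static SIMPLE_DEV_PM_OPS(mg_pm, mg_suspend, mg_resume);

SIMPLE_DEV_PM_OPS() only references the callbacks when CONFIG_PM_SLEEP is
set, so guarding the definitions the same way removes the
-Wunused-function warnings.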
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Workaround for handling unaligned writes: limit the number of
outstanding unaligned writes.
Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Indirect descriptors introduce a new block operation
(BLKIF_OP_INDIRECT) that passes grant references instead of segments
in the request. These grant references are filled with arrays of
blkif_request_segment_aligned; this way we can send more segments in a
request.
The proposed implementation sets the maximum number of indirect grefs
(frames filled with blkif_request_segment_aligned) to 256 in the
backend and 32 in the frontend. The value in the frontend has been
chosen experimentally, and the backend value has been set to a sane
value that allows expanding the maximum number of indirect descriptors
in the frontend if needed.
The migration code has changed from the previous implementation, in
which we simply remapped the segments on the shared ring. Now the
maximum number of segments allowed in a request can change depending
on the backend, so we have to requeue all the requests in the ring and
in the queue and split the bios in them if they are bigger than the
new maximum number of segments.
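Back-of-the-envelope, the maximum number of segments per request scales
with the number of indirect grant frames; a sketch with illustrative macro
names:

	#define SEGS_PER_INDIRECT_FRAME \
		(PAGE_SIZE / sizeof(struct blkif_request_segment_aligned))
	#define MAX_INDIRECT_SEGMENTS(frames) \
		((frames) * SEGS_PER_INDIRECT_FRAME)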
[v2: Fixed minor comments by Konrad.]
[v1: Added padding to make the indirect request 64bit aligned.
Added some BUGs, comments; fixed number of indirect pages in
blkif_get_x86_{32/64}_req. Added description about the indirect operation
in blkif.h]
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
[v3: Fixed spaces and tabs mix ups]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Preparatory change for implementing indirect descriptors. Change
xen_blkbk_{map/unmap} in order to be able to map/unmap an arbitrary
number of grants (previously it was limited to
BLKIF_MAX_SEGMENTS_PER_REQUEST). Also, remove the usage of pending_req
in the map/unmap functions, so we can map/unmap grants without needing
to pass a pending_req.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: xen-devel@lists.xen.org
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Remove the last dependency from blkbk by moving the list of free
requests to blkif. This change reduces the contention on the list of
available requests.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: xen-devel@lists.xen.org
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Moving grant ref handles from blkbk to pending_req will allow us to
get rid of the shared blkbk structure.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: xen-devel@lists.xen.org
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
This mechanism allows blkback to change the number of grants
persistently mapped at run time.
The algorithm uses a simple LRU mechanism that removes (if needed) the
persistent grants that have not been used since the last LRU run, or
if all grants have been used, it removes the first grants in the list
(that are not in use).
The algorithm allows the user to change the maximum number of
persistent grants, by changing max_persistent_grants in sysfs.
Since we are storing the persistent grants used inside the request
struct (to be able to mark them as "unused" when unmapping), we no
longer need the bitmap (unmap_seg).
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: xen-devel@lists.xen.org
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Using balloon pages for all granted pages allows us to simplify the
logic in blkback, especially in the xen_blkbk_map function, since now
we can decide if we want to map a grant persistently or not after we
have actually mapped it. This could not be done before because
persistent grants used ballooned pages, whereas non-persistent grants
used pages from the kernel.
This patch also introduces several changes. The first is that the
list of free pages is no longer global; now each blkback instance has
its own list of free pages that can be used to map grants. Also, a
run-time parameter (max_buffer_pages) has been added in order to tune
the maximum number of free pages each blkback instance will keep in
its buffer.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: xen-devel@lists.xen.org
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Pull Ceph fix from Sage Weil:
"It's a simple fix for a hard to hit race, but low-risk and clearly
correct"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
rbd: do a safe list traversal in rbd_img_request_submit()
It's possible that the reference to the object request dropped
inside the loop in rbd_img_request_submit() will be the last
one, in which case the content of the object pointer can't be
trusted.
Use a safe form of the object request list traversal to avoid
problems.
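The pattern, sketched with assumed list and member names: the _safe
variant caches the next pointer before the loop body runs, so dropping
the last reference to the current object request is harmless:

	struct rbd_obj_request *obj_request, *next_obj_request;

	list_for_each_entry_safe(obj_request, next_obj_request,
				 &img_request->obj_requests, links) {
		ret = rbd_img_obj_request_submit(obj_request);	/* may drop last ref */
		if (ret)
			return ret;
	}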
This resolves:
http://tracker.ceph.com/issues/4705
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Registers a miscellaneous device for each nvme controller probed. This
creates character device files as /dev/nvmeN, where N is the device
instance, and supports nvme admin ioctl commands so devices without
namespaces can be managed.
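A hedged sketch of the registration (the miscdevice field and fops names
are assumptions):

	dev->miscdev.minor = MISC_DYNAMIC_MINOR;
	dev->miscdev.parent = &pdev->dev;
	dev->miscdev.name = kasprintf(GFP_KERNEL, "nvme%d", dev->instance);
	dev->miscdev.fops = &nvme_dev_fops;
	result = misc_register(&dev->miscdev);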
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
When constructing the command, dsmgmt needs to be treated as a 32-bit
value, not a 16-bit value. reftag, apptag and appmask all need to be
converted from native-endian to little-endian. Again, sparse's bitwise
warnings caught this problem. Thanks to Keith for pointing out the
correct way to fix the reftag.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Acked-by: Keith Busch <keith.busch@intel.com>
The sparse bitwise checks pointed out that I needed to shift the status
before changing its endianness, not after.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Sparse produced warnings for some instances of
mismatched types and direct userspace dereferences.
This patch fixes those for the scsi emulation layer.
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
The nvme_dev_add() function currently returns the last error code that it
saw, which (if everything else succeeds) happens to be the result of an
optional command, so it can legitimately fail. Looking at the error path
more closely reveals that we should return success from this function,
even if no device namespaces are added. So once the queues are created
and the device has responded to Identify, make sure that this function
succeeds.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Acked-by: Keith Busch <keith.busch@intel.com>
Introduce nvme_block_nr() to help convert sectors to block numbers.
This fixes an integer overflow in the SCSI conversion layer, and it's
slightly less typing than opencoding it.
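The helper is essentially a shift by the namespace's LBA size; a sketch
assuming 512-byte kernel sectors and an lba_shift field:

	static inline u64 nvme_block_nr(struct nvme_ns *ns, sector_t sector)
	{
		return sector >> (ns->lba_shift - 9);
	}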
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Acked-by: Keith Busch <keith.busch@intel.com>
The nvme driver has a "once per second" event where the management kthread
wakes up the system and then reschedules itself for 1 second later.
For power efficiency reasons, I'd like this timer to happen together
with other wakeups in the system.
This patch makes the schedule_timeout() call in the kthread use
round_jiffies_relative(), causing the wakeup to at least align with other
"once per X seconds" events in the kernel.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Tested-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Temporarily disabling TRIM support until TRIM-related issues
are addressed in the firmware.
Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Reported smatch warning:
drivers/block/mtip32xx/mtip32xx.c:4163 mtip_block_shutdown() warn: variable dereferenced before check 'dd->disk' (see line 4159)
dd->disk->disk_name was accessed before the check that dd->disk is NULL. Fixed
this, along with the accesses of dd->queue and dd->disk->queue.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.11 (GNU/Linux)
iQIcBAABCAAGBQJRZCnRAAoJEPfTWPspceCmQD0QAMS6a7kYln9n7ByzMa1rfm7x
Mhu3NkT3I34a5+hgdiIwWTjqJ2Uu78S5pD522eAP4m9yGcG7m1c3aqdTEewDck7K
xcIjgzS1j1J/XRlrtgPxJhsiA23Kzhliwztrr1YhczWLMijvIrNu2KYZ4EK4/YPB
KkdGJJitunrOw4nhmnB1AYCRJoZJ5KXOpigqVr7vxTh0Ye7ue2k9mDD3x6tOp/Q0
TxKxXyEZF9yHgUkpARMcy4OEcr63APfAeiAnZOPpdK5rbCc8pVXpoadLE8UipnlI
0FcGBzuEezQE3FZrT9PL58FFNEPUmx3NWz2SwYV7Te5Gw5Zw/ffMxk8geE12ALmb
YuULHGOxz97rxjFImYYGqIOQl7F6V59bYgbXegmKnWjorKEn6LxHyNI1Q76oH5V3
UgvkmTK91yNa6DvB24Vrt1yKb3GvKQApHAwEn82y+LqcZFkWq1iKqlPVNNGGjCbR
j47Qu4G/6Bif0vTFC34ciI79ga/di4S4FUOCthsjskm7waDUNvrlbP4Mvwq1VPAc
VZaYGrQcWCDxuAY/S3XTR+2ci/B0n4Fhyn/iFg/6S+nQawJDeiM+C3yVDbH8caq4
wV2PXEQ/r+SII4PVGFAmATewUKF3dkUpT2TKhc9KRHZh9+mFBXFj13MZ3+WpBPHl
NTvEyi5nUL+MdyZGuJUi
=CB+W
-----END PGP SIGNATURE-----
Merge tag 'for-linus-20130409' of git://git.kernel.dk/linux-block
Pull block fixes from Jens Axboe:
"I've got a few smaller fixes queued up for 3.9 that should go in. The
major one is the loop regression, the others are nice fixes on their
own though. It contains:
- Fix for uninitialized var in the block sysfs code, courtesy of Arnd
and gcc-4.8.
- Two fixes for mtip32xx, fixing probe and command timeout. Also a
debug measure that could have waited for 3.10, but it's driver
only, so I let it slip in.
- Revert the loop partition cleanup fix, it could cause a deadlock on
auto-teardown as part of umount. The fix is clear, but at this
point we just want to revert it and get a real fix in for 3.10."
* tag 'for-linus-20130409' of git://git.kernel.dk/linux-block:
Revert "loop: cleanup partitions when detaching loop device"
mtip32xx: fix two smatch warnings
mtip32xx: Add debugfs entry device_status
mtip32xx: return 0 from pci probe in case of rebuild
mtip32xx: recovery from command timeout
block: avoid using uninitialized value in from queue_var_store
The only part of proc_dir_entry the code outside of fs/proc
really cares about is PDE(inode)->data. Provide a helper
for that; static inline for now, eventually will be moved
to fs/proc, along with the knowledge of struct proc_dir_entry
layout.
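Sketched, the accessor is a one-liner (shown as described; the final name
in the tree may differ):

	static inline void *PDE_DATA(const struct inode *inode)
	{
		return PDE(inode)->data;
	}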
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
This reverts commit 8761a3dc1f.
There are situations where the destruction path is called
with the bdev->bd_mutex already held, which then deadlocks in
loop_clr_fd(). The normal partition cleanup does a trylock()
on the mutex, but it'd be nice to have a more bulletproof
method in loop. So punt this more involved fix to the next
merge window, and just back out this buggy fix for now.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Dan reports:
New smatch warnings:
drivers/block/mtip32xx/mtip32xx.c:2728 show_device_status() warn: variable dereferenced before check 'dd' (see line 2727)
drivers/block/mtip32xx/mtip32xx.c:2758 show_device_status() warn: variable dereferenced before check 'dd' (see line 2757)
which are checking if dd == NULL, in a list_for_each_entry() type loop.
Get rid of the check; dd can never be NULL here.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
This patch adds a new debugfs entry 'device_status' in
/sys/kernel/debug/rssd. The value of this entry shows which devices
are online and which are in the process of being removed.
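A hedged sketch of the debugfs wiring (directory and fops names assumed):

	dfs_parent = debugfs_create_dir("rssd", NULL);
	debugfs_create_file("device_status", S_IRUGO, dfs_parent, NULL,
			    &mtip_device_status_fops);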
Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Fix to return 0 from the PCI probe in case of a rebuild. Otherwise, PCI
considers the probe to have failed, and the driver crashes during rmmod.
Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>