Currently, BlockDriver.bdrv_refresh_filename() is supposed to both
refresh the filename (BDS.exact_filename) and set BDS.full_open_options.
Now that we have generic code in the central bdrv_refresh_filename() for
creating BDS.full_open_options, we can drop the latter part from all
BlockDriver.bdrv_refresh_filename() implementations.
This also means that we can drop all of the existing default code for
this from the global bdrv_refresh_filename() itself.
Furthermore, we now have to call BlockDriver.bdrv_refresh_filename()
after having set BDS.full_open_options, because the block driver's
implementation should now be allowed to depend on BDS.full_open_options
being set correctly.
Finally, with this patch we can drop the @options parameter from
BlockDriver.bdrv_refresh_filename(); and, while touching its interface,
add a comment on this function's purpose in block/block_int.h.
This completely obsoletes blklogwrite's implementation of
.bdrv_refresh_filename().
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190201192935.18394-25-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Instead of having every block driver which implements
bdrv_refresh_filename() copy all of the strong runtime options over to
bs->full_open_options, implement this process generically in
bdrv_refresh_filename().
This patch only adds this new generic implementation, it does not remove
the old functionality. This is done in a follow-up patch.
With this patch, some superfluous information (that should never have
been there) may be removed from some JSON filenames, as can be seen in
the changes to the reference outputs of iotests 110 and 228.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190201192935.18394-24-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Test 110 tests relative backing filenames for complex BDS trees. Now
that the test case which was originally expected to fail passes, let us
add a new failing test: quorum can never generate a relative backing
filename automatically (it could only do so by detecting whether all
child nodes share the same base directory, but that would be rather
inconsistent behavior).
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20190201192935.18394-21-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
bdrv_get_full_backing_filename_from_filename() breaks down when it comes
to JSON filenames. Using bdrv_dirname() as the basis is better: since we
have a BDS, we can descend through the BDS tree down to the protocol
layer, which gives us a greater chance of finding a non-JSON name.
Furthermore, bdrv_dirname() is more correct, because it allows block
drivers to override the generation of that directory name in a
protocol-specific way.
We still need to keep bdrv_get_full_backing_filename_from_filename(),
though, because it has valid callers which need it during image creation
when no BDS is available yet.
This makes a test case in qemu-iotests 110 that was expected to fail
actually work. That is good, but we need to change the reference output
(and the comment in 110) accordingly.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20190201192935.18394-20-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This function queries a single node. Since there is currently no QMP
command that does this directly, it executes query-named-block-nodes and
returns the matching node's object.
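A minimal sketch of such a helper, assuming an iotests-style VM object
whose qmp() method returns the full response dictionary; the function
name and error handling are illustrative, not necessarily what the patch
adds:
```
def find_node(vm, node_name):
    # Hypothetical helper: list all named block nodes and return the
    # one whose 'node-name' matches, or None if there is no such node.
    result = vm.qmp('query-named-block-nodes')
    for node in result['return']:
        if node['node-name'] == node_name:
            return node
    return None
```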
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20190201192935.18394-8-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190201192935.18394-7-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Basically, bdrv_refresh_filename() should respect all children of a
BlockDriverState. However, those children are generally driver-specific,
so this function cannot handle the general case. On the other hand, only
a few drivers use children other than @file and @backing (namely vmdk,
quorum, and blkverify).
Most block drivers use only @file and/or @backing (if they use any
children at all). Handling both of these can be implemented directly in
bdrv_refresh_filename().
The case of the user overriding the file's filename is already handled;
however, the user overriding the backing file is not. If that is done,
opening the BDS with the plain filename of its file would not be
correct, so we must not set bs->exact_filename in that case.
iotest 051 contains test cases for overriding the backing file, and so
its output changes with this patch applied.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20190201192935.18394-6-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Since qemu currently doesn't flush persistent bitmaps to disk until
shutdown (which might be MUCH later), it's useful if 'query-block'
at least shows WHICH bitmaps will (eventually) make it to persistent
storage. Update affected iotests.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20190204210512.27458-1-eblake@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
A new test file, 242, is added to the qemu-iotests set. It checks
the format of the qcow2-specific information for the newly added
section that lists details of bitmaps.
Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Message-Id: <1549638368-530182-4-git-send-email-andrey.shinkevich@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: pep8 compliance, avoid trailing blank line]
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
It's not enough to order the kwargs for consistent QMP log output;
we must also sort any sub-dictionaries in lists that appear as values.
Reported-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Without this filter, this test sometimes fails.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch forbids attaching a disk to a SCSI device if it is using a
different AioContext. Test case included.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This fixes a crash when attaching two disks with the same blockdev to
a SCSI device that is using iothreads. Test case included.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This fixes a crash when attaching a disk to a SCSI device using
iothreads, then detaching it and reattaching it again. Test case
included.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Clarify that the number of extents provided in BlockdevCreateOptionsVmdk
must match the number of extents that will actually be used. Providing
more extents will result in an error now.
This requires adapting the test case to provide the right number of
extents.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
This test waits for a MIGRATION event with status=completed on the
source VM before querying the migration status on both source and
destination. However, just because the source says migration has
completed does not mean the destination thinks the same. Therefore, in
some cases, the destination VM may still report "active" instead of
"completed" when asked for its migration status.
Fix this by enabling migration events on both VMs and waiting until both
source and destination emit a status=completed MIGRATION event.
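A hedged sketch of the waiting logic, assuming the iotests VM class
offers event_wait() with a match argument (as the Python tests of this
era commonly use); the helper name is illustrative:
```
def wait_migration_completed(source_vm, dest_vm):
    # Wait for a status=completed MIGRATION event on *both* VMs, not
    # just the source, before querying migration status on either side.
    for vm in (source_vm, dest_vm):
        vm.event_wait('MIGRATION',
                      match={'data': {'status': 'completed'}})
```
This relies on the 'events' migration capability having been enabled on
both VMs beforehand, as described above.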
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Recently, some bugs in the dmg driver have been fixed. To prevent dmg
reading from breaking again someday in the future, add a simple test
which ensures that the conversion from dmg to raw does not hang or hit
any I/O error.
Signed-off-by: yuchenlin <npes87184@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The mirror_start_job() function used for the commit-active job blocks
the source, target and all intermediate nodes for the duration of the
job.
target <- intermediate <- source
Since 4ef85a9c23 this function creates a dummy mirror_top_bs that
goes on top of the source node, and it is this dummy node that gets
blocked instead. The source node is never blocked or added to the
job's list of nodes.
target <- intermediate <- source <- mirror_top
At the moment I don't think it is possible to exploit this problem
because any additional job on 'source' would either be forbidden for
other reasons or it would need to involve an additional node that is
blocked, causing an error.
This can be seen in the error messages, however, because they never
refer to the source node being blocked:
$ qemu-img create -f qcow2 hd0.qcow2 1M
$ qemu-img create -f qcow2 -b hd0.qcow2 hd1.qcow2
$ qemu-io -c 'write 0 1M' hd0.qcow2
$ $QEMU -drive if=none,file=hd1.qcow2,node-name=hd1
{ "execute": "qmp_capabilities" }
{ "execute": "block-commit", "arguments": {"device": "hd1", "speed": 256}}
{ "execute": "block-stream", "arguments": {"device": "hd1"}}
{ "error": {"class": "GenericError",
"desc": "Node 'hd0' is busy: block device is in use by block job: commit"}}
After this patch the error message refers to 'hd1', as it should.
The expected output of iotest 141 also needs to be updated for the
same reason.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
To do this, we need to allow creating the NBD server on various ports
instead of a single one (which may not even work if you run just one
instance, because something else entirely might be using that port).
So we just pick a random port in [32768, 32768 + 1024) and try to create
a server there. If that fails, we just retry until something sticks.
For the IPv6 test, we need a different range, though (just above that
one). This is because "localhost" resolves to both 127.0.0.1 and ::1.
This means that if you bind to it, it will bind to both, if possible, or
just one if the other is already in use. Therefore, if the IPv6 test
has already taken [::1]:some_port and we then try to take
localhost:some_port, that will work; however, the second server will be
bound only to 127.0.0.1:some_port and not to [::1]:some_port in
addition. So we end up with two different servers on the same port, one
for IPv4 and one for IPv6.
But when we then try to connect to the server through
localhost:some_port, we will always end up at the IPv6 one (as long as
it is up), and this may not be the one we want.
Thus, we must make sure not to create an IPv6-only NBD server on the
same port as a normal "dual-stack" NBD server -- which is done by using
distinct port ranges, as explained above.
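A sketch of the retry loop described above; start_server stands in for
whatever actually launches the NBD server (a qemu-nbd invocation or an
nbd-server-start command) and is assumed to report failure when the port
is already taken:
```
import random

def serve_on_random_port(start_server, base=32768, span=1024):
    # Keep picking random ports in [base, base + span) until the server
    # actually comes up; return the port that finally stuck.
    while True:
        port = base + random.randrange(span)
        if start_server(port):
            return port
```
For the IPv6-only server, the same loop would simply be called with a
base of 32768 + 1024, so its range cannot collide with the dual-stack
one.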
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20181221234750.23577-4-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
By default, qemu-nbd binds to 0.0.0.0. However, we then proceed to
connect to "localhost". Usually, this works out fine; but if this test
is run concurrently, some other test function may have bound a different
server to ::1 (on the same port -- you can bind different servers to the
same port, as long as one is on IPv4 and the other on IPv6).
So running qemu-nbd works: it can bind to 0.0.0.0:NBD_PORT. But
potentially a concurrent test has successfully taken [::1]:NBD_PORT. In
this case, trying to connect to "localhost" will lead us to the IPv6
instance, where we do not want to end up.
Fix this by just binding to "localhost". This will make qemu-nbd error
out immediately and not give us cryptic errors later.
(Also, it will allow us to just try a different port as of a future
patch.)
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20181221234750.23577-3-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
In some cases, we may want to deal with qemu-nbd errors (e.g. by
launching it in a different configuration until it no longer throws
any). In that case, we do not want its output ending up in the test
output.
It may still be useful for handling the error, though, so add a new
function that works basically like qemu_nbd(), except that it returns
the qemu-nbd output instead of making it end up in the log. In contrast
to qemu_img_pipe(), it still returns the exit code as well, because that
is even more important for error handling.
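A rough sketch of such a helper using subprocess directly; the real
function presumably reuses the qemu-nbd binary and default arguments
that the iotests infrastructure already tracks, so treat this as
illustrative only:
```
import subprocess

def qemu_nbd_pipe(*args):
    # Run qemu-nbd, returning both its exit code and its combined
    # stdout/stderr output instead of letting it end up in the log.
    proc = subprocess.run(['qemu-nbd'] + list(args),
                          stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT,
                          universal_newlines=True)
    return proc.returncode, proc.stdout
```
A caller can then retry with a different configuration when the exit
code is non-zero, and only log the captured output if it gives up.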
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20181221234750.23577-2-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Hot-unplug a scsi-hd using an iothread. The previous patch fixes a
segfault in this scenario.
This patch adds a regression test.
Suggested-by: Alberto Garcia <berto@igalia.com>
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20190114133257.30299-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Any good new feature deserves some regression testing :)
Coverage includes:
- 223: what happens when there are 0 or more than 1 export,
proof that we can see multiple contexts including qemu:dirty-bitmap
- 233: proof that we can list over TLS, and that mix-and-match of
plain/TLS listings will behave sanely
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20190117193658.16413-22-eblake@redhat.com>
We have a race between the nbd server and the client both trying
to report errors at once, which can sometimes make the test fail
if the output lines swap order under load. Break the race by
collecting server messages into a file and then replaying that
at the end of the test.
We may yet want to fix the server to not output ANYTHING for a
client action except when -v was used (to keep malicious clients
from being able to DoS a server by filling up its logs), but that
is saved for a future patch.
Signed-off-by: Eric Blake <eblake@redhat.com>
CC: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190117193658.16413-2-eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Having to fire up qemu, then use QMP commands for nbd-server-start
and nbd-server-add, just to expose a persistent dirty bitmap, is
rather tedious. Make it possible to expose a dirty bitmap using
just qemu-nbd (of course, for now this only works when qemu-nbd is
visiting a BDS formatted as qcow2).
Of course, any good feature also needs unit testing, so expand
iotest 223 to cover it.
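As a rough illustration only: with the new option, exposing a persistent
bitmap becomes a single qemu-nbd invocation. The exact option spelling
(--bitmap=NAME below) is an assumption on my part; check qemu-nbd --help
for the authoritative form.
```
import subprocess

# Hypothetical invocation: serve disk.qcow2 read-only over NBD and
# additionally expose the persistent dirty bitmap 'bitmap0'.  The
# '--bitmap=NAME' spelling is assumed here, not quoted from the patch.
server = subprocess.Popen(['qemu-nbd', '--read-only',
                           '--bitmap=bitmap0', 'disk.qcow2'])
```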
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190111194720.15671-9-eblake@redhat.com>
With the experimental x-nbd-server-add-bitmap command, there was
a window of time where an NBD client could see the export but not
the associated dirty bitmap, which can cause a client that planned
on using the dirty bitmap to be forced to treat the entire image
as dirty as a safety fallback. Furthermore, if the QMP client
successfully exports a disk but then fails to add the bitmap, it
has to take on the burden of removing the export. Since we don't
allow changing the exposed dirty bitmap (whether to a different
bitmap, or removing advertisement of the bitmap), it is nicer to
make the bitmap tied to the export at the time the export is
created, with automatic failure to export if the bitmap is not
available.
The experimental command included an optional 'bitmap-export-name'
field for remapping the name exposed over NBD to be different from
the bitmap name stored on disk. However, my libvirt demo code
for implementing differential backups on top of persistent bitmaps
did not need to take advantage of that feature (it is instead
possible to create a new temporary bitmap with the desired name,
use block-dirty-bitmap-merge to merge one or more persistent
bitmaps into the temporary, then associate the temporary with the
NBD export, if control is needed over the exported bitmap name).
Hence, I'm not copying that part of the experiment over to the
stable addition. For more details on the libvirt demo, see
https://www.redhat.com/archives/libvir-list/2018-October/msg01254.html,
https://kvmforum2018.sched.com/event/FzuB/facilitating-incremental-backup-eric-blake-red-hat
This patch focuses on the user interface, and reduces (but does
not completely eliminate) the window where an NBD client can see
the export but not the dirty bitmap, with less work to clean up
after errors. Later patches will add further cleanups now that
this interface is declared stable via a single QMP command,
including removing the race window.
Update test 223 to use the new interface.
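A hedged sketch of how a test like 223 might use the combined interface
from Python; the argument names follow the QMP schema described above,
but treat the exact shapes as illustrative:
```
def export_with_bitmap(vm, sock_path, device='drive0', bitmap='bitmap0'):
    # With the bitmap tied to the export at creation time, a missing or
    # unusable bitmap fails nbd-server-add as a whole instead of
    # leaving behind an export that must then be removed by hand.
    vm.qmp('nbd-server-start',
           addr={'type': 'unix', 'data': {'path': sock_path}})
    vm.qmp('nbd-server-add', device=device, bitmap=bitmap)
```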
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20190111194720.15671-6-eblake@redhat.com>
Our initial implementation of x-nbd-server-add-bitmap put
in a restriction because of incremental backups: in that
usage, we are exporting one qcow2 file (the temporary overlay
target of a blockdev-backup sync:none job) and a dirty bitmap
owned by a second qcow2 file (the source of the
blockdev-backup, which is the backing file of the temporary).
While both qcow2 files are still writable (the target in
order to capture copy-on-write of old contents, and the
source in order to track live guest writes in the meantime),
the NBD client expects to see constant data, including the
dirty bitmap. An enabled bitmap in the source would be
modified by guest writes, which is at odds with the NBD
export being a read-only constant view, hence the initial
code choice of enforcing a disabled bitmap (the intent is
that the exposed bitmap was disabled in the same transaction
that started the blockdev-backup job, although we don't want
to track enough state to actually enforce that).
However, consider the case of a bitmap contained in a read-only
node (including when the bitmap is found in a backing layer of
the active image). Because the node can't be modified, the
bitmap won't change due to writes, regardless of whether it is
still enabled. Forbidding the export unless the bitmap is
disabled is awkward, particularly since we can't change the
bitmap to be disabled (because the node is read-only).
Alternatively, consider the case of live storage migration,
where management directs the destination to create a writable
NBD server, then performs a drive-mirror from the source to
the target, prior to doing the rest of the live migration.
Since storage migration can be time-consuming, it may be wise
to let the destination include a dirty bitmap to track which
portions it has already received, where even if the migration
is interrupted and restarted, the source can query the
destination block status in order to potentially minimize
re-sending data that has not changed in the meantime on a
second attempt. Such code has not been written, and might not
be trivial (after all, a cluster being marked dirty in the
bitmap does not necessarily guarantee it has the desired
contents), but it makes sense that letting an active dirty
bitmap be exposed and change alongside writes may prove
useful in the future.
Solve both issues by gating the restriction against a
disabled bitmap to only happen when the caller has requested
a read-only export, and where the BDS that owns the bitmap
(whether or not it is the BDS handed to nbd_export_new() or
from its backing chain) is still writable. We could drop
the check altogether (if management apps are prepared to
deal with a changing bitmap even on a read-only image), but
for now keeping a check for the read-only case still stands
a chance of preventing management errors.
Update iotest 223 to show the looser behavior by leaving
a bitmap enabled the whole run; note that we have to tear
down and re-export a node when handling an error.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190111194720.15671-4-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Since we already forbid other nbd-server commands when not
in the right state, it is unlikely that any caller was relying
on a second stop to behave as a silent no-op. Update iotest
223 to show the improved behavior.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190111194720.15671-3-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Testing success paths is important, but it's also nice to highlight
expected failure handling, to show that we don't crash, and so that
upcoming tests that change behavior can demonstrate the resulting
effects on error paths.
Add the following errors:
Attempting to export without a running server
Attempting to start a second server
Attempting to export a bad node name
Attempting to export a name that is already exported
Attempting to export an enabled bitmap
Attempting to remove an already removed export
Attempting to quit server a second time
All of these properly complain except for a second server-stop,
which will be fixed next.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190111194720.15671-2-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
New interface, new smoke test.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20181221093529.23855-12-jsnow@redhat.com>
[eblake: fix last-minute change to echo text]
Signed-off-by: Eric Blake <eblake@redhat.com>
If iotests have lines exceeding 998 characters, git doesn't want to
send them as plain text to the list. We can solve this by allowing
the iotests to use pretty-printed QMP output that we can match against
instead.
As a bonus, it's much nicer for human eyes too.
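The idea, roughly, in standalone form (the actual iotests logger differs
in details such as filters and separators):
```
import json

def log_qmp(obj, pretty=False):
    # The compact single-line form keeps historical reference output
    # unchanged; the pretty form wraps long QMP objects over several
    # short lines so no single line can blow past git's length limit.
    if pretty:
        print(json.dumps(obj, indent=4, sort_keys=True))
    else:
        print(json.dumps(obj, sort_keys=True))
```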
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20181221093529.23855-11-jsnow@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
As laid out in the previous commit's message:
```
Several places in iotests deal with serializing objects into JSON
strings, but to add pretty-printing it seems desirable to localize
all of those cases.
log() seems like a good candidate for that centralized behavior.
log() can already serialize json objects, but when it does so,
it assumes filters=[] operates on QMP objects, not strings.
qmp_log currently operates by dumping outgoing and incoming QMP
objects into strings and filtering them assuming that filters=[]
are string filters.
```
Therefore:
Change qmp_log to treat filters as if they're always qmp object filters,
then change the logging call to rely on log()'s ability to serialize QMP
objects, so we're not duplicating that effort.
Add a QMP version of filter_testfiles and adjust the only caller that
passes it to qmp_log to use the QMP version.
Signed-off-by: John Snow <jsnow@redhat.com>
Message-Id: <20181221093529.23855-10-jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Several places in iotests deal with serializing objects into JSON
strings, but to add pretty-printing it seems desirable to localize
all of those cases.
log() seems like a good candidate for that centralized behavior.
log() can already serialize json objects, but when it does so,
it assumes filters=[] operates on QMP objects, not strings.
qmp_log currently operates by dumping outgoing and incoming QMP
objects into strings and filtering them assuming that filters=[]
are string filters.
To have qmp_log use log's serialization, qmp_log will need to
accept only qmp filters, not text filters.
However, only a single caller of qmp_log actually requires any
filters at all. I remove the default filter and add it explicitly
to the caller in preparation for refactoring qmp_log to use rich
filters instead.
Test 206 is amended to name the filter explicitly, and the default
is removed.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20181221093529.23855-9-jsnow@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Python before 3.6 does not preserve the insertion order of dictionaries
(including kwargs).
Therefore, printing QMP objects involves sorting the keys to have
a predictable ordering in the iotests output. This means that
iotests output will sometimes show arguments in an order not
specified by the test author.
Presently, we accomplish this by using json.dumps' sort_keys argument,
where we only serialize the arguments dictionary, but not the command.
However, if we want to pretty-print QMP objects being sent to the
QEMU process, we need to build the entire command before logging it.
Ordinarily, this would then involve "arguments" being sorted above
"execute", which would necessitate a rather ugly and harder-to-read
change to many iotests outputs.
To facilitate pretty-printing AND maintaining predictable output AND
having "arguments" sort after "execute", add a custom sort function
that takes a dictionary and recursively builds an OrderedDict that
maintains the specific key order we wish to see in iotests output.
The qmp_log function uses this to build a QMP object that keeps
"execute" above "arguments", but sorts all keys and keys in any
subdicts in "arguments" lexicographically to maintain consistent
iotests output, with no incompatible changes to any current test.
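A sketch of such a recursive ordering helper; the name and exact corner
cases are illustrative rather than the literal function added to
iotests.py:
```
from collections import OrderedDict

def ordered_qmp(obj):
    # Recurse into lists so that dictionaries nested in them end up
    # ordered as well.
    if isinstance(obj, list):
        return [ordered_qmp(item) for item in obj]
    if isinstance(obj, dict):
        od = OrderedDict()
        # Keep the command itself ahead of its arguments ...
        for key in ('execute', 'arguments'):
            if key in obj:
                od[key] = ordered_qmp(obj[key])
        # ... and sort all remaining keys lexicographically.
        for key in sorted(k for k in obj
                          if k not in ('execute', 'arguments')):
            od[key] = ordered_qmp(obj[key])
        return od
    return obj
```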
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20181221093529.23855-8-jsnow@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
To mimic the common filter of the same name, but for the Python tests.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20181221093529.23855-7-jsnow@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Instead of using os.environ[], use .get() with an empty string as the
default, matching the setup in check. This allows us to import the
iotests module (for debugging, say) without needing a crafted
environment just to import it.
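In other words (QEMU_PROG serves as an example variable here):
```
import os

# Raises KeyError when the variable is missing, so a bare
# "import iotests" outside of ./check fails:
#     qemu_prog = os.environ['QEMU_PROG']
# Falls back to an empty string instead, matching what check sets up:
qemu_prog = os.environ.get('QEMU_PROG', '')
```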
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20181221093529.23855-6-jsnow@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
The 'x' prefix was added because I was uncertain of the direction we'd
take for the libvirt API. With the general approach solidified, I feel
comfortable committing to this API for 4.0.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20181221093529.23855-5-jsnow@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
This changes output from:
$ qemu-nbd nosuch
Failed to blk_new_open 'nosuch': Could not open 'nosuch': No such file or directory
to something more consistent with qemu-img and qemu:
$ qemu-nbd nosuch
qemu-nbd: Failed to blk_new_open 'nosuch': Could not open 'nosuch': No such file or directory
Update the lone affected test to match. (Hmm - is it sad that we don't
do much testing of expected failures?)
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20181215135324.152629-2-eblake@redhat.com>
Reduce the extra noise of nbd-client; change test 083 correspondingly.
In various commits (be41c100 in 2.10, f140e300 in 2.11, 78a33ab
in 2.12), we added spots where qemu as an NBD client would report
problems communicating with the server to stderr, because there
was nowhere else to send the error.
particularly since the most common source of these errors is when
either the client or the server abruptly hangs up, leaving one
coroutine to report the error only if it wins (or loses) the
race in attempting the read from the server before another
thread completes its cleanup of a protocol error that caused the
disconnect in the first place. The race is also apparent in the
fact that differences in the flush behavior of the server can
alter the frequency of encountering the race in the client (see
commit 6d39db96).
Rather than polluting stderr, it's better to just trace these
situations, for use by developers debugging a flaky connection,
particularly since the real error that either triggers the abrupt
disconnection in the first place, or that results from the EIO
when a request can't receive a reply, DOES make it back to the
user in the normal Error propagation channels.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20181102151152.288399-4-vsementsov@virtuozzo.com>
[eblake: drop dependence on error hint, enhance commit message]
Signed-off-by: Eric Blake <eblake@redhat.com>
It is interesting to know whether the shutdown cause was 'quit' or
'reset', especially when using "--no-reboot". In that case, a management
layer can now determine if the guest wanted a reboot or shutdown, and
can act accordingly.
Changes the output of the reason in the iotests from 'host-qmp' to
'host-qmp-quit'. This does not break compatibility because
the field was introduced in the same version.
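For illustration, a management layer running QEMU with --no-reboot might
branch on the reported reason roughly like this; only 'host-qmp-quit' is
confirmed above, so the 'guest-reset' value used for the reboot case is
an assumption:
```
def should_restart(shutdown_event):
    # With --no-reboot, a guest-requested reboot also ends in a SHUTDOWN
    # event, so only the reason tells us whether to bring the VM back up
    # or to leave it stopped.
    reason = shutdown_event['data'].get('reason')
    return reason == 'guest-reset'
```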
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Message-Id: <20181205110131.23049-4-d.csapak@proxmox.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
This makes it possible to determine the exact reason for a RESET or a
SHUTDOWN. A management layer might need the specific reason for those
events to determine which cleanups or other actions it needs to perform.
This patch also updates the iotests to the new expected output that
includes the reason.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Message-Id: <20181205110131.23049-3-d.csapak@proxmox.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches:
- qcow2: Decompression worker threads
- dmg: lzfse compression support
- file-posix: Simplify delegation to worker thread
- Don't pass flags to bdrv_reopen_queue()
- iotests: make 235 work on s390 (and others)
# gpg: Signature made Fri 14 Dec 2018 10:55:09 GMT
# gpg: using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream: (42 commits)
block/mirror: add missing coroutine_fn annotations
iotests: make 235 work on s390 (and others)
block: Assert that flags are up-to-date in bdrv_reopen_prepare()
block: Remove assertions from update_flags_from_options()
block: Stop passing flags to bdrv_reopen_queue_child()
block: Remove flags parameter from bdrv_reopen_queue()
block: Clean up reopen_backing_file() in block/replication.c
qemu-io: Put flag changes in the options QDict in reopen_f()
block: Drop bdrv_reopen()
block: Use bdrv_reopen_set_read_only() in the mirror driver
block: Use bdrv_reopen_set_read_only() in external_snapshot_commit()
block: Use bdrv_reopen_set_read_only() in qmp_change_backing_file()
block: Use bdrv_reopen_set_read_only() in stream_start/complete()
block: Use bdrv_reopen_set_read_only() in bdrv_commit()
block: Use bdrv_reopen_set_read_only() in commit_start/complete()
block: Use bdrv_reopen_set_read_only() in bdrv_backing_update_filename()
block: Add bdrv_reopen_set_read_only()
file-posix: Avoid aio_worker() for QEMU_AIO_IOCTL
file-posix: Switch to .bdrv_co_ioctl
file-posix: Remove paio_submit_co()
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
"-machine pc" will not work all architectures. Lets fall back to the
default machine by not specifying it.
In addition we also need to specify -no-shutdown on s390 as qemu will
exit otherwise.
Cc: qemu-stable@nongnu.org
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This function takes four options (cache.direct, cache.no-flush,
read-only and auto-read-only) from a QemuOpts object and updates the
flags accordingly.
If any of those options is not set (because it was missing from the
original QDict or because it had an invalid value) then the function
aborts with a failed assertion:
$ qemu-io -c 'reopen -o read-only=foo' hd.qcow2
block.c:1126: update_flags_from_options: Assertion `qemu_opt_find(opts, BDRV_OPT_CACHE_DIRECT)' failed.
Aborted
This assertion is unnecessary, and it forces any caller of
bdrv_reopen() to pass all the aforementioned four options. This may
have made sense in order to remove ambiguity when bdrv_reopen() was
taking both flags and options, but that's not the case anymore.
It's also unnecessary if we want to validate the option values,
because bdrv_reopen_prepare() already takes care of that, as we can
see if we remove the assertions:
$ qemu-io -c 'reopen -o read-only=foo' hd.qcow2
Parameter 'read-only' expects 'on' or 'off'
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
When reopen_f() puts a block device in the reopen queue, some of the
new options are passed using a QDict, but others ("read-only" and the
cache options) are passed as flags.
This patch puts those flags in the QDict. This way the flags parameter
becomes redundant and we'll be able to get rid of it in a subsequent
patch.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>