The reference output file only works for the 'file' protocol. 'qemu-img convert -p' makes a lot more progress updates for NFS than for file, so disable the test for NFS.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
NFS paths were only partially filtered in _filter_img_create, _img_info
and _filter_img_info, resulting in "nfs://127.0.0.1TEST_DIR/t.IMGFMT".
This adds another replacement to the sed calls that matches the test
directory not as a host path, but as an NFS URL (the prefix as used for
$TEST_IMG).
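As a sketch, the change has roughly this shape (the exact expression in common.filter may differ); the URL form has to be matched before the plain host path, otherwise the prefix survives as shown above:

    # replace the NFS URL prefix first, then the plain host path
    sed -e "s#nfs://127.0.0.1$TEST_DIR#TEST_DIR#g" \
        -e "s#$TEST_DIR#TEST_DIR#g" \
        ...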
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Test cases were trying to use nfs:// URLs as local filenames, which made every test fail for NFS. With TEST_IMG and TEST_IMG_FILE set the same way as for the other protocols, NFS tests can pass again.
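That is, along the lines of what is done for the other protocols (a sketch of the idea, not the literal hunk):

    # e.g. in common.rc, for IMGPROTO=nfs:
    nfs)
        TEST_IMG_FILE=$TEST_DIR/t.$IMGFMT          # local path used to create/inspect the image
        TEST_IMG="nfs://127.0.0.1$TEST_IMG_FILE"   # URL actually handed to QEMU
        ;;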
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Currently the timer is cancelled and the block job is entered by
block_job_resume(). This behavior causes drain to run extra blockjob
iterations when the job was sleeping due to the ratelimit.
This patch leaves the job asleep when block_job_resume() is called.
Jobs can still be forcibly woken up using block_job_enter(), which is
used to cancel jobs.
After this patch drain no longer runs extra blockjob iterations. This
is the expected behavior that qemu-iotests 185 used to rely on. We
temporarily changed the 185 test output to make it pass for the QEMU
2.12 release but now it's time to address this issue.
Cc: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Message-id: 20180508135436.30140-3-stefanha@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Commit 8565c3ab53 ("qemu-iotests: fix 185") identified a race condition in a sub-test.
Similar issues also affect the other sub-tests. If disk I/O completes
quickly, it races with the QMP 'quit' command. This causes spurious
test failures because QMP events are emitted in an unpredictable order.
This test relies on QEMU internals and there is no QMP API for getting
deterministic behavior needed to make this test 100% reliable. At the
same time, the test is useful and it would be a shame to remove it.
Add a 'sleep 0.5' to reduce the chance of races. This is not a real fix, but it appears to reduce spurious failures in practice.
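A sketch of where such a sleep sits (the surrounding command and match string are illustrative, not the test's exact lines):

    # give the block job a moment to make progress before quitting, so
    # the order of the shutdown-related QMP events becomes more predictable
    sleep 0.5
    _send_qemu_cmd $QEMU_HANDLE "{ 'execute': 'quit' }" "return"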
Cc: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180508135436.30140-2-stefanha@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180502202051.15493-4-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
COR across nodes (that is, when there is some filter node between the actual COR target and the node that performs the COR) cannot reliably work together with the permission system when there is no explicit COR node that can request the WRITE_UNCHANGED permission for its child.
This is because COR (currently) sneaks its requests by the usual
permission checks, so it can work without a WRITE* permission; but if
there is a filter node in between, that will re-issue the request, which
then passes through the usual check -- and if nobody has requested a
WRITE_UNCHANGED permission, that check will fail.
There is no real direct fix apart from hoping that there is someone who has requested that permission; in the case of just the qemu-io HMP command (and no guest device), however, that is not the case. The real fix
is to implement the copy-on-read flag through an implicitly added COR
node. Such a node can request the necessary permissions as shown in
this test.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180421132929.21610-10-mreitz@redhat.com
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
iotest 197 tests copy-on-read using the (now old) copy-on-read flag.
Copy it to 215 and modify it to use the COR filter driver instead.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180421132929.21610-9-mreitz@redhat.com
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20180421132929.21610-8-mreitz@redhat.com
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
userfaultfd support depends on the host kernel, so it may not be available. If it is not, 181 and 201 should be skipped.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180406151731.4285-3-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Currently, common.qemu only allows matching for results that indicate success. The only way to fail is by provoking a timeout. However, sometimes we do have defined failure output and can match for that, which saves us from having to wait for the timeout in case of failure. Because a failure can sometimes just result in a _notrun in the test, it is actually important to be able to fail quickly.
Also, sometimes we simply do not get any specific output in case of
success. The only way to handle this currently would be to define an
error message as the string to look for, which means that actual success
results in a timeout. This is really bad because it unnecessarily slows
down a succeeding test.
Therefore, this patch adds a new parameter $success_or_failure to
_timed_wait_for and _send_qemu_cmd. Setting this to a non-empty string
makes both commands expect two match parameters: If the first matches,
the function succeeds. If the second matches, the function fails.
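For example, a caller could now fail fast on a known error message instead of waiting for the timeout (the QMP command and match strings here are purely illustrative):

    # succeed if "return" shows up, fail immediately if "error" shows up
    success_or_failure=y _send_qemu_cmd $QEMU_HANDLE \
        "{ 'execute': 'blockdev-add', 'arguments': { 'driver': 'null-co', 'node-name': 'null0' } }" \
        'return' \
        'error'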
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180406151731.4285-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
The L2 and refcount caches have default sizes that can be overridden using the l2-cache-size and refcount-cache-size parameters (an additional parameter named cache-size sets the combined size of both caches).
Unless forced by one of the aforementioned parameters, QEMU will set
the unspecified sizes so that the L2 cache is 4 times larger than the
refcount cache.
This is based on the premise that the refcount metadata needs to be
only a fourth of the L2 metadata to cover the same amount of disk
space. This is incorrect for two reasons:
a) The amount of disk covered by an L2 table depends solely on the
cluster size, but in the case of a refcount block it depends on
the cluster size *and* the width of each refcount entry.
The 4/1 ratio is only valid with 16-bit entries (the default).
b) When we talk about disk space and L2 tables we are talking about
guest space (L2 tables map guest clusters to host clusters),
whereas refcount blocks are used for host clusters (including
L1/L2 tables and the refcount blocks themselves). On a fully
populated (and uncompressed) qcow2 file, image size > virtual size
so there are more refcount entries than L2 entries.
Problem (a) could be fixed by adjusting the algorithm to take the refcount entry width into account. Problem (b) could be fixed by increasing the refcount cache size a bit to account for the clusters used for qcow2 metadata.
However, this patch takes a completely different approach: instead of keeping a ratio between both cache sizes, it assigns as much as possible to the L2 cache and the remainder to the refcount cache.
The reason is that L2 tables are used for every single I/O request from the guest, and the effect of increasing the cache is significant and clearly measurable. Refcount blocks, however, are only used for cluster allocation and internal snapshots, and in practice they are accessed sequentially in most cases, so the effect of increasing the cache is negligible (even when doing random writes from the guest).
So, make the refcount cache as small as possible unless the user
explicitly asks for a larger one.
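For reference, these are the runtime options involved; a hedged example of setting them on a drive (the sizes are illustrative):

    # let QEMU split a combined metadata cache; with this patch most of
    # it goes to the L2 cache
    qemu-system-x86_64 -drive file=disk.qcow2,format=qcow2,cache-size=8M

    # or set both caches explicitly to override the automatic split
    qemu-system-x86_64 -drive file=disk.qcow2,format=qcow2,l2-cache-size=4M,refcount-cache-size=1M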
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 9695182c2eb11b77cb319689a1ebaa4e7c9d6591.1523968389.git.berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Commit abd3622cc0 added a case to 122
regarding how the qcow2 driver handles an incorrect compressed data
length value. This does not really fit into 122, as that file is
supposed to contain qemu-img convert test cases, which this case is not.
So this patch splits it off into its own file; maybe we will even get
more qcow2-only compression tests in the future.
Also, that test case does not work with refcount_bits=1, so mark that
option as unsupported.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180406164108.26118-1-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
We already have an extensive mirror test (041) which does cover
cancelling a mirror job, especially after it has emitted the READY
event. However, it does not check what exact events are emitted after
block-job-cancel is executed. More importantly, it does not use
throttling to ensure that it covers the case of block-job-cancel before
READY.
It would be possible to add this case to 041, but considering it is
already our largest test file, it makes sense to create a new file for
these cases.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180501220509.14152-3-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
Commit b76e4458b1 ("block/mirror: change
the semantic of 'force' of block-job-cancel") accidentally removed the
ratelimit in the mirror job.
Reintroduce the ratelimit but keep the block-job-cancel force=true
behavior that was added in commit
b76e4458b1.
Note that block_job_sleep_ns() returns immediately when the job is
cancelled. Therefore it's safe to unconditionally call
block_job_sleep_ns() - a cancelled job does not sleep.
This commit fixes the non-deterministic qemu-iotests 185 output. The
test relies on the ratelimit to make the job sleep until the 'quit'
command is processed. Previously the job could complete before the
'quit' command was received since there was no ratelimit.
Cc: Liang Li <liliang.opensource@gmail.com>
Cc: Jeff Cody <jcody@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180424123527.19168-1-stefanha@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
Improve and fix 169:
- use MIGRATION events instead of RESUME
- add a TODO: enable the dirty-bitmaps capability for the offline case
- recreate vm_b without -incoming near the end of the test
This (likely) fixes racy failures of at least the following types:
- timeout while waiting for the RESUME event
- sha256 mismatch on line 136 (142 after this patch)
- failure of self.vm_b.launch() on line 135 (141 after this patch)
It also definitely fixes the cat processes left behind after the test finishes.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20180411122606.367301-3-vsementsov@virtuozzo.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Commit 4486e89c21 ("vl: introduce
vm_shutdown()") added a bdrv_drain_all() call. As a side-effect of the
drain operation the block job iterates one more time than before. The
185 output no longer matches and the test is failing now.
It may be possible to avoid the superfluous block job iteration, but
that type of patch is not suitable late in the QEMU 2.12 release cycle.
This patch simply updates the 185 output file. The new behavior is
correct, just not optimal, so make the test pass again.
Fixes: 4486e89c21 ("vl: introduce vm_shutdown()")
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
qemu-iotests doesn't support dmg, and the dmg block driver doesn't
support image creation. Two test cases declare dmg as supported, but
that's obviously wrong for both reasons. Remove the declaration.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Blacklist these formats, as they don't support image creation, as the format drivers themselves report:
> ./qemu-img create -f bochs x 1m
qemu-img: x: Format driver 'bochs' does not support image creation
> ./qemu-img create -f cloop x 1m
qemu-img: x: Format driver 'cloop' does not support image creation
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Support "generic" formats like in bash tests with their
_supported_fmt generic
The test, supporting "generic" formats will run if IMGFMT_GENERIC =
true, which is default, except for bochs and cloop. However, you can
use verify_image_format(['generic', 'bochs']), which will run for all
except cloop (for this moment).
Also, add an assert (we don't want set both arguments) and remove
duplication.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
If there is more than one event, wait_until_completed() might return the 2nd event even if the 1st event is JOB_COMPLETED, since the for loop continues to run even after completed is set to True.
This never happened before, but it can be triggered when OOB is enabled, due to the RESUME startup message. Fix that up.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180408030542.17855-1-peterx@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Merge remote-tracking branch 'mreitz/tags/pull-block-2018-04-03' into queue-block
A fix for preallocated truncation, a new iotest, and a fix to make the iotests work more comfortably on ppc64
# gpg: Signature made Tue Apr 3 17:40:57 2018 CEST
# gpg: using RSA key F407DB0061D5CF40
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>"
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40
* mreitz/tags/pull-block-2018-04-03:
iotests: Test abnormally large size in compressed cluster descriptor
qemu-iotests: Use ppc64 qemu_arch on ppc64le host
iotests: Test preallocated truncate of 2G image
block/file-posix: Fix fully preallocated truncate
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
L2 entries for compressed clusters have a field that indicates the
number of sectors used to store the data in the image.
That's however not the size of the compressed data itself, just the
number of sectors where that data is located. The actual data size is
usually not a multiple of the sector size, and therefore cannot be
represented with this field.
The way it works is that QEMU reads all the specified sectors and
starts decompressing the data until there's enough to recover the
original uncompressed cluster. If there are any bytes left that
haven't been decompressed they are simply ignored.
One consequence of this is that even if the size field is larger than it needs to be, QEMU can handle it just fine: it will read more data from disk but it will ignore the extra bytes.
This test creates an image with two compressed clusters that use 5
sectors (2.5 KB) each, increases the size field to the maximum (8192
sectors, or 4 MB) and verifies that the data can be read without
problems.
This test is important because while the decompressed data takes exactly one cluster, the maximum value allowed in the compressed size field is twice the cluster size. So although QEMU won't produce images with such large values, we need to make sure that it can handle them.
Another effect of increasing the size field is that it can make
it include data from the following host cluster(s). In this case
'qemu-img check' will detect that the refcounts are not correct, and
we'll need to rebuild them.
Additionally, this patch also tests that decreasing the size corrupts
the image since the original data can no longer be recovered. In this
case QEMU returns an error when trying to read the compressed data,
but 'qemu-img check' doesn't see anything wrong if the refcounts are
consistent.
One possible task for the future is to make 'qemu-img check' verify
the sizes of the compressed clusters, by trying to decompress the data
and checking that the size stored in the L2 entry is correct.
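A rough sketch of the kind of manipulation involved; the iotests poke_file helper can patch the L2 entry in place, and the offset and value used here are placeholders, not the test's actual numbers:

    # write compressible data so the cluster is stored compressed
    $QEMU_IO -c 'write -c -P 0x11 0 64k' "$TEST_IMG" | _filter_qemu_io
    # enlarge the "number of sectors" field of that cluster's L2 entry;
    # $l2_entry_offset is a hypothetical placeholder for its location
    poke_file "$TEST_IMG" $l2_entry_offset "\xff\xff"
    # the data must still read back correctly despite the oversized field
    $QEMU_IO -c 'read -P 0x11 0 64k' "$TEST_IMG" | _filter_qemu_io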
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 20180329120745.11154-1-berto@igalia.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The qemu target does not always correspond to the host machine type. For example, on a ppc64le host the qemu target is ppc64. Let's introduce a "qemu_arch" variable to store the qemu architecture that matches the current host architecture, and use it when auto-detecting the default qemu binary.
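A minimal sketch of the idea (the mapping below only covers the case from the commit message; the real detection may handle more):

    host_arch=$(uname -m)
    case "$host_arch" in
        ppc64le) qemu_arch=ppc64 ;;       # the qemu target for ppc64le hosts is ppc64
        *)       qemu_arch=$host_arch ;;
    esac
    # later used when guessing the default binary, e.g. qemu-system-$qemu_arch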
Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
Message-id: 20180329112053.5399-2-ldoktor@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180228131315.30194-3-mreitz@redhat.com
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Support luks image creation like in 205
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Commit ac64273c66 modified the output of iotest 186, changing
the QOM path of floppy drives from /machine/unattached/device[17] to
/machine/unattached/device[13].
Instead of updating the test output to reflect this change, this patch
adds a new filter that hides all QOM paths from the 'Attached to:'
line of the 'info block' command.
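One way such a filter could look (the function name and sed expression here are illustrative, not the patch's actual code):

    _filter_qom_path()
    {
        # replace whatever follows "Attached to:" with a fixed placeholder
        sed -e 's/\(Attached to:[ \t]*\)[^ ]*/\1PATH/'
    }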
Signed-off-by: Alberto Garcia <berto@igalia.com>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
SCSI controllers are no longer created automatically for
-drive if=scsi, so this patch updates the tests that relied
on that.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2018-03-26' into staging
A fix for dirty bitmap migration through shared storage, and a VMDK
patch keeping us from creating too large extents.
# gpg: Signature made Mon 26 Mar 2018 21:17:05 BST
# gpg: using RSA key F407DB0061D5CF40
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>"
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40
* remotes/maxreitz/tags/pull-block-2018-03-26:
vmdk: return ERROR when cluster sector is larger than vmdk limitation
iotests: enable shared migration cases in 169
qcow2: fix bitmaps loading when bitmaps already exist
qcow2-bitmap: add qcow2_reopen_bitmaps_rw_hint()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Shared migration for dirty bitmaps is fixed by previous patches,
so we can enable the test.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20180320170521.32152-5-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This tests that the .bdrv_truncate implementation for luks doesn't crash
for invalid image sizes.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
We want to test resizing even for luks. The only change that is needed is to explicitly zero out the new space for luks, because its content is otherwise undefined.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
When we try to allocate new clusters we first look for available ones
starting from s->free_cluster_index and once we find them we increase
their reference counts. Before we get to call update_refcount() to do this last step, s->free_cluster_index is already pointing to the next cluster after the ones we are trying to allocate.
During update_refcount() it may happen, however, that we also need to allocate a new refcount block in order to store the refcounts of these new clusters (and, to complicate things further, that may also require us to grow the refcount table). After all this we don't know if the
clusters that we originally tried to allocate are still available, so
we return -EAGAIN to ask the caller to restart the search for free
clusters.
This is what can happen in a common scenario:
1) We want to allocate a new cluster and we see that cluster N is
free.
2) We try to increase N's refcount but all refcount blocks are full, so we allocate a new one at N+1 (which is where s->free_cluster_index was pointing).
3) Once we're done we return -EAGAIN to look again for a free
cluster, but now s->free_cluster_index points at N+2, so that's
the one we allocate. Cluster N remains unallocated and we have a
hole in the qcow2 file.
This can be reproduced easily:
qemu-img create -f qcow2 -o cluster_size=512 hd.qcow2 1M
qemu-io -c 'write 0 124k' hd.qcow2
After this the image has 132608 bytes (256 clusters), and the refcount
block is full. If we write 512 more bytes it should allocate two new
clusters: the data cluster itself and a new refcount block.
qemu-io -c 'write 124k 512' hd.qcow2
However, the image now has three new clusters (259 in total), and the first one of them is empty (and unallocated):
dd if=hd.qcow2 bs=512c skip=256 count=1 | hexdump -C
If we write larger amounts of data in the last step instead of the 512
bytes used in this example we can create larger holes in the qcow2
file.
What this patch does is reset s->free_cluster_index to its previous value when alloc_refcount_block() returns -EAGAIN. This way the caller will try to allocate the original clusters again if they are still free.
The output of iotest 026 also needs to be updated because now that
images have no holes some tests fail at a different point and the
number of leaked clusters is different.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Testing on ext4, most 'quick' qcow2 tests took less than 5 seconds,
but 163 took more than 20. Let's remove it from the quick set.
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Instead of converting all "backing": null instances into "backing": "",
handle a null value directly in bdrv_open_inherit().
This enables explicitly null backing links for json:{} filenames.
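With this change, a json:{} filename can explicitly specify that no backing file is to be opened, e.g. (an illustrative invocation, not taken from a test):

    qemu-img info 'json:{"driver": "qcow2",
                         "file": {"driver": "file", "filename": "overlay.qcow2"},
                         "backing": null}'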
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20180224154033.29559-7-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: rebase to qobject_to() parameter order and qapi headers split]
Signed-off-by: Eric Blake <eblake@redhat.com>
CentOS 6 lacks a realpath binary on the base install, which makes
all iotests runs fail since the 2.11 release:
001 - output mismatch (see 001.out.bad)
./check: line 815: realpath: command not found
diff: missing operand after `/home/dummy/qemu/tests/qemu-iotests/001.out'
diff: Try `diff --help' for more information.
Many of the uses of 'realpath' in the check script were applied to the output of 'type -p' - but that is already an
absolute file name. While a canonical name can often be
shorter (realpath gets rid of /../), it can also be longer (due
to symlink expansion); and we really don't care if the name is
canonical, merely that it was an executable file with an
absolute path. These were broken in commit cceaf1db.
The remaining use of realpath was to convert a possibly relative
filename into an absolute one before calling diff to make it
easier to copy-and-paste the filename for moving the .bad file
into place as the new reference file even when running iotests
out-of-tree (see commit 93e53fb6), but $PWD can achieve the same
purpose.
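An illustrative before/after of the two kinds of change described above (variable and file names are hypothetical, not the actual lines in check):

    # 'type -p' already yields an absolute path, so the wrapper goes away:
    QEMU_IMG_PROG=$(type -p qemu-img)        # was: $(realpath "$(type -p qemu-img)")

    # and $PWD makes the diff argument absolute without realpath:
    diff -u "$reference_output" "$PWD/$seq.out.bad"   # was: "$(realpath "$seq.out.bad")"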
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Commit bff5554843 added "force_size" to the _filter_img_create() filter in common.filter, but test 146 still expects it in the output.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Originally, we added parallels to qemu-iotests as a read-only format and just did some tests with a binary image. Since then, write and
image creation support has been added to the driver, so we can now
enable it in _supported_fmt generic.
The driver doesn't support migration yet, though, so we need to add it
to the list of exceptions in 181.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Expose the "manual" property via QAPI for the backup-related jobs.
As of this commit, this allows the management API to request the
"concluded" and "dismiss" semantics for backup jobs.
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Split out the pause command into the actual pause and the wait.
Not every usage presently needs to resubmit a pause request.
The intent with the next commit will be to explicitly disallow
redundant or meaningless pause/resume requests, so the tests
need to become more judicious to reflect that.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We're about to add several new states, and booleans are becoming unwieldy and difficult to reason about. It would help to have more explicit bookkeeping of the state of blockjobs. To this end,
add a new "status" field and add our existing states in a redundant
manner alongside the bools they are replacing:
UNDEFINED: Placeholder, default state. Not currently visible to QMP
unless changes occur in the future to allow creating jobs
without starting them via QMP.
CREATED: replaces !!job->co && paused && !busy
RUNNING: effectively replaces (!paused && busy)
PAUSED: Nearly redundant with info->paused, which shows pause_count.
This reports the actual status of the job, which almost always
matches the paused request status. It differs in that it is
strictly only true when the job has actually gone dormant.
READY: replaces job->ready.
STANDBY: Paused, but job->ready is true.
New state additions in coming commits will not be quite so redundant:
WAITING: Waiting on transaction. This job has finished all the work
it can until the transaction converges, fails, or is canceled.
PENDING: Pending authorization from user. This job has finished all the
work it can until the job or transaction is finalized via
block_job_finalize. This implies the transaction has converged
and left the WAITING phase.
ABORTING: Job has encountered an error condition and is in the process
of aborting.
CONCLUDED: Job has ceased all operations and has a return code available
for query and may be dismissed via block_job_dismiss.
NULL: Job has been dismissed and (should) be destroyed. Should never
be visible to QMP.
Some of these states appear somewhat superfluous, but it helps define the
expected flow of a job; so some of the states wind up being synchronous
empty transitions. Importantly, jobs can be in only one of these states
at any given time, which helps code and external users alike reason about
the current condition of a job unambiguously.
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>