Do not test the deprecated API versions. debian-win32-cross and debian-win64-cross
are already using SDL2 (they do not cover GTK+ at all).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
[AJB: fix merge conflicts]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
It has some basic *-devel.i686 packages to be used with "gcc -m32" as a
32-bit cross-build environment.
Signed-off-by: Fam Zheng <famz@redhat.com>
[AJB: add glibc-static]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
To be more accurate about its purpose, and to make code that looks up a
certain target in this variable more readable.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
This is a helper function for the configure script. It replies yes,
sudo or no to inform the user whether non-interactive docker support is
available. We trap the exception to fail gracefully.
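A minimal sketch of the probing idea (illustrative only; the real helper
lives in tests/docker/docker.py and uses different names):

import subprocess

def probe_docker():
    """Reply 'yes', 'sudo' or 'no' depending on how docker can be run."""
    try:
        if subprocess.call(['docker', 'info'], stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0:
            return 'yes'
        if subprocess.call(['sudo', '-n', 'docker', 'info'],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0:
            return 'sudo'
    except OSError:
        pass  # docker (or sudo) binary not found: fail gracefully
    return 'no'

print(probe_docker())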
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Fam Zheng <famz@redhat.com>
Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' into staging
Pull request
* Copy offloading for qemu-img convert (iSCSI, raw, and qcow2)
If the underlying storage supports copy offloading, qemu-img convert will
use it instead of performing reads and writes. This avoids data transfers
and thus frees up storage bandwidth for other purposes. SCSI EXTENDED COPY
and Linux copy_file_range(2) are used to implement this optimization.
* Drop spurious "WARNING: I/O thread spun for 1000 iterations" warning
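For reference on the copy offloading item, the Linux side boils down to
copy_file_range(2), which lets the kernel move the bytes without bouncing
them through a userspace buffer. A rough illustration of that syscall
(not QEMU code; needs Python 3.8+ on Linux, and the file names are
placeholders):

import os

# Copy up to 1 MiB from src.img to dst.img inside the kernel; with
# supporting storage the copy can be offloaded entirely.
with open('src.img', 'rb') as src, open('dst.img', 'wb') as dst:
    remaining = 1024 * 1024
    while remaining > 0:
        copied = os.copy_file_range(src.fileno(), dst.fileno(), remaining)
        if copied == 0:  # reached EOF on the source
            break
        remaining -= copied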
# gpg: Signature made Mon 04 Jun 2018 12:20:08 BST
# gpg: using RSA key 9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
# gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>"
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8
* remotes/stefanha/tags/block-pull-request:
main-loop: drop spin_counter
qemu-img: Convert with copy offloading
block-backend: Add blk_co_copy_range
iscsi: Implement copy offloading
iscsi: Create and use iscsi_co_wait_for_task
iscsi: Query and save device designator when opening
file-posix: Implement bdrv_co_copy_range
qcow2: Implement copy offloading
raw: Implement copy offloading
raw: Check byte range uniformly
block: Introduce API for copy offloading
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
acpi, vhost, misc: fixes, features
vDPA support, fix to vhost blk RO bit handling, some include path
cleanups, NFIT ACPI table.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Fri 01 Jun 2018 17:25:19 BST
# gpg: using RSA key 281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream: (31 commits)
vhost-blk: turn on pre-defined RO feature bit
ACPI testing: test NFIT platform capabilities
nvdimm, acpi: support NFIT platform capabilities
tests/.gitignore: add entry for generated file
arch_init: sort architectures
ui: use local path for local headers
qga: use local path for local headers
colo: use local path for local headers
migration: use local path for local headers
usb: use local path for local headers
sd: fix up include
vhost-scsi: drop an unused include
ppc: use local path for local headers
rocker: drop an unused include
e1000e: use local path for local headers
ioapic: fix up includes
ide: use local path for local headers
display: use local path for local headers
trace: use local path for local headers
migration: drop an unused include
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If the user doesn't specify a TARGET_LIST they get the current
configuration, but with spaces, and hilarity ensues. This adds some make
magic to turn the TARGET_LIST back into a comma-separated list.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20180521103504.26432-1-alex.bennee@linaro.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Add testing for the newly added NFIT Platform Capabilities Structure.
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Suggested-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
After a "make check" we end up with the following:
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
tests/test-block-backend
nothing added to commit but untracked files present (use "git add" to track)
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Fixes: commit ad0df3e0fd ("block: test blk_aio_flush() with blk->root == NULL")
Cc: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Commit d759c951f3 ("replay: push
replay_mutex_lock up the call tree") removed the !timeout lock
optimization in the main loop.
The idea of the optimization was to avoid ping-pongs between threads by
keeping the Big QEMU Lock held across non-blocking (!timeout) main loop
iterations.
A warning is printed when the main loop spins without releasing BQL for
long periods of time. These warnings were supposed to aid debugging but
in practice they just alarm users. They are considered noise because
the cause of spinning is not shown and is hard to find.
Now that the lock optimization has been removed, there is no danger of
hogging the BQL. Drop the spin counter and the infamous warning.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
This is still poorly documented by Travis but according to:
https://docs.travis-ci.com/user/common-build-problems/#Running-a-Container-Based-Docker-Image-Locally
their reference images are now hosted on Docker Hub. So we update the
FROM line to refer to the new default image. We also need a few
additional tweaks:
- re-enable deb-src lines for our build-dep install
- add explicit PATH definition for tools
- force the build USER to be Travis
- add clang to FEATURES for our test-clang machinery
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
The 'debian' base image has been deprecated since commit 3e11974988.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
We can now directly see different versions sort consecutively.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Commit fb5e19d2e1 originally fixed the
regression, but was inadvertently broken again in merge commit
2d6752d38d.
Fixes: https://bugs.launchpad.net/qemu/+bug/1654137
Cc: qemu-stable@nongnu.org
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180515152500.19460-3-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/ehabkost/tags/numa-next-pull-request' into staging
NUMA queue, 2018-05-30
* New command-line option: --preconfig
This option allows pausing QEMU and configuring it with QMP commands
before running the board initialization code.
* New QMP set-numa-node, now made possible because of --preconfig
* Small update on -numa error messages
# gpg: Signature made Thu 31 May 2018 00:02:59 BST
# gpg: using RSA key 2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/numa-next-pull-request:
tests: functional tests for QMP command set-numa-node
qmp: add set-numa-node command
qmp: permit query-hotpluggable-cpus in preconfig state
tests: extend qmp test with preconfig checks
cli: add --preconfig option
tests: qapi-schema tests for allow-preconfig
qapi: introduce new cmd option "allow-preconfig"
hmp: disable monitor in preconfig state
qapi: introduce preconfig runstate
numa: split out NumaOptions parsing into set_numa_options()
numa: postpone options post-processing till machine_run_board_init()
numa: clarify error message when node index is out of range in -numa dist, ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* start QEMU with 2 unmapped cpus,
* while in preconfig state:
  * add 2 numa nodes
  * assign cpus to them
* exit preconfig and, in the running state, check that the cpus
  are mapped correctly.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1526556607-268163-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Use the new allow-preconfig parameter in tests and make sure that
the QAPISchema can parse allow-preconfig correctly.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1526058959-41425-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The new option will be used to allow commands that are prepared to run
(or need to run) during the preconfig state. Other commands that should
be able to run in the preconfig state have to be amended so that they do
not expect the machine to be in an initialized state, or so that they can
deal with it.
For compatibility reasons, commands that don't explicitly use the new
'allow-preconfig' flag are not permitted to run in the preconfig state,
but remain allowed in all other states as they used to be.
This patch allows the following commands in the preconfig state:
qmp_capabilities
query-qmp-schema
query-commands
query-command-line-options
query-status
exit-preconfig
to allow a QMP connection, basic introspection and moving to the next
state.
PS:
set-numa-node and query-hotpluggable-cpus will be enabled later in
separate patches.
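A hedged sketch of what driving the preconfig state over QMP can look
like (socket path, greeting handling and the exact command set are
illustrative; QMP events are ignored for brevity):

import json, socket

# Assumes QEMU was started with something like:
#   qemu-system-x86_64 --preconfig -qmp unix:/tmp/qmp.sock,server,nowait
sock = socket.socket(socket.AF_UNIX)
sock.connect('/tmp/qmp.sock')
stream = sock.makefile('rw')

def qmp(cmd, **args):
    stream.write(json.dumps({'execute': cmd, 'arguments': args}) + '\n')
    stream.flush()
    return json.loads(stream.readline())

stream.readline()           # discard the QMP greeting
qmp('qmp_capabilities')
print(qmp('query-status'))  # should report the preconfig runstate
qmp('exit-preconfig')       # continue with board initialization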
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1526057503-39287-1-git-send-email-imammedo@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[ehabkost: Changed "since 2.13" to "since 3.0"]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
We're ready to declare the blockdev-create job stable. This renames the
corresponding QMP command from x-blockdev-create to blockdev-create.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
This rewrites the test case 213 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
All of the test cases stay the same as before, but in order to be able
to implement proper job handling, the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This rewrites the test case 212 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
All of the test cases stay the same as before, but in order to be able
to implement proper job handling, the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This rewrites the test case 211 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
All of the test cases stay the same as before, but in order to be able
to implement proper job handling, the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This rewrites the test case 210 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
All of the test cases stay the same as before, but in order to be able
to implement proper job handling, the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
This rewrites the test case 207 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
Most of the test cases stay the same as before (the exception being some
improved 'size' options that allow distinguishing which command created
the image), but in order to be able to implement proper job handling,
the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This rewrites the test case 206 to work with the new x-blockdev-create
job rather than the old synchronous version of the command.
All of the test cases stay the same as before, but in order to be able
to implement proper job handling, the test case is rewritten in Python.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This adds two helper functions that are useful for test cases that make
use of a non-file protocol (specifically ssh).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Add an iotests.py function that runs a job and only returns when it is
destroyed. An error is logged when the job failed and job-finalize and
job-dismiss commands are issued if necessary.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
This adds a filter function to postprocess 'qemu-img info' output
(similar to what _img_info does), and an img_info_log() function that
calls 'qemu-img info' and logs the filtered output.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
This adds a helper function that logs both the QMP request and the
received response before returning it.
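Conceptually something like this (a sketch only, assuming the same vm and
log() helpers as above):

import json

def qmp_log(vm, cmd, **kwargs):
    # Log the request, run it, log the response, then hand it back.
    log(json.dumps({'execute': cmd, 'arguments': kwargs}, sort_keys=True))
    result = vm.qmp(cmd, **kwargs)
    log(json.dumps(result, sort_keys=True))
    return result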
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This adds a helper function that returns a list of QMP events that are
already filtered through filter_qmp_event().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
This changes the x-blockdev-create QMP command so that it doesn't block
the monitor and the main loop any more, but starts a background job that
performs the image creation.
The basic job as implemented here is all that is necessary to make image
creation asynchronous and to provide a QMP interface that can be marked
stable, but it still lacks a few features that jobs usually provide: The
job will ignore pause commands and it doesn't publish more than very
basic progress yet (total-progress is 1 and current-progress advances
from 0 to 1 when the driver callback returns). These features can be
added later without breaking compatibility.
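From a client's point of view, using the job-based command could look
roughly like this (a sketch with made-up values; run_job() refers to the
helper sketched earlier):

# Create a protocol-level file through the background job and wait for
# the job to reach the 'null' state.
vm.qmp('x-blockdev-create', job_id='create0',
       options={'driver': 'file',
                'filename': '/tmp/test.img',  # placeholder path
                'size': 64 * 1024 * 1024})
run_job(vm, 'create0')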
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
So far we relied on job->ret and strerror() to produce an error message
for failed jobs. Not surprisingly, this tends to result in completely
useless messages.
This adds a Job.error field that can contain an error string for a
failing job, and a parameter to job_completed() that sets the field. As
a default, if NULL is passed, we continue to use strerror(job->ret).
All existing callers are changed to pass NULL. They can be improved in
separate patches.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Merge remote-tracking branch 'remotes/stefanberger/tags/pull-tpm-2018-05-23-4' into staging
Merge tpm 2018/05/23 v4
# gpg: Signature made Sat 26 May 2018 03:52:12 BST
# gpg: using RSA key 75AD65802A0B4211
# gpg: Good signature from "Stefan Berger <stefanb@linux.vnet.ibm.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: B818 B9CA DF90 89C2 D5CE C66B 75AD 6580 2A0B 4211
* remotes/stefanberger/tags/pull-tpm-2018-05-23-4:
test: Add test cases that use the external swtpm with CRB interface
docs: tpm: add VM save/restore example and troubleshooting guide
tpm: extend TPM TIS with state migration support
tpm: extend TPM emulator with state migration support
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a test program for testing the CRB with the external swtpm.
The 1st test case extends a PCR and reads back the value and compares
it against an expected return packet.
The 2nd test case repeats the 1st test case and then migrates the
external swtpm's state along with the VM state to a destination
QEMU and swtpm and checks that the PCR has the expected value now.
The test cases require 'swtpm' to be installed on the system and
in the PATH and 'swtpm' must support the --tpm2 option. If this is
not the case, the test will be skipped.
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Right now tests report OK status if QEMU crashes during cleanup.
Let's catch that case and fail the test.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Thomas Huth <thuth@redhat.com>
This patch introduces the host notifier support in
vhost-user-bridge. A new option (-H) is added to use
the host notifier. This is mainly used to test the
host notifier implementation in vhost user.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
qmp_to_opts() used to be a method of QMPTestCase, but recently we
started to add more Python test cases that don't make use of
QMPTestCase. In order to make the method usable there, move it to VM.
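For illustration, the helper essentially flattens a QMP-style dict into a
comma-separated option string (a sketch; the real method also takes care
of escaping):

def qmp_to_opts(obj, prefix=''):
    # {'driver': 'qcow2', 'file': {'filename': 'x'}}
    #   -> 'driver=qcow2,file.filename=x'
    opts = []
    for key, value in obj.items():
        name = prefix + key
        if isinstance(value, dict):
            opts.append(qmp_to_opts(value, name + '.'))
        else:
            opts.append('%s=%s' % (name, value))
    return ','.join(opts)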
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This adds a QMP event that is emitted whenever a job transitions from
one status to another.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The transition to the READY state was still performed in the BlockJob
layer, in the same function that sent the BLOCK_JOB_READY QMP event.
This patch brings the state transition to the Job layer and implements
the QMP event using a notifier called from the Job layer, like we
already do for other events related to state transitions.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Instead of having a 'bool ready' in BlockJob, add a function that
derives its value from the job status.
At the same time, this fixes the behaviour to match what the QAPI
documentation promises for query-block-job: 'true if the job may be
completed'. When the ready flag was introduced in commit ef6dbf1e46,
the flag never had to be reset to match the description because after
being ready, the jobs would immediately complete and disappear.
Job transactions and manual job finalisation were introduced only later.
With these changes, jobs may stay around even after having completed
(and they are not ready to be completed a second time); however, those
patches forgot to reset the ready flag.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This moves the logic that implements job transactions from BlockJob to
Job.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This doesn't actually move any transaction code to Job yet, but it
renames the type for transactions from BlockJobTxn to JobTxn and makes
them contain Jobs rather than BlockJobs.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This moves the .complete callback that tells a READY job to complete
from BlockJobDriver to JobDriver. The wrapper function job_complete()
doesn't require anything block job specific any more and can be moved
to Job.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
block_job_drain() contains a blk_drain() call which cannot be moved to
Job, so add a new JobDriver callback JobDriver.drain which has a common
implementation for all BlockJobs. In addition to this we keep the
existing BlockJobDriver.drain callback that is called by the common
drain implementation for all block jobs.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This moves the finalisation of a single job from BlockJob to Job.
Because some of this code depends on job transactions, and job transactions
call this code, we introduce some temporary calls from Job functions to
BlockJob ones. This will be fixed once transactions move to Job, too.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This renames the BlockJobCreateFlags constants, moves a few JOB_INTERNAL
checks to job_create() and the auto_{finalize,dismiss} fields from
BlockJob to Job.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
While we already moved the state related to job pausing to Job, the
functions to do so were still BlockJob-only. This commit moves them over to
Job.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
There is nothing block layer specific about block_job_sleep_ns(), so
move the function to Job.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This commit moves some core functions for dealing with the job coroutine
from BlockJob to Job. This includes primarily entering the coroutine
(both for the first time and when re-entering) and yielding explicitly and at pause
points.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Move the defer_to_main_loop functionality from BlockJob to Job.
The code can be simplified because we can use job->aio_context in
job_defer_to_main_loop_bh() now, instead of having to access the
BlockDriverState.
Probably taking the data->aio_context lock in addition was already
unnecessary in the old code because we didn't actually make use of
anything protected by the old AioContext except getting the new
AioContext, in case it changed between scheduling the BH and running it.
But it's certainly unnecessary now that the BDS isn't accessed at all
any more.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
We cannot yet move the whole logic around job cancelling to Job because
it depends on quite a few other things that are still only in BlockJob,
but we can move the cancelled field at least.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
This moves reference counting from BlockJob to Job.
In order to keep calling the BlockJob cleanup code when the job is
deleted via job_unref(), introduce a new JobDriver.free callback. Every
block job must use block_job_free() for this callback, this is asserted
in block_job_create().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
This moves BlockJob.status and the closely related functions
(block_)job_state_transition() and (block_)job_apply_verb to Job. The
two QAPI enums are renamed to JobStatus and JobVerb.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
This is the first step towards creating an infrastructure for generic
background jobs that aren't tied to a block device. For now, Job only
stores its ID and JobDriver, the rest stays in BlockJob.
The following patches will move over more parts of BlockJob to Job if
they are meaningful outside the context of a block job.
BlockJob.driver is now redundant, but this patch leaves it around to
avoid unnecessary churn. The next patches will get rid of almost all of
its uses anyway so that it can be removed later with much less churn.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
185 and 191 define a MIG_SOCKET even though they don't do anything with
migration. Remove the useless variable.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
grep for "migrate" turns up a few test cases which use migration, but
haven't been in the "migration" group so far. Add them to the group.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
The reference output file only works for file. 'qemu-img convert -p'
makes a lot more progress updates for NFS than for file, so disable the
test for NFS.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
NFS paths were only partially filtered in _filter_img_create, _img_info
and _filter_img_info, resulting in "nfs://127.0.0.1TEST_DIR/t.IMGFMT".
This adds another replacement to the sed calls that matches the test
directory not as a host path, but as an NFS URL (the prefix as used for
$TEST_IMG).
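Expressed as a hypothetical Python filter rather than sed, the extra
replacement has to match the nfs:// spelling of the test directory in
addition to the plain host path:

import re

TEST_DIR = '/var/tmp/qemu-iotests'  # example value

def filter_testdir(line):
    # Replace the NFS URL form first, then the plain host path.
    line = re.sub(r'nfs://127\.0\.0\.1' + re.escape(TEST_DIR), 'TEST_DIR', line)
    return line.replace(TEST_DIR, 'TEST_DIR')

print(filter_testdir('nfs://127.0.0.1/var/tmp/qemu-iotests/t.qcow2'))
# -> TEST_DIR/t.qcow2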
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Test cases were trying to use nfs:// URLs as local filenames, which made
every test fail for NFS. With TEST_IMG and TEST_IMG_FILE set like for
the other protocols, NFS tests can pass again.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
No need to close the TPM data socket on the emulator end, qemu will
close it after a SHUTDOWN. This avoids a race between close() and
read() in the TPM data thread.
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Currently the timer is cancelled and the block job is entered by
block_job_resume(). This behavior causes drain to run extra blockjob
iterations when the job was sleeping due to the ratelimit.
This patch leaves the job asleep when block_job_resume() is called.
Jobs can still be forcibly woken up using block_job_enter(), which is
used to cancel jobs.
After this patch drain no longer runs extra blockjob iterations. This
is the expected behavior that qemu-iotests 185 used to rely on. We
temporarily changed the 185 test output to make it pass for the QEMU
2.12 release but now it's time to address this issue.
Cc: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Message-id: 20180508135436.30140-3-stefanha@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Commit 8565c3ab53 ("qemu-iotests: fix
185") identified a race condition in a sub-test.
Similar issues also affect the other sub-tests. If disk I/O completes
quickly, it races with the QMP 'quit' command. This causes spurious
test failures because QMP events are emitted in an unpredictable order.
This test relies on QEMU internals and there is no QMP API for getting
deterministic behavior needed to make this test 100% reliable. At the
same time, the test is useful and it would be a shame to remove it.
Add sleep 0.5 to reduce the chance of races. This is not a real fix but
appears to reduce spurious failures in practice.
Cc: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180508135436.30140-2-stefanha@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
No need to write it to a file. Just need a proper firmware O:-)
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180502202051.15493-4-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
COR across nodes (that is, you have some filter node between the
actual COR target and the node that performs the COR) cannot reliably
work together with the permission system when there is no explicit COR
node that can request the WRITE_UNCHANGED permission for its child.
This is because COR (currently) sneaks its requests by the usual
permission checks, so it can work without a WRITE* permission; but if
there is a filter node in between, that will re-issue the request, which
then passes through the usual check -- and if nobody has requested a
WRITE_UNCHANGED permission, that check will fail.
There is no real direct fix apart from hoping that there is someone who
has requested that permission; in case of just the qemu-io HMP command
(and no guest device), however, that is not the case. The real fix
is to implement the copy-on-read flag through an implicitly added COR
node. Such a node can request the necessary permissions as shown in
this test.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180421132929.21610-10-mreitz@redhat.com
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
iotest 197 tests copy-on-read using the (now old) copy-on-read flag.
Copy it to 215 and modify it to use the COR filter driver instead.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180421132929.21610-9-mreitz@redhat.com
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20180421132929.21610-8-mreitz@redhat.com
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
userfaultfd support depends on the host kernel, so it may not be
available. If so, 181 and 201 should be skipped.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180406151731.4285-3-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Currently, common.qemu only allows matching for results indicating
success. The only way to fail is by provoking a timeout. However,
sometimes we do have a defined failure output and can match for that,
which saves us from having to wait for the timeout in case of failure.
Because failure can sometimes just result in a _notrun in the test, it
is actually important to care about being able to fail quickly.
Also, sometimes we simply do not get any specific output in case of
success. The only way to handle this currently would be to define an
error message as the string to look for, which means that actual success
results in a timeout. This is really bad because it unnecessarily slows
down a succeeding test.
Therefore, this patch adds a new parameter $success_or_failure to
_timed_wait_for and _send_qemu_cmd. Setting this to a non-empty string
makes both commands expect two match parameters: If the first matches,
the function succeeds. If the second matches, the function fails.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180406151731.4285-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
The L2 and refcount caches have default sizes that can be overridden
using the l2-cache-size and refcount-cache-size options (an additional
parameter named cache-size sets the combined size of both caches).
Unless forced by one of the aforementioned parameters, QEMU will set
the unspecified sizes so that the L2 cache is 4 times larger than the
refcount cache.
This is based on the premise that the refcount metadata needs to be
only a fourth of the L2 metadata to cover the same amount of disk
space. This is incorrect for two reasons:
a) The amount of disk covered by an L2 table depends solely on the
cluster size, but in the case of a refcount block it depends on
the cluster size *and* the width of each refcount entry.
The 4/1 ratio is only valid with 16-bit entries (the default).
b) When we talk about disk space and L2 tables we are talking about
guest space (L2 tables map guest clusters to host clusters),
whereas refcount blocks are used for host clusters (including
L1/L2 tables and the refcount blocks themselves). On a fully
populated (and uncompressed) qcow2 file, image size > virtual size
so there are more refcount entries than L2 entries.
Problem (a) could be fixed by adjusting the algorithm to take into
account the refcount entry width. Problem (b) could be fixed by
increasing a bit the refcount cache size to account for the clusters
used for qcow2 metadata.
However this patch takes a completely different approach and instead
of keeping a ratio between both cache sizes it assigns as much as
possible to the L2 cache and the remainder to the refcount cache.
The reason is that L2 tables are used for every single I/O request
from the guest and the effect of increasing the cache is significant
and clearly measurable. Refcount blocks are however only used for
cluster allocation and internal snapshots and in practice are accessed
sequentially in most cases, so the effect of increasing the cache is
negligible (even when doing random writes from the guest).
So, make the refcount cache as small as possible unless the user
explicitly asks for a larger one.
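To put rough numbers on that premise (a back-of-the-envelope sketch using
the usual qcow2 formulas, assuming 64 KB clusters, 16-bit refcount entries
and a host size equal to the virtual size):

cluster_size  = 64 * 1024  # bytes
refcount_bits = 16
disk_size     = 1024 ** 4  # 1 TB

# One 8-byte L2 entry per guest cluster:
l2_cache_needed = disk_size * 8 // cluster_size
# One refcount_bits-wide entry per host cluster:
refcount_cache_needed = disk_size * refcount_bits // 8 // cluster_size

print(l2_cache_needed // 1024 ** 2,        # 128 MB of L2 metadata
      refcount_cache_needed // 1024 ** 2)  # vs. 32 MB of refcount metadata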
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 9695182c2eb11b77cb319689a1ebaa4e7c9d6591.1523968389.git.berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Commit abd3622cc0 added a case to 122
regarding how the qcow2 driver handles an incorrect compressed data
length value. This does not really fit into 122, as that file is
supposed to contain qemu-img convert test cases, which this case is not.
So this patch splits it off into its own file; maybe we will even get
more qcow2-only compression tests in the future.
Also, that test case does not work with refcount_bits=1, so mark that
option as unsupported.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180406164108.26118-1-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* Don't silently truncate extremely long words in the command line
* dtc configure fixes
* MemoryRegionCache second try
* Deprecated option removal
* add support for Hyper-V reenlightenment MSRs
# gpg: Signature made Fri 11 May 2018 13:33:46 BST
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (29 commits)
rename included C files to foo.inc.c, remove osdep.h
pc-dimm: fix error messages if no slots were defined
build: Silence dtc directory creation
shippable: Remove Debian 8 libfdt kludge
configure: Display if libfdt is from system or git
configure: Really use local libfdt if the system one is too old
i386/kvm: add support for Hyper-V reenlightenment MSRs
qemu-doc: provide details of supported build platforms
qemu-options: Remove deprecated -no-kvm-irqchip
qemu-options: Remove deprecated -no-kvm-pit-reinjection
qemu-options: Bail out on unsupported options instead of silently ignoring them
qemu-options: Remove remainders of the -tdf option
qemu-options: Mark -virtioconsole as deprecated
target/i386: sev: fix memory leaks
opts: don't silently truncate long option values
opts: don't silently truncate long parameter keys
accel: use g_strsplit for parsing accelerator names
update-linux-headers: drop hyperv.h
qemu-thread: always keep the posix wrapper layer
exec: reintroduce MemoryRegion caching
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We already have an extensive mirror test (041) which does cover
cancelling a mirror job, especially after it has emitted the READY
event. However, it does not check what exact events are emitted after
block-job-cancel is executed. More importantly, it does not use
throttling to ensure that it covers the case of block-job-cancel before
READY.
It would be possible to add this case to 041, but considering it is
already our largest test file, it makes sense to create a new file for
these cases.
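The core of the new cases is roughly the following flow (a sketch with
made-up device names and a placeholder target path, reusing iotests-style
helpers):

# Throttle the mirror so the job cannot reach READY, then cancel it and
# check that BLOCK_JOB_CANCELLED (not BLOCK_JOB_COMPLETED) is emitted.
vm.qmp('drive-mirror', device='drive0', target='/tmp/mirror.img',
       sync='full', speed=65536)
vm.qmp('block-job-cancel', device='drive0', force=True)
event = vm.event_wait('BLOCK_JOB_CANCELLED')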
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180501220509.14152-3-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
Commit b76e4458b1 ("block/mirror: change
the semantic of 'force' of block-job-cancel") accidentally removed the
ratelimit in the mirror job.
Reintroduce the ratelimit but keep the block-job-cancel force=true
behavior that was added in commit
b76e4458b1.
Note that block_job_sleep_ns() returns immediately when the job is
cancelled. Therefore it's safe to unconditionally call
block_job_sleep_ns() - a cancelled job does not sleep.
This commit fixes the non-deterministic qemu-iotests 185 output. The
test relies on the ratelimit to make the job sleep until the 'quit'
command is processed. Previously the job could complete before the
'quit' command was received since there was no ratelimit.
Cc: Liang Li <liliang.opensource@gmail.com>
Cc: Jeff Cody <jcody@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180424123527.19168-1-stefanha@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2018-05-04' into staging
QAPI patches for 2018-05-04
# gpg: Signature made Fri 04 May 2018 08:59:16 BST
# gpg: using RSA key 3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>"
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-qapi-2018-05-04:
qapi: deprecate CpuInfoFast.arch
qapi: discriminate CpuInfoFast on SysEmuTarget, not CpuInfoArch
qapi: change the type of TargetInfo.arch from string to enum SysEmuTarget
qapi: add SysEmuTarget to "common.json"
qapi: fill in CpuInfoFast.arch in query-cpus-fast
qobject: Modify qobject_ref() to return obj
qobject: Replace qobject_incref/QINCREF qobject_decref/QDECREF
qobject: use a QObjectBase_ struct
qobject: Ensure base is at offset 0
qobject: Use qobject_to() instead of type cast
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that we can safely call QOBJECT() on QObject * as well as its
subtypes, we can have macros qobject_ref() / qobject_unref() that work
everywhere instead of having to use QINCREF() / QDECREF() for QObject
and qobject_incref() / qobject_decref() for its subtypes.
The replacement is mechanical, except I broke a long line, and added a
cast in monitor_qmp_cleanup_req_queue_locked(). Unlike
qobject_decref(), qobject_unref() doesn't accept void *.
Note that the new macros evaluate their argument exactly once, thus no
need to shout them.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180419150145.24795-4-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Rebased, semantic conflict resolved, commit message improved]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
By moving the base fields to a QObjectBase_, QObject can be a type
which also has a 'base' field. This allows writing a generic QOBJECT()
macro that will work with any QObject type, including QObject
itself. The container_of() macro ensures that the object to cast has a
QObjectBase_ base field, giving some type safety guarantees. QObject
must have no members but QObjectBase_ base, or else QOBJECT() breaks.
QObjectBase_ is not a typedef and uses a trailing underscore to make
it obvious it is not for normal use and to avoid potential abuse.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180419150145.24795-3-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
The proper way to convert from (abstract) QObject to a (concrete)
subtype is qobject_to(). Look for offenders that type cast instead:
$ git-grep '(Q[A-Z][a-z]* \*)'
hmp.c: qmp_device_add((QDict *)qdict, NULL, &err);
include/qapi/qmp/qobject.h: return (QObject *)obj;
qobject/qobject.c:static void (*qdestroy[QTYPE__MAX])(QObject *) = {
tests/check-qdict.c: dst = (QDict *)qdict_crumple(src, &error_abort);
The first two cast away const, the third isn't a type cast. Fix the
fourth.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180426152805.8469-1-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
The consoles ("sclpconsole" and "sclplmconsole") can only be configured
with "-device" and "-chardev" so far. Other machines use the convenience
option "-serial" to configure the default consoles, even for virtual
consoles like spapr-vty on the pseries machine. So let's support this
option on s390x, too. This way we can easily enable the serial console
here again with "-nodefaults", for example:
qemu-system-s390x -no-shutdown -nographic -nodefaults -serial mon:stdio
... which is way shorter than typing:
qemu-system-s390x -no-shutdown -nographic -nodefaults \
-chardev stdio,id=c1,mux=on -device sclpconsole,chardev=c1 \
-mon chardev=c1
The -serial parameter can also be used if you only want to see the QEMU
monitor on stdio without using -nodefaults, but not the console output.
That's something that is pretty impossible with the current code today:
qemu-system-s390x -no-shutdown -nographic -serial none
While we're at it, this patch also maps the second -serial option to the
"sclplmconsole", so that there is now an easy way to configure this second
console on s390x, too, for example:
qemu-system-s390x -no-shutdown -nographic -serial null -serial mon:stdio
Additionally, the new code is also smaller than the old one and we have
less s390x-specific code in vl.c :-)
I've also checked that migration still works as expected by migrating
a guest with console output back and forth between a qemu-system-s390x
that has this patch and an instance without this patch.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1524754794-28005-1-git-send-email-thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This patch just requests the blocktime calculation,
and checks it when the UFFD_FEATURE_THREAD_ID feature is set
on the host.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1521742647-25550-6-git-send-email-a.perevalov@samsung.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Improve and fix 169:
- use MIGRATION events instead of RESUME
- make a TODO: enable dirty-bitmaps capability for offline case
- recreate vm_b without -incoming near test end
This (likely) fixes racy failures of at least the following types:
- timeout on waiting for RESUME event
- sha256 mismatch on line 136 (142 after this patch)
- fail to self.vm_b.launch() on line 135 (141 now after this patch)
And it definitely fixes the stray 'cat' processes left behind after the test finishes.
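The first point boils down to waiting for MIGRATION state-change events
instead of RESUME, roughly like this (a sketch assuming an iotests-style
vm object and that the 'events' migration capability has been enabled):

# Wait until the migration has actually finished rather than keying off
# the destination's RESUME event.
while True:
    event = vm.event_wait('MIGRATION')
    if event['data']['status'] in ('completed', 'failed'):
        break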
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20180411122606.367301-3-vsementsov@virtuozzo.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Commit 4486e89c21 ("vl: introduce
vm_shutdown()") added a bdrv_drain_all() call. As a side-effect of the
drain operation the block job iterates one more time than before. The
185 output no longer matches and the test is failing now.
It may be possible to avoid the superfluous block job iteration, but
that type of patch is not suitable late in the QEMU 2.12 release cycle.
This patch simply updates the 185 output file. The new behavior is
correct, just not optimal, so make the test pass again.
Fixes: 4486e89c21 ("vl: introduce vm_shutdown()")
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
qemu-iotests doesn't support dmg, and the dmg block driver doesn't
support image creation. Two test cases declare dmg as supported, but
that's obviously wrong for both reasons. Remove the declaration.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Blacklist these formats, as they don't support image creation, as they
say:
> ./qemu-img create -f bochs x 1m
qemu-img: x: Format driver 'bochs' does not support image creation
> ./qemu-img create -f cloop x 1m
qemu-img: x: Format driver 'cloop' does not support image creation
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Support "generic" formats like in bash tests with their
_supported_fmt generic
The test, supporting "generic" formats will run if IMGFMT_GENERIC =
true, which is default, except for bochs and cloop. However, you can
use verify_image_format(['generic', 'bochs']), which will run for all
except cloop (for this moment).
Also, add an assert (we don't want set both arguments) and remove
duplication.
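A sketch of how such a check might be structured (names follow the
description above; imgfmt, imgfmt_generic and notrun() are assumed to come
from the test environment, and the real iotests.py helper may differ):

def verify_image_format(supported_fmts=(), unsupported_fmts=()):
    # Setting both lists at once makes no sense, hence the assert.
    assert not (supported_fmts and unsupported_fmts)
    if 'generic' in supported_fmts and imgfmt_generic == 'true':
        return
    not_sup = supported_fmts and (imgfmt not in supported_fmts)
    if not_sup or (imgfmt in unsupported_fmts):
        notrun('not suitable for this image format: %s' % imgfmt)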
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>