By leveraging Berkeley's softfloat and testfloat.
With this we get decent coverage of softfloat.c:
$ ./fp-test -r even: 67.22% coverage
$ ./fp-test -r all: 73.11% coverage
Note that we do not yet test parts of softfloat.c that aren't
in the original softfloat library, namely:
- denormal inputs
- *_to_int16/uint16 conversions
- scalbn for fixed point
- muladd variants
- min/max
- exp2
- log2
- float*_compare (except float16_compare)
Signed-off-by: Emilio G. Cota <cota@braap.org>
[rth: Add the new modules to git_submodules.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These are BSD-licensed so we can add them as submodules.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This test failed before "fix iterating properties over a class".
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
And factor out a common function used by the following class-properties
iterator test.
Fix uninitialized "seentype" variable.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
This reverts commit 0ea47d0f36.
scripts/argparse.py was removed from the tree, so we don't
need this hack anymore.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180618225131.13113-4-ehabkost@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This reverts commit c2d3189667.
scripts/argparse.py was removed from the tree, so we don't need
this hack anymore.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180618225131.13113-3-ehabkost@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This test exhibits a regression fixed by the previous reverts.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20180817135224.22971-5-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Peter reported a test failure on FreeBSD with the new reconnect test:
MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}
gtester -k --verbose -m=quick tests/test-char
TEST: tests/test-char... (pid=16190)
/char/null: OK
/char/invalid: OK
/char/ringbuf: OK
/char/mux: OK
/char/stdio: OK
/char/pipe: OK
/char/file: OK
/char/file-fifo: OK
/char/udp: OK
/char/serial: OK
/char/hotswap: OK
/char/socket/basic: OK
/char/socket/reconnect: FAIL
GTester: last random seed: R02S521380d9c12f1dac3ad1763bf5665c27
(pid=16367)
/char/socket/fdpass: OK
FAIL: tests/test-char
**
ERROR:tests/test-char.c:353:char_socket_test_common: assertion failed:
(object_property_get_bool(OBJECT(chr_client), "connected",
&error_abort))
It turns out that the socket test code checks both server and client
connection states, but doesn't wait for both.
Wait for the client side as well.
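A minimal sketch of the extra wait; the polling loop and the chr_client
name are illustrative, not the exact hunk from the patch:
    /* also wait until the client side reports "connected" */
    while (!object_property_get_bool(OBJECT(chr_client), "connected",
                                     &error_abort)) {
        main_loop_wait(false);
    }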
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20180823143125.16767-5-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Run some memfd-related checks before registering hostmem-memfd &
various properties. This will help libvirt to figure out what the host
is supposed to be capable of.
qemu_memfd_check() is changed to a less optimized version: since it is
now called with various flags, it no longer caches the result.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20180906161415.8543-1-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
To avoid undefined behaviour.
Note that these "atomics" are atomic in the "access once" sense.
The variables are updated by a single thread at a time, so no
"full" atomics are necessary.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20180910232752.31565-6-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
qemu_event_reset() must be called before the AIO request in a different
iothread is submitted. Otherwise the request could be completed before
we do the qemu_event_reset() and the test would hang in
qemu_event_wait().
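The required ordering, sketched with made-up names (the submission
helper and event variable are only illustrative):
    static QemuEvent done_event;

    qemu_event_reset(&done_event);                        /* reset first... */
    aio_bh_schedule_oneshot(ctx, submit_aio_request, NULL); /* ...then submit */
    qemu_event_wait(&done_event);  /* completion handler calls qemu_event_set() */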
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Tested-by: Max Reitz <mreitz@redhat.com>
Recently, the test case has started failing because some job-related
functions want to drop the AioContext lock even though it hasn't been
taken:
(gdb) bt
#0 0x00007f51c067c9fb in raise () from /lib64/libc.so.6
#1 0x00007f51c067e77d in abort () from /lib64/libc.so.6
#2 0x0000558c9d5dde7b in error_exit (err=<optimized out>, msg=msg@entry=0x558c9d6fe120 <__func__.18373> "qemu_mutex_unlock_impl") at util/qemu-thread-posix.c:36
#3 0x0000558c9d6b5263 in qemu_mutex_unlock_impl (mutex=mutex@entry=0x558c9f3999a0, file=file@entry=0x558c9d6fd36f "util/async.c", line=line@entry=516) at util/qemu-thread-posix.c:96
#4 0x0000558c9d6b0565 in aio_context_release (ctx=ctx@entry=0x558c9f399940) at util/async.c:516
#5 0x0000558c9d5eb3da in job_completed_txn_abort (job=0x558c9f68e640) at job.c:738
#6 0x0000558c9d5eb227 in job_finish_sync (job=0x558c9f68e640, finish=finish@entry=0x558c9d5eb8d0 <job_cancel_err>, errp=errp@entry=0x0) at job.c:986
#7 0x0000558c9d5eb8ee in job_cancel_sync (job=<optimized out>) at job.c:941
#8 0x0000558c9d64d853 in replication_close (bs=<optimized out>) at block/replication.c:148
#9 0x0000558c9d5e5c9f in bdrv_close (bs=0x558c9f41b020) at block.c:3420
#10 bdrv_delete (bs=0x558c9f41b020) at block.c:3629
#11 bdrv_unref (bs=0x558c9f41b020) at block.c:4685
#12 0x0000558c9d62a3f3 in blk_remove_bs (blk=blk@entry=0x558c9f42a7c0) at block/block-backend.c:783
#13 0x0000558c9d62a667 in blk_delete (blk=0x558c9f42a7c0) at block/block-backend.c:402
#14 blk_unref (blk=0x558c9f42a7c0) at block/block-backend.c:457
#15 0x0000558c9d5dfcea in test_secondary_stop () at tests/test-replication.c:478
#16 0x00007f51c1f13178 in g_test_run_suite_internal () from /lib64/libglib-2.0.so.0
#17 0x00007f51c1f1337b in g_test_run_suite_internal () from /lib64/libglib-2.0.so.0
#18 0x00007f51c1f1337b in g_test_run_suite_internal () from /lib64/libglib-2.0.so.0
#19 0x00007f51c1f13552 in g_test_run_suite () from /lib64/libglib-2.0.so.0
#20 0x00007f51c1f13571 in g_test_run () from /lib64/libglib-2.0.so.0
#21 0x0000558c9d5de31f in main (argc=<optimized out>, argv=<optimized out>) at tests/test-replication.c:581
It is not yet clear whether this should really be considered a bug in the
test case or whether blk_unref() should work for callers that haven't
taken the AioContext lock, but in order to fix the build tests quickly,
just take the AioContext lock around blk_unref().
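A sketch of the workaround described above (not necessarily the exact
hunk):
    AioContext *ctx = blk_get_aio_context(blk);

    aio_context_acquire(ctx);
    blk_unref(blk);
    aio_context_release(ctx);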
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Currently, the default values for werror and rerror have to be set
explicitly with blk_set_on_error() by the callers of blk_new(). The only
caller actually doing this is blockdev_init(), which is called for
BlockBackends created using -drive.
In particular, anonymous BlockBackends created with
-device ...,drive=<node-name> didn't get the correct default set and
instead defaulted to the integer value 0 (= BLOCKDEV_ON_ERROR_REPORT).
This is the intended default for rerror anyway, but the default for
werror should be BLOCKDEV_ON_ERROR_ENOSPC.
Set the defaults in blk_new() instead so that they apply no matter what
way the BlockBackend was created.
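The idea, sketched; the field names are assumed from BlockBackend:
    /* in blk_new(): */
    blk->on_read_error  = BLOCKDEV_ON_ERROR_REPORT;
    blk->on_write_error = BLOCKDEV_ON_ERROR_ENOSPC;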
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Sufficient L2 cache can noticeably improve the performance when using
large images with frequent I/O.
Previously, unless 'cache-size' was specified and was large enough, the
L2 cache was set to a certain size without taking the virtual image size
into account.
Now, the L2 cache assignment is aware of the virtual size of the image,
and will cover the entire image, unless the cache size needed for that is
larger than a certain maximum. This maximum is set to 1 MB by default
(enough to cover an 8 GB image with the default cluster size) but can
be increased or decreased using the 'l2-cache-size' option. This option
was previously documented as the *maximum* L2 cache size, and this patch
makes it behave as such, instead of as a constant size. Also, the
existing option 'cache-size' can limit the sum of both L2 and refcount
caches, as previously.
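For reference, the L2 metadata needed to cover a whole image is one
8-byte L2 entry per cluster:
    l2_bytes = virtual_size / cluster_size * 8
    e.g.       8 GiB / 64 KiB * 8 B = 131072 * 8 B = 1 MiB
which is where the 1 MB default maximum comes from.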
Signed-off-by: Leonid Bloch <lbloch@janustech.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Image locking errors happening at device initialization time don't say
which file cannot be locked. For instance,
-device scsi-disk,drive=drive-1: Failed to get shared "write" lock
Is another process using the image?
could refer to either the overlay image or its backing image.
Hoist the error_append_hint to the caller of raw_check_lock_bytes, where
the file name is known, and include it in the error hint.
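A sketch of the resulting hint (the exact wording is assumed):
    error_append_hint(errp, "Is another process using the image [%s]?\n",
                      bs->filename);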
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Merge remote-tracking branch 'remotes/dgilbert/tags/pull-migration-20180926a' into staging
Migration pull 2018-09-26
This supersedes Juan's pull from the 13th
# gpg: Signature made Wed 26 Sep 2018 18:07:30 BST
# gpg: using RSA key 0516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>"
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7
* remotes/dgilbert/tags/pull-migration-20180926a:
migration/ram.c: Avoid taking address of fields in packed MultiFDInit_t struct
migration: fix the compression code
migration: fix QEMUFile leak
tests/migration: Speed up the test on ppc64
migration: cleanup in error paths in loadvm
migration/postcopy: Clear have_listen_thread
tests/migration: Add migration-test header file
tests/migration: Support cross compilation in generating boot header file
tests/migration: Convert x86 boot block compilation script into Makefile
migration: use save_page_use_compression in flush_compressed_data
migration: show the statistics of compression
migration: do not flush_compressed_data at the end of iteration
Add a hint message to loadvm and exits on failure
migration: handle the error condition properly
migration: fix calculating xbzrle_counters.cache_miss_rate
migration/rdma: Fix uninitialised rdma_return_path
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The SLOF boot process is always quite slow ... but we can speed it up
a little bit by specifying "-nodefaults" and by using the "nvramrc"
variable instead of "boot-command" (since "nvramrc" is evaluated earlier
in the SLOF boot process than "boot-command").
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1537204330-16076-1-git-send-email-thuth@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Precomputing the hash values allows us to perform more frequent
accesses to the hash table, thereby reaching higher throughputs.
We keep the old behaviour by default, since (1) we might confuse
users if they measured a speedup without changing anything in
the QHT implementation, and (2) benchmarking the hash function
"on line" is also valuable.
Before:
$ taskset -c 0 tests/qht-bench -n 1
Throughput: 38.18 MT/s
After:
$ taskset -c 0 tests/qht-bench -n 1
Throughput: 38.16 MT/s
After (with precomputing):
$ taskset -c 0 tests/qht-bench -n 1 -p
Throughput: 50.87 MT/s
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Accessing the HT from an iterator almost always results
in a deadlock. Given that only one qht-internal function
uses this argument, drop it from the interface.
Suggested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Perform first the tests that exercise code paths that are
easier to hit at small table sizes, and then resize the table
to speed up subsequent tests. If this resize is not too large,
we can make the test faster with no code coverage loss.
- With gcov enabled:
Before: 20.568s, 90.28% qht.c coverage
After: 5.168s, 93.06% qht.c coverage
The coverage increase is entirely due to calling qht_resize,
which we weren't calling before. Note that the code paths
that remain to be tested are either error handling or
can only occur when several threads are accessing the
hash table concurrently (e.g. seqlock retry, trylock fail).
- Without gcov:
Before: 1.987s
After: 0.528s
The speedup is almost the same as with gcov, although the
"before" run is a lot faster.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This improves coverage by one (!) LoC in qht.c, bringing the
coverage rate up from 90.00% to 90.28%.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This improves qht.c code coverage from 89.44% to 90.00%.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This patch moves the migration-test related settings from the
migration-test.c file to a new header file.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Wei Huang <wei@redhat.com>
Message-Id: <1536174934-26022-4-git-send-email-wei@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Recently a new configure option, CROSS_CC_GUEST, was added to
$(TARGET)-softmmu/config-target.mak to support TCG-related tests. This
patch tries to leverage this option to support cross compilation when the
migration boot block file is being re-generated:
* The x86 related files are moved to a new sub-dir (named ./i386).
* A new top-layer Makefile is created in tests/migration/ directory.
This Makefile searches and parses CROSS_CC_GUEST to generate CROSS_PREFIX.
The CROSS_PREFIX, if available, is then passed to migration/$ARCH/Makefile.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Wei Huang <wei@redhat.com>
Message-Id: <1536174934-26022-3-git-send-email-wei@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
The x86 boot block header is currently generated with a shell script.
To better support other CPUs (e.g. aarch64), we convert the script
into a Makefile. This allows us to 1) support cross-compilation easily,
and 2) avoid creating a script file for every architecture.
Note that, in the new design, the cross compiler prefix can be specified
by setting CROSS_PREFIX on the "make" command line. Also, to let the gcc
pre-processor include the C-style header correctly, the
x86-a-b-bootblock.s file is renamed from .s to .S.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Wei Huang <wei@redhat.com>
Message-Id: <1536174934-26022-2-git-send-email-wei@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-cpu max works with any accelerator, so we don't need
to use it only conditionally if not using KVM. Just use
it all the time.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180820155554.23476-1-peter.maydell@linaro.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Merge remote-tracking branch 'remotes/huth-gitlab/tags/pull-request-2018-09-25' into staging
- Deprecate the usage of a network backend via "name" instead of "id"
- Deprecate the "enforce-config-section" machine parameter
- Re-enable the wdt_ib700, endianness and vmxnet3 qtests
- Some trivial fixes and doc update patches that crossed my way
# gpg: Signature made Tue 25 Sep 2018 16:58:42 BST
# gpg: using RSA key 2ED9D774FE702DB5
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>"
# gpg: aka "Thomas Huth <thuth@redhat.com>"
# gpg: aka "Thomas Huth <huth@tuxfamily.org>"
# gpg: aka "Thomas Huth <th.huth@posteo.de>"
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* remotes/huth-gitlab/tags/pull-request-2018-09-25:
Revert "check: Move VMXNET3 test to common"
Revert "check: Move endianess test to common"
Revert "check: Move wdt_ib700 test to common"
tests/migration: Speed up the test on ppc64
hw/qdev-core: Fix description of instance_init
qdev: fix a typo in comment
docs: Fix some typos (most found by codespell)
trivial: Make bios files and source files non-executable
memfd: fix possible usage of the uninitialized file descriptor
hw/core/machine: Officially deprecate the enforce-config-section parameter
net/slirp: Deprecate the [hub_id name] parameter tuple
net: Deprecate the "name" parameter of -net
Makefile: Add missing dependency for qemu-deprecated.texi
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This reverts commit 7a066770f5.
The patch did not work as expected: The vmxnet3 test is currently
not run at all anymore.
Signed-off-by: Thomas Huth <thuth@redhat.com>
This reverts commit 669cc71000.
The patch did not work as expected: The endianness test is currently
not run at all anymore.
Signed-off-by: Thomas Huth <thuth@redhat.com>
This reverts commit ee1f6c812b.
The patch did not work as expected: The wdt_ib700 test is currently
not run at all anymore.
Signed-off-by: Thomas Huth <thuth@redhat.com>
The SLOF boot process is always quite slow ... but we can speed it up
a little bit by specifying "-nodefaults" and by using the "nvramrc"
variable instead of "boot-command" (since "nvramrc" is evaluated earlier
in the SLOF boot process than "boot-command").
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
For the block job drain test, don't only test draining the source and
the target node, but create a backing chain for the source
(source_backing <- source <- source_overlay) and test draining each of
the nodes in it.
When using iothreads, the source node (and therefore the job) is in a
different AioContext than the drain, which happens from the main
thread. This way, the main thread waits in AIO_WAIT_WHILE() for the
iothread to make progress, and aio_wait_kick() is required to notify it.
The test validates that calling bdrv_wakeup() for a child or a parent
node will actually notify AIO_WAIT_WHILE() instead of letting it hang.
Increase the sleep time a bit (to 1 ms) because the test case is racy
and, with the shorter sleep, it didn't reproduce the bug it is supposed
to test (at least for me, under 'rr record -n').
This was because bdrv_drain_invoke_entry() (in the main thread) was only
called after the job had already reached the pause point, so we got a
bdrv_dec_in_flight() from the main thread and the additional
aio_wait_kick() when the job becomes idle (that we really wanted to test
here) wasn't even necessary any more to make progress.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Commit 89bd030533 changed the test case from using job_sleep_ns() to
using qemu_co_sleep_ns() instead. Also, block_job_sleep_ns() became
job_sleep_ns() in commit 5d43e86e11.
In both cases, some comments in the test case were not updated. Do that
now.
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
This adds tests for calling AIO_WAIT_WHILE() in the .commit and .abort
callbacks. Both reasons why .abort could be called for a single job are
tested: Either .run or .prepare could return an error.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This is a regression test for a deadlock that could occur in callbacks
called from the aio_poll() in bdrv_drain_poll_top_level(). The
AioContext lock wasn't released and therefore would be taken a second
time in the callback. This would cause a possible AIO_WAIT_WHILE() in
the callback to hang.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
This is a regression test for a deadlock that occurred in block job
completion callbacks (via job_defer_to_main_loop) because the AioContext
lock was taken twice: once in job_finish_sync() and then again in
job_defer_to_main_loop_bh(). This would cause AIO_WAIT_WHILE() to hang.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
All callers in QEMU proper hold the AioContext lock when calling
job_finish_sync(). test-blockjob should do the same when it calls the
function indirectly through job_cancel_sync().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
This extends the existing drain test with a block job to include
variants where the block job runs in a different AioContext.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
We just fixed a bug that was causing a use-after-free when QEMU was
unable to create a temporary snapshot. This is a test case for this
scenario.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds some tests for block-commit with the new options top-node and
base-node (taking node names) instead of top and base (taking file
names).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The exit callback in this test actually only performs cleanup.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180906130225.5118-11-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
We remove the exit callback and the completed boolean along with it.
We can simulate it just fine by waiting for the job to defer to the
main loop, and then giving it one final kick to get the main loop
portion to run.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180906130225.5118-10-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
These tests don't actually test blockjobs anymore, they test
generic Job lifetimes. Change the types accordingly.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180906130225.5118-9-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.1-20180925' into staging
ppc patch queue 2018-09-25
Here are the accumulated ppc target patches for the last several
weeks. Highlights are:
* A number of 40p / PReP cleanups
* Preliminary irq rework on the pseries machine towards the new
XIVE interrupt controller
There are a few patches which make small changes to generic device and
arm code as prerequisites to the 40p interrupt routing cleanup. They
have acks from the relevant maintainers.
# gpg: Signature made Tue 25 Sep 2018 08:00:06 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-3.1-20180925:
40p: add fixed IRQ routing for LSI SCSI device
lsi53c895a: add optional external IRQ via qdev
scsi: remove unused lsi53c895a_create() and lsi53c810_create() functions
scsi: move lsi53c8xx_create() callers to lsi53c8xx_handle_legacy_cmdline()
scsi: add lsi53c8xx_handle_legacy_cmdline() function
sm501: Adjust endianness of pixel value in rectangle fill
spapr_pci: add an extra 'nr_msis' argument to spapr_populate_pci_dt
spapr: increase the size of the IRQ number space
spapr: introduce a spapr_irq class 'nr_msis' attribute
40p: use OR gate to wire up raven PCI interrupts
raven: some minor IRQ-related tidy-ups
hw/ppc: on 40p machine, change default firmware to OpenBIOS
target/ppc/cpu-models: Re-group the 970 CPUs together again
Record history of ppcemb target in common.json
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is a small test that will check for the ability to parse
both legacy and modern options for rbd.
The way the test is set up is for failure to occur, but without
having to wait for a timeout against a non-existent rbd server. The error
messages in the success path show that the arguments were parsed.
The failure behavior prior to the patch series that includes this test is
qemu-img complaining about mandatory options (e.g. 'pool') not being
provided.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Message-id: f830580e339b974a83ed4870d11adcdc17f49a47.1536704901.git.jcody@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
OpenBIOS gained 40p support in commit 5b20e4cace.
Use it, instead of relying on an unmaintained and very limited firmware.
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.1-20180907' into staging
ppc patch queue 2018-09-07
Here's another pull request for qemu-3.1. No real theme here, just an
assortment of various fixes. Probably the most notable thing is the
removal of the ppcemb target which has been deprecated for some time
now.
# gpg: Signature made Fri 07 Sep 2018 08:30:02 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-3.1-20180907:
target-ppc: Extend HWCAP2 bits for ISA 3.0
target/ppc/kvm: set vcpu as online/offline
Fix a deadlock case in the CPU hotplug flow
spapr: Correct reference count on spapr-cpu-core
mac_newworld: implement custom FWPathProvider
uninorth: add ofw-addr property to allow correct fw path generation
mac_oldworld: implement custom FWPathProvider
grackle: set device fw_name and address for correct fw path generation
macio: add addr property to macio IDE object
macio: add macio bus to help with fw path generation
macio: move MACIOIDEState type declarations to macio.h
spapr_pci: fix potential NULL pointer dereference
spapr: fix leak of rev array
ppc: Remove deprecated ppcemb target
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When the lexer chokes on an input character, it consumes the
character, emits a JSON error token, and enters its start state. This
can lead to suboptimal error recovery. For instance, input
0123 ,
produces the tokens
JSON_ERROR 01
JSON_INTEGER 23
JSON_COMMA ,
Make the lexer skip characters after a lexical error until a
structural character ('[', ']', '{', '}', ':', ','), an ASCII control
character, '\xFE', or '\xFF'.
Note that we must not skip ASCII control characters, '\xFE', or '\xFF',
because docs/interop/qmp-spec.txt documents them as forcing the JSON
parser back into a known-good state.
The lexer now produces
JSON_ERROR 01
JSON_COMMA ,
Update qmp-test for the nicer error recovery: QMP now reports just one
error for input %p instead of two. Also drop the newline after %p; it
was needed to tease out the second error.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180831075841.13363-5-armbru@redhat.com>
[Conflict with commit ebb4d82d88 resolved]
Markus spotted some issues with this new test case which
unfortunately I didn't notice had been flagged until after
I'd applied the pull request. Revert the relevant commit.
This reverts commit 2b70ea9276.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20180829' into staging
- various fixes and improvements in the tcg code
- split off the individual virtio-ccw devices into separate files
# gpg: Signature made Wed 29 Aug 2018 10:38:03 BST
# gpg: using RSA key DECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>"
# gpg: aka "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# gpg: aka "Cornelia Huck <cohuck@kernel.org>"
# gpg: aka "Cornelia Huck <cohuck@redhat.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20180829:
target/s390x: use regular spaces in translate.c
hw/s390x: Move virtio-ccw-blk code to a separate file
hw/s390x: Move virtio-ccw-net code to a separate file
hw/s390x: Move virtio-ccw-input code to a separate file
hw/s390x: Move virtio-ccw-gpu code to a separate file
hw/s390x: Move vhost-vsock-ccw code to a separate file
hw/s390x: Move virtio-ccw-crypto code to a separate file
hw/s390x: Move virtio-ccw-9p code to a separate file
hw/s390x: Move virtio-ccw-rng code to a separate file
hw/s390x: Move virtio-ccw-scsi code to a separate file
hw/s390x: Move virtio-ccw-balloon code to a separate file
hw/s390x: Move virtio-ccw-serial code to a separate file
hw/s390x/virtio-ccw: Consolidate calls to virtio_ccw_unrealize()
target/s390x: fix PACK reading 1 byte less and writing 1 byte more
target/s390x: add EX support for TRT and TRTR
target/s390x: fix IPM polluting irrelevant bits
target/s390x: fix CSST decoding and runtime alignment check
target/s390x: add BAL and BALR instructions
tests/tcg: add a simple s390x test
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Don't generate handlers for IRQ levels that are not defined for the CPU
or for window overflow/underflow exceptions for configs w/o windowed
registers.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Not all CPU configurations may have enough space for handler code
between exception/interrupt vectors. Leave jumps to the handlers at the
vectors, but move all handlers past the vectors area.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Failed memory transactions should raise exceptions 14 (for fetch) or 15
(for load/store) with XEA2.
Memory accesses that result in TLB miss followed by an attempt to load
PTE from physical memory which fails should raise InstTLBMiss or
LoadStoreTLBMiss with XEA2.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
When a container fails, it leaves a dangling tarball whose name is
based on a timestamp. Further runs of make won't clean up those files,
and neither does the 'docker-clean' target.
Use the .DELETE_ON_ERROR built-in target to let make remove those
temporary tarballs in case of failure.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180818030337.22271-1-f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
As recommended in https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#sort-multi-line-arguments
"This helps to avoid duplication of packages and make the
list much easier to update. This also makes PRs a lot easier
to read and review."
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180818015344.797-4-f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
As recommended in https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#sort-multi-line-arguments
"This helps to avoid duplication of packages and make the
list much easier to update. This also makes PRs a lot easier
to read and review."
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180818015344.797-3-f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
As recommended in https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#sort-multi-line-arguments
"This helps to avoid duplication of packages and make the
list much easier to update. This also makes PRs a lot easier
to read and review."
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180818015344.797-2-f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
This patch fixes a race condition and test failure where the main process
waits for a signal from a thread, but the thread has already sent that
signal via a condition variable. Since these signals are not sticky, we
need to introduce a separate variable to make the signal sticky.
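For illustration, the usual sticky-flag pattern with GLib primitives
(names made up):
    static GMutex lock;
    static GCond cond;
    static gboolean signalled;      /* the "sticky" state */

    /* signalling thread */
    g_mutex_lock(&lock);
    signalled = TRUE;
    g_cond_signal(&cond);
    g_mutex_unlock(&lock);

    /* waiting thread */
    g_mutex_lock(&lock);
    while (!signalled) {
        g_cond_wait(&cond, &lock);
    }
    g_mutex_unlock(&lock);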
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Utilize the job_exit shim by not calling job_defer_to_main_loop, and
where applicable, converting the deferred callback into the job_exit
callback.
This converts backup, stream, create, and the unit tests all at once.
Most of these jobs do not see any changes to the order in which they
clean up their resources, except the test-blockjob-txn test, which
now puts down its bs before job_completed is called.
This is safe for the same reason the reordering in the mirror job is
safe, because job_completed no longer runs under two locks, making
the unref safe even if it causes a flush.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180830015734.19765-7-jsnow@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Jobs presently use both an Error object in the case of the create job,
and char strings in the case of generic errors elsewhere.
Unify the two paths as just j->err, and remove the extra argument from
job_completed. The integer error code for job_completed is kept for now,
to be removed shortly in a separate patch.
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20180830015734.19765-3-jsnow@redhat.com
[mreitz: Dropped a superfluous g_strdup()]
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Presently we codify the entry point for a job as the "start" callback,
but a more apt name would be "run" to clarify the idea that when this
function returns we consider the job to have "finished," except for
any cleanup which occurs in separate callbacks later.
As part of this clarification, change the signature to include an error
object and a return code. The error ptr is not yet used, and the return
code, while captured, will be overwritten by actions in the job_completed
function.
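The new entry point in JobDriver then looks roughly like this (sketch):
    int coroutine_fn (*run)(Job *job, Error **errp);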
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180830015734.19765-2-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Verify the usage of this schema feature and the API behaviour. This
should be the only case where qmp_dispatch() returns NULL.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
test_qom_set_without_value() is about a bug in infrastructure used by
the QMP core, fixed in commit c489780203. We covered the bug in
infrastructure unit tests (commit bce3035a44). I wrote that test
earlier; to cover the QMP level as well, the test could go into qmp-test.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
test_object_add_without_props() tests a bug in qmp_object_add() we
fixed in commit e64c75a975. Sadly, we don't have systematic
object-add tests. This lone test can go into qmp-cmd-test for want of
a better home.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
This helper will simplify a bunch of code checking for QMP errors and
can be shared by various tests. Note that test-qga does check for
error description as well, so don't replace the code there for now.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
During development, I got a 'make check' failure that claimed:
qemu-img returned status code 32512
**
ERROR:tests/libqos/libqos.c:202:mkimg: assertion failed: (!rc)
But 32512 is too big for a normal exit status value, which means we
failed to use WEXITSTATUS() to shift the bits to the desired value
for printing. However, instead of worrying about how to portably
parse g_spawn()'s rc in the proper platform-dependent manner, it's
better to just rely on the fact that we now require glib 2.40 (since
commit e7b3af815) and can therefore use glib's portable checker
instead, where the message under my same condition improves to:
Child process exited with code 127
**
ERROR:tests/libqos/libqos.c:192:mkimg: assertion failed: (ret && !err)
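A sketch of the glib-based check; the exact call site in libqos differs,
and the qemu-img command line here is only illustrative:
    gchar *argv[] = { "qemu-img", "create", "-f", "qcow2",
                      "disk.img", "1G", NULL };
    gint status;
    GError *err = NULL;
    gboolean ret;

    ret = g_spawn_sync(NULL, argv, NULL, G_SPAWN_SEARCH_PATH, NULL, NULL,
                       NULL, NULL, &status, &err);
    g_assert(ret && !err);
    ret = g_spawn_check_exit_status(status, &err);
    g_assert(ret && !err);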
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The generated qapi_event_send_FOO() take an Error ** argument. They
can't actually fail, because all they do with the argument is passing it
to functions that can't fail: the QObject output visitor, and the
@qmp_emit callback, which is either monitor_qapi_event_queue() or
event_test_emit().
Drop the argument, and pass &error_abort to the QObject output visitor
and @qmp_emit instead.
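For a hypothetical generated event FOO with one argument, the prototype
changes like this (sketch):
    /* before */
    void qapi_event_send_foo(bool enabled, Error **errp);
    /* after */
    void qapi_event_send_foo(bool enabled);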
Suggested-by: Eric Blake <eblake@redhat.com>
Suggested-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180815133747.25032-4-peterx@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Commit message rewritten, update to qapi-code-gen.txt corrected]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
PACK fails on the test from the Principles of Operation: F1F2F3F4
becomes 0000234C instead of 0001234C due to an off-by-one error.
Furthermore, it overwrites one extra byte to the left of F1.
If len_dest is 0, then we only want to flip the 1st byte and never loop
over the rest. Therefore, the loop condition should be > and not >=.
If len_src is 1, then we should flip the 1st byte and pack the 2nd.
Since len_src is already decremented before the loop, the first
condition should be >=, and not >.
Likewise for len_src == 2 and the second condition.
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180821025104.19604-7-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Improves "b213c9f5: target/s390x: Implement TRTR" by introducing the
intermediate functions, which are compatible with dx_helper type.
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180821025104.19604-6-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Suppose psw.mask=0x0000000080000000, cc=2, r1=0 and we do "ipm 1".
This command must touch only bits 32-39, so the expected output
is r1=0x20000000. However, currently qemu yields r1=0x20008000,
because irrelevant parts of PSW leak into r1 during program mask
transfer.
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180821025104.19604-5-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
CSST is defined as:
C(0xc802, CSST, SSF, CASS, la1, a2, 0, 0, csst, 0)
It means that the first parameter is handled by in1_la1().
in1_la1() fills addr1 field, and not in1.
Furthermore, when extract32() is used for the alignment check, the
third parameter should specify the number of trailing bits that must
be 0. For FC these numbers are:
FC=0 (word, 4 bytes): 2
FC=1 (double word, 8 bytes): 3
FC=2 (quad word, 16 bytes): 4
For SC these numbers correspond to the size:
SC=0: 0
SC=1: 1
SC=2: 2
SC=3: 3
SC=4: 4
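A sketch of what such a check looks like; the variable names are
illustrative. extract32(addr, 0, n) returns the low n bits, so a
non-zero result means the address is not aligned to 2^n bytes:
    if (extract32(a1, 0, fc + 2) || extract32(a2, 0, sc)) {
        /* raise a specification exception */
    }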
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180821025104.19604-4-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
There is no known available OS for ppc around anymore that uses page
sizes below 4k, so it does not make much sense to keep wasting
our time on building and testing the ppcemb-softmmu target. It has
been deprecated for two releases, and nobody complained, so let's
remove it now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It is already protected by CONFIG_ISA_TESTDEV in all architectures.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
We protect it with CONFIG_VMXNET3_PCI now, so no need to also put it
on i386.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This is only for x86* architectures.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Once there, untangle endianness-test and boot-serial-test.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
---
boot-serial-test doesn't depend on isa-testdev. Thanks Thomas.
The previous commit makes JSON strings containing '%' awkward to
express in templates: you'd have to mask the '%' with a Unicode
escape \u0025. No template currently contains such JSON strings.
Support the printf conversion specification %% in JSON strings as a
convenience anyway, because it's trivially easy to do.
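A minimal usage sketch (the template is hypothetical):
    /* the template below yields the JSON string "50% done" */
    QDict *dict = qdict_from_jsonf_nofail("{ 'status': '50%% done' }");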
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-58-armbru@redhat.com>
The JSON parser optionally supports interpolation. This is used to
build QObjects by parsing string templates. The templates are C
literals, so parse errors (such as invalid interpolation
specifications) are actually programming errors. Consequently, the
functions providing parsing with interpolation
(qobject_from_jsonf_nofail(), qobject_from_vjsonf_nofail(),
qdict_from_jsonf_nofail(), qdict_from_vjsonf_nofail()) pass
&error_abort to the parser.
However, there's another, more dangerous kind of programming error:
since we use va_arg() to get the value to interpolate, behavior is
undefined when the variable argument isn't consistent with the
interpolation specification.
The same problem exists with printf()-like functions, and the solution
is to have the compiler check consistency. This is what
GCC_FMT_ATTR() is about.
To enable this type checking for interpolation as well, we carefully
chose our interpolation specifications to match printf conversion
specifications, and decorate functions parsing templates with
GCC_FMT_ATTR().
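For example, the decoration on one of the interpolating parsers looks
roughly like this (sketch):
    QObject *qobject_from_jsonf_nofail(const char *string, ...)
        GCC_FMT_ATTR(1, 2);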
Note that this only protects against undefined behavior due to type
errors. It can't protect against use of invalid interpolation
specifications that happen to be valid printf conversion
specifications.
However, there's still a gaping hole in the type checking: GCC
recognizes '%' as the start of a printf conversion specification anywhere in
the template, but the parser recognizes it only outside JSON strings.
For instance, if someone were to pass a "{ '%s': %d }" template, GCC
would require a char * and an int argument, but the parser would
va_arg() only an int argument, resulting in undefined behavior.
Avoid undefined behavior by catching the programming error at run
time: have the parser recognize and reject '%' in JSON strings.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-57-armbru@redhat.com>
test_after_failed_device_add() does this:
response = qmp("{'execute': 'device_add',"
" 'arguments': {"
" 'driver': 'virtio-blk-%s',"
" 'drive': 'drive0'"
"}}", qvirtio_get_dev_type());
Wrong. An interpolation specification must be a JSON token; it
doesn't work within JSON string tokens. The code above doesn't use
the value of qvirtio_get_dev_type(), and sends arguments
{"driver": "virtio-blk-%s", "drive": "drive0"}
The command fails because there is no driver named "virtio-blk-%".
Harmless, since the test wants the command to fail. Screwed up in
commit 2f84a92ec6.
Fix the obvious way. The command now fails because the drive is
empty, like it did before commit 2f84a92ec6.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-55-armbru@redhat.com>
The JSON parser has three public headers, json-lexer.h, json-parser.h,
json-streamer.h. They all contain stuff that is of no interest
outside qobject/json-*.c.
Collect the public interface in include/qapi/qmp/json-parser.h, and
everything else in qobject/json-parser-int.h.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-54-armbru@redhat.com>
The last case where qobject_from_json() & friends return null without
setting an error is empty or blank input. Callers:
* block.c's parse_json_protocol() reports "Could not parse the JSON
options". It's marked as a work-around, because it also covered
actual bugs, but they got fixed in the previous few commits.
* qobject_input_visitor_new_str() reports "JSON parse error". Also
marked as work-around. The recent fixes have made this unreachable,
because it currently gets called only for input starting with '{'.
* check-qjson.c's empty_input() and blank_input() demonstrate the
behavior.
* The other callers are not affected since they only pass input with
exactly one JSON value or, in the case of negative tests, one error.
Fail with "Expecting a JSON value" instead of returning null, and
simplify callers.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-48-armbru@redhat.com>
json_message_process_token() accumulates tokens until it got the
sequence of tokens that comprise a single JSON value (it counts curly
braces and square brackets to decide). It feeds those token sequences
to json_parser_parse(). If a non-empty sequence of tokens remains at
the end of the parse, it's silently ignored. check-qjson.c cases
unterminated_array(), unterminated_array_comma(), unterminated_dict(),
unterminated_dict_comma() demonstrate this bug.
Fix as follows. Introduce a JSON_END_OF_INPUT token. When the
streamer receives it, it feeds the accumulated tokens to
json_parser_parse().
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-46-armbru@redhat.com>
qobject_from_json() & friends use the consume_json() callback to
receive either a value or an error from the parser.
When they are fed a string that contains more than either one JSON
value or one JSON syntax error, consume_json() gets called multiple
times.
When the last call receives a value, qobject_from_json() returns that
value. Any other values are leaked.
When any call receives an error, qobject_from_json() sets the first
error received. Any other errors are thrown away.
When values follow errors, qobject_from_json() returns both a value
and sets an error. That's bad. Impact:
* block.c's parse_json_protocol() ignores and leaks the value. It's
used to parse pseudo-filenames starting with "json:". The
pseudo-filenames can come from the user or from image meta-data such
as a QCOW2 image's backing file name.
* vl.c's parse_display_qapi() ignores and leaks the error. It's used
to parse the argument of command line option -display.
* vl.c's main() case QEMU_OPTION_blockdev ignores the error and leaves
it in @err. main() will then pass a pointer to a non-null Error *
to net_init_clients(), which is forbidden. It can lead to assertion
failure or other misbehavior.
* check-qjson.c's multiple_values() demonstrates the badness.
* The other callers are not affected since they only pass strings with
exactly one JSON value or, in the case of negative tests, one
error.
The impact on the _nofail() functions is relatively harmless. They
abort when any call receives an error. Else they return the last
value, and leak the others, if any.
Fix consume_json() as follows. On the first call, save value and
error as before. On subsequent calls, if any, don't save them. If
the first call saved a value, the next call, if any, replaces the
value by an "Expecting at most one JSON value" error. Take care not
to leak values or errors that aren't saved.
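A sketch of the fixed callback; the struct and field names are
illustrative, not necessarily the exact ones in qjson.c:
    static void consume_json(void *opaque, QObject *json, Error *err)
    {
        JSONParsingState *s = opaque;

        if (!s->result && !s->err) {
            /* first call: save the value or the error */
            s->result = json;
            s->err = err;
        } else if (s->result) {
            /* a second value arrived: replace the value by an error */
            qobject_unref(s->result);
            s->result = NULL;
            error_setg(&s->err, "Expecting at most one JSON value");
            qobject_unref(json);
            error_free(err);
        } else {
            /* already failed: throw away the extra value or error */
            qobject_unref(json);
            error_free(err);
        }
    }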
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-44-armbru@redhat.com>
Support for %I64d got added in commit 2c0d4b36e7 "json: fix PRId64 on
Win32". We had to hard-code I64d because we used the lexer's finite
state machine to check interpolations. No more, so clean this up.
Additional conversion specifications would be easy enough to implement
when needed.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-42-armbru@redhat.com>
Both lexer and parser reject invalid interpolation specifications.
The parser's check is useless.
The lexer ends the token right after the first bad character. This
tends to lead to suboptimal error reporting. For instance, input
[ %04d ]
produces the tokens
JSON_LSQUARE [
JSON_ERROR %0
JSON_INTEGER 4
JSON_KEYWORD d
JSON_RSQUARE ]
The parser then yields an error, an object and two more errors:
error: Invalid JSON syntax
object: 4
error: JSON parse error, invalid keyword
error: JSON parse error, expecting value
Dumb down the lexer to accept [A-Za-z0-9]*. The parser's check is now
used. Emit a proper error there.
The lexer now produces
JSON_LSQUARE [
JSON_INTERP %04d
JSON_RSQUARE ]
and the parser reports just
JSON parse error, invalid interpolation '%04d'
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-41-armbru@redhat.com>
The callback to consume JSON values takes QObject *json, Error *err.
If both are null, the callback is supposed to make up an error by
itself. This sucks.
qjson.c's consume_json() neglects to do so, which makes
qobject_from_json() return null instead of failing. I consider that a bug.
The culprit is json_message_process_token(): it passes two null
pointers when it runs into a lexical error or a limit violation. Fix
it to pass a proper Error object then. Update the callbacks:
* monitor.c's handle_qmp_command(): the code to make up an error is
now dead, drop it.
* qga/main.c's process_event(): lumps the "both null" case together
with the "not a JSON object" case. The former is now gone. The
error message "Invalid JSON syntax" is misleading for the latter.
Improve it to "Input must be a JSON object".
* qobject/qjson.c's consume_json(): no update; check-qjson
demonstrates qobject_from_json() now sets an error on lexical
errors, but still doesn't on some other errors.
* tests/libqtest.c's qmp_response(): the Error object is now reliable,
so use it to improve the error message.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-40-armbru@redhat.com>
The JSON parser optionally supports interpolation. The lexer
recognizes interpolation tokens unconditionally. The parser rejects
them when interpolation is disabled, in parse_interpolation().
However, it neglects to set an error then, which can make
json_parser_parse() fail without setting an error.
Move the check for unwanted interpolation from the parser's
parse_interpolation() into the lexer's finite state machine. When
interpolation is disabled, '%' is now handled like any other
unexpected character.
The next commit will improve how such lexical errors are handled.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-39-armbru@redhat.com>
The classical way to structure parser and lexer is to have the client
call the parser to get an abstract syntax tree, the parser call the
lexer to get the next token, and the lexer call some function to get
input characters.
Another way to structure them would be to have the client feed
characters to the lexer, the lexer feed tokens to the parser, and the
parser feed abstract syntax trees to some callback provided by the
client. This way is more easily integrated into an event loop that
dispatches input characters as they arrive.
Our JSON parser is kind of between the two. The lexer feeds tokens to
a "streamer" instead of a real parser. The streamer accumulates
tokens until it got the sequence of tokens that comprise a single JSON
value (it counts curly braces and square brackets to decide). It
feeds those token sequences to a callback provided by the client. The
callback passes each token sequence to the parser, and gets back an
abstract syntax tree.
I figure it was done that way to make a straightforward recursive
descent parser possible. "Get next token" becomes "pop the first
token off the token sequence". Drawback: we need to store a complete
token sequence. Each token eats 13 + input characters + malloc
overhead bytes.
Observations:
1. This is not the only way to use recursive descent. If we replaced
"get next token" by a coroutine yield, we could do without a
streamer.
2. The lexer reports errors by passing a JSON_ERROR token to the
streamer. This communicates the offending input characters and
their location, but no more.
3. The streamer reports errors by passing a null token sequence to the
callback. The (already poor) lexical error information is thrown
away.
4. Having the callback receive a token sequence duplicates the code to
convert token sequence to abstract syntax tree in every callback.
5. Known bug: the streamer silently drops incomplete token sequences.
This commit rectifies 4. by lifting the call of the parser from the
callbacks into the streamer. Later commits will address 3. and 5.
The lifting removes a bug from qjson.c's parse_json(): it passed a
pointer to a non-null Error * in certain cases, as demonstrated by
check-qjson.c.
json_parser_parse() is now unused. It's a stupid wrapper around
json_parser_parse_err(). Drop it, and rename json_parser_parse_err()
to json_parser_parse().
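Roughly, the shape of the change for a consumer such as the monitor
(signatures paraphrased, not the exact ones in the tree):

/* Before: every callback received raw tokens and ran the parser itself */
static void handle_qmp_command(JSONMessageParser *parser, GQueue *tokens)
{
    QObject *req = json_parser_parse(tokens, NULL);
    /* ... */
}

/* After: the streamer runs the parser and hands over the result */
static void handle_qmp_command(void *opaque, QObject *req, Error *err)
{
    /* ... */
}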
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-35-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-31-armbru@redhat.com>
The JSON parser treats each half of a surrogate pair as unpaired
surrogate. Fix it to recognize surrogate pairs.
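For reference, the UTF-16 combination rule the fix has to implement looks
like this (illustrative helper, not the parser's actual code):

/* Combine a UTF-16 surrogate pair into a code point. */
static int combine_surrogates(int lead, int trail)
{
    if (lead < 0xD800 || lead > 0xDBFF ||
        trail < 0xDC00 || trail > 0xDFFF) {
        return -1;                      /* not a valid pair */
    }
    return 0x10000 + ((lead - 0xD800) << 10) + (trail - 0xDC00);
}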
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-30-armbru@redhat.com>
The JSON parser translates invalid \uXXXX to garbage instead of
rejecting it, and swallows \u0000.
Fix by using mod_utf8_encode() instead of flawed wchar_to_utf8().
Valid surrogate pairs are now differently broken: they're rejected
instead of translated to garbage. The next commit will fix them.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-29-armbru@redhat.com>
Since the JSON grammar doesn't accept U+0000 anywhere, this merely
exchanges one kind of parse error for another. It's purely for
consistency with qobject_to_json(), which accepts \xC0\x80 (see commit
e2ec3f9768).
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-26-armbru@redhat.com>
We reject bytes that can't occur in valid UTF-8 (\xC0..\xC1,
\xF5..\xFF) in the lexer. That's insufficient; there's plenty of
invalid UTF-8 not containing these bytes, as demonstrated by
check-qjson:
* Malformed sequences
- Unexpected continuation bytes
- Missing continuation bytes after start bytes other than
\xC0..\xC1, \xF5..\xFD.
* Overlong sequences with start bytes other than \xC0..\xC1,
\xF5..\xFD.
* Invalid code points
Fixing this in the lexer would be bothersome. Fixing it in the parser
is straightforward, so do that.
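A sketch of a parser-side check, assuming the mod_utf8_codepoint() helper
from util/unicode.c (which returns a negative value for malformed or
overlong sequences; the actual fix may be structured differently):

static bool utf8_is_valid(const char *s)
{
    size_t len = strlen(s);
    char *end;

    while (len) {
        if (mod_utf8_codepoint(s, len, &end) < 0) {
            return false;               /* malformed, overlong, or invalid */
        }
        len -= end - s;
        s = end;
    }
    return true;
}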
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-23-armbru@redhat.com>
The JSON parser rejects some invalid sequences, but accepts others
without correcting the problem.
We should either reject all invalid sequences, or minimize overlong
sequences and replace all other invalid sequences by a suitable
replacement character. A common choice for replacement is U+FFFD.
I'm going to implement the former. Update the comments in
utf8_string() to expect this.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-22-armbru@redhat.com>
Fix the lexer to reject unescaped control characters in JSON strings,
in accordance with RFC 8259 "The JavaScript Object Notation (JSON)
Data Interchange Format".
Bonus: we now recover more nicely from unclosed strings. E.g.
{"one: 1}\n{"two": 2}
now recovers cleanly after the newline, where before the lexer
remained confused until the next unpaired double quote or lexical
error.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-19-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-17-armbru@redhat.com>
RFC 8259 "The JavaScript Object Notation (JSON) Data Interchange
Format" requires control characters in strings to be escaped.
Demonstrate the JSON parser accepts U+0001 .. U+001F unescaped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-16-armbru@redhat.com>
Some of utf8_string()'s test_cases[] contain multiple invalid
sequences. Testing that qobject_from_json() fails only tests that we
reject at least one invalid sequence. That's incomplete.
Additionally test each non-space sequence in isolation.
This demonstrates that the JSON parser accepts invalid sequences
starting with \xC2..\xF4. Add a FIXME comment.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-15-armbru@redhat.com>
The previous commit made utf8_string()'s test_cases[].utf8_in
superfluous: we can use .json_in instead. Except for the case testing
U+0000. \x00 doesn't work in C strings, so it tests \\u0000 instead.
But testing \\uXXXX is escaped_string()'s job. It's covered there.
Test U+0001 here, and drop .utf8_in.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-14-armbru@redhat.com>
utf8_string() tests only double quoted strings. Cover single quoted
strings, too: store the strings to test without quotes, then wrap them
in either kind of quote.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-13-armbru@redhat.com>
simple_string() and single_quote_string() have become redundant with
escaped_string(), except for embedded single and double quotes.
Replace them by a test that covers just that.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-12-armbru@redhat.com>
Cover escaped single quote, surrogates, invalid escapes, and
noncharacters. This demonstrates that valid surrogate pairs are
misinterpreted, and invalid surrogates and noncharacters aren't
rejected.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-11-armbru@redhat.com>
Merge a few closely related test strings, and drop a few redundant
ones.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-10-armbru@redhat.com>
escaped_string() first tests double quoted strings, then repeats a few
tests with single quotes. Repeat all of them: store the strings to
test without quotes, and wrap them in either kind of quote for
testing.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-9-armbru@redhat.com>
To permit recovering from arbitrary JSON parse errors, the JSON parser
resets itself on lexical errors. We recommend sending a 0xff byte for
that purpose, and test-qga covers this usage since commit 5229564b83.
That commit had to add an ugly hack to qmp_fd_vsend() to make it capable
of sending this byte (it's designed to send only valid JSON).
The previous commit added a way to send arbitrary text. Put that to
use for this purpose, and drop the hack from qmp_fd_vsend().
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-8-armbru@redhat.com>
qmp-test neglects to cover QMP input that isn't valid JSON. libqtest
doesn't let us send such input. Add qtest_qmp_send_raw() for this
purpose, and put it to use in qmp-test.
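A hedged usage sketch, assuming an already initialized QTestState *qts and
the printf-like prototype suggested by the helper's name:

QDict *rsp;

/* Feed input that is not valid JSON; QMP should reply with an error
 * response rather than hang or crash (illustrative, not the actual
 * test case added to qmp-test): */
qtest_qmp_send_raw(qts, "{ 'execute': }\n");
rsp = qtest_qmp_receive(qts);
g_assert(qdict_haskey(rsp, "error"));
qobject_unref(rsp);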
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-7-armbru@redhat.com>
[Commit message typo fixed]
qmp-test is for QMP protocol tests. Commit e4a426e75e added generic,
basic tests of query commands to it. Move them to their own test
program qmp-cmd-test, to keep qmp-test focused on the protocol.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-6-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-5-armbru@redhat.com>
qobject_from_json() can return null without setting an error on
lexical errors. I call that a bug. Add test coverage to demonstrate
it.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-4-armbru@redhat.com>
qobject_from_json() & friends misbehave when the JSON text has more
than one JSON value. Add test coverage to demonstrate the bugs.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-3-armbru@redhat.com>
Merge remote-tracking branch 'remotes/juanquintela/tags/check/20180822' into staging
check/next for 20180822
# gpg: Signature made Wed 22 Aug 2018 09:03:40 BST
# gpg: using RSA key F487EF185872D723
# gpg: Good signature from "Juan Quintela <quintela@redhat.com>"
# gpg: aka "Juan Quintela <quintela@trasno.org>"
# Primary key fingerprint: 1899 FF8E DEBF 58CC EE03 4B82 F487 EF18 5872 D723
* remotes/juanquintela/tags/check/20180822:
check: Only test tpm devices when they are compiled in
check: Only test usb-ehci when it is compiled in
check: Only test usb-uhci devices when they are compiled in
check: Only test usb-ohci when it is compiled in
check: Only test nvme when it is compiled in
check: Only test pvpanic when it is compiled in
check: Only test wdt_ib700 when it is compiled in
check: Only test sdhci when it is compiled in
check: Only test i82801b11 when it is compiled in
check: Only test ioh3420 when it is compiled in
check: Only test ipack when it is compiled in
check: Only test hda when it is compiled in
check: Only test ac97 when it is compiled in
check: Only test es1370 when it is compiled in
check: Only test rtl8139 when it is compiled in
check: Only test pcnet when it is compiled in
check: Only test eepro100 when it is compiled in
check: Only test ne2000 when it is compiled in
check: Only test vmxnet3 when it is compiled in
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The VM tests currently have a timeout of 2 minutes for trying
to connect to ssh. Since the guest VM has to boot from cold
to the point of accepting inbound ssh during this time, if the
host machine is heavily loaded it can spuriously time out.
Increase the timeout from 2 to 5 minutes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Fam Zheng <famz@redhat.com>
Message-id: 20180823112153.15279-1-peter.maydell@linaro.org
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* x86 TCG fixes for 64-bit call gates (Andrew)
* qemu-guest-agent freeze-hook tweak (Christian)
* pm_smbus improvements (Corey)
* Move validation to pre_plug for pc-dimm (David)
* Fix memory leaks (Eduardo, Marc-André)
* synchronization profiler (Emilio)
* Convert the CPU list to RCU (Emilio)
* LSI support for PPR Extended Message (George)
* vhost-scsi support for protection information (Greg)
* Mark mptsas as a storage device in the help (Guenter)
* checkpatch tweak cherry-picked from Linux (me)
* Typos, cleanups and dead-code removal (Julia, Marc-André)
* qemu-pr-helper support for old libmultipath (Murilo)
* Annotate fallthroughs (me)
* MemoryRegionOps cleanup (me, Peter)
* Make s390 qtests independent from libqos, which doesn't actually support it (me)
* Make cpu_get_ticks independent from BQL (me)
* Introspection fixes (Thomas)
* Support QEMU_MODULE_DIR environment variable (ryang)
# gpg: Signature made Thu 23 Aug 2018 17:46:30 BST
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (69 commits)
KVM: cleanup unnecessary #ifdef KVM_CAP_...
target/i386: update MPX flags when CPL changes
i2c: pm_smbus: Add the ability to force block transfer enable
i2c: pm_smbus: Don't delay host status register busy bit when interrupts are enabled
i2c: pm_smbus: Add interrupt handling
i2c: pm_smbus: Add block transfer capability
i2c: pm_smbus: Make the I2C block read command read-only
i2c: pm_smbus: Fix the semantics of block I2C transfers
i2c: pm_smbus: Clean up some style issues
pc-dimm: assign and verify the "addr" property during pre_plug
pc: drop memory region alignment check for 0
util/oslib-win32: indicate alignment for qemu_anon_ram_alloc()
pc-dimm: assign and verify the "slot" property during pre_plug
ipmi: Use proper struct reference for BT vmstate
vhost-scsi: expose 't10_pi' property for VIRTIO_SCSI_F_T10_PI
vhost-scsi: unify vhost-scsi get_features implementations
vhost-user-scsi: move host_features into VHostSCSICommon
cpus: allow cpu_get_ticks out of BQL
cpus: protect TimerState writes with a spinlock
seqlock: add QemuLockable support
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
So that we can test other implementations.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20180819091335.22863-8-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of declaring it volatile.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20180819091335.22863-6-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The check should be unnecessary since commit
e7b3af8159 "glib: bump min required glib
library version to 2.40".
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20180730153639.26466-1-marcandre.lureau@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When used together with -m, this allows us to benchmark the
profiler's performance impact on qemu_mutex_lock.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Certain device introspection crashes used to only happen if you were
using a certain machine, e.g. if the machine was using serial_hd() or
nd_table[], and a device was trying to use these in its instance_init
function, too.
To be able to catch these problems, let's extend the device-introspect
test to check the devices on all machine types, with and without the
"-nodefaults" parameter (since this makes a difference sometimes, too).
Since this is a rather slow operation, and most of the problems are
already handled by testing with the "none" machine only, the test with
all machines is only run in the "make check SPEED=slow" mode.
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1534419358-10932-8-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Introspection should not change the qom-tree / qtree, so we should check
this in the device-introspect-test, too. This patch helped to find lots
of introspection bugs during the QEMU v3.0 soft/hard-freeze period in the
last two months.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1534419358-10932-7-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The tests that check something for all machine types currently spend
a lot of time checking old machine types (like "pc-i440fx-2.0" for
example). The chances that we find something new there in addition
to checking the latest version of a machine type are pretty low, so
we should not waste the time of the developers by testing this again
and again in the "quick" testing mode.
Thus let's add some code to determine whether we are testing a current
machine type or an old one, and only test the old types if we are
running in "SPEED=slow" mode.
This decreases the testing time quite a bit now, e.g. the qom-test
now finishes within 4 seconds for qemu-system-x86_64 instead of 30
seconds when testing all machines.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1534419358-10932-6-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When running "make check" on a non-POWER host, the output is quite
distorted like this:
[...]
GTESTER check-qtest-nios2
GTESTER check-qtest-or1k
GTESTER check-qtest-ppc64
Skipping test: kvm_hv not available Skipping test: kvm_hv not available Skipping test: kvm_hv not available Skipping test: kvm_hv not available GTESTER check-qtest-ppcemb
GTESTER check-qtest-ppc
GTESTER check-qtest-riscv32
GTESTER check-qtest-riscv64
[...]
Move the check to the beginning of the main function instead, so that
we do not have to test the condition again and again for each test,
and better use g_test_message() instead of g_print() here, like it is
also done in ufd_version_check() already.
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1534419358-10932-2-git-send-email-thuth@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Because qtest does not support s390 channel I/O, s390 only performs smoke tests on
those few devices that do not have any functional tests. Therefore, every time we
add functional tests for a virtio device, the choice is between removing
those tests from the s390 suite (so that s390 actually _loses_ coverage)
or sprinkling the test with architecture checks.
This patch simply creates a ccw-specific test that only performs smoke tests on
all virtio-ccw devices. If channel I/O support is ever added to qtest and libqos,
then this file can go away. In the meanwhile, it simplifies maintenance and
makes sure that all virtio devices are tested.
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The ehci test also tests uhci. Welcome to the wonderful world of USB.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
It was not possible to compile out pvpanic. Use the same trick
as applesmc.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
test-file-redirector uses rtl8139 in everything except s390.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
- prep machine is a fictional machine, so it has no specifications. Which
devices can be changed/added/removed without impact? Are interrupts
correctly mapped?
- prep firmware (OHW) has support only for IDE drives (no SCSI).
Booting from IDE broke approximately 3 years ago, and nobody complained.
- OHW is limited on IDE boot to a specific set of OS loaders.
These operating systems are of the 2004 time frame.
- OHW can use -kernel, but the Linux kernel freezes for a long time after
PS/2 mouse detection, and then the screen becomes garbage. This was already
broken in QEMU v2.7, 2 years ago, and nobody complained.
On the other side:
- 40p is a real machine, so emulation can be checked against
hardware specifications
- OpenBIOS has support for SCSI block devices, including 40p LSI adapter
- OpenBIOS can start almost all Linux kernels (including recent ones)
and recent operating systems (like NetBSD 7.1.2)
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
[dwg: Drop prep from boot-serial test to avoid deprecation warnings]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When we do a build inside one of the BSD VMs, first
delete any stale old build directories from the VM's
/var/tmp. This prevents the VM from running out of
disk space after it has been used for a dozen or
so builds.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20180820124811.7982-1-peter.maydell@linaro.org
On a SPARC host that I'm using as a build test machine, the
boot-serial-test for the SPARC guest machines takes about 65
seconds to execute. This means that it hits the current
60 second timer on these tests. Push the timeout up so
that it doesn't trigger spuriously on slow hosts like this one.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-id: 20180817161404.9420-1-peter.maydell@linaro.org
The 'test.hex' file is a memory test pattern stored in Hexadecimal Object
Format. It loads at 0x10000 in RAM and contains values from 0 through
255.
The test case verifies that the expected memory test pattern was loaded.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Suggested-by: Steffen Gortz <qemu.ml@steffen-goertz.de>
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Su Hang <suhang16@mails.ucas.ac.cn>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMM: changed qtest_startf() to qtest_initf() to work with
current master after the refactoring in commit 88b988c895]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/armbru/tags/pull-tests-2018-08-16' into staging
Testing patches for 2018-08-16
# gpg: Signature made Thu 16 Aug 2018 09:34:43 BST
# gpg: using RSA key 3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>"
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-tests-2018-08-16: (25 commits)
libqtest: Improve error reporting for bad read from QEMU
tests/libqtest: Improve kill_qemu()
libqtest: Rename qtest_FOOv() to qtest_vFOO() for consistency
libqtest: Replace qtest_startf() by qtest_initf()
libqtest: Enable compile-time format string checking
migration-test: Clean up string interpolation into QMP, part 3
migration-test: Clean up string interpolation into QMP, part 2
migration-test: Clean up string interpolation into QMP, part 1
migration-test: Make wait_command() cope with '%'
tests: New helper qtest_qmp_receive_success()
migration-test: Make wait_command() return the "return" member
tests: Clean up string interpolation around qtest_qmp_device_add()
cpu-plug-test: Don't pass integers as strings to device_add
tests: Clean up string interpolation into QMP input (simple cases)
tests: Pass literal format strings directly to qmp_FOO()
qobject: qobject_from_jsonv() is dangerous, hide it away
test-qobject-input-visitor: Avoid format string ambiguity
libqtest: Simplify qmp_fd_vsend() a bit
qobject: New qobject_from_vjsonf_nofail(), qdict_from_vjsonf_nofail()
qobject: Replace qobject_from_jsonf() by qobject_from_jsonf_nofail()
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When read() from the qtest socket or the QMP socket fails or EOFs, we
report "Broken pipe" and exit(1). This commonly happens when QEMU
crashes. It also happens when QEMU refuses to run because the test
passed it bad arguments. Sadly, we neglect to report either.
Improve this by calling abort() instead of exit(1), so kill_qemu()
runs, and reports how QEMU died. This improves error reporting to
something like
/x86_64/device/introspect/list: Broken pipe
tests/libqtest.c:129: kill_qemu() detected QEMU death from signal 6 (Aborted) (dumped core)
Three exit() remain in libqtest.c:
* In qmp_response(), when we can't parse a QMP reply read from the QMP
socket. Change to abort() for consistency.
* In qtest_qemu_binary(), when QTEST_QEMU_BINARY isn't in the
environment. This can only happen before we start QEMU. Leave
alone.
* In qtest_init_without_qmp_handshake(), when the fork()ed child fails
to execlp(). Leave alone.
exit() elsewhere are unlikely due to QEMU dying on us. If that should
turn out to be wrong, we can move kill_qemu() from @abrt_hooks to
atexit() or something.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180815141945.10457-2-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[Commit message tweaked slightly]
In kill_qemu() we have an assert that checks that the QEMU process
didn't dump core:
assert(!WCOREDUMP(wstatus));
Unfortunately the WCOREDUMP macro here means the resulting message
is not very easy to comprehend on at least some systems:
ahci-test: tests/libqtest.c:113: kill_qemu: Assertion `!(((__extension__ (((union { __typeof(wstatus) __in; int __i; }) { .__in = (wstatus) }).__i))) & 0x80)' failed.
and it doesn't identify what signal the process took. What's more,
WCOREDUMP is not reliable - in some cases, setrlimit() coupled with
kernel dump settings can result in the flag not being set. It's
better to log ALL death by signal, instead of caring whether a core
dump was attempted (although once we know a signal happened, also
mentioning if a core dump is present can be helpful).
Furthermore, we are NOT detecting EINTR (while EINTR shouldn't be
happening if we didn't install signal handlers, it's still better
to always be robust).
Finally, even non-signal death with a non-zero status is suspicious,
since qemu's SIGINT handler is supposed to result in exit(0).
Instead of using a raw assert, print the information in an
easier to understand way:
/i386/ahci/sanity: tests/libqtest.c:129: kill_qemu() detected QEMU death from signal 11 (Segmentation fault) (core dumped)
(Of course, the really useful information would be why the QEMU
process dumped core in the first place, but we don't have that
by the time the test program has picked up the exit status.)
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180810132800.38549-1-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Core dump reporting and commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
13 of 13 C99 library function pairs taking ... or a va_list parameter
are called FOO() and vFOO(). In QEMU, we sometimes call the one
taking a va_list FOOv() instead. Bad taste. libqtest.h uses both
spellings. Normalize it to the standard spelling.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-24-armbru@redhat.com>
qtest_init() creates a new QTestState, and leaves @global_qtest alone.
qtest_start() additionally assigns it to @global_qtest, but
qtest_startf() additionally assigns NULL to @global_qtest. This makes
no sense. Replace it by qtest_initf() that works like qtest_init(),
i.e. leaves @global_qtest alone.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-23-armbru@redhat.com>
qtest_qmp() & friends pass their format string and variable arguments
to qobject_from_vjsonf_nofail(). Unlike qobject_from_jsonv(), they
aren't decorated with GCC_FMT_ATTR(). Fix that to get compile-time
format string checking.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-22-armbru@redhat.com>
Leaving interpolation into JSON to qmp() is more robust than building
QMP input manually, as explained in the recent commit "tests: Clean up
string interpolation into QMP input (simple cases)".
migration-test.c interpolates strings into JSON in a few places:
* migrate_set_parameter() interpolates string parameter @value as a
JSON number. Change it to long long. This requires changing
migrate_check_parameter() similarly.
* migrate_set_capability() interpolates string parameter @value as a
JSON boolean. Change it to bool.
* deprecated_set_speed() interpolates string parameter @value as a
JSON number. Change it to long long.
Bonus: gets rid of non-literal format strings. A step towards
compile-time format string checking without triggering
-Wformat-nonliteral.
Cc: Juan Quintela <quintela@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-21-armbru@redhat.com>
Leaving interpolation into JSON to qmp() is more robust than building
QMP input manually, as explained in the recent commit "tests: Clean up
string interpolation into QMP input (simple cases)".
migrate() interpolates members into a JSON object. Change it to take
its extra QMP arguments as arguments for qdict_from_jsonf_nofail()
instead of a string containing JSON members.
Bonus: gets rid of a non-literal format string. A step towards
compile-time format string checking without triggering
-Wformat-nonliteral.
Cc: Juan Quintela <quintela@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-20-armbru@redhat.com>
Leaving interpolation into JSON to qmp() is more robust than building
QMP input manually, as explained in the recent commit "tests: Clean up
string interpolation into QMP input (simple cases)".
migrate_recover() builds QMP input manually because wait_command()
can't interpolate. Well, it can since the previous commit. Simplify
accordingly.
Bonus: gets rid of a non-literal format string. A step towards
compile-time format string checking without triggering
-Wformat-nonliteral.
Cc: Juan Quintela <quintela@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-19-armbru@redhat.com>
wait_command() passes its argument @command to qtest_qmp_send().
This falls apart if @command contains '%'. Two ways to disarm this trap:
suppress interpretation of '%' by passing @command as argument to
format string "%s", or fix it by having wait_command() take the
variable arguments to go with @command. Do the latter.
This is another step towards compile-time format string checking
without triggering -Wformat-nonliteral.
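A sketch of the variadic plumbing (the helper in migration-test.c does
more, e.g. unwrapping the reply as described in the neighbouring commits):

static QDict *wait_command(QTestState *who, const char *command, ...)
{
    va_list ap;

    va_start(ap, command);
    qtest_qmp_vsend(who, command, ap);
    va_end(ap);

    return qtest_qmp_receive(who);      /* caller inspects the reply */
}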
Cc: Juan Quintela <quintela@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-18-armbru@redhat.com>
Commit b21373d071 copied wait_command() from tests/migration-test.c
to tests/tpm-util.c. Replace both copies by new libqtest helper
qtest_qmp_receive_success(). Also use it to simplify
qtest_qmp_device_del().
Bonus: gets rid of a non-literal format string. A step towards
compile-time format string checking without triggering
-Wformat-nonliteral.
Cc: Thomas Huth <thuth@redhat.com>
Cc: Juan Quintela <quintela@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-17-armbru@redhat.com>
All callers of wait_command() are only interested in the success
response's "return" member. Lift its extraction into wait_command().
Cc: Juan Quintela <quintela@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-16-armbru@redhat.com>
Leaving interpolation into JSON to qmp() is more robust than building
QMP input manually, as explained in the commit before previous.
qtest_qmp_device_add() and its wrappers interpolate into JSON as
follows:
* qtest_qmp_device_add() interpolates members into a JSON object.
* So do its wrappers qpci_plug_device_test() and usb_test_hotplug().
* usb_test_hotplug() additionally interpolates strings and numbers
into JSON strings.
Clean them up:
* Have qtest_qmp_device_add() take its extra device properties as
arguments for qdict_from_jsonf_nofail() instead of a string
containing JSON members.
* Drop qpci_plug_device_test(), use qtest_qmp_device_add()
directly.
* Change usb_test_hotplug() parameter @port to string, to avoid
interpolation. Interpolate @hcd_id separately.
Bonus: gets rid of a non-literal format string. A step towards
compile-time format string checking without triggering
-Wformat-nonliteral.
Cc: Thomas Huth <thuth@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-15-armbru@redhat.com>
test_plug_with_device_add_x86() plugs Haswell-i386-cpu and
Haswell-x86_64-cpu with device_add. It passes socket-id, core-id,
thread-id as JSON strings. The properties are actually integers.
test_plug_with_device_add_coreid() plugs power8_v2.0-spapr-cpu-core
and qemu-s390x-cpu with device_add. It passes core-id as JSON string.
The properties are actually integers.
Passing JSON string values to integer properties works only due to
device_add implementation accidents. Fix the test to pass JSON
numbers. While there, use %u rather than %i with unsigned int.
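An illustrative call using the libqtest qmp() convenience wrapper
(driver name and property values taken from the description above; the
actual test code may differ):

QDict *resp = qmp("{ 'execute': 'device_add',"
                  "  'arguments': { 'driver': %s, 'id': %s,"
                  "                 'socket-id': %u, 'core-id': %u,"
                  "                 'thread-id': %u } }",
                  "Haswell-x86_64-cpu", "cpu-1", 1, 0, 0);
g_assert(qdict_haskey(resp, "return"));
qobject_unref(resp);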
Cc: Thomas Huth <thuth@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-14-armbru@redhat.com>
When you build QMP input manually like this
cmd = g_strdup_printf("{ 'execute': 'migrate',"
"'arguments': { 'uri': '%s' } }",
uri);
rsp = qmp(cmd);
g_free(cmd);
you're responsible for escaping the interpolated values for JSON. Not
done here, and therefore works only for sufficiently nice @uri. For
instance, if @uri contained a single "'", qobject_from_vjsonf_nofail()
would abort. A sufficiently nasty @uri could even inject unwanted
members into the arguments object.
Leaving interpolation into JSON to qmp() is more robust:
rsp = qmp("{ 'execute': 'migrate', 'arguments': { 'uri': %s } }", uri);
It's also more concise.
Clean up the simple cases where we interpolate exactly a JSON value.
Bonus: gets rid of non-literal format strings. A step towards
compile-time format string checking without triggering
-Wformat-nonliteral.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-13-armbru@redhat.com>
The qmp_FOO() take a printf-like format string. In a few places, we
assign a string literal to a variable and pass that instead of simply
passing the literal. Clean that up.
Bonus: gets rid of non-literal format strings. A step towards
compile-time format string checking without triggering
-Wformat-nonliteral.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-12-armbru@redhat.com>
When visitor_input_test_init_internal()'s argument @ap is null, then
@json_string is interpreted literally, else it gets %-escapes
interpolated. This is awkward.
One caller always passes null @ap, and the others never do. Lift the
building of the QObject into the callers, where it can be done without
such ambiguity.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-10-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-9-armbru@redhat.com>
Commit ab45015a96 "qobject: Let qobject_from_jsonf() fail instead of
abort" fails to accomplish its stated aim: the function can still
abort due to its use of &error_abort.
Its rationale for letting it fail is that all remaining users cope
fine with failure. Well, they're just fine with aborting, too; it's
what they do on failure.
Simply reverting the broken commit would bring back the unfortunate
asymmetry between qobject_from_jsonf() and qobject_from_jsonv(): one
aborts, the other returns null. So also rename it to
qobject_from_jsonf_nofail().
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-7-armbru@redhat.com>
We have two flavors of vararg usage in qtest: qtest_hmp() etc. work
like sprintf(), and qtest_qmp() etc. work like qobject_from_jsonf().
Spell that out in the comments.
Also add GCC_FMT_ATTR() to qtest_hmp() etc. so that the compiler can
flag incorrect use.
We have some cleanup work to do before we can do the same for
qtest_qmp() etc. This would get us the same better-than-nothing
checking we already have for qobject_from_jsonf(): common incorrect
uses of supported conversion specifications will be flagged
(e.g. passing a double for %d), but use of unsupported ones won't.
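For example, the hmp declaration gains an annotation along these lines
(sketch; the exact prototype lives in libqtest.h):

char *qtest_hmp(QTestState *s, const char *fmt, ...) GCC_FMT_ATTR(2, 3);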
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Rebased, comment wording tweaked, commit message rewritten]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180806065344.7103-6-armbru@redhat.com>
qtest_qmp_discard_response(...) is shorthand for
qobject_unref(qtest_qmp(...)), except it's not actually shorter.
Moreover, the presence of these functions encourage sloppy testing.
Remove them from libqtest. Add them as macros to the tests that use
them, with a TODO comment asking for cleanup.
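The per-test replacement is a one-liner along these lines:

/* TODO actually test the results and get rid of this */
#define qmp_discard_response(...) qobject_unref(qmp(__VA_ARGS__))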
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-5-armbru@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
qtest_init() still uses the qtest_qmp_discard_response(s, "") hack to
receive the greeting, even though we have qtest_qmp_receive() since
commit 66e0c7b187. Put it to use.
Bonus: gets rid of an empty format string. A step towards
compile-time format string checking without triggering
-Wformat-zero-length.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-4-armbru@redhat.com>
qtest_qmp_device_del() still uses the qmp("") hack to receive a
message, even though we have qmp_receive() since commit 66e0c7b187.
Put it to use.
Bonus: gets rid of empty format strings. A step towards compile-time
format string checking without triggering -Wformat-zero-length.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-3-armbru@redhat.com>
The functions to receive messages are called qtest_qmp_receive() and
qmp_receive(), qmp_fd_receive(). The ones to send messages are called
qtest_async_qmp(), qtest_async_qmpv(), qmp_async(), qmp_fd_send(),
qmp_fd_sendv(). Inconsistent. Rename the *_async* ones to
qmp_send(), qtest_qmp_send(), qtest_qmp_vsend(). Rename
qmp_fd_sendv() to qmp_fd_vsend().
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180806065344.7103-2-armbru@redhat.com>
blockdev-mirror with the same node for source and target segfaults
today: A node is in its own backing chain, so mirror_start_job() decides
that this is an active commit. When adding the intermediate nodes with
block_job_add_bdrv(), it starts the iteration through the subchain with
the backing file of source, though, so it never reaches target and
instead runs into NULL at the base.
While we could fix that by starting with source itself, there is no
point in allowing mirroring a node into itself and I wouldn't be
surprised if this caused more problems later.
So just check for this scenario and error out.
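A sketch of the guard (error message and placement are illustrative; bs,
target and errp are the mirror job setup's source node, target node and
error pointer):

if (bs == target) {
    error_setg(errp, "Can't mirror node into itself");
    return;
}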
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
This reinstates commit b008326744,
which was temporarily reverted for the 3.0 release so that libvirt gets
some extra time to update their command lines.
The -drive option serial was deprecated in QEMU 2.10. It's time to
remove it.
Tests need to be updated to set the serial number with -global instead
of using the -drive option.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
This reinstates commit a7aff6dd10,
which was temporarily reverted for the 3.0 release so that libvirt gets
some extra time to update their command lines.
The -drive options cyls, heads, secs and trans were deprecated in
QEMU 2.10. It's time to remove them.
hd-geo-test tested both the old version with geometry options in -drive
and the new one with -device. Therefore the code using -drive doesn't
have to be replaced there, we just need to remove the -drive test cases.
This in turn allows some simplification of the code.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
The previous patch fixes a problem in which draining a block device
with more than one throttled request can make it wait first for the
completion of requests in other members of the same group.
This patch updates test_remove_group_member() in iotest 093 to
reproduce that scenario. This updated test would hang QEMU without the
fix from the previous patch.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
A throttle group can have several members, and each one of them can
have several pending requests in the queue.
The requests are processed in a round-robin fashion, so the algorithm
decides the drive that is going to run the next request and sets a
timer in it. Once the timer fires and the throttled request is run
then the next drive from the group is selected and a new timer is set.
If the user tried to remove a drive from a group and that drive had a
timer set then the code was not taking care of setting up a new timer
in one of the remaining members of the group, freezing their I/O.
This problem was fixed in 6fccbb475b,
and this patch adds a new test case that reproduces this exact
scenario.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Fix the following issues:
common.py:873:13: E129 visually indented line with same indent as next logical line
common.py:1766:5: E741 ambiguous variable name 'l'
common.py:1784:1: E305 expected 2 blank lines after class or function definition, found 1
common.py:1833:1: E305 expected 2 blank lines after class or function definition, found 1
common.py:1843:1: E305 expected 2 blank lines after class or function definition, found 1
visit.py:181:18: E127 continuation line over-indented for visual indent
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180621083551.775-1-armbru@redhat.com>
[Fixup squashed in:]
Message-ID: <871sd0nzw9.fsf@dusky.pond.sub.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Presumably 0.15 was the version in which it was first introduced, but
QMP keeps evolving. There is no point in having that version
as a test prefix; 'qmp' makes more sense here.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180326150916.9602-12-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Use make's --output-sync option when running tests inside VMs,
so that if we're building with parallelization the output doesn't
get scrambled.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180803085230.30574-6-peter.maydell@linaro.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Currently we run the guests in a VM which is given only 2G of RAM.
Since the guests are configured without any swap space, builds
can fail because the system runs out of memory and kills the
compiler, especially if the job count is set for a lot of
parallelism. Bump the setting up from 2G to 4G to give us some
more headroom.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180803085230.30574-5-peter.maydell@linaro.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Invoking 'make vm-build-freebsd' and friends with V=1 should
propagate that verbosity setting down into the build run
inside the VM. Make sure we do that. This brings it into
line with how the container tests handle V=1.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180803085230.30574-4-peter.maydell@linaro.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Our test suite works for parallel execution too, and this can
noticeably speed up a test run; pass the 'jobs' setting to
it as well as to the build proper.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180803085230.30574-3-peter.maydell@linaro.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
The images are big. Add a rule to clean up easily.
Suggested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20180716020008.31468-1-famz@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
This one does docker testing in the VM. It is intended to replace the
native docker testing on patchew testers.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20180712012829.20231-5-famz@redhat.com>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
In VM based tests, the source archive is created on the host, so we don't
have to run archive-source.sh again; doing so only complicates the Makefile
and scripts.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20180712012829.20231-4-famz@redhat.com>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Not using a snapshot has the benefit of automatically persisting useful
test harnesses, such as docker images and the ccache database. Although it
loses some cleanliness, it is imaginably useful for patchew.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20180712012829.20231-2-famz@redhat.com>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Similar to 79f24568e5, this fixes the following warnings:
CHK version_gen.h
LEX convert-dtsv0-lexer.lex.c
make[1]: flex: Command not found
BISON dtc-parser.tab.c
make[1]: bison: Command not found
LEX dtc-lexer.lex.c
make[1]: flex: Command not found
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180628153535.1411-5-f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
If KVM is not available, then use the 'max' cpu.
This fixes:
ERROR:root:Log:
ERROR:root:qemu-system-x86_64: CPU model 'host' requires KVM
Failed to prepare guest environment
error: [Errno 104] Connection reset by peer
source/qemu/tests/vm/Makefile.include:25: recipe for target 'tests/vm/ubuntu.i386.img' failed
make: *** [tests/vm/ubuntu.i386.img] Error 2
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180628153535.1411-4-f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
QEMU now adds a new check for memory-less NUMA nodes in build_srat().
This affects the ACPI test.
So, update the ACPI table test blobs.
Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
This adds a test to make sure we fail properly for a 0 length mmap.
There are most likely other failure conditions we should also check.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: umarcor <1783362@bugs.launchpad.net>
Message-Id: <20180730134321.19898-3-alex.bennee@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Make sure that query-blockstats returns information for every
BlockBackend that is named or attached to a device model (or both).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
On my system (Fedora 28), this script reports a 'failed to get
"consistent read" lock' error. Following docs/devel/testing.rst, it's
better to add locking=off here.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
qstring_from_substr() takes the index of the substring's first and
last character. qstring_from_substr(s, 0, SIZE_MAX) denotes an empty
substring. Awkward.
Shift the end index one to the right. This simplifies both
qstring_from_substr() and its callers.
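Before and after, for a caller extracting the first @n characters of @str
(a sketch of the convention change, not a specific call site):

QString *qs;

/* old convention: @end is the index of the last character (inclusive) */
qs = qstring_from_substr(str, 0, n - 1);

/* new convention: @end is one past the last character, so an empty
 * substring is simply start == end */
qs = qstring_from_substr(str, 0, n);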
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180727062204.10401-3-armbru@redhat.com>
When gnutls negotiates TLS 1.3 instead of 1.2, the order of messages
sent by the handshake changes. This exposed a logic bug in the test
suite which caused us to wait for the server to see handshake
completion, but not wait for the client to see completion. The result
was the client didn't receive the certificate for verification and the
test failed.
This is exposed in Fedora 29 rawhide which has just enabled TLS 1.3 in
its GNUTLS builds.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Most of the TLS related tests pass in an "Error" object to
methods that are expected to fail, but then ignoring any error that is
set and instead asserting on a return value. This means that when an
error is unexpectedly raised, no information about it is printed out,
making failures hard to diagnose. Changing these tests to pass in
&error_abort will make unexpected failures print messages to stderr.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The test-vmstate test is a bit chatty because it triggers various
expected failure scenarios and the code in question uses error_report
instead of accepting 'Error **errp' parameters. To silence this test the
stubs for error_vprintf() were changed to send errors via
g_test_message() instead of stderr:
commit 28017e010d
Author: Paolo Bonzini <pbonzini@redhat.com>
Date: Mon Oct 24 18:31:03 2016 +0200
tests: send error_report to test log
Implement error_vprintf to send the output of error_report to
the test log. This silences test-vmstate.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1477326663-67817-3-git-send-email-pbonzini@redhat.com>
Unfortunately this change has global impact across the entire test suite
and means that when tests fail for unexpected reasons, the message is
not displayed on stderr. For example, when using &error_abort in a call,
the test merely prints:
Unexpected error in qcrypto_tls_session_check_certificate() at crypto/tlssession.c:280:
and the actual error message is hidden, making it impossible to diagnose
the failure. This is especially problematic in CI or build systems where
it isn't possible to easily pass the --debug-log flag to tests and
re-run with the test log visible.
This change makes the previous big hammer much more nuanced, providing a
flag in the stub error_vprintf() that can be used on a per-test basis to
silence the errors. Only test-vmstate silences errors initially.
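A sketch of what such a stub can look like (the flag name
error_printf_silent is illustrative, not necessarily the identifier the
patch uses):

    #include <glib.h>
    #include <stdarg.h>
    #include <stdbool.h>
    #include <stdio.h>

    bool error_printf_silent;   /* set by tests that provoke errors */

    int error_vprintf(const char *fmt, va_list ap)
    {
        if (error_printf_silent) {
            char *msg = g_strdup_vprintf(fmt, ap);
            g_test_message("%s", msg);   /* only visible in the test log */
            g_free(msg);
            return 0;
        }
        return vfprintf(stderr, fmt, ap);   /* unexpected errors stay on stderr */
    }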
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Calling qcrypto_init ensures that all relevant initialization is
done. In particular this honours the debugging settings and thread
settings.
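For reference, the call at the start of a test is as simple as this
sketch (error handling kept minimal):

    Error *err = NULL;
    g_assert(qcrypto_init(&err) == 0);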
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
last_byte should only change when the counter reaches the edge.
Updating it on every page may leave last_byte holding an invalid value
when memory corruption is detected, which then makes the check of the
next byte incorrect. For example, a single corrupted page at address
0x14ad000 also leads to a "fake" corruption being reported at 0x14ae000:
Memory content inconsistency at 14ad000 first_byte = 44 last_byte = 44 current = ef hit_edge = 0
Memory content inconsistency at 14ae000 first_byte = 44 last_byte = ef current = 44 hit_edge = 0
After the patch, only the corrupted page is reported:
Memory content inconsistency at 14ad000 first_byte = 44 last_byte = 44 current = ef hit_edge = 0
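The idea of the fix, sketched (names mirror the error message above,
but this is not the exact test code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Returns true if the byte read from a page is consistent. */
    static bool page_ok(uint32_t address, uint8_t b, uint8_t first_byte,
                        uint8_t *last_byte, bool *hit_edge)
    {
        if (b == *last_byte) {
            return true;
        }
        if ((uint8_t)(b + 1) == *last_byte && !*hit_edge) {
            /* The guest stopped between incrementing two pages; accept
             * it and latch last_byte only here, at the single allowed
             * edge. */
            *hit_edge = true;
            *last_byte = b;
            return true;
        }
        fprintf(stderr, "Memory content inconsistency at %x first_byte = %x"
                " last_byte = %x current = %x hit_edge = %d\n",
                address, first_byte, *last_byte, b, *hit_edge);
        return false;
    }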
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180723123305.24792-4-peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
The combination of being rather esoteric and needing to support mmap @
0 means this only ever worked under translation. It has now regressed
even further and is no longer useful. Kill it.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Setting up binfmt_misc is outside of the scope of the docker.py script,
but we can at least validate it with any given executable so we get a
more useful error message than debootstrap's sed line failing
cryptically.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reported-by: Richard Henderson <richard.henderson@linaro.org>
We do a minimum version check for debootstrap, but if the distro has
added its own minor version tick the check would fail and we would fall
back to the SCM version. This is sub-optimal as the latest/greatest
version may be broken at any particular time. We fix that with a little
sed magic on the version string before passing it to our ugly shell
versioning check.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This is just a note that later versions of debootstrap don't
technically need this hack.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
When a check fails we currently just report the failure, which is not
totally helpful to people who want to bootstrap a new image. Report a
hint as to why it failed as well.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Suggested-by: Fam Zheng <famz@redhat.com>
The addition of QEMU_TARGET was intended to ensure we fall back to
checking for the existence of an image if the build system was not
currently configured to build it. However this breaks the direct use
of the rule for building custom binfmt_misc images. We already check
for EXECUTABLE, so let us just use that as a proxy for deciding whether
we are only going to check that the image exists.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This allows us to run a particular test on all docker images. For
example:
  make docker-test-unit
will run the unit tests on every supported image. At the same time,
rename docker-test to docker-all-tests to be clearer.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This test doesn't even build QEMU; it just builds and runs all the
unit tests. It is intended to make checking the unit tests on all
docker images easier.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Rename DOCKER_INTERMEDIATE_IMAGES to DOCKER_PARTIAL_IMAGES and add the
incomplete cross compiler images that can build tests but can't build
QEMU itself. We also add debian, debian-bootstrap and the tricode
images to the list.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Not all our images are able to run the tests. Rather than use features,
we can just check for the existence and runnability of gtester. If the
image has been set up for binfmt_misc it will be able to run anyway.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Not all docker images can run the check step. Let's move everything
into a common helper so we don't need to replicate checks in the
future.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This allows some tests that just want to configure QEMU's source tree
to do so.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
As this is called directly from the Makefile while determining
dependencies, it is possible the user configured things in one window
but does not have credentials in the other. Let's catch the exceptions
and deal with them quietly.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This image isn't going to build anything significant as it is just
intended for building test cases. In case it does end up getting
inadvertently included in a build, let's aim for the minimal possible
product.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
We need both git and a working compiler to build the tools. Although
the qemu:debian9 image also has a bunch of extra dependencies, it would
be fairly unusual for a user not to already have this layer available
for one of our many other docker images, so let's not complicate things.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
The .gitignore was being a little over-enthusiastic in hiding files.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
223 tests persistent dirty bitmaps, which are not supported in
compat=0.10, so that option is unsupported for this test.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Tested-by: John Snow <jsnow@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>