This changes the ICP realize and reset handlers into DeviceRealize and
DeviceReset handlers. Parent handlers are now called from the
inheriting classes, which is a cleaner object pattern.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In order to test that the NBD server is properly advertising
dirty bitmaps, we need a bare minimum client that can request
and read the context. Since feature freeze for 3.0 is imminent,
this is the smallest workable patch, which replaces the qemu
block status report with the results of the NBD server's dirty
bitmap (making it very easy to use 'qemu-img map --output=json'
to learn where the dirty portions are). Note that the NBD
protocol defines a dirty section with the same bit but the opposite
sense from the one normal "base:allocation" uses to report an allocated
section; so in qemu-img map output, "data":true corresponds to
clean and "data":false corresponds to dirty.
A more complete solution that allows dirty bitmaps to be queried
at the same time as normal block status will be required before
this addition can lose the x- prefix. Until then, the fact that
this replaces normal status with dirty status means actions
like 'qemu-img convert' will likely misbehave due to treating
dirty regions of the file as if they are unallocated.
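As a rough illustration of how the new context can be inspected (the
socket path and bitmap name below are placeholders, not taken from this
patch):

$ qemu-img map --output=json --image-opts \
    driver=nbd,server.type=unix,server.path=/tmp/nbd.sock,\
x-dirty-bitmap=qemu:dirty-bitmap:bitmap0

With the inverted sense described above, the ranges reported with
"data":false are the dirty ones.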
The next patch adds an iotest to exercise this new code.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180702191458.28741-2-eblake@redhat.com>
The condition to check whether an address has hit against a particular
TLB entry is not completely trivial. We do this in various places, and
in fact in one place (get_page_addr_code()) we have got the condition
wrong. Abstract it out into new tlb_hit() and tlb_hit_page() inline
functions (one for a known-page-aligned address and one for an
arbitrary address), and use them in all the places where we had the
condition correct.
This is a no-behaviour-change patch; we leave fixing the buggy
code in get_page_addr_code() to a subsequent patch.
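For reference, the helpers are small inline predicates along these lines
(a sketch based on the description above; details may differ from the
actual patch):

static inline bool tlb_hit_page(target_ulong tlb_addr, target_ulong page)
{
    /* @page is already TARGET_PAGE_SIZE aligned */
    return page == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
}

static inline bool tlb_hit(target_ulong tlb_addr, target_ulong addr)
{
    return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
}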
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180629162122.19376-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
- add bpb/ppa15 features to default cpu model for z196 and later
- rework TOD handling and fix cpu hotplug under tcg
- various fixes
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20180702' into staging
s390x updates:
- add bpb/ppa15 features to default cpu model for z196 and later
- rework TOD handling and fix cpu hotplug under tcg
- various fixes
# gpg: Signature made Mon 02 Jul 2018 12:09:40 BST
# gpg: using RSA key DECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>"
# gpg: aka "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# gpg: aka "Cornelia Huck <cohuck@kernel.org>"
# gpg: aka "Cornelia Huck <cohuck@redhat.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20180702:
s390x/tcg: fix locking problem with tcg_s390_tod_updated
s390x/kvm: indicate alignment in legacy_s390_alloc()
s390x/kvm: legacy_s390_alloc() only supports one allocation
s390x/tcg: fix CPU hotplug with single-threaded TCG
s390x/tcg: rearm the CKC timer during migration
s390x/tcg: implement SET CLOCK
s390x/tcg: SET CLOCK COMPARATOR can clear CKC interrupts
s390x/tcg: properly implement the TOD
s390x/tcg: drop tod_basetime
s390x/tod: factor out TOD into separate device
s390x/kvm: pass values instead of pointers to kvm_s390_set_clock_*()
s390x/tcg: avoid overflows in time2tod/tod2time
s390x/cpumodel: default enable bpb and ppa15 for z196 and later
loader: Check access size when calling rom_ptr() to avoid crashes
s390/ipl: fix ipl with -no-reboot
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is no need for a stub, since tb_invalidate_phys_addr can be excised
altogether when TCG is disabled. This is a bit cleaner since it avoids
using code that is clearly specific to user-mode emulation (it calls
mmap_lock/unlock) for the !CONFIG_TCG case.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
All files using "qemu/units.h" definitions already include it directly,
we can now remove it from "qemu/cutils.h".
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Message-Id: <20180625124238.25339-41-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It eases code review: the unit is explicit.
Patch generated using:
$ git grep -E '(1024|2048|4096|8192|(<<|>>).?(10|20|30))' hw/ include/hw/
and modified manually.
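As an illustration of the kind of change (made-up constants, not a hunk
from this patch), with "qemu/units.h" the unit becomes explicit:

    #include "qemu/units.h"

    #define FLASH_SIZE   (32 * MiB)   /* was (32 * 1024 * 1024) */
    #define SRAM_SIZE    (16 * KiB)   /* was 0x4000 */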
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180625124238.25339-39-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It eases code review: the unit is explicit.
Patch generated using:
$ git grep -E '(1024|2048|4096|8192|(<<|>>).?(10|20|30))' hw/ include/hw/
and modified manually.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Message-Id: <20180625124238.25339-35-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It eases code review: the unit is explicit.
Patch generated using:
$ git grep -E '(1024|2048|4096|8192|(<<|>>).?(10|20|30))' hw/ include/hw/
and modified manually.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20180625124238.25339-33-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Loosely based on 076b35b5a5.
Suggested-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180625124238.25339-2-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Fix the --disable-tcg breakage introduced by 8bca9a03ec60d:
$ configure --disable-tcg
[...]
$ make -C i386-softmmu exec.o
make: Entering directory 'i386-softmmu'
CC exec.o
In file included from source/qemu/exec.c:62:0:
source/qemu/include/exec/ram_addr.h:96:6: error: conflicting types for ‘tb_invalidate_phys_range’
void tb_invalidate_phys_range(ram_addr_t start, ram_addr_t end);
^~~~~~~~~~~~~~~~~~~~~~~~
In file included from source/qemu/exec.c:24:0:
source/qemu/include/exec/exec-all.h:309:6: note: previous declaration of ‘tb_invalidate_phys_range’ was here
void tb_invalidate_phys_range(target_ulong start, target_ulong end);
^~~~~~~~~~~~~~~~~~~~~~~~
source/qemu/exec.c:1043:6: error: conflicting types for ‘tb_invalidate_phys_addr’
void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
^~~~~~~~~~~~~~~~~~~~~~~
In file included from source/qemu/exec.c:24:0:
source/qemu/include/exec/exec-all.h:308:6: note: previous declaration of ‘tb_invalidate_phys_addr’ was here
void tb_invalidate_phys_addr(target_ulong addr);
^~~~~~~~~~~~~~~~~~~~~~~
make: *** [source/qemu/rules.mak:69: exec.o] Error 1
make: Leaving directory 'i386-softmmu'
Tested to build x86_64-softmmu and i386-softmmu targets.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180629200710.27626-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Right now, each CPU has its own TOD. In particular, the TOD will differ
based on the creation time of a CPU - e.g. when hotplugging a CPU the times
will differ quite a lot, resulting in stall warnings in the guest.
Let's use a single TOD by implementing our new TOD device. Prepare it
for TOD-clock epoch extension.
Most importantly, whenever we set the TOD, we have to update the CKC
timer.
Introduce "tcg_s390x.h" just like "kvm_s390x.h" for tcg specific
function declarations that should not go into cpu.h.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's treat this like a separate device. TCG will have to store the
actual state/time later on.
Include cpu-qom.h in kvm_s390x.h (due to S390CPU) to compile tod-kvm.c.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The rom_ptr() function allows direct access to the ROM blobs that we
load during startup. However, there are currently no checks for the
size of the accesses, so it is possible to crash QEMU, for
example with:
$ echo "Insane in the mainframe" > /tmp/test.txt
$ s390x-softmmu/qemu-system-s390x -kernel /tmp/test.txt -append xyz
Segmentation fault (core dumped)
$ s390x-softmmu/qemu-system-s390x -kernel /tmp/test.txt -initrd /tmp/test.txt
Segmentation fault (core dumped)
$ echo -n HdrS > /tmp/hdr.txt
$ sparc64-softmmu/qemu-system-sparc64 -kernel /tmp/hdr.txt -initrd /tmp/hdr.txt
Segmentation fault (core dumped)
We need a way to check the size of the ROM area that we want
to access, so let's add a size parameter to the rom_ptr() function
to avoid these problems.
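A sketch of how the size-checked lookup could look (names mirror the
existing loader code; treat the details as an assumption rather than the
actual patch):

void *rom_ptr(hwaddr addr, size_t size)
{
    Rom *rom;

    /* only return a pointer if the ROM covers [addr, addr + size) */
    rom = find_rom(addr, size);
    if (!rom || !rom->data) {
        return NULL;
    }
    return rom->data + (addr - rom->addr);
}

Callers then have to handle a NULL return instead of blindly dereferencing
the pointer.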
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1530005740-25254-1-git-send-email-thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
kexec/kdump as well as the bootloader use a subcode of diagnose 308
that is supposed to reset the I/O subsystem but not comprise a full
"reboot". With the latest refactoring this is now broken when
-no-reboot is used or when libvirt acts on a reboot QMP event, for
example a virt-install from iso images.
We need to mark these "subsystem resets" as special.
Fixes: a30fb811cb (s390x: refactor reset/reipl handling)
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180622102928.173420-1-borntraeger@de.ibm.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
It was unclear before what the CLOSED event means. Meanwhile, we
add a TODO to fix up the CLOSED event in the future for the case when
the in/out ports of a chardev are different.
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: "Marc-André Lureau" <marcandre.lureau@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180620073223.31964-2-peterx@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
* last of the SVE patches; SVE is now enabled for aarch64 linux-user
* sd: Don't trace SDRequest crc field (coverity bugfix)
* target/arm: Mark PMINTENSET accesses as possibly doing IO
* clean up v7VE feature bit handling
* i.mx7d: minor cleanups
* target/arm: support reading of CNT[VCT|FRQ]_EL0 from user-space
* target/arm: Implement ARMv8.2-DotProd
* virt: add addresses to dt node names (which stops dtc from
complaining that they're not correctly named)
* cleanups: replace error_setg(&error_fatal) by error_report() + exit()
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20180629' into staging
target-arm queue:
* last of the SVE patches; SVE is now enabled for aarch64 linux-user
* sd: Don't trace SDRequest crc field (coverity bugfix)
* target/arm: Mark PMINTENSET accesses as possibly doing IO
* clean up v7VE feature bit handling
* i.mx7d: minor cleanups
* target/arm: support reading of CNT[VCT|FRQ]_EL0 from user-space
* target/arm: Implement ARMv8.2-DotProd
* virt: add addresses to dt node names (which stops dtc from
complaining that they're not correctly named)
* cleanups: replace error_setg(&error_fatal) by error_report() + exit()
# gpg: Signature made Fri 29 Jun 2018 15:52:21 BST
# gpg: using RSA key 3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20180629: (55 commits)
target/arm: Add ID_ISAR6
target/arm: Prune a15 features from max
target/arm: Prune a57 features from max
target/arm: Fix SVE system register access checks
target/arm: Fix SVE signed division vs x86 overflow exception
sdcard: Use the ldst API
sd: Don't trace SDRequest crc field
target/arm: Mark PMINTENSET accesses as possibly doing IO
target/arm: Remove redundant DIV detection for KVM
target/arm: Add ARM_FEATURE_V7VE for v7 Virtualization Extensions
i.mx7d: Change IRQ number type from hwaddr to int
i.mx7d: Change SRC unimplemented device name from sdma to src
i.mx7d: Remove unused header files
target/arm: support reading of CNT[VCT|FRQ]_EL0 from user-space
target/arm: Implement ARMv8.2-DotProd
target/arm: Enable SVE for aarch64-linux-user
target/arm: Implement SVE dot product (indexed)
target/arm: Implement SVE dot product (vectors)
target/arm: Implement SVE fp complex multiply add (indexed)
target/arm: Pass index to AdvSIMD FCMLA (indexed)
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The Darwin host support still needs some more work. It won't make it for
soft-freeze, but I'd like these preparatory patches to be merged anyway.
Merge remote-tracking branch 'remotes/gkurz/tags/for-upstream' into staging
The Darwin host support still needs some more work. It won't make it for
soft-freeze, but I'd like these preparatory patches to be merged anyway.
# gpg: Signature made Fri 29 Jun 2018 11:39:04 BST
# gpg: using RSA key 71D4D5E5822F73D6
# gpg: Good signature from "Greg Kurz <groug@kaod.org>"
# gpg: aka "Gregory Kurz <gregory.kurz@free.fr>"
# gpg: aka "[jpeg image of size 3330]"
# Primary key fingerprint: B482 8BAF 9431 40CE F2A3 4910 71D4 D5E5 822F 73D6
* remotes/gkurz/tags/for-upstream:
9p: darwin: Explicitly cast comparisons of mode_t with -1
cutils: Provide strchrnul
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This helper allows retrieving the paths of nodes whose names
match node-name or node-name@unit-address patterns.
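Usage could look roughly like this (the snippet is illustrative; the
"timer" node name and the surrounding error handling are assumptions,
not part of this patch):

    char **paths;
    Error *err = NULL;

    /* find "/timer" as well as "/timer@<unit-address>" nodes */
    paths = qemu_fdt_node_unit_path(fdt, "timer", &err);
    if (err) {
        error_report_err(err);
    } else {
        for (int i = 0; paths[i]; i++) {
            /* use paths[i] ... */
        }
        g_strfreev(paths);
    }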
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1530044492-24921-2-git-send-email-eric.auger@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This updates the minimum required glib version to 2.40
Merge remote-tracking branch 'remotes/berrange/tags/min-glib-pull-request' into staging
glib: update the min required version
This updates the minimum required glib version to 2.40
# gpg: Signature made Fri 29 Jun 2018 12:24:58 BST
# gpg: using RSA key BE86EBB415104FDF
# gpg: Good signature from "Daniel P. Berrange <dan@berrange.com>"
# gpg: aka "Daniel P. Berrange <berrange@redhat.com>"
# Primary key fingerprint: DAF3 A6FD B26B 6291 2D0E 8E3F BE86 EBB4 1510 4FDF
* remotes/berrange/tags/min-glib-pull-request:
glib: enforce the minimum required version and warn about old APIs
glib: bump min required glib library version to 2.40
util: remove redundant include of glib.h and add osdep.h
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We are gradually moving away from sector-based interfaces, towards
byte-based. Now that all callers of vectored I/O have been converted
to use our preferred byte-based bdrv_co_p{read,write}v(), we can
delete the unused bdrv_co_{read,write}v().
Furthermore, this gets rid of the signature difference between the
public bdrv_co_writev() and the callback .bdrv_co_writev (the
latter still exists, because some drivers still need more work
before they are fully byte-based).
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This moves the code to resize an image file to the thread pool to avoid
blocking.
Creating large images with preallocation with blockdev-create is now
actually a background job instead of blocking the monitor (and most
other things) until the preallocation has completed.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
When growing an image, block drivers (especially protocol drivers) may
initialise the newly added area. I/O requests to the same area need to
wait for this initialisation to be completed so that data writes don't
get overwritten and reads don't read uninitialised data.
To avoid overhead in the fast I/O path by adding new locking in the
protocol drivers and to restrict the impact to requests that actually
touch the new area, reuse the existing tracked request infrastructure in
block/io.c and mark all truncate requests as serialising.
With this change, it is safe for protocol drivers to make
.bdrv_co_truncate actually asynchronous.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
This moves the bdrv_truncate() implementation from block.c to block/io.c
so it can have access to the tracked requests infrastructure.
This involves making refresh_total_sectors() public (in block_int.h).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
bdrv_truncate() is an operation that can block (even for a quite long
time, depending on the PreallocMode) in I/O paths that shouldn't block.
Convert it to a coroutine_fn so that we have the infrastructure for
drivers to make their .bdrv_co_truncate implementation asynchronous.
This change could potentially introduce new race conditions because
bdrv_truncate() isn't necessarily executed atomically any more. Whether
this is a problem needs to be evaluated for each block driver that
supports truncate:
* file-posix/win32, gluster, iscsi, nfs, rbd, ssh, sheepdog: The
protocol drivers are trivially safe because they don't actually yield
yet, so there is no change in behaviour.
* copy-on-read, crypto, raw-format: Essentially just filter drivers that
pass the request to a child node, no problem.
* qcow2: The implementation modifies metadata, so it needs to hold
s->lock to be safe with concurrent I/O requests. In order to avoid
double locking, this requires pulling the locking out into
preallocate_co() and using qcow2_write_caches() instead of
bdrv_flush().
* qed: Does a single header update, this is fine without locking.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
The error handling policy was traditionally set with -drive, but with
-blockdev it is no longer possible to set frontend options. scsi-disk
(and other block devices) have long supported qdev properties to
configure the error handling policy, so let's add these options to
usb-storage as well and just forward them to the internal scsi-disk
instance.
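As an illustration, assuming the forwarded properties keep the scsi-disk
names rerror/werror, a command line fragment could look like this (drive
and node names are made up):

$ qemu-system-x86_64 ... \
    -blockdev driver=qcow2,node-name=disk0,file.driver=file,file.filename=disk.qcow2 \
    -device qemu-xhci \
    -device usb-storage,drive=disk0,rerror=stop,werror=stop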
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
There are two useful macros that can be defined before including
glib.h that are related to the min required glib version
- GLIB_VERSION_MIN_REQUIRED
When this is defined, if code uses an API that was deprecated
in this version, or older, a compiler warning will be emitted.
This alerts maintainers to update their code to whatever new
replacement API is now recommended best practice.
- GLIB_VERSION_MAX_ALLOWED
When this is defined, if code uses an API that was introduced
in a version that is newer than the declared version, a compiler
warning will be emitted. This alerts maintainers if new code
accidentally uses functionality that won't be available on some
supported platforms.
The GLIB_VERSION_MAX_ALLOWED constant makes it a bit harder to opt
in to using specific new APIs with a GLIB_CHECK_VERSION conditional.
To work around this, pragmas can be used to temporarily turn off the
-Wdeprecated-declarations compiler warning, while a static inline
compat function is implemented. This workaround is illustrated with the
implementation of the g_strv_contains method to satisfy the test suite.
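A minimal sketch of that pattern, assuming the compat helper lives in
glib-compat.h (the guard version and naming here are illustrative):

#if !GLIB_CHECK_VERSION(2, 44, 0)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
static inline gboolean g_strv_contains_qemu(const gchar *const *strv,
                                            const gchar *str)
{
    const gchar *const *p;

    for (p = strv; p && *p; p++) {
        if (g_str_equal(*p, str)) {
            return TRUE;
        }
    }
    return FALSE;
}
#define g_strv_contains(a, b) g_strv_contains_qemu(a, b)
#pragma GCC diagnostic pop
#endif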
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Per the supported platforms doc[1], the minimum glib versions on relevant distros are:
RHEL-7: 2.50.3
Debian (Stretch): 2.50.3
Debian (Jessie): 2.42.1
OpenBSD (Ports): 2.54.3
FreeBSD (Ports): 2.50.3
OpenSUSE Leap 15: 2.54.3
SLE12-SP2: 2.48.2
Ubuntu (Xenial): 2.48.0
macOS (Homebrew): 2.56.0
This suggests that a minimum glib of 2.42 is a reasonable target.
The GLibC compile farm, however, uses Ubuntu 14.04 (Trusty) which only
has glib 2.40.0, and this is needed for testing during merge. Thus an
exception is made to the documented platform support policy to allow for
all three current LTS releases to be supported.
Docker jobs that no longer satisfy this new min version are removed.
[1] https://qemu.weilnetz.de/doc/qemu-doc.html#Supported-build-platforms
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Code must only ever include glib.h indirectly via the glib-compat.h
header file, because we will need some macros set before glib.h is
pulled in. Adding extra includes of glib.h will (soon) cause compile
failures such as:
In file included from /home/berrange/src/virt/qemu/include/qemu/osdep.h:107,
from /home/berrange/src/virt/qemu/include/qemu/iova-tree.h:26,
from util/iova-tree.c:13:
/home/berrange/src/virt/qemu/include/glib-compat.h:22: error: "GLIB_VERSION_MIN_REQUIRED" redefined [-Werror]
#define GLIB_VERSION_MIN_REQUIRED GLIB_VERSION_2_40
In file included from /usr/include/glib-2.0/glib/gtypes.h:34,
from /usr/include/glib-2.0/glib/galloca.h:32,
from /usr/include/glib-2.0/glib.h:30,
from util/iova-tree.c:12:
/usr/include/glib-2.0/glib/gversionmacros.h:237: note: this is the location of the previous definition
# define GLIB_VERSION_MIN_REQUIRED (GLIB_VERSION_CUR_STABLE)
Furthermore, the osdep.h include should always be done directly from the
.c file rather than indirectly via any .h file.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The VPD Block Limits Inquiry page is optional, allowing SCSI devices
to not implement it. This is the case for devices like the MegaRAID
SAS 9361-8i and Microsemi PM8069.
In case of SCSI passthrough, the response of this request is used by
the QEMU SCSI layer to set the max_io_sectors that the guest
device will support, based on the value of the max_sectors_kb that
the device has set in the host at that time. Without this response,
the guest kernel is free to assume any value of max_io_sectors
for the SCSI device. If this value is greater than the value from
the host, SCSI Sense errors will occur because the guest will send
read/write requests that are larger than the underlying host device
is configured to support. An example of this behavior can be seen
in [1].
A workaround is to set the max_sectors_kb host value back in the guest
kernel (a process that can be automated using rc.local startup scripts
and the like), but this has several drawbacks:
- it can be troublesome if the guest has many passthrough devices that
need this tuning;
- if a change in max_sectors_kb is made in the host side, manual change
in the guests will also be required;
- during an OS install it is difficult, and sometimes not possible, to
go to a terminal and change the max_sectors_kb prior to the installation.
This means that the disk can't be used during the install process. The
easiest alternative here is to roll back to scsi-hd, install the guest
and then go back to SCSI passthrough when the installation is done and
max_sectors_kb can be set.
An easier way would be for QEMU to handle the absence of the Block Limits
VPD device response, setting max_io_sectors accordingly and allowing
the guest to use the device without the hassle.
This patch adds emulation of the Block Limits VPD response for
SCSI passthrough devices of type TYPE_DISK that don't support
it. The following changes were made:
- a new function called scsi_generic_set_vpd_bl_emulation,
which is called during device realize, was created to set a
new flag 'needs_vpd_bl_emulation' of the device. This function
retrieves the Inquiry EVPD response of the device to check for
VPD BL support.
- scsi_handle_inquiry_reply will now check the available VPD
pages from the Inquiry EVPD reply in case the device needs
VPD BL emulation, adding the Block Limits page (0xb0) to
the list. This will make the guest kernel aware of the
support that we're now providing by emulation.
- a new function scsi_emulate_block_limits creates the
emulated Block Limits response. This function is called
inside scsi_read_complete in case the device requires
Block Limits VPD emulation and we detected a SCSI Sense
error in the VPD Block Limits reply that was issued
from the guest kernel to the device. This error is
expected: we're reporting support from our side, but
the device isn't aware of it.
With this patch, the guest now queries the Block Limits
page during the device configuration because it is being
advertised in the Supported Pages response. It will either
receive the Block Limits page from the hardware, if it supports
it, or will receive an emulated response from QEMU. At any rate,
the guest now has the information to set the max_sectors_kb
parameter accordingly, sparing the user of SCSI sense errors
that would happen without the emulated response and in the
absence of Block Limits support from the hardware.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1566195
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1566195
Reported-by: Dac Nguyen <dacng@us.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20180627172432.11120-4-danielhb413@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For the VPD Block Limits emulation with SCSI passthrough,
we'll issue an Inquiry request with EVPD set to retrieve
the available VPD pages of the device. This would be done in
a way similar to what scsi_generic_read_device_identification
does: create a SCSI command and a reply buffer, fill in the
sg_io_hdr_t structure, call blk_ioctl, check if an error
occurred, process the response.
This same process is done in two other functions, get_device_type
and get_stream_blocksize. They differ in the command/reply
buffer and post-processing; everything else is almost a
copy/paste.
Instead of adding a fourth copy/pasted-ish version of this code when
adding the passthrough VPD BL emulation, this patch factors out
the repeated logic of those 3 functions and puts it into
a new one called scsi_SG_IO_FROM_DEV. Any future code that
wants to execute an SG_DXFER_FROM_DEV transfer to the device can
use it, avoiding filling in sg_io_hdr_t yet again, and so on.
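A rough sketch of such a helper, based on the description above (the
exact signature, sense buffer size and timeout are assumptions, not the
patch itself):

static int scsi_SG_IO_FROM_DEV(BlockBackend *blk, uint8_t *cmd,
                               uint8_t cmd_size, uint8_t *buf,
                               uint8_t buf_size)
{
    sg_io_hdr_t io_header;
    uint8_t sensebuf[8];
    int ret;

    memset(&io_header, 0, sizeof(io_header));
    io_header.interface_id = 'S';
    io_header.dxfer_direction = SG_DXFER_FROM_DEV;
    io_header.dxfer_len = buf_size;
    io_header.dxferp = buf;
    io_header.cmdp = cmd;
    io_header.cmd_len = cmd_size;
    io_header.mx_sb_len = sizeof(sensebuf);
    io_header.sbp = sensebuf;
    io_header.timeout = 6000;

    ret = blk_ioctl(blk, SG_IO, &io_header);
    if (ret < 0 || io_header.status ||
        io_header.driver_status || io_header.host_status) {
        return -1;
    }
    return 0;
}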
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20180627172432.11120-3-danielhb413@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
To add support for the emulation of the Block Limits VPD page
for passthrough devices, a few adjustments in the current code
base are required to avoid repetition and improve clarity.
In scsi-generic.c, detach the Inquiry handling from
scsi_read_complete and put it into a new function called
scsi_handle_inquiry_reply. This change aims to avoid
cluttering scsi_read_complete when more logic is added to the
Inquiry response handling in the next patches,
centralizing the changes in the new function.
In scsi-disk.c, take the build of all emulated VPD pages
from scsi_disk_emulate_inquiry and make it available to
other files as a non-static function called
scsi_disk_emulate_vpd_page. Making it public will allow
the future VPD BL emulation code for passthrough devices
to use it from scsi-generic.c, avoiding copy/pasting this
code solely for that purpose. It also has the advantage of
providing emulation of all VPD pages in case we need to
emulate other pages in other scenarios. As a bonus,
scsi_disk_emulate_inquiry got tidier.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20180627172432.11120-2-danielhb413@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
strchrnul is a GNU extension and thus unavailable on a number of targets.
In the review for a commit removing strchrnul from 9p, I was asked to
create a qemu_strchrnul helper to factor out this functionality.
Do so, and use it in a number of other places in the code base that
open-coded the equivalent pattern where strchrnul could be used.
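A plausible shape for the helper (a sketch; on platforms that do provide
strchrnul it can simply wrap the libc function instead):

static inline const char *qemu_strchrnul(const char *s, int c)
{
    const char *e = strchr(s, c);

    /* like strchr(), but return the trailing NUL instead of NULL on a miss */
    return e ? e : s + strlen(s);
}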
Signed-off-by: Keno Fischer <keno@juliacomputing.com>
Acked-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
With this flag, KVM allows the guest to control the host CPU power state. This
increases latency for other processes using the same host CPU in an
unpredictable way, but it decreases idle entry/exit times for the
running VCPU, so to use it QEMU needs a hint about whether the host CPU is
overcommitted, hence the flag name.
Follow-up patches will expose this capability to the guest
(using the mwait leaf).
Based on a patch by Wanpeng Li <kernellwp@gmail.com> .
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <20180622192148.178309-2-mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Let's start to use "info pic" just like other platforms. For now we
keep the old command around for a while so that existing users can learn
which new command to use.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20171229073104.3810-6-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This includes both the userspace and the in-kernel ioapic. Note that the numbers
can be inaccurate for the kvm-ioapic. One reason is the same as with the
kvm-i8259: when irqfd is used, irqs can be delivered entirely inside the
kernel without our notice. Meanwhile, the kvm-ioapic is specially treated
for irq numbers < ISA_NUM_IRQS; those irqs will be delivered in the kernel
too, via the kvm-i8259 (please refer to kvm_pc_gsi_handler).
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20171229073104.3810-5-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This adds owners/parents (which are the same, just occasionally
owner==NULL) printing for memory regions; a new '-o' flag
enables the new output.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20180604032511.6980-1-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Remove the legacy esp_init() function now that there are no more remaining
users.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20180613094727.11326-3-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Hervé Poussineau <hpoussin@reactos.org>
Coverity does not like the new _Float* types that are used by
recent glibc, and croaks on every single file that includes
stdlib.h. Add dummy typedefs to please it.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Let's try to reduce error handling a bit. In the plug/unplug case, the
device was realized and therefore we can assume that getting access to
the memory region will not fail.
For get_vmstate_memory_region() this is already handled that way.
Document both cases.
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180619134141.29478-13-david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This way we can easily check if the region has already been initialized
without having to rely on the size of an uninitialized region being 0.
Free the region in nvdimm_finalize() and not in unrealize() as we will
allow creating the region before realization in the following patches.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180619134141.29478-11-david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Importantly, get_vmstate_memory_region() should also fail with a proper
error if called before the device is realized. For a PCDIMM, both functions
are to return the same thing, so share the implementation.
All current users are called after the device has been realized, so we
can expect the calls to succeed.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180619134141.29478-9-david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Unused, so let's remove it.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180619134141.29478-8-david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Not used outside of pc-dimm.c and there shouldn't be other users. If
other devices (e.g. memory devices) ever have to also use slots, then we
will have to factor this out.
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180619134141.29478-5-david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Let's rename it to make it look more consistent.
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180619134141.29478-4-david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We have had some tracing tools for mutexes, but they are not easy to use
for e.g. deadlocks. Let's provide an "--enable-debug-mutex" configure
option to allow QemuMutex to store the last owner that took a
specific lock. This makes it easy to debug deadlocks,
since we can directly see who took the lock, as long as we can have
a debugger attached to the process.
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180425025459.5258-4-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
According to KVM commit 75d61fbc, it needs to delete the slot before
changing the KVM_MEM_READONLY flag. But QEMU commit 235e8982 only checks
whether the KVM_MEM_READONLY flag is set, not whether it is being changed.
There is no need to delete the slot if the KVM_MEM_READONLY flag is not changed.
This fixes an issue when migrating a VM at the OVMF startup stage while the
VM is executing code in the rom. Between deleting and adding the
slot in kvm_set_user_memory_region, there is a chance that the guest accesses
the rom and traps to KVM, and then KVM can't find the corresponding memslot.
KVM (on ARM) then injects an abort into the guest due to the broken hva, and the
guest gets stuck.
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Message-Id: <1526462314-19720-1-git-send-email-zhaoshenglong@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20180602085259.17853-1-stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Place them in exec.c, exec-all.h and ram_addr.h. This removes
knowledge of translate-all.h (which is an internal header) from
several files outside accel/tcg and removes knowledge of
AddressSpace from translate-all.c (as it only operates on ram_addr_t).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
laio_init() can fail for a couple of reasons, which will lead to a NULL
pointer dereference in laio_attach_aio_context().
To solve this, add an aio_setup_linux_aio() function which is called
early in raw_open_common. If this fails, propagate the error up. The
signature of aio_get_linux_aio() was not modified, because it seems
preferable to return the actual errno from the possible failing
initialization calls.
Additionally, when the AioContext changes, we need to associate a
LinuxAioState with the new AioContext. Use the bdrv_attach_aio_context
callback and call the new aio_setup_linux_aio(), which will allocate a
new LinuxAioState if needed, and return errors on failures. If it fails for
any reason, fallback to threaded AIO with an error message, as the
device is already in-use by the guest.
Add an assert that aio_get_linux_aio() cannot return NULL.
Signed-off-by: Nishanth Aravamudan <naravamudan@digitalocean.com>
Message-id: 20180622193700.6523-1-naravamudan@digitalocean.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Determining the size of a field is useful when you don't have a struct
variable handy. Open-coding this is ugly.
This patch adds the sizeof_field() macro, which is similar to
typeof_field(). Existing instances are updated to use the macro.
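The macro itself is a one-liner along these lines (shown here only for
illustration):

    #define sizeof_field(type, field) sizeof(((type *)0)->field)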
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20180614164431.29305-1-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Not needed. Don't expose last_ram_page().
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180620202736.21399-1-david@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
trace_mem_build_info expects a size_shift for its first argument. Fix it.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-id: 1527028012-21888-2-git-send-email-cota@braap.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* aspeed: set APB clocks correctly (fixes slowdown on palmetto)
* smmuv3: cache config data and TLB entries
* v7m/v8m: support read/write from MPU regions smaller than 1K
* various: clean up logging/debug messages
* xilinx_spips: Make dma transactions as per dma_burst_size
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20180626' into staging
target-arm queue:
* aspeed: set APB clocks correctly (fixes slowdown on palmetto)
* smmuv3: cache config data and TLB entries
* v7m/v8m: support read/write from MPU regions smaller than 1K
* various: clean up logging/debug messages
* xilinx_spips: Make dma transactions as per dma_burst_size
# gpg: Signature made Tue 26 Jun 2018 17:55:46 BST
# gpg: using RSA key 3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20180626: (32 commits)
aspeed/timer: use the APB frequency from the SCU
aspeed: initialize the SCU controller first
aspeed/scu: introduce clock frequencies
hw/arm/smmuv3: Add notifications on invalidation
hw/arm/smmuv3: IOTLB emulation
hw/arm/smmuv3: Cache/invalidate config data
hw/arm/smmuv3: Fix translate error handling
target/arm: Handle small regions in get_phys_addr_pmsav8()
target/arm: Set page (region) size in get_phys_addr_pmsav7()
tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
hw/arm/stellaris: Use HWADDR_PRIx to display register address
hw/arm/stellaris: Fix gptm_write() error message
hw/net/smc91c111: Use qemu_log_mask(UNIMP) instead of fprintf
hw/net/smc91c111: Use qemu_log_mask(GUEST_ERROR) instead of hw_error
hw/net/stellaris_enet: Use qemu_log_mask(GUEST_ERROR) instead of hw_error
hw/net/stellaris_enet: Fix a typo
hw/arm/stellaris: Use qemu_log_mask(UNIMP) instead of fprintf
hw/arm/omap: Use qemu_log_mask(GUEST_ERROR) instead of fprintf
hw/arm/omap1: Use qemu_log_mask(GUEST_ERROR) instead of fprintf
hw/i2c/omap_i2c: Use qemu_log_mask(UNIMP) instead of fprintf
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The timer controller can be driven by either an external 1MHz clock or
by the APB clock. Today, the model makes the assumption that the APB
frequency is always set to 24MHz but this is incorrect.
The AST2400 SoC on the palmetto machines uses a 48MHz input clock
source and the APB can be set to 48MHz. The consequence is a general
system slowdown. The QEMU machines using the AST2500 SoC do not seem
impacted today because the APB frequency is still set to 24MHz.
We fix the timer frequency for all SoCs by linking the Timer model to
the SCU model. The APB frequency driving the timers is now the one
configured for the SoC.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Message-id: 20180622075700.5923-4-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
All Aspeed SoC clocks are driven by an input source clock which can
have different frequencies: 24MHz or 25MHz, and also, on the Aspeed
AST2400 SoC, 48MHz. The H-PLL (CPU) clock is defined from a
calculation using parameters in the H-PLL Parameter register or from a
predefined set of frequencies if the setting is strapped by hardware
(Aspeed AST2400 SoC). The other clocks of the SoC are then defined
from the H-PLL using dividers.
We introduce first the APB clock because it should be used to drive
the Aspeed timer model.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Message-id: 20180622075700.5923-2-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
On TLB invalidation commands, let's call registered
IOMMU notifiers. Those can only be UNMAP notifiers.
SMMUv3 does not support notification on MAP (VFIO).
This patch allows the vhost use case where the IOTLB API is notified
on each guest IOTLB invalidation.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1529653501-15358-5-git-send-email-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We emulate a TLB cache of size SMMU_IOTLB_MAX_SIZE=256.
It is implemented as a hash table whose key is a combination
of the 16b asid and 48b IOVA (Jenkins hash).
Entries are invalidated on TLB invalidation commands, either
globally, or per asid, or per asid/iova.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1529653501-15358-4-git-send-email-eric.auger@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Let's cache config data to avoid fetching and parsing STE/CD
structures on each translation. We invalidate them on data structure
invalidation commands.
We put in place a per-smmu mutex to protect the config cache. This
will also be useful to protect the IOTLB cache. The caches can be
accessed without the BQL, i.e. in the IO dataplane. The same kind of mutex was
put in place in the Intel viommu.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1529653501-15358-3-git-send-email-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for MMU protection regions that are smaller than
TARGET_PAGE_SIZE. We do this by marking the TLB entry for those
pages with a flag TLB_RECHECK. This flag causes us to always
take the slow-path for accesses. In the slow path we can then
special case them to always call tlb_fill() again, so we have
the correct information for the exact address being accessed.
This change allows us to handle reading and writing from small
regions; we cannot deal with execution from the small region.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180620130619.11362-2-peter.maydell@linaro.org
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-10-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
TCMI_VERBOSE is no longer used; drop the OMAP_8/16/32B_REG macros.
Suggested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-9-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The QSPI DMA has a burst length of 64 bytes, so limit the transactions with
respect to the dma-burst-size property.
Signed-off-by: Sai Pavan Boddu <saipava@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1529660880-30376-1-git-send-email-sai.pavan.boddu@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In commit a8bff79e9f we added a definition to hw/virtio/virtio-gpu.h
for VIRTIO_GPU_CAPSET_VIRGL2, as a workaround for it not yet being
in the Linux kernel headers. In commit 77d361b13c we updated our
kernel headers to a version which does define the macro, so we can
now remove our workaround.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180622173249.29963-1-peter.maydell@linaro.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Add support for OpenGL ES to egl-helpers. Wire up the new option for
egl-headless and gtk UIs. egl-headless actually works fine. gtk hits a
not-yet implemented code path in libEGL when trying to use gles mode:
libEGL warning: FIXME: egl/x11 doesn't support front buffer rendering.
(This is mesa 17.2.3).
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Tested-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Message-id: 20180618112141.23398-1-kraxel@redhat.com
The oldest machine type which is still used in a still maintained distro
is a pc-0.12 based machine type in RHEL6, so everything that is older
than pc-0.12 should not be used anymore. Thus let's deprecate pc-0.10
and pc-0.11 so that we can finally remove them in a future release.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1529917512-10528-1-git-send-email-thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Also add a compat property to disable it for old machine types,
needed for live migration compatibility.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20180622111200.30561-6-kraxel@redhat.com
Enable TOPOEXT feature on EPYC CPU. This is required to support
hyperthreading on VM guests. Also extend xlevel to 0x8000001E.
Disable topoext on PC_COMPAT_2_12 and keep xlevel 0x8000000a.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <1529443919-67509-3-git-send-email-babu.moger@amd.com>
[ehabkost: Added EPYC-IBPB.xlevel to PC_COMPAT_2_12]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
* hw/intc/arm_gicv3: fix wrong values when reading IPRIORITYR
* target/arm: fix read of freed memory in kvm_arm_machine_init_done()
* virt: support up to 512 CPUs
* virt: support 256MB ECAM PCI region (for more PCI devices)
* xlnx-zynqmp: Use Cortex-R5F, not Cortex-R5
* mps2-tz: Implement and use the TrustZone Memory Protection Controller
* target/arm: enforce alignment checking for v6M cores
* xen: Don't use memory_region_init_ram_nomigrate() in pci_assign_dev_load_option_rom()
* vl.c: Don't zero-initialize statics for serial_hds
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20180622' into staging
target-arm queue:
* hw/intc/arm_gicv3: fix wrong values when reading IPRIORITYR
* target/arm: fix read of freed memory in kvm_arm_machine_init_done()
* virt: support up to 512 CPUs
* virt: support 256MB ECAM PCI region (for more PCI devices)
* xlnx-zynqmp: Use Cortex-R5F, not Cortex-R5
* mps2-tz: Implement and use the TrustZone Memory Protection Controller
* target/arm: enforce alignment checking for v6M cores
* xen: Don't use memory_region_init_ram_nomigrate() in pci_assign_dev_load_option_rom()
* vl.c: Don't zero-initialize statics for serial_hds
# gpg: Signature made Fri 22 Jun 2018 13:56:00 BST
# gpg: using RSA key 3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20180622: (28 commits)
xen: Don't use memory_region_init_ram_nomigrate() in pci_assign_dev_load_option_rom()
vl.c: Don't zero-initialize statics for serial_hds
target/arm: Strict alignment for ARMv6-M and ARMv8-M Baseline
target/arm: Introduce ARM_FEATURE_M_MAIN
hw/arm/mps2-tz.c: Instantiate MPCs
hw/arm/iotkit: Wire up MPC interrupt lines
hw/arm/iotkit: Instantiate MPC
hw/misc/iotkit-secctl.c: Implement SECMPCINTSTATUS
hw/misc/tz_mpc.c: Honour the BLK_LUT settings in translate
hw/misc/tz-mpc.c: Implement correct blocked-access behaviour
hw/misc/tz-mpc.c: Implement registers
hw/misc/tz-mpc.c: Implement the Arm TrustZone Memory Protection Controller
xlnx-zynqmp: Swap Cortex-R5 for Cortex-R5F
target-arm: Add the Cortex-R5F
hw/arm/virt: Increase max_cpus to 512
hw/arm/virt: Use 256MB ECAM region by default
hw/arm/virt: Add virt-3.0 machine type
hw/arm/virt: Add a new 256MB ECAM region
hw/arm/virt: Register two redistributor regions when necessary
hw/arm/virt-acpi-build: Advertise one or two GICR structures
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Another assorted batch of patches for ppc and spapr.
* Rework of guest pagesize handling for ppc, which avoids guest-visible
differences in behaviour between accelerators
* A number of Pnv cleanups, working towards more complete POWER9
support
* Migration of VPA data, a significant bugfix
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.0-20180622' into staging
ppc patch queue 2018-06-22
Another assorted batch of patches for ppc and spapr.
* Rework of guest pagesize handling for ppc, which avoids guest-visible
differences in behaviour between accelerators
* A number of Pnv cleanups, working towards more complete POWER9
support
* Migration of VPA data, a significant bugfix
# gpg: Signature made Fri 22 Jun 2018 05:23:16 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-3.0-20180622: (23 commits)
spapr: Don't rewrite mmu capabilities in KVM mode
spapr: Limit available pagesizes to provide a consistent guest environment
target/ppc: Add ppc_hash64_filter_pagesizes()
spapr: Use maximum page size capability to simplify memory backend checking
spapr: Maximum (HPT) pagesize property
pseries: Update SLOF firmware image to qemu-slof-20180621
target/ppc: Add missing opcode for icbt on PPC440
ppc4xx_i2c: Implement directcntl register
ppc4xx_i2c: Remove unimplemented sdata and intr registers
sm501: Fix hardware cursor color conversion
fpu_helper.c: fix helper_fpscr_clrbit() function
spapr: remove unused spapr_irq routines
spapr: split the IRQ allocation sequence
target/ppc: Add kvmppc_hpt_needs_host_contiguous_pages() helper
spapr: Add cpu_apply hook to capabilities
spapr: Compute effective capability values earlier
target/ppc: Allow cpu compatibility checks based on type, not instance
ppc/pnv: consolidate the creation of the ISA bus device tree
ppc/pnv: introduce Pnv8Chip and Pnv9Chip models
spapr_cpu_core: migrate VPA related state
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The interrupt outputs from the MPC in the IoTKit and the expansion
MPCs in the board must be wired up to the security controller, and
also all ORed together to produce a single line to the NVIC.
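One way to produce that single combined line is QEMU's generic TYPE_OR_IRQ
("or-irq") device. The following is only a rough sketch of such wiring; the
field names (mpc_irq_orgate, armv7m) and the NVIC input number are
assumptions, not the actual board code:
    /* Hedged sketch: OR the MPC status interrupts together with a TYPE_OR_IRQ
     * device and feed the single output into the NVIC.  Field names and the
     * NVIC input number (9) are illustrative only. */
    object_property_set_int(OBJECT(&s->mpc_irq_orgate), num_mpc_lines,
                            "num-lines", &error_fatal);
    object_property_set_bool(OBJECT(&s->mpc_irq_orgate), true, "realized",
                             &error_fatal);
    qdev_connect_gpio_out(DEVICE(&s->mpc_irq_orgate), 0,
                          qdev_get_gpio_in(DEVICE(&s->armv7m), 9));
    /* Each individual MPC interrupt is then wired to one input of the gate:
     *   qdev_get_gpio_in(DEVICE(&s->mpc_irq_orgate), i)
     */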
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180620132032.28865-8-peter.maydell@linaro.org
Wire up the one MPC that is part of the IoTKit itself. For the
moment we don't wire up its interrupt line.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180620132032.28865-7-peter.maydell@linaro.org
Implement the SECMPCINTSTATUS register. This is the only register
in the security controller that deals with Memory Protection
Controllers, and it simply provides a read-only view of the
interrupt lines from the various MPCs in the system.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180620132032.28865-6-peter.maydell@linaro.org
Implement the missing registers for the TZ MPC.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20180620132032.28865-3-peter.maydell@linaro.org
Implement the Arm TrustZone Memory Protection Controller, which sits
in front of RAM and allows secure software to configure it to either
pass through or reject transactions.
We implement the MPC as a QEMU IOMMU, which will direct transactions
either through to the devices and memory behind it or to a special
"never works" AddressSpace if they are blocked.
This initial commit implements the skeleton of the device:
* it always permits accesses
* it doesn't implement most of the registers
* it doesn't implement the interrupt or other behaviour
for blocked transactions
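For illustration only, the translate hook of such an IOMMU can pick the
target address space per access; the struct and field names below (TZMPC,
upstream, downstream_as, blocked_io_as) are assumptions, not the actual code:
    /* Hedged sketch: route each access either to the downstream devices and
     * memory or to a "never works" address space when it is blocked. */
    static IOMMUTLBEntry tz_mpc_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
                                          IOMMUAccessFlags flags, int iommu_idx)
    {
        TZMPC *s = container_of(iommu, TZMPC, upstream);
        bool ok = true; /* the skeleton always permits accesses */
        IOMMUTLBEntry ret = {
            .iova = addr & ~(s->blocksize - 1),
            .translated_addr = addr & ~(s->blocksize - 1),
            .addr_mask = s->blocksize - 1,
            .perm = IOMMU_RW,
        };

        ret.target_as = ok ? &s->downstream_as : &s->blocked_io_as;
        return ret;
    }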
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20180620132032.28865-2-peter.maydell@linaro.org
With this patch, the virt-3.0 machine uses a new 256MB ECAM region
by default instead of the legacy 16MB one, if highmem is set
(i.e. LPAE is supported by the guest) and (!firmware_loaded || aarch64).
Indeed, aarch32-mode firmware may not support this high ECAM region.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 1529072910-16156-11-git-send-email-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch defines a new ECAM region located after the 256GB limit.
The virt machine state is augmented with a new highmem_ecam field
which guards the usage of this new ECAM region instead of the legacy
16MB one. With the highmem ECAM region, up to 256 PCIe buses can be
used (ECAM maps 4KB of config space per function and 256 functions per
bus, i.e. 1MB per bus, so a 256MB window covers 256 buses).
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 1529072910-16156-9-git-send-email-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch allows the creation of a GICv3 node with 1 or 2
redistributor regions depending on the number of smp_cpus.
The second redistributor region is located just after the
existing RAM region, at 256GB, and covers up to 512 vcpus.
Please refer to kernel documentation for further node details:
Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.txt
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 1529072910-16156-6-git-send-email-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
To prepare for multiple redistributor regions, we introduce
an array of uint32_t properties that stores the redistributor
count of each redistributor region.
A non-accelerated VGICv3 only supports a single redistributor region.
The capacity of all redist regions is checked against the number of
vcpus.
Machvirt is updated to set those properties, i.e. a single
redistributor region with count set to the number of vcpus
capped by 123.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 1529072910-16156-4-git-send-email-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add experimental x-nbd-server-add-bitmap to expose a disabled
bitmap over NBD, in preparation for a pull model incremental
backup scheme. Also fix a corner case protocol issue with
NBD_CMD_BLOCK_STATUS, and add new NBD_CMD_CACHE.
- Eric Blake: tests: Simplify .gitignore
- Eric Blake: nbd/server: Reject 0-length block status request
- Vladimir Sementsov-Ogievskiy: 0/6 NBD export bitmaps
- Vladimir Sementsov-Ogievskiy: nbd/server: introduce NBD_CMD_CACHE
-----BEGIN PGP SIGNATURE-----
Comment: Public key at http://people.redhat.com/eblake/eblake.gpg
iQEcBAABCAAGBQJbK7wDAAoJEKeha0olJ0NqJuMH/0o7wzP3HWIO1pJZvhRA81AP
ZZqZy/8xO0B++keCxzgTbdr9yf6RDtYGjDrNuOcVzqfA1k9E1Cl2pPjS93TzPlzZ
PEgtG6bLXL2jCUi/8/EbZ/TLuoQx5YJc7eXTNREgXDRMLiTEg5Kcmg1n0efhr8Sl
5/bRMDmW1+atO4dJS8bEW5WLuy4IoB823cSHjWoPFRHeNCwmFREwTx2xVULVje84
JZzHYxyaBRquYLKIAew1WidhKVC4OiYtm4F5PxGBB/ZCVkeoDBX5uqZSjOUVt8y7
uOhGYQGW/qSj/s3x74NEL6IPo57Iay/Wq065DbolDx7s0y5t38TocrBnakzOBFU=
=VK6J
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2018-06-20-v2' into staging
nbd patches for 2018-06-20
Add experimental x-nbd-server-add-bitmap to expose a disabled
bitmap over NBD, in preparation for a pull model incremental
backup scheme. Also fix a corner case protocol issue with
NBD_CMD_BLOCK_STATUS, and add new NBD_CMD_CACHE.
- Eric Blake: tests: Simplify .gitignore
- Eric Blake: nbd/server: Reject 0-length block status request
- Vladimir Sementsov-Ogievskiy: 0/6 NBD export bitmaps
- Vladimir Sementsov-Ogievskiy: nbd/server: introduce NBD_CMD_CACHE
# gpg: Signature made Thu 21 Jun 2018 15:53:55 BST
# gpg: using RSA key A7A16B4A2527436A
# gpg: Good signature from "Eric Blake <eblake@redhat.com>"
# gpg: aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>"
# gpg: aka "[jpeg image of size 6874]"
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A
* remotes/ericb/tags/pull-nbd-2018-06-20-v2:
nbd/server: introduce NBD_CMD_CACHE
docs/interop: add nbd.txt
qapi: new qmp command nbd-server-add-bitmap
nbd/server: implement dirty bitmap export
nbd/server: add nbd_meta_empty_or_pattern helper
nbd/server: refactor NBDExportMetaContexts
nbd/server: fix trace
nbd/server: Reject 0-length block status request
tests: Simplify .gitignore
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The way we used to handle KVM allowable guest pagesizes for PAPR guests
required some convoluted checking of memory attached to the guest.
The allowable pagesizes advertised to the guest cpus depended on the memory
which was attached at boot, but then we needed to ensure that any memory
later hotplugged didn't change which pagesizes were allowed.
Now that we have an explicit machine option to control the allowable
maximum pagesize we can simplify this. We just check all memory backends
against that declared pagesize. We check base and cold-plugged memory at
reset time, and hotplugged memory at pre_plug() time.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
The way the POWER Hash Page Table (HPT) MMU is virtualized by KVM HV means
that every page that the guest puts in the pagetables must be truly
physically contiguous, not just GPA-contiguous. In effect this means that
an HPT guest can't use any pagesizes greater than the host page size used
to back its memory.
At present we handle this by changing what we advertise to the guest based
on the backing pagesizes. This is pretty bad, because it means the guest
sees a different environment depending on what should be host configuration
details.
As a start on fixing this, we add a new capability parameter to the
pseries machine type which gives the maximum allowed pagesizes for an
HPT guest. For now we just create and validate the parameter without
making it do anything.
For backwards compatibility, on older machine types we set it to the max
available page size for the host. For the 3.0 machine type, we fix it to
16 (i.e. a maximum page size of 2^16 bytes = 64kiB), the intention being to
only allow HPT pagesizes up to 64kiB by default in future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Handle the NBD CACHE command. Just perform the read, without sending the
read data back. Any caching should be done by the exported node's driver chain.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20180413143156.11409-1-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: fix two missing case labels in switch statements]
Signed-off-by: Eric Blake <eblake@redhat.com>
Handle a new NBD meta namespace: "qemu", and corresponding queries:
"qemu:dirty-bitmap:<export bitmap name>".
With the new metadata context negotiated, a BLOCK_STATUS query will reply
with dirty-bitmap data, converted to extents. The new public function
nbd_export_bitmap selects which bitmap to export. For now, only one bitmap
may be exported.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20180609151758.17343-5-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: wording tweaks, minor cleanups, additional tracing]
Signed-off-by: Eric Blake <eblake@redhat.com>
As well as being able to generate its own i2c transactions, the ppc4xx
i2c controller has a DIRECTCNTL register which allows explicit control
of the i2c lines.
Using this register an OS can directly bitbang i2c operations. In
order to let emulated i2c devices respond to this, we need to wire up
the DIRECTCNTL register to qemu's bitbanged i2c handling code.
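A minimal sketch of that wiring, assuming the register layout below (the
exact bit positions and the PPC4xxI2CState field names are assumptions; the
bitbang_i2c helpers are QEMU's hw/i2c bitbang interface):
    /* Hedged sketch: forward the DIRECTCNTL "drive" bits to QEMU's bitbanged
     * i2c core and reflect the sampled SDA level back into the status bit.
     * Bit 3: SCL drive, bit 2: SDA drive, bit 0: SDA readback (assumed). */
    static void ppc4xx_i2c_directcntl_write(PPC4xxI2CState *i2c, uint8_t value)
    {
        i2c->directcntl = value & 0x0f;
        bitbang_i2c_set(i2c->bitbang, BITBANG_I2C_SCL, (value >> 3) & 1);
        i2c->directcntl |= bitbang_i2c_set(i2c->bitbang, BITBANG_I2C_SDA,
                                           (value >> 2) & 1);
    }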
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We don't emulate slave mode, so the related registers are not needed.
[lh]sadr are only retained to avoid too many warnings and to simplify
debugging, but sdata is not even correct because the device has a 4 byte
FIFO instead, so just remove this unimplemented register for now.
The intr register is not implemented correctly either; it is for
diagnostics and normally not even visible on the device without explicitly
enabling it. As no guests are known to need it, remove it as well.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
spapr_irq_alloc_block and spapr_irq_alloc() are now deprecated.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Today, when a device requests an IRQ number on a sPAPR machine, the
spapr_irq_alloc() routine first scans the ICSState status array to
find an empty slot and then performs the assignment of the selected
numbers. Split this sequence into two distinct routines: spapr_irq_find()
for lookups and spapr_irq_claim() for claiming the IRQ numbers.
This will ease the introduction of a static layout of IRQ numbers.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
spapr capabilities have an apply hook to actually activate (or deactivate)
the feature in the system at reset time. However, a number of capabilities
affect the setup of cpus, and need to be applied to each of them -
including hotplugged cpus for extra complication. To make this simpler,
add an optional cpu_apply hook that is called from spapr_cpu_reset().
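A rough sketch of the resulting hook table entry (the exact struct members in
spapr_caps.c may differ; this only shows the shape of the new hook):
    /* Hedged sketch: capability hooks, run at machine reset and per cpu. */
    typedef struct sPAPRCapabilityInfo {
        const char *name;
        const char *description;
        /* Existing hook: activate/deactivate the feature at machine reset */
        void (*apply)(sPAPRMachineState *spapr, uint8_t val, Error **errp);
        /* New optional hook: applied to each cpu, including hotplugged ones,
         * from spapr_cpu_reset() */
        void (*cpu_apply)(sPAPRMachineState *spapr, PowerPCCPU *cpu,
                          uint8_t val, Error **errp);
        /* ... other fields (index, type, setters/getters) omitted ... */
    } sPAPRCapabilityInfo;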
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Previously, the effective values of the various spapr capability flags
were only determined at machine reset time. That was a lazy way of making
sure it was after cpu initialization so it could use the cpu object to
inform the defaults.
But we've now improved the compat checking code so that we don't need to
instantiate the cpus to use it. That lets us move the resolution of the
capability defaults much earlier.
This is going to be necessary for some future capabilities.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
This introduces a base PnvChip class from which the specific processor
chip classes, Pnv8Chip and Pnv9Chip, inherit. Each of them needs to
define an init and a realize routine which will create the controllers
of the target processor. For the moment, the base PnvChip class
handles the XSCOM bus and the cores.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
A per-CPU machine data pointer was recently added to PowerPCCPU. The
motivation is to hide platform-specific details from the core CPU
code. This per-CPU data can hold state which is relevant to the guest
though, eg, Virtual Processor Areas, and we should migrate this state.
This patch adds the plumbing so that we can migrate the per-CPU data
for PAPR guests. We only do this for newer machine types for the sake
of backward compatibility. No state is migrated for the moment: the
vmstate_spapr_cpu_state structure will be populated by subsequent
patches.
Signed-off-by: Greg Kurz <groug@kaod.org>
[dwg: Fix some trivial spelling and spacing errors]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This moves the details of the ISA bus creation under the LPC model but,
more importantly, the new PnvChip operation will let us choose the chip
class to use when we introduce the different chip classes for Power9
and Power8. It hides away the processor chip controllers from the
machine.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
On Power9, the thread interrupt presenter has a different type and is
linked to the chip owning the cores.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This patch allows the user to specify whether to use active or only
background mode for mirror block jobs. Currently, this setting will
remain constant for the duration of the entire block job.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20180613181823.13618-14-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180613181823.13618-12-mreitz@redhat.com
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This new function allows looking for a consecutively dirty area in a
dirty bitmap.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20180613181823.13618-10-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This new parameter allows the caller to just query the next dirty
position without moving the iterator.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20180613181823.13618-8-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
bdrv_drain_all_*() used bdrv_next() to iterate over all root nodes and
did a subtree drain for each of them. This works fine as long as the
graph is static, but sadly, reality looks different.
If the graph changes so that root nodes are added or removed, we would
have to compensate for this. bdrv_next() returns each root node only
once even if it's the root node for multiple BlockBackends or for a
monitor-owned block driver tree, which would only complicate things.
The much easier and more obviously correct way is to fundamentally
change the way the functions work: Iterate over all BlockDriverStates,
no matter who owns them, and drain them individually. Compensation is
only necessary when a new BDS is created inside a drain_all section.
Removal of a BDS doesn't require any action because it's gone afterwards
anyway.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
In the future, bdrv_drained_all_begin/end() will drain all individual
nodes separately rather than whole subtrees. This means that we don't
want to propagate the drain to all parents any more: If the parent is a
BDS, it will already be drained separately. Recursing to all parents is
unnecessary work and would make it an O(n²) operation.
Prepare the drain function for the changed drain_all by adding an
ignore_bds_parents parameter to the internal implementation that
prevents the propagation of the drain to BDS parents. We still (have to)
propagate it to non-BDS parents like BlockBackends or Jobs because those
are not drained separately.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
bdrv_drain_all() wants to have a single polling loop for draining the
in-flight requests of all nodes. This means that the AIO_WAIT_WHILE()
condition relies on activity in multiple AioContexts, which is polled
from the mainloop context. We must therefore call AIO_WAIT_WHILE() from
the mainloop thread and use the AioWait notification mechanism.
Just randomly picking the AioContext of any non-mainloop thread would
work, but instead of bothering to find such a context in the caller, we
can just as well accept NULL for ctx.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
bdrv_do_drained_begin() is only safe if we have a single
BDRV_POLL_WHILE() after quiescing all affected nodes. We cannot allow
parent callbacks to introduce a nested polling loop that could cause
graph changes while we're traversing the graph.
Split off bdrv_do_drained_begin_quiesce(), which only quiesces a single
node without waiting for its requests to complete. These requests will
be waited for in the BDRV_POLL_WHILE() call down the call chain.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Anything can happen inside BDRV_POLL_WHILE(), including graph
changes that may interfere with its callers (e.g. child list iteration
in recursive callers of bdrv_do_drained_begin).
Switch to a single BDRV_POLL_WHILE() call for the whole subtree at the
end of bdrv_do_drained_begin() to avoid such effects. The recursion
happens now inside the loop condition. As the graph can only change
between bdrv_drain_poll() calls, but not inside of it, doing the
recursion here is safe.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We already requested that block jobs be paused in .bdrv_drained_begin,
but no guarantee was made that the job was actually inactive at the
point where bdrv_drained_begin() returned.
This introduces a new callback BdrvChildRole.bdrv_drained_poll() and
uses it to make bdrv_drain_poll() consider block jobs using the node to
be drained.
For the test case to work as expected, we have to switch from
block_job_sleep_ns() to qemu_co_sleep_ns() so that the test job is even
considered active and must be waited for when draining the node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Commit 91af091f92 added an additional aio_poll() to BDRV_POLL_WHILE()
in order to make sure that all pending BHs are executed on drain. This
was the wrong place to make the fix, as it is useless overhead for all
other users of the macro and unnecessarily complicates the mechanism.
This patch effectively reverts said commit (the context has changed a
bit and the code has moved to AIO_WAIT_WHILE()) and instead polls in the
loop condition for drain.
The effect is probably hard to measure in any real-world use case
because actual I/O will dominate, but if I run only the initialisation
part of 'qemu-img convert' where it calls bdrv_block_status() for the
whole image to find out how much data there is to copy, this phase actually
needs only roughly half the time after this patch.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
The boot framebuffer is expected to be configured by the firmware, so it
uses fw_cfg as its interface. Initialization goes as follows:
(1) Check whether etc/ramfb is present.
(2) Allocate framebuffer from RAM.
(3) Fill struct RAMFBCfg, write it to etc/ramfb.
Done. You can write stuff to the framebuffer now, and it should appear
automagically on the screen.
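For orientation, the blob written to etc/ramfb has roughly the following
layout (a sketch; field names and ordering are assumptions based on this
description, see hw/display/ramfb.c for the authoritative definition):
    /* Hedged sketch of the config the firmware fills in and writes back. */
    struct QEMU_PACKED RAMFBCfg {
        uint64_t addr;      /* guest-physical address of the framebuffer */
        uint32_t fourcc;    /* pixel format, e.g. DRM_FORMAT_XRGB8888 */
        uint32_t flags;
        uint32_t width;     /* in pixels */
        uint32_t height;    /* in pixels */
        uint32_t stride;    /* bytes per scanline */
    };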
Note that this isn't very efficient because it does a full display
update on each refresh. No dirty tracking. Dirty tracking would have
to be active for the whole ram slot, so that wouldn't be very efficient
either. For a boot display which is active for a short time only this
isn't a big deal. As a permanent guest display, something better should be
used (if possible).
This is the ramfb core code. Some windup is needed for display devices
which want to have a ramfb boot display.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Message-id: 20180613122948.18149-2-kraxel@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
CPUPPCState currently contains a number of fields containing the state of
the VPA. The VPA is a PAPR specific concept covering several guest/host
shared memory areas used to communicate some information with the
hypervisor.
As a PAPR concept this is really machine specific information, although it
is per-cpu, so it doesn't really belong in the core CPU state structure.
There's also other information that's per-cpu, but platform/machine
specific. So create a (void *)machine_data in PowerPCCPU which can be
used by the machine to locate per-cpu data. Initialization, lifetime and
cleanup of machine_data is entirely up to the machine type.
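The shape of the change is roughly the following (a sketch; the PAPR-side
struct and field names are assumptions, mirroring the VPA state the
description says does not belong in the core CPU structure):
    /* Hedged sketch: an opaque per-CPU pointer owned by the machine type. */
    struct PowerPCCPU {
        /*< private >*/
        CPUState parent_obj;
        /*< public >*/
        CPUPPCState env;
        /* ... */
        void *machine_data;   /* init, lifetime and cleanup are the machine's */
    };

    /* What a PAPR machine might hang off machine_data, e.g. VPA state: */
    typedef struct sPAPRCPUState {
        uint64_t vpa_addr;
        uint64_t slb_shadow_addr, slb_shadow_size;
        uint64_t dtl_addr, dtl_size;
    } sPAPRCPUState;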
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Tested-by: Greg Kurz <groug@kaod.org>
Currently, we allocate space for all the cpu objects within a single core
in one big block. This was copied from an older version of the spapr code
and requires some ugly pointer manipulation to extract the individual
objects.
This design was due to a misunderstanding of qemu lifetime conventions and
has already been changed in spapr (in 94ad93bd "spapr_cpu_core: instantiate
CPUs separately").
Make an equivalent change in pnv_core to get rid of the nasty pointer
arithmetic.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
In the case where we have an interrupt generated externally from inputs to
bits 1 and 2 of port A and/or port B, it is necessary to expose
mos6522_update_irq() so it can be called by the interrupt source.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The PMU device supersedes the CUDA device found on older New World Macs and
is supported by a larger number of guest OSs from OS 9 to OS X 10.5.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
MacOS 9 has a bug in its PMU driver whereby after configuring the ADB bus
devices it sends another write to reg 3 on both devices resetting them
both back to the same address.
Add a new disable_direct_reg3_writes property to ADBDevice to disable these
direct writes, which can be enabled just for the upcoming pmu-adb support.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
PMU-enabled New World Macs expose their GPIOs via a separate memory region
within the macio device.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This option allows the VIA configuration to be selected from 3 different
possible setups: cuda, pmu-adb and pmu with USB rather than ADB
keyboard/mouse.
For the moment we don't do anything with the configuration except to pass
it to the macio device (the via-cuda parent) and also to the firmware via
the fw_cfg interface so that it can present the correct device tree.
The default is cuda, which preserves the current behaviour, so existing
setups see no change.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Use mmap_lock in user-mode to protect TCG state and the page descriptors.
In !user-mode, each vCPU has its own TCG state, so no locks needed.
Per-page locks are used to protect the page descriptors.
Per-TB locks are used in both modes to protect TB jumps.
Some notes:
- tb_lock is removed from notdirty_mem_write by passing a
locked page_collection to tb_invalidate_phys_page_fast.
- tcg_tb_lookup/remove/insert/etc have their own internal lock(s),
so there is no need to further serialize access to them.
- do_tb_flush is run in a safe async context, meaning no other
vCPU threads are running. Therefore acquiring mmap_lock there
is just to please tools such as thread sanitizer.
- Not visible in the diff, but tb_invalidate_phys_page already
has an assert_memory_lock.
- cpu_io_recompile is !user-only, so no mmap_lock there.
- Added mmap_unlock()'s before all siglongjmp's that could
be called in user-mode while mmap_lock is held.
+ Added an assert for !have_mmap_lock() after returning from
the longjmp in cpu_exec, just like we do in cpu_exec_step_atomic.
Performance numbers before/after:
Host: AMD Opteron(tm) Processor 6376
ubuntu 17.04 ppc64 bootup+shutdown time
[ASCII chart omitted: layout garbled in transit. It plotted ppc64 guest
bootup+shutdown time in seconds (y-axis, roughly 100-700) against the number
of guest CPUs (x-axis, 1-64), comparing "before" (B) with "tb lock removal"
(D); see the png link below.]
png: https://imgur.com/HwmBHXe
debian jessie aarch64 bootup+shutdown time
[ASCII chart omitted: layout garbled in transit. It plotted aarch64 guest
bootup+shutdown time in seconds (y-axis, roughly 10-90) against the number
of guest CPUs (x-axis, 1-64), same comparison as above; see the png link
below.]
png: https://imgur.com/iGpGFtv
The gains are high for 4-8 CPUs. Beyond that point, however, unrelated
lock contention significantly hurts scalability.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This applies to both user-mode and !user-mode emulation.
Instead of relying on a global lock, protect the list of incoming
jumps with tb->jmp_lock. This lock also protects tb->cflags,
so update all tb->cflags readers outside tb->jmp_lock to use
atomic reads via tb_cflags().
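The accessor those readers go through is essentially an atomic load (a
sketch of the idea; see the TB headers for the real definition):
    /* Hedged sketch: read tb->cflags atomically when not holding jmp_lock. */
    static inline uint32_t tb_cflags(const TranslationBlock *tb)
    {
        return atomic_read(&tb->cflags);
    }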
In order to find the destination TB (and therefore its jmp_lock)
from the origin TB, we introduce tb->jmp_dest[].
I considered not using a linked list of jumps, which simplifies
code and makes the struct smaller. However, it unnecessarily increases
memory usage, which results in a performance decrease. See for
instance these numbers booting+shutting down debian-arm:
Time (s) Rel. err (%) Abs. err (s) Rel. slowdown (%)
------------------------------------------------------------------------------
before 20.88 0.74 0.154512 0.
after 20.81 0.38 0.079078 -0.33524904
GTree 21.02 0.28 0.058856 0.67049808
GHashTable + xxhash 21.63 1.08 0.233604 3.5919540
Using a hash table or a binary tree to keep track of the jumps
doesn't really pay off, not only due to the increased memory usage,
but also because most TBs have only 0 or 1 jumps to them. The maximum
number of jumps when booting debian-arm that I measured is 35, but
as we can see in the histogram below a TB with that many incoming jumps
is extremely rare; the average TB has 0.80 incoming jumps.
n_jumps: 379208; avg jumps/tb: 0.801099
dist: [0.0,1.0)|▄█▁▁▁▁▁▁▁▁▁▁▁ ▁▁▁▁▁▁ ▁▁▁ ▁▁▁ ▁|[34.0,35.0]
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The appended patch adds assertions to make sure we do not longjmp with page
locks held. Note that user-mode has nothing to check, since page_locks
are !user-mode only.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Groundwork for supporting parallel TCG generation.
Instead of using a global lock (tb_lock) to protect changes
to pages, use fine-grained, per-page locks in !user-mode.
User-mode stays with mmap_lock.
Sometimes changes need to happen atomically on more than one
page (e.g. when a TB that spans across two pages is
added/invalidated, or when a range of pages is invalidated).
We therefore introduce struct page_collection, which helps
us keep track of a set of pages that have been locked in
the appropriate locking order (i.e. by ascending page index).
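Roughly, the new structures look like this (a sketch; the actual members in
translate-all.c may differ):
    /* Hedged sketch: a set of locked pages, ordered by page index so that
     * the locks are always acquired in ascending order. */
    struct page_entry {
        PageDesc *pd;            /* the locked page descriptor */
        tb_page_addr_t index;    /* page index, used as the sort key */
        bool locked;
    };

    struct page_collection {
        GTree *tree;             /* page_entry's keyed by page index */
        struct page_entry *max;  /* highest-indexed entry added so far */
    };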
This commit first introduces the structs and the function helpers,
to then convert the calling code to use per-page locking. Note
that tb_lock is not removed yet.
While at it, rename tb_alloc_page to tb_page_add, which pairs with
tb_page_remove.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This commit does several things, but to avoid churn I merged them all
into the same commit. To wit:
- Use uintptr_t instead of TranslationBlock * for the list of TBs in a page.
Just like we did in (c37e6d7e "tcg: Use uintptr_t type for
jmp_list_{next|first} fields of TB"), the rationale is the same: these
are tagged pointers, not pointers. So use a more appropriate type.
- Only check the least significant bit of the tagged pointers. Masking
with 3/~3 is unnecessary and confusing.
- Introduce the TB_FOR_EACH_TAGGED macro, and use it to define
PAGE_FOR_EACH_TB, which improves readability. Note that
TB_FOR_EACH_TAGGED will gain another user in a subsequent patch (a sketch
of the tagged-pointer walk follows this list).
- Update tb_page_remove to use PAGE_FOR_EACH_TB. In case there
is a bug and we attempt to remove a TB that is not in the list, instead
of segfaulting (since the list is NULL-terminated) we will reach
g_assert_not_reached().
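The tagged-pointer walk works roughly as below (a sketch of the macros; the
exact definitions may differ slightly):
    /* Hedged sketch: iterate a NULL-terminated list of tagged pointers.  The
     * least significant bit of each link selects which of the TB's two
     * page-link slots to follow for the next step. */
    #define TB_FOR_EACH_TAGGED(head, tb, n, field)                           \
        for (n = (head) & 1, tb = (TranslationBlock *)((head) & ~1);         \
             tb;                                                             \
             tb = (TranslationBlock *)tb->field[n],                          \
                 n = (uintptr_t)tb & 1,                                      \
                 tb = (TranslationBlock *)((uintptr_t)tb & ~1))

    #define PAGE_FOR_EACH_TB(pagedesc, tb, n) \
        TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next)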
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Thereby making it per-TCGContext. Once we remove tb_lock, this will
avoid an atomic increment every time a TB is invalidated.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This paves the way for enabling scalable parallel generation of TCG code.
Instead of tracking TBs with a single binary search tree (BST), use a
BST for each TCG region, protecting it with a lock. This is as scalable
as it gets, since each TCG thread operates on a separate region.
The core of this change is the introduction of struct tcg_region_tree,
which contains a pointer to a GTree and an associated lock to serialize
accesses to it. We then allocate an array of tcg_region_tree's, adding
the appropriate padding to avoid false sharing based on
qemu_dcache_linesize.
Given a tc_ptr, we first find the corresponding region_tree. This
is done by special-casing the first and last regions first, since they
might be of size != region.size; otherwise we just divide the offset
by region.stride. I was worried about this division (several dozen
cycles of latency), but profiling shows that this is not a fast path.
Note that region.stride is not required to be a power of two; it
is only required to be a multiple of the host's page size.
Note that with this design we can also provide consistent snapshots
about all region trees at once; for instance, tcg_tb_foreach
acquires/releases all region_tree locks before/after iterating over them.
For this reason we now drop tb_lock in dump_exec_info().
As an alternative I considered implementing a concurrent BST, but this
can be tricky to get right, offers no consistent snapshots of the BST,
and performance and scalability-wise I don't think it could ever beat
having separate GTrees, given that our workload is insert-mostly (all
concurrent BST designs I've seen focus, understandably, on making
lookups fast, which comes at the expense of convoluted, non-wait-free
insertions/removals).
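The core data structure is roughly the following (a sketch; see tcg/tcg.c
for the real definition and the exact first/last-region handling):
    /* Hedged sketch: one BST plus lock per TCG region.  The array elements
     * are padded to qemu_dcache_linesize when allocated, to avoid false
     * sharing between TCG threads. */
    struct tcg_region_tree {
        QemuMutex lock;
        GTree *tree;   /* TBs in this region, keyed by tc_ptr */
    };
    /* Lookup, given a tc_ptr: special-case the first and last regions (they
     * may have a different size), otherwise region_idx = offset / stride. */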
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The meaning of "existing" is now changed to "matches in hash and
ht->cmp result". This is saner than just checking the pointer value.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
qht_lookup now uses the default cmp function. qht_lookup_custom is defined
to retain the old behaviour, that is, a cmp function is explicitly provided.
qht_insert will gain use of the default cmp in the next patch.
Note that we move qht_lookup_custom's @func to be the last argument,
which makes the new qht_lookup as simple as possible.
Instead of this (i.e. keeping @func 2nd):
0000000000010750 <qht_lookup>:
10750: 89 d1 mov %edx,%ecx
10752: 48 89 f2 mov %rsi,%rdx
10755: 48 8b 77 08 mov 0x8(%rdi),%rsi
10759: e9 22 ff ff ff jmpq 10680 <qht_lookup_custom>
1075e: 66 90 xchg %ax,%ax
We get:
0000000000010740 <qht_lookup>:
10740: 48 8b 4f 08 mov 0x8(%rdi),%rcx
10744: e9 37 ff ff ff jmpq 10680 <qht_lookup_custom>
10749: 0f 1f 80 00 00 00 00 nopl 0x0(%rax)
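In C, the resulting wrapper boils down to a tail call that supplies the
table's default comparison function, consistent with the disassembly above
(a sketch):
    /* Hedged sketch: qht_lookup() forwards to qht_lookup_custom() using the
     * cmp function stored in the hash table at init time. */
    void *qht_lookup(const struct qht *ht, const void *userp, uint32_t hash)
    {
        return qht_lookup_custom(ht, userp, hash, ht->cmp);
    }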
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
- Fix options that work only with -drive or -blockdev, but not with
both, because of QDict type confusion
- rbd: Add options 'auth-client-required' and 'key-secret'
- Remove deprecated -drive options serial/addr/cyls/heads/secs/trans
- rbd, iscsi: Remove deprecated 'filename' option
- Fix 'qemu-img map' crash with unaligned image size
- Improve QMP documentation for jobs
-----BEGIN PGP SIGNATURE-----
iQIcBAABAgAGBQJbI8sTAAoJEH8JsnLIjy/WeIsP/Ak8ealnjx8GRUfWwi69DX5z
LPzGXjlFJ1DqVb1SCuc7mptlgQwvHSoeYk6i+2xzJy9WmaAJsT2Mxz+EkviGzlcU
Xna6t/a9dgMxvHQMKbN1j4Rm3tDJzm2x6ZnBLYlwFhecmkEz4i3TEsceFL7BrN4F
C21jm+yC2wKPyggKXRRLRP4a1UfuM7OH8OEE3aWCZwEz1+6ucZGs3xsmWJfXxlCK
adigVec3gNWgQK+gWFlYKXRJKalK50BnZ7+2LlQIcC1kLAx22Wr2t2zmlROa351w
JEBIQlLkEN8hpB44Xk3Wla3YmwumW8EANIdxkeAXDgYgn8KEpN1nwWyxGUsfoOqz
WhZ+pT0yerg/kSovKgTyxsF9YoeptJEUmmR0nZX4+xpnctbU6EHuspymMb1Xl95h
/7lirJbBVb8o9rzcCNOzTc0f9lot9pDglLjxcWq3RqvpJ2iQmxxY2UUidYl9BmRg
SpkkA9E613bZ9zuK+17FDbnGq/N9h77HSBSVbLgEoajFLXYUtkuc0egi/ZVLWo0W
wo/XvXTZqy7n9/M/j4yVS+Xv7WotNuyHajk6N73KdL0NodXE776CPxvMlUEHEHB9
Pizo/fuxmgatjDFf1WU3Qt5pZlU1Pao4Aakcq3z/Ag7ucMK03QQIMBcrJC24+DD2
pjT5HPIeS3vklkg9Jqdi
=LxgF
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches:
- Fix options that work only with -drive or -blockdev, but not with
both, because of QDict type confusion
- rbd: Add options 'auth-client-required' and 'key-secret'
- Remove deprecated -drive options serial/addr/cyls/heads/secs/trans
- rbd, iscsi: Remove deprecated 'filename' option
- Fix 'qemu-img map' crash with unaligned image size
- Improve QMP documentation for jobs
# gpg: Signature made Fri 15 Jun 2018 15:20:03 BST
# gpg: using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream: (26 commits)
block: Remove dead deprecation warning code
block: Remove deprecated -drive option serial
block: Remove deprecated -drive option addr
block: Remove deprecated -drive geometry options
rbd: New parameter key-secret
rbd: New parameter auth-client-required
block: Fix -blockdev / blockdev-add for empty objects and arrays
check-block-qdict: Cover flattening of empty lists and dictionaries
check-block-qdict: Rename qdict_flatten()'s variables for clarity
block-qdict: Simplify qdict_is_list() some
block-qdict: Clean up qdict_crumple() a bit
block-qdict: Tweak qdict_flatten_qdict(), qdict_flatten_qlist()
block-qdict: Simplify qdict_flatten_qdict()
block: Make remaining uses of qobject input visitor more robust
block: Factor out qobject_input_visitor_new_flat_confused()
block: Clean up a misuse of qobject_to() in .bdrv_co_create_opts()
block: Fix -drive for certain non-string scalars
block: Fix -blockdev for certain non-string scalars
qobject: Move block-specific qdict code to block-qdict.c
block: Add block-specific QDict header
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Currently we don't support board configurations that put an IOMMU
in the path of the CPU's memory transactions, and instead just
assert() if the memory region found in address_space_translate_for_iotlb()
is an IOMMUMemoryRegion.
Remove this limitation by having the function handle IOMMUs.
This is mostly straightforward, but we must make sure we have
a notifier registered for every IOMMU that a transaction has
passed through, so that we can flush the TLB appropriately
when any of the IOMMUs change their mappings.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-5-peter.maydell@linaro.org
Add an IOMMU index argument to the translate method of
IOMMUs. Since all of our current IOMMU implementations
support only a single IOMMU index, this has no effect
on the behaviour.
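The changed method looks roughly like this (a sketch of the prototype; see
include/exec/memory.h for the authoritative version):
    /* Hedged sketch: IOMMUMemoryRegionClass.translate now also receives the
     * IOMMU index the transaction is using. */
    IOMMUTLBEntry (*translate)(IOMMUMemoryRegion *iommu, hwaddr addr,
                               IOMMUAccessFlags flag, int iommu_idx);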
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-4-peter.maydell@linaro.org
Add support for multiple IOMMU indexes to the IOMMU notifier APIs.
When initializing a notifier with iommu_notifier_init(), the caller
must pass the IOMMU index that it is interested in. When a change
happens, the IOMMU implementation must pass
memory_region_notify_iommu() the IOMMU index that has changed and
that notifiers must be called for.
IOMMUs which support only a single index don't need to change.
Callers which only really support working with IOMMUs with a single
index can use the result of passing MEMTXATTRS_UNSPECIFIED to
memory_region_iommu_attrs_to_index().
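A caller that only cares about a single index might therefore do something
along these lines (a sketch; the notifier variable and callback name are
placeholders, not code from this series):
    /* Hedged sketch: derive an index from unspecified attributes and register
     * a notifier for the whole address range at that index. */
    int iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr,
                                                       MEMTXATTRS_UNSPECIFIED);
    iommu_notifier_init(&notifier, my_notify_fn, IOMMU_NOTIFIER_ALL,
                        0, HWADDR_MAX, iommu_idx);
    memory_region_register_iommu_notifier(MEMORY_REGION(iommu_mr), &notifier);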
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-3-peter.maydell@linaro.org