Instead of each target knowing or guessing the guest page size,
just pass the desired size of the dirtied memory area.
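
A minimal sketch of what a caller looks like with such an interface,
assuming QEMU's memory_region_set_dirty(mr, addr, size); the framebuffer
device around it is made up for illustration:

    /* Illustrative only: a device marks exactly the bytes it wrote as
     * dirty, passing the length instead of assuming a guest page size. */
    static void fb_update_scanline(MemoryRegion *vram, hwaddr line_offset,
                                   const uint8_t *pixels, size_t line_bytes)
    {
        uint8_t *dst = memory_region_get_ram_ptr(vram);

        memcpy(dst + line_offset, pixels, line_bytes);
        memory_region_set_dirty(vram, line_offset, line_bytes);
    }
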
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Introduce a memory region type that can reserve I/O space. Such regions
are useful for modeling I/O that is only handled outside of QEMU, i.e.
in the context of an accelerator like KVM.
Any access to such a region from QEMU is a bug, but it could theoretically
be triggered by guest code (DMA to a reserved region). So we only warn
about such events once, then ignore them.
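
A sketch of how a board might carve out such a region, assuming an
initializer named memory_region_init_reservation(mr, name, size) as in
QEMU (the exact signature has varied; newer trees add an owner argument):

    /* Illustrative: reserve an MMIO window that is handled entirely by
     * the accelerator (e.g. an in-kernel KVM device), so QEMU itself
     * never dispatches accesses to it. */
    static MemoryRegion *reserve_kvm_window(MemoryRegion *sysmem,
                                            hwaddr base, uint64_t size)
    {
        MemoryRegion *res = g_new0(MemoryRegion, 1);

        memory_region_init_reservation(res, "kvm-only-window", size);
        memory_region_add_subregion(sysmem, base, res);
        return res;
    }
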
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
All files under GPLv2 will get GPLv2+ changes starting tomorrow.
event_notifier.c and exec-obsolete.h were only ever touched by Red Hat
employees and can be relicensed now.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Commit a621f38de8 (Direct dispatch
through MemoryRegion) moved byte swaps to a central function.
Add a missing break, so that long-sized byte swaps don't abort.
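
The function in question dispatches on the access size; a self-contained
sketch (not the exact QEMU code) of why a missing break after the 4-byte
case makes a "long"-sized (32-bit) swap fall through to abort():

    #include <stdint.h>
    #include <stdlib.h>

    static uint64_t swap_by_size(uint64_t data, unsigned size)
    {
        switch (size) {
        case 1:
            break;                      /* nothing to swap */
        case 2:
            data = __builtin_bswap16(data);
            break;
        case 4:
            data = __builtin_bswap32(data);
            break;                      /* the break that was missing */
        case 8:
            data = __builtin_bswap64(data);
            break;
        default:
            abort();                    /* reached on fall-through */
        }
        return data;
    }
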
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
Since commit be675c9720 (memory: move
endianness compensation to memory core), it was checking for
TARGET_BIG_ENDIAN instead of TARGET_WORDS_BIGENDIAN, thereby not
swapping correctly for big-endian targets.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
Unlike ->readonly, ->readable is not inherited from aliases, so we can simply
query the memory region.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Now that all mmio goes through MemoryRegions, we can convert
io_mem_opaque to be a MemoryRegion pointer, and remove the thunks
that convert from old-style CPU{Read,Write}MemoryFunc to MemoryRegionOps.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Convert the fixed-address IO_MEM_RAM, IO_MEM_ROM, IO_MEM_UNASSIGNED,
and IO_MEM_NOTDIRTY io handlers to MemoryRegions. These aren't real
regions, since they are never added to the memory hierarchy, but they
allow reuse of the dispatch functionality.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The code sometimes uses range comparisons on io indexes (e.g.
index <= IO_MEM_ROM). Avoid these, as they make moving to objects harder.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
backend_registered was used to lazify the process of registering an
mmio region, since it is different for the I/O address space and
the memory address space. However, it also makes registration dependent
on the region being visible in the address space. This is not the case
for "fake" regions, like watchpoints or IO_MEM_UNASSIGNED.
Remove backend_registered and always initialize the region. If it turns
out to be part of the I/O address space, we've wasted an I/O slot, but
that's not too bad. In any case this will be optimized later on.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Currently mmio access goes directly to the io_mem_{read,write} arrays.
In preparation for eliminating them, add indirection via a function.
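
A self-contained sketch of the idea; the table and type names here are
made up, standing in for QEMU's io_mem arrays:

    #include <stdint.h>

    typedef uint64_t hwaddr_t;          /* stand-in for QEMU's hwaddr */
    typedef uint64_t (*io_read_fn)(void *opaque, hwaddr_t addr);

    #define IO_SLOTS 64

    /* Made-up tables standing in for the io_mem_read/io_mem_opaque arrays. */
    static io_read_fn io_mem_read_table[IO_SLOTS][3];
    static void      *io_mem_opaque_table[IO_SLOTS];

    /* Callers go through this accessor instead of indexing the arrays
     * directly, so the arrays can later be removed without touching
     * every call site. */
    static uint64_t io_mem_read(unsigned io_index, hwaddr_t addr,
                                unsigned size_shift)
    {
        return io_mem_read_table[io_index][size_shift](
            io_mem_opaque_table[io_index], addr);
    }
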
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Instead of doing device endianness compensation in cpu_register_io_memory(),
do it in the memory core.
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The getter is no longer used, so it is completely removed.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Currently creating a memory region automatically registers it for
live migration. This differs from other state (which is enumerated
in a VMStateDescription structure) and ties the live migration code
into the memory core.
Decouple the two by introducing a separate API, vmstate_register_ram(),
for registering a RAM block for migration. Currently the same
implementation is reused, but later it can be moved into a separate list,
and registrations can be moved to VMStateDescription blocks.
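
A sketch of what the split looks like from a device's point of view,
assuming QEMU's vmstate_register_ram(region, dev); the RAM init call has
taken different argument combinations across versions, so treat the whole
snippet as schematic:

    /* Illustrative: allocation no longer implies migration registration,
     * so the device performs both steps explicitly. */
    static void mydev_init_ram(DeviceState *dev, MemoryRegion *sysmem)
    {
        MemoryRegion *ram = g_new0(MemoryRegion, 1);

        memory_region_init_ram(ram, "mydev.ram", 64 * 1024);  /* step 1 */
        vmstate_register_ram(ram, dev);                       /* step 2 */
        memory_region_add_subregion(sysmem, 0x10000000, ram);
    }
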
Signed-off-by: Avi Kivity <avi@redhat.com>
This is a layering violation, but needed while the code contains
naked calls to qemu_get_ram_ptr() and the like.
Signed-off-by: Avi Kivity <avi@redhat.com>
Add an API that allows a client to observe changes in the global
memory map:
- region added (possibly with logging enabled)
- region removed (possibly with logging enabled)
- logging started on a region
- logging stopped on a region
- global logging started
- global logging removed
This API will eventually replace cpu_register_physical_memory_client().
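
A sketch of what a client might look like, assuming callbacks named after
the events above and a registration function along the lines of QEMU's
memory_listener_register() (the struct layout and registration arguments
have changed over time):

    /* Illustrative listener: keep an accelerator's view of memory in sync. */
    static void my_region_add(MemoryListener *listener,
                              MemoryRegionSection *section)
    {
        /* map section->mr at section->offset_within_address_space ... */
    }

    static void my_log_global_start(MemoryListener *listener)
    {
        /* enable dirty tracking everywhere ... */
    }

    static MemoryListener my_listener = {
        .region_add       = my_region_add,
        .log_global_start = my_log_global_start,
        /* .region_del, .log_start, .log_stop, .log_global_stop, ... */
    };

    static void my_listener_init(void)
    {
        memory_listener_register(&my_listener);   /* args vary by version */
    }
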
Signed-off-by: Avi Kivity <avi@redhat.com>
Given an address space (represented by the top-level memory region),
returns the memory region that maps a given range. Useful for implementing
DMA.
The implementation is a simplistic binary search. Once we have a tree
representation this can be optimized.
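
A hedged usage sketch, modeled on QEMU's memory_region_find() returning a
MemoryRegionSection (exact types, e.g. hwaddr vs. target_phys_addr_t,
depend on the tree):

    /* Illustrative: resolve a bus address for DMA by asking the address
     * space (its root region) which region covers it. */
    static bool dma_lookup(MemoryRegion *address_space_root,
                           hwaddr addr, uint64_t len)
    {
        MemoryRegionSection section =
            memory_region_find(address_space_root, addr, len);

        if (!section.mr) {
            return false;   /* nothing mapped there */
        }
        /* section.offset_within_region says where inside section.mr the
         * range begins; section.size says how much of it is covered. */
        return true;
    }
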
Signed-off-by: Avi Kivity <avi@redhat.com>
Currently xen_ram_alloc() relies on ram_addr, which is going away.
Give it something else to use as a cookie.
Signed-off-by: Avi Kivity <avi@redhat.com>
The mutating memory APIs can easily cause empty transactions,
where the mutators don't actually change anything, or perhaps
only modify disabled regions. Detect these conditions and
avoid regenerating the memory topology.
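
For illustration, a router that batches its reprogramming with
memory_region_transaction_begin()/commit() benefits directly: if the new
configuration equals the old one, or only touches disabled regions, the
commit is an empty transaction and skips the rebuild (the router itself is
hypothetical):

    /* Illustrative: batch related updates; a no-op batch no longer forces
     * the memory topology to be regenerated. */
    static void router_reprogram(MemoryRegion *window, hwaddr new_base,
                                 bool enabled)
    {
        memory_region_transaction_begin();
        memory_region_set_address(window, new_base);
        memory_region_set_enabled(window, enabled);
        memory_region_transaction_commit();
    }
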
Signed-off-by: Avi Kivity <avi@redhat.com>
Add an API to update an alias offset of an active alias. This can be
used to simplify implementation of dynamic memory banks.
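
A sketch of the dynamic-bank case, assuming QEMU's
memory_region_set_alias_offset(); the banked-ROM device around it is
hypothetical:

    /* Illustrative: a fixed-size alias window into a larger backing
     * region; a bank switch just slides the alias offset instead of
     * deleting and re-adding regions. */
    typedef struct BankedRom {
        MemoryRegion backing;    /* full ROM contents            */
        MemoryRegion window;     /* alias mapped into the guest  */
        uint64_t bank_size;
    } BankedRom;

    static void banked_rom_select(BankedRom *s, unsigned bank)
    {
        memory_region_set_alias_offset(&s->window, bank * s->bank_size);
    }
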
Signed-off-by: Avi Kivity <avi@redhat.com>
This allows users to disable a memory region without removing
it from the hierarchy, simplifying the implementation of
memory routers.
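
A small hedged sketch using memory_region_set_enabled(); the PCI-style
decode-enable context is illustrative:

    /* Illustrative: toggle a BAR's visibility without unparenting it.
     * The region stays in the hierarchy and simply stops (or resumes)
     * participating in the rendered memory map. */
    static void pci_bar_update(MemoryRegion *bar_region, bool decode_enabled)
    {
        memory_region_set_enabled(bar_region, decode_enabled);
    }
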
Signed-off-by: Avi Kivity <avi@redhat.com>
MemoryRegionOps::valid tries to declaratively specify which transactions
are accepted by the device/bus, however it is not completely generic. Add
a callback for special cases.
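
A sketch of how a device might use it; the .valid.accepts field and its
(opaque, addr, size, is_write) signature follow QEMU's MemoryRegionOps of
that era, while the device logic and handler names are made up:

    /* Illustrative handlers. */
    static uint64_t mydev_read(void *opaque, hwaddr addr, unsigned size)
    {
        return 0;
    }

    static void mydev_write(void *opaque, hwaddr addr, uint64_t val,
                            unsigned size)
    {
    }

    /* Only 4-byte accesses to the first two registers are accepted,
     * a rule too specific for the declarative .valid fields alone. */
    static bool mydev_accepts(void *opaque, hwaddr addr,
                              unsigned size, bool is_write)
    {
        return size == 4 && addr < 8;
    }

    static const MemoryRegionOps mydev_ops = {
        .read  = mydev_read,
        .write = mydev_write,
        .valid = {
            .min_access_size = 4,
            .max_access_size = 4,
            .accepts = mydev_accepts,
        },
    };
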
Signed-off-by: Avi Kivity <avi@redhat.com>
'info mtree' accesses invalid memory in two cases, both due to incorrect
(and unsafe) usage of QTAILQ_FOREACH_SAFE().
Reported-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
As we register old portio regions via ioport_register, we are also
responsible for providing the word access wrapper.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Add a type and methods for manipulating a list of disjoint I/O ports,
used in some older hardware devices.
Based on original patch by Richard Henderson.
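
A usage sketch, assuming an interface along the lines of PortioList,
portio_list_init() and portio_list_add() with a PORTIO_END_OF_LIST()
terminator (names modeled on QEMU's; the init signature has varied across
versions), with made-up handlers:

    /* Illustrative handlers; a real device would decode addr/val. */
    static uint32_t mydev_io_read(void *opaque, uint32_t addr)
    {
        return 0xff;
    }

    static void mydev_io_write(void *opaque, uint32_t addr, uint32_t val)
    {
    }

    /* Two disjoint byte-wide port ranges described in a single table. */
    static const MemoryRegionPortio mydev_portio[] = {
        { .offset = 0x00, .len = 4, .size = 1,
          .read = mydev_io_read, .write = mydev_io_write },
        { .offset = 0x10, .len = 2, .size = 1,
          .read = mydev_io_read },
        PORTIO_END_OF_LIST(),
    };

    static PortioList mydev_piolist;

    static void mydev_register_io(void *opaque, MemoryRegion *io_space,
                                  uint32_t base)
    {
        portio_list_init(&mydev_piolist, mydev_portio, opaque, "mydev");
        portio_list_add(&mydev_piolist, io_space, base);
    }
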
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
Add a monitor command 'info mtree' to show the memory hierarchy
much like /proc/iomem in Linux.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
The property is inheritable, but only if set to true. This is so
that memory routers can mark sections of RAM as read-only via aliases.
Signed-off-by: Avi Kivity <avi@redhat.com>
Instead of the offset property, use the proper addr property to calculate
the offsets.
Additionally, be a little more verbose on the warning and print the
subregion name.
Signed-off-by: Michael Walle <michael@walle.cc>
Signed-off-by: Avi Kivity <avi@redhat.com>
It is quite common to have a MemoryRegion with size of INT64_MAX.
When processing alias regions in render_memory_region() it's quite
easy to find a case where it will construct a temporary AddrRange with
a non-zero start and a size still of INT64_MAX, which means that
attempting to compute the end of such a range as start + size will
result in signed integer overflow.
This integer overflow means that addrrange_intersects() can
incorrectly report regions as not intersecting when they do. For
example, consider the case of address ranges {0x10000000000,
0x7fffffffffffffff} and {0x10010000000, 0x10000000} where the second
is in fact included completely in the first.
This patch rearranges addrrange_intersects() to avoid the integer
overflow, correcting this behaviour.
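
One way to rearrange the test so that no end address is ever computed,
comparing start-to-start distances against sizes instead (a self-contained
sketch consistent with the description above):

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct AddrRange {
        int64_t start;
        int64_t size;
    } AddrRange;

    /* Two ranges intersect iff the start of one lies inside the other.
     * Comparing (start difference) against size never forms start + size,
     * so INT64_MAX-sized ranges cannot overflow. */
    static bool addrrange_intersects(AddrRange r1, AddrRange r2)
    {
        return (r1.start >= r2.start && r1.start - r2.start < r2.size)
            || (r2.start >= r1.start && r2.start - r1.start < r1.size);
    }
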
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Avi Kivity <avi@redhat.com>
Mask out the sub-page bits that are used by ROM devices for storing the
io-index and the IO_MEM_ROMD flag.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
When adding a rom_device in I/O mode, we incorrectly masked off the low
bits, resulting in a pure RAM map. Fix by masking off the high bits and
IO_MEM_ROMD, yielding a pure I/O map.
Signed-off-by: Avi Kivity <avi@redhat.com>