The float64_to_uint32 routine has several flaws:
- for numbers between 2**32 and 2**64, the inexact exception flag
may get incorrectly set. In this case, only the invalid flag
should be set.
test pattern: 425F81378DC0CD1F / 0x1.f81378dc0cd1fp+38
- for numbers between 2**63 and 2**64, incorrect results may
be produced:
test pattern: 43EAAF73F1F0B8BD / 0x1.aaf73f1f0b8bdp+63
This patch re-implements float64_to_uint32 to re-use the
float64_to_uint64 routine (instead of float64_to_int64). For the
saturation case, we ignore any flags which the conversion routine
has set and raise only the invalid flag.
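A minimal sketch of that approach, using simplified stand-ins rather
than the real softfloat types and names (the actual code operates on
QEMU's float64 and float_status):
    #include <stdint.h>
    /* Simplified stand-ins for QEMU's float_status and exception flags. */
    enum { FLAG_INVALID = 1, FLAG_INEXACT = 2 };
    struct fp_status { int exception_flags; };
    /* Deliberately simplified stand-in for the (already fixed) 64-bit
     * conversion routine that this patch reuses. */
    static uint64_t f64_to_u64(double a, struct fp_status *s)
    {
        if (!(a >= 0.0) || a >= 0x1p64) {       /* NaN, negative or too large */
            s->exception_flags |= FLAG_INVALID;
            return UINT64_MAX;
        }
        if (a != (double)(uint64_t)a) {
            s->exception_flags |= FLAG_INEXACT; /* fractional part discarded */
        }
        return (uint64_t)a;
    }
    static uint32_t f64_to_u32(double a, struct fp_status *s)
    {
        int old_flags = s->exception_flags;     /* flags before the conversion */
        uint64_t v = f64_to_u64(a, s);
        if (v > UINT32_MAX) {
            /* Saturate: drop whatever the wide conversion raised and
             * report only the invalid-operation flag. */
            s->exception_flags = old_flags | FLAG_INVALID;
            return UINT32_MAX;
        }
        return (uint32_t)v;
    }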
This contribution can be licensed under either the softfloat-2a or -2b
license.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Message-id: 1387397961-4894-5-git-send-email-tommusta@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The float64_to_uint64_round_to_zero routine is incorrect.
For example, the following test pattern:
46697351FF4AEC29 / 0x1.97351ff4aec29p+103
currently produces 8000000000000000 instead of FFFFFFFFFFFFFFFF.
This patch re-implements the routine to temporarily force the
rounding mode and use the float64_to_uint64 routine.
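A sketch of the shape of the fix, using C99 <fenv.h> in place of QEMU's
float_status rounding-mode field (function names here are stand-ins, not
the softfloat API):
    #include <fenv.h>
    #include <math.h>
    #include <stdint.h>
    /* Stand-in for the general conversion routine being reused; like
     * float64_to_uint64 it honours the current rounding mode. */
    static uint64_t f64_to_u64(double a)
    {
        return (uint64_t)rint(a);
    }
    /* The round-to-zero variant: temporarily force truncation, call the
     * general routine, then restore the caller's rounding mode. */
    static uint64_t f64_to_u64_round_to_zero(double a)
    {
        int old_mode = fegetround();
        fesetround(FE_TOWARDZERO);
        uint64_t v = f64_to_u64(a);
        fesetround(old_mode);
        return v;
    }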
This contribution can be licensed under either the softfloat-2a or -2b
license.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Message-id: 1387397961-4894-4-git-send-email-tommusta@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds the float32_to_uint64() routine, which converts a
32-bit floating point number to an unsigned 64-bit integer.
This contribution can be licensed under either the softfloat-2a or -2b
license.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: removed harmless but silly int64_t casts]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
If the input to float*_scalbn() is denormal then it represents
a number 0.[mantissabits] * 2^(1-exponentbias) (and the actual
exponent field is all zeroes). This means that when we convert
it to our unpacked encoding the unpacked exponent must be one
greater than for a normal number, which represents
1.[mantissabits] * 2^(e-exponentbias) for an exponent field e.
This meant we were giving answers too small by a factor of 2 for
all denormal inputs.
Note that the float-to-int routines also have this behaviour
of not adjusting the exponent for denormals; however there it is
harmless because denormals will all convert to integer zero anyway.
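Illustrating with single precision (bias 127, 23 fraction bits), a
hypothetical unpacking helper showing the adjustment described above
(this is an illustration, not the softfloat code):
    #include <stdint.h>
    static void unpack_f32(uint32_t bits, int *sign, int *exp, uint32_t *frac)
    {
        *sign = bits >> 31;
        *exp  = (bits >> 23) & 0xff;
        *frac = bits & 0x7fffff;
        if (*exp == 0) {
            /* Denormal: the value is 0.frac * 2^(1 - 127), so the effective
             * exponent is 1, one greater than the all-zero exponent field. */
            *exp = 1;
        } else {
            /* Normal: the value is 1.frac * 2^(exp - 127); make the
             * implicit leading 1 explicit. */
            *frac |= 1u << 23;
        }
    }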
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
We implement a number of float-to-integer conversions using conversion
to an integer type with a wider range and then a check against the
narrower range we are actually converting to. If we find the result to
be out of range we correctly raise the Invalid exception, but we must
also suppress other exceptions which might have been raised by the
conversion function we called.
This won't throw away exceptions we should have preserved, because for
the 'core' exception flags the IEEE spec mandates that the only valid
combinations of exceptions that can be raised by a single operation are
Inexact + Overflow and Inexact + Underflow. For the non-IEEE softfloat
flag for input denormals, we can guarantee that that flag won't have
been set for out of range float-to-int conversions because a squashed
denormal by definition goes to plus or minus zero, which is always in
range after conversion to integer zero.
This bug has been fixed for some of the float-to-int conversion routines
by previous patches; fix it for the remaining functions as well, so
that they all restore the pre-conversion status flags prior to raising
Invalid.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The comment preceding the float64_to_uint64 routine suggests that
the implementation is broken. And this is, indeed, the case.
This patch properly implements the conversion of a 64-bit floating
point number to an unsigned 64-bit integer.
This contribution can be licensed under either the softfloat-2a or -2b
license.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Currently the int-to-float functions take types which are specified
as "at least X bits wide", rather than "exactly X bits wide". This is
confusing and unhelpful since it means that the callers have to include
an explicit cast to [u]intXX_t to ensure the correct behaviour. Fix
them all to take the exactly-X-bits-wide types instead.
Note that this doesn't change behaviour at all since at the moment
we happen to define the 'int32' and 'uint32' types as exactly 32 bits
wide, and the 'int64' and 'uint64' types as exactly 64 bits wide.
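For example (illustrative prototypes only; the real declarations also
carry the softfloat status argument):
    float64 int64_to_float64(int64 a);    /* before: "at least 64 bits wide" */
    float64 int64_to_float64(int64_t a);  /* after: exactly 64 bits wide */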
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add the 16 bit integer to float conversion routines. These can be
trivially implemented in terms of the int32_to_float* routines, but
providing them makes our API more symmetrical and can simplify callers.
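For example (stand-in names and types; the real routines also carry the
softfloat status argument), a 16-bit variant can simply defer to the
existing 32-bit one, since a 16-bit integer always fits:
    #include <stdint.h>
    /* Stand-in for the existing 32-bit conversion. */
    static double i32_to_f64(int32_t a)
    {
        return (double)a;
    }
    /* The new 16-bit variant: no extra range or precision handling needed. */
    static double i16_to_f64(int16_t a)
    {
        return i32_to_f64(a);
    }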
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
ARMv8 requires support for converting 32-bit and 64-bit floating point
values to signed and unsigned 16-bit integers.
Signed-off-by: Will Newton <will.newton@linaro.org>
[PMM: updated not to incorrectly set Inexact for Invalid inputs]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Our float32 to float16 conversion routine was generating the correct
numerical answers, but not always setting the right set of exception
flags. Fix this, mostly by rearranging the code to more closely
resemble RoundAndPackFloat*, and in particular:
* non-IEEE halfprec always raises Invalid for input NaNs
* we need to check for the overflow case before underflow
* we weren't getting the tininess-detected-after-rounding
case correct (somewhat academic since only ARM uses halfprec
and it is always tininess-detected-before-rounding)
* non-IEEE halfprec overflow raises only Invalid, not
Invalid + Inexact
* we weren't setting Inexact when we should
Also add some clarifying comments about what the code is doing.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
To make the code slightly cleaner to look at and make the save/restore
code easier to understand, introduce this function to set the priority of
interrupts.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Message-id: 1387606179-22709-3-git-send-email-christoffer.dall@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
TRIGGER can really mean anything (e.g. was it triggered, is it
level-triggered, is it edge-triggered, etc.). Rename to EDGE_TRIGGER to
make the code comprehensible without looking up the data structure.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Message-id: 1387606179-22709-2-git-send-email-christoffer.dall@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
commit 5ce4f35781
"target-arm: A64: add set_pc cpu method"
introduces an array aarch64_cpus which is zero
size if this code is built without CONFIG_USER_ONLY.
In particular an attempt to iterate over this array produces a warning
under gcc 4.8.2:
CC aarch64-softmmu/target-arm/cpu64.o
/scm/qemu/target-arm/cpu64.c: In function ‘aarch64_cpu_register_types’:
/scm/qemu/target-arm/cpu64.c:124:5: error: comparison of unsigned
expression < 0 is always false [-Werror=type-limits]
for (i = 0; i < ARRAY_SIZE(aarch64_cpus); i++) {
^
cc1: all warnings being treated as errors
This is the result of ARRAY_SIZE being an unsigned type,
causing "i" to be promoted to unsigned int as well.
As zero-size arrays are a gcc extension, it seems
cleanest to add a dummy element with a NULL name,
and test for it during registration.
We'll be able to drop this when we add more CPUs.
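A sketch of the workaround (type and field names are illustrative, not
the actual QEMU declarations):
    typedef struct CPUInfo {
        const char *name;
    } CPUInfo;
    static const CPUInfo cpus[] = {
    #ifdef CONFIG_USER_ONLY
        { .name = "any" },
    #endif
        { .name = NULL },           /* sentinel keeps the array non-empty */
    };
    static void register_cpu_types(void)
    {
        const CPUInfo *info;
        /* Walk to the sentinel instead of comparing an unsigned index
         * against ARRAY_SIZE(), which is what produced the warning. */
        for (info = cpus; info->name; info++) {
            /* register one CPU type here */
        }
    }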
Cc: Alexander Graf <agraf@suse.de>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20131223145216.GA22663@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Don't conditionalise GEM instantiation on networking attachments. The
device should always be present even if not attached to a network.
This allows for probing of the device by expectant guests (such as
OSes). This is needed because sysbus (or AXI in Xilinx's real hw case)
is not self-identifying, so the guest has no dynamic way of detecting
device absence.
Also allows for testing of the GEM in loopback mode with -net none.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 55649779a68ee3ff54b24c339b6fdbdccd1f0ed7.1388800598.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is an inline duplication of the raw_read and raw_write function
bodies. Fix by just calling raw_read/raw_write instead.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: e69281b7e1462b346cb313cf0b89eedc0568125f.1388649290.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use the c13_context field instead of c13_fcse for the CONTEXTIDR
register definition.
Signed-off-by: Sergey Fedorov <s.fedorov@samsung.com>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1387521191-15350-1-git-send-email-s.fedorov@samsung.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If the UART back-end blocks, buffer the data in the Tx FIFO and try
again later. This stops the IO-thread busy-waiting on char back-ends
(which causes all sorts of performance problems).
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 4bea048b3ab38425701d82ccc1ab92545c26b79c.1388626249.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The can_receive logic was only taking into account the RxFIFO
occupancy. RxFIFO population is only relevant to the echo and normal
modes, however. Improve the logic to correctly return the true number
of receivable characters based on the current mode:
Normal mode: RxFIFO vacancy.
Remote loopback: TxFIFO vacancy.
Echo mode: the minimum of the TxFIFO and RxFIFO vacancies.
Local loopback: non-zero (incoming characters are accepted and dropped).
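A sketch of that calculation (names are illustrative; the real function
inspects the Cadence UART mode bits in the control register):
    enum uart_mode { MODE_NORMAL, MODE_ECHO, MODE_LOCAL_LOOP, MODE_REMOTE_LOOP };
    #define FIFO_SIZE 16
    struct uart {
        enum uart_mode mode;
        unsigned rx_count;   /* characters currently in the RxFIFO */
        unsigned tx_count;   /* characters currently in the TxFIFO */
    };
    static unsigned uart_can_receive(const struct uart *s)
    {
        unsigned rx_vacancy = FIFO_SIZE - s->rx_count;
        unsigned tx_vacancy = FIFO_SIZE - s->tx_count;
        switch (s->mode) {
        case MODE_NORMAL:
            return rx_vacancy;
        case MODE_REMOTE_LOOP:
            return tx_vacancy;
        case MODE_ECHO:
            return rx_vacancy < tx_vacancy ? rx_vacancy : tx_vacancy;
        case MODE_LOCAL_LOOP:
        default:
            return 1;        /* accept (and drop) incoming characters */
        }
    }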
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 36a58440c9ca5080151e95765c2c81342de8a8df.1388626249.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This tx timer implementation is flawed. Although the controller
attempts to time the guest-visible assertion of the TX-empty status
bit (and the corresponding interrupt), it still transmits characters
instantaneously. There is also no sense of a multiple-character
delay.
The only side effect of this timer is assertion of tx-empty status. So
just remove the timer completely and hold tx-empty as permanently
asserted (its reset status). This matches the actual behaviour of
instantaneous transmission.
While we are bumping the VMSD version, add the tx_fifo as device state to
prepare for the upcoming TxFIFO flow control. Implement the interrupt
generation logic for the TxFIFO occupancy.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 7a208a7eb8d79d6429fe28b1396c3104371807b2.1388626249.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some (interrupt) status register bits relating to the TxFIFO path were
not defined. Define them. This prepares support for proper Tx data path
flow control.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 2068b963f0af8cc834c353944e9fa816d950b163.1388626249.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The status register bits are always pure functions of other device
state. Move the generation of these bits to the update_status()
function to simplify. This makes development much easier as there's now
no need to recheck status bits on every change to rx/tx fifo state.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 321994929f789096975104f99c55732774be4cae.1388626249.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The interrupt state should be rechecked on bus write accesses, as such
accesses may change the underlying state that generates the interrupt.
This is particularly relevant when the guest touches the interrupt
status or mask.
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 1c250cd61b7b8de492fbc8b79b8370958a56d83b.1388626249.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When setting rounding modes we currently just hardcode the numeric values
for rounding modes in a big switch statement.
With AArch64 support coming, we will need to refer to these rounding modes
at different places throughout the code, so let's give them
names so we don't get confused by accident.
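One plausible shape for such an enum, using the ARM ARM rounding-mode
names (the exact identifiers in the patch may differ):
    enum arm_fprounding {
        FPROUNDING_TIEEVEN,   /* round to nearest, ties to even */
        FPROUNDING_POSINF,    /* round towards plus infinity */
        FPROUNDING_NEGINF,    /* round towards minus infinity */
        FPROUNDING_ZERO,      /* round towards zero */
        FPROUNDING_TIEAWAY,   /* round to nearest, ties away from zero */
        FPROUNDING_ODD,       /* round to odd */
    };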
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, use names from ARM ARM.]
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This adds decoding support for C3.6.24 FP conditional select.
Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This adds decoding support for C3.6.23 FP Conditional Compare.
Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add decoding support for C3.6.22 Floating-point compare.
Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds emulation for the fmov instruction working on scalars
with an immediate payload.
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, rebase and use new infrastructure.]
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds emulation for the "Floating-point data-processing (3 source)"
group of instructions.
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, merged single and double precision patches.
Implement using muladd as suggested by Richard Henderson.]
Signed-off-by: Will Newton <will.newton@linaro.org>
[PMM: pull field decode up a level, use register accessors]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds emulation for the "Floating-point data-processing (2 source)"
group of instructions.
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, merge single and double precision patches. Rebase
and update to new infrastructure. Incorporate FMIN/FMAX support patch by
Michael Matz.]
Signed-off-by: Will Newton <will.newton@linaro.org>
[PMM:
* added convenience accessors for FP s and d regs
* pulled the field decode and opcode validity check up a level]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Use the VFP_BINOP macro to provide helpers for min, max, minnum
and maxnum, rather than hand-rolling them. (The float64 max
version is not used by A32 but will be needed for A64.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The A64 128 bit vector registers are stored as a pair of
uint64_t values in the register array. This means that if
we're directly loading or storing a value of size less than
64 bits we must adjust the offset appropriately to account
for whether the host is big-endian or not. Provide utility
functions to abstract away the offsetof() calculations for
the FP registers.
For do_fp_st() we can sidestep most of the issues for 64 bit
and smaller reg-to-mem transfers by always doing a 64 bit
load from the register and writing just the piece we need
to memory.
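An illustrative sketch of the offset calculation (struct and names are
hypothetical, not the CPUARMState layout): each 128-bit register is two
uint64_t slots, and on a big-endian host a sub-64-bit value lives at the
high-address end of its slot, so the byte offset needs adjusting:
    #include <stddef.h>
    #include <stdint.h>
    struct cpu_state {
        uint64_t vreg[32 * 2];        /* 32 Q registers, 2 x 64 bits each */
    };
    static size_t fp_reg_offset(int regno, size_t nbytes)
    {
        size_t offs = offsetof(struct cpu_state, vreg[regno * 2]);
    #ifdef HOST_WORDS_BIGENDIAN
        offs += 8 - nbytes;           /* low-order bytes sit at the high end */
    #endif
        return offs;
    }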
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
When dumping the current CPU state, we can also get a request
to dump the FPU state along with the CPU's integer state.
Add support to dump the VFP state when that flag is set, so that
we can properly debug code that modifies floating point registers.
Signed-off-by: Alexander Graf <agraf@suse.de>
[WN: Commit message tweak, rebased. Output all registers, two per-line.]
Signed-off-by: Will Newton <will.newton@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Add a config for aarch64-linux-user, thereby enabling it as
a valid target.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Now the AArch64 targets are in mainline we can include them in our
Travis test matrix.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use the helpers provided for getting the correct FPSR and FPCR
values for the signal context.
Signed-off-by: Will Newton <will.newton@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The AArch64 linux-user support was written before but merged after
commit 4ce6243dc6 which cleaned up the handling of the clone()
syscall argument order, so we failed to notice that AArch64 also needs
TARGET_CLONE_BACKWARDS to be defined. Add this define so that clone
and fork syscalls work correctly.
Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This implements exclusive loads/stores for aarch64 along the lines of the
arm32 and ppc implementations. The exclusive load remembers the address
and loaded value. The exclusive store throws an exception which uses
those values to check for equality in a proper exclusive region.
This is not actually the architecture mandated semantics (for either
AArch32 or AArch64) but it is close enough for typical guest code
sequences to work correctly, and saves us from having to monitor all
guest stores. It's fairly easy to come up with test cases where we
don't behave like hardware - we don't for example model cache line
behaviour. However in the common patterns this works, and the existing
32 bit ARM exclusive access implementation has the same limitations.
AArch64 also implements new acquire/release loads/stores (which may be
either exclusive or non-exclusive). These impose extra ordering
constraints on memory operations (i.e. they act as if they have an implicit
barrier built into them). As TCG is single-threaded all our barriers
are no-ops, so these just behave like normal loads and stores.
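A conceptual sketch of the mechanism in plain C (global monitor state;
the real implementation performs the comparison in an exception path
generated by TCG):
    #include <stdint.h>
    /* The exclusive load records the address and the value seen; the
     * exclusive store succeeds only if a re-read of that address still
     * yields the recorded value. */
    static uint64_t excl_addr = ~0ULL;
    static uint64_t excl_val;
    static uint64_t load_exclusive(uint64_t *p)
    {
        excl_addr = (uint64_t)(uintptr_t)p;
        excl_val = *p;
        return excl_val;
    }
    /* Returns 0 on success, 1 on failure, like the STXR status result. */
    static int store_exclusive(uint64_t *p, uint64_t v)
    {
        int fail = ((uint64_t)(uintptr_t)p != excl_addr || *p != excl_val);
        if (!fail) {
            *p = v;
        }
        excl_addr = ~0ULL;            /* the monitor is cleared either way */
        return fail;
    }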
Signed-off-by: Michael Matz <matz@suse.de>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
In preparation for adding support for A64 load/store exclusive instructions,
widen the fields in the CPU state struct that deal with address and data values
for exclusives from 32 to 64 bits. Although in practice AArch64 and AArch32
exclusive accesses will be generally separate there are some odd theoretical
corner cases (e.g. you should be able to do the exclusive load in AArch32, take
an exception to AArch64 and successfully do the store exclusive there), and it's
also easier to reason about.
The changes in semantics for the variables are:
exclusive_addr -> extended to 64 bits; -1ULL for "monitor lost",
otherwise always < 2^32 for AArch32
exclusive_val -> extended to 64 bits. 64 bit exclusives in AArch32 now
use the high half of exclusive_val instead of a separate exclusive_high
exclusive_high -> is no longer used in AArch32; extended to 64 bits as
it will be needed for AArch64's pair-of-64-bit-values exclusives.
exclusive_test -> extended to 64 bits, as it is an address. Since this is
a linux-user-only field, in arm-linux-user it will always have the top
32 bits zero.
exclusive_info -> stays 32 bits, as it is neither data nor address, but
simply holds register indexes etc. AArch64 will be able to fit all its
information into 32 bits as well.
Note that the refactoring of gen_store_exclusive() coincidentally fixes
a minor bug where ldrexd would incorrectly update the first CPU register
even if the load for the second register faulted.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Adds support for Load Register (literal), both normal
and SIMD/FP forms.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
This patch adds support for C3.5.4 - C3.5.5
Conditional compare (both immediate and register).
Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The common pattern for system registers in a 64-bit capable ARM
CPU is that when in AArch32 the cp15 register is a view of the
bottom 32 bits of the 64-bit AArch64 system register; writes in
AArch32 leave the top half unchanged. The most natural way to
model this is to have the state field in the CPU struct be a
64 bit value, and simply have the AArch32 TCG code operate on
a pointer to its lower half.
For aarch64-linux-user the only registers we need to share like
this are the thread-local-storage ones. Widen their fields to
64 bits and provide the 64 bit reginfo struct to make them
visible in AArch64 state. Note that minor cleanup of the AArch64
system register encoding space means we can share the TPIDR_EL1
reginfo but need split encodings for TPIDR_EL0 and TPIDRRO_EL0.
Since we're touching almost every line in QEMU that uses the
c13_tls* fields in this patch anyway, we take the opportunity
to rename them in line with the standard ARM architectural names
for these registers.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Implement an initial minimal set of EL0-visible system registers:
* NZCV
* FPCR
* FPSR
* CTR_EL0
* DCZID_EL0
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
The AArch64 equivalent of the traditional AArch32
cp15 coprocessor registers is the set of instructions
MRS/MSR/SYS/SYSL, which cover between them both true
system registers and the "operations with side effects"
such as cache maintenance which in AArch32 are mixed
in with other cp15 registers. Implement these instructions
to look in the cpregs hashtable for the register or
operation.
Since we don't yet populate the cpregs hashtable with
any registers with the "AA64" bit set, everything will
still UNDEF at this point.
MSR/MRS is the first user of is_jmp = DISAS_UPDATE, so
fix an infelicity in its handling where the main loop
was requiring the caller to do the update of PC rather
than just doing it itself.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>