The v8 ARM ARM defines that unused spaces in the ID_AA64* system
register ranges are Reserved and must RAZ, rather than being UNDEF.
Implement this.
In particular, ARMv8.2 adds a new feature register, ID_AA64MMFR2,
which newer Linux kernels attempt to read; without this fix those
kernels fail to boot on QEMU.
Since the encoding .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 6
is actually defined in ARMv8 (as ID_MMFR4), we give it an entry in
the ARMCPU struct so CPUs can override it; since none currently do,
it too will simply RAZ.
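
A rough sketch of the kind of entry involved (not the exact hunk from
this patch), using QEMU's usual constant-register idiom and assuming
the standard ID_AA64MMFR2_EL1 encoding (op0=3, op1=0, CRn=0, CRm=7,
op2=2):

    /* Sketch: a Reserved slot in the ID_AA64* space that reads as zero.
     * ARM_CP_CONST means reads simply return .resetvalue.
     */
    { .name = "ID_AA64MMFR2_EL1", .state = ARM_CP_STATE_AA64,
      .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 2,
      .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 },
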
Cc: qemu-stable@nongnu.org
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1455890863-11203-1-git-send-email-peter.maydell@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Mark CNTHP_TVAL_EL2 as ARM_CP_NO_RAW, since the register has no
underlying state of its own. This fixes an issue with booting
KVM-enabled kernels when EL2 is enabled.
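
For illustration only, the change amounts to adding the flag to the
register's type, roughly (encoding and flags as assumed here):

    /* Sketch: CNTHP_TVAL_EL2's value is derived from the counter rather
     * than stored, so ARM_CP_NO_RAW keeps it out of raw migration/KVM
     * register sync; read/write functions are omitted from this sketch.
     */
    { .name = "CNTHP_TVAL_EL2", .state = ARM_CP_STATE_BOTH,
      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 0,
      .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL2_RW },
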
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1456490739-19343-1-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement the performance monitor register traps controlled
by MDCR_EL3.TPM and MDCR_EL2.TPM. Most of the performance
registers already have an access function to deal with the
user-enable bit, and the TPM checks can be added there. We
also need a new access function which only implements the
TPM checks for use by the few not-EL0-accessible registers
and by PMUSERENR_EL0 (which is always EL0-readable).
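
A sketch of what such a TPM-only access function can look like,
assuming QEMU's CPAccessFn shape and an MDCR_TPM bit definition:

    static CPAccessResult pmreg_access_tpm(CPUARMState *env,
                                           const ARMCPRegInfo *ri,
                                           bool isread)
    {
        int el = arm_current_el(env);

        /* MDCR_EL2.TPM traps EL0/EL1 accesses to EL2 (Non-secure only) */
        if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TPM)
            && arm_feature(env, ARM_FEATURE_EL2)
            && !arm_is_secure_below_el3(env)) {
            return CP_ACCESS_TRAP_EL2;
        }
        /* MDCR_EL3.TPM traps EL0/EL1/EL2 accesses to EL3 */
        if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TPM)) {
            return CP_ACCESS_TRAP_EL3;
        }
        return CP_ACCESS_OK;
    }
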
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1455892784-11328-3-git-send-email-peter.maydell@linaro.org
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Acked-by: Alistair Francis <alistair.francis@xilinx.com>
Fix two issues with our implementation of the SDCR:
* it is only present from ARMv8 onwards
* it does not contain several of the trap bits present in its 64-bit
counterpart, MDCR_EL3
Put the register description in the right place so that it does not
get enabled for ARMv7 and earlier, and give it a write function so that
we can mask out the bits which should not be allowed to have an effect
if EL3 is 32-bit.
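
A sketch of the write function, with SDCR_VALID_MASK standing in for
whatever set of bits the 32-bit SDCR actually implements:

    static void sdcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                           uint64_t value)
    {
        /* SDCR is the 32-bit view of MDCR_EL3: only let the bits that
         * exist in SDCR reach the shared mdcr_el3 state.
         */
        env->cp15.mdcr_el3 = value & SDCR_VALID_MASK;
    }
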
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1455892784-11328-2-git-send-email-peter.maydell@linaro.org
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Acked-by: Alistair Francis <alistair.francis@xilinx.com>
If HCR.TGE is 1 then mode changes via CPS and MSR from Monitor to
Non-secure PL1 modes are illegal mode changes. Implement this check
in bad_mode_switch().
(We don't currently implement HCR.TGE, but this is the only missing
check from the v8 ARM ARM G1.9.3 and so it's worth adding now; the
rest of the HCR.TGE checks can be added later as necessary.)
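
A sketch of the added test, assuming the usual HCR_TGE and CPSR_M
definitions and bad_mode_switch()'s convention of returning nonzero
for an illegal change (the surrounding switch on the target mode is
elided):

    /* Leaving Monitor for a Non-secure PL1 mode via CPS/MSR is
     * illegal while HCR.TGE is set.
     */
    if (write_type == CPSRWriteByInstr
        && (env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_MON
        && (env->cp15.hcr_el2 & HCR_TGE)) {
        return 1;
    }
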
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1455556977-3644-12-git-send-email-peter.maydell@linaro.org
Mode switches from Hyp to any other mode via the CPS and MSR
instructions are illegal mode switches (though obviously switching
via exception return is valid). Add this check to bad_mode_switch().
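
A sketch of the corresponding test in bad_mode_switch(), assuming its
nonzero-means-illegal convention:

    /* CPS and MSR may not switch out of Hyp; only exception entry
     * and return move the CPU into or out of Hyp mode.
     */
    if ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_HYP
        && mode != ARM_CPU_MODE_HYP) {
        return 1;
    }
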
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1455556977-3644-11-git-send-email-peter.maydell@linaro.org
In v8, the illegal mode changes which are UNPREDICTABLE in v7 are
given architected behaviour:
* the mode field is unchanged
* PSTATE.IL is set (so any subsequent instructions will UNDEF)
* any other CPSR fields are written to as normal
This is pretty much the same behaviour we picked for our
UNPREDICTABLE handling, with the exception that for v8 we
need to set the IL bit.
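
Roughly, the handling in cpsr_write() becomes (a sketch, assuming the
usual CPSR_IL and CPSR_M bit definitions):

    if (bad_mode_switch(env, val & CPSR_M, write_type)) {
        /* Illegal change: leave the mode field alone... */
        mask &= ~CPSR_M;
        if (arm_feature(env, ARM_FEATURE_V8)) {
            /* ...and in v8 set PSTATE.IL so following insns UNDEF */
            env->uncached_cpsr |= CPSR_IL;
        }
    }
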
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1455556977-3644-10-git-send-email-peter.maydell@linaro.org
In v8 trying to switch mode to Mon from Secure EL1 is an
illegal mode switch. (In v7 this is impossible as all secure
modes except User are at EL3.) We can handle this case by
making a switch to Mon valid only if the current EL is 3,
which then gives the correct answer whether EL3 is AArch32
or AArch64.
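
In bad_mode_switch() terms, the Monitor case boils down to something
like (a sketch; nonzero means the switch is illegal):

    case ARM_CPU_MODE_MON:
        /* Switching to Mon is only legal if we are already at EL3,
         * whether EL3 is AArch32 (i.e. we are in Mon) or AArch64.
         */
        return arm_current_el(env) < 3;
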
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1455556977-3644-9-git-send-email-peter.maydell@linaro.org
We don't actually support Hyp mode yet, but add the correct
checks for it to the bad_mode_switch() function for completeness.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1455556977-3644-8-git-send-email-peter.maydell@linaro.org
QEMU doesn't implement the NSACR.RFR bit; not implementing it is a
permitted IMPDEF choice in ARMv7 and the only permitted choice in ARMv8.
Add a comment to bad_mode_switch() to note that this is why
FIQ is always a valid mode regardless of the CPU's Secure state.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1455556977-3644-7-git-send-email-peter.maydell@linaro.org
The only case where we can attempt a cpsr_write() mode switch from
User is from the gdbstub; all other cases are handled in the
calling code (notably translate.c). Architecturally, attempts to
alter the mode bits from User mode are simply ignored (and not
treated as a bad mode switch, which in v8 sets CPSR.IL). Make
cpsr_write() likewise ignore mode switches from User, for
consistency.
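
A sketch of the cpsr_write() change (mask and val are the usual write
arguments):

    /* Attempts to change the mode bits while in User are simply
     * ignored: drop CPSR_M from the write mask rather than treating
     * this as a bad mode switch (so no PSTATE.IL is set on v8).
     */
    if ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_USR) {
        mask &= ~CPSR_M;
    }
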
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1455556977-3644-6-git-send-email-peter.maydell@linaro.org
Raw CPSR writes should skip the architectural checks for whether
we're allowed to set the A or F bits and should also not do
the switching of register banks if the mode changes. Handle
this inside cpsr_write(), which allows us to drop the "manually
set the mode bits to avoid the bank switch" code from all the
callsites which are using CPSRWriteRaw.
This fixes a bug in 32-bit KVM handling where we had forgotten
the "manually set the mode bits" part and could thus potentially
trash the register state if the mode from the last exit to userspace
differed from the mode on this exit.
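
Roughly, the mode-change path inside cpsr_write() ends up shaped like
this (a sketch; the A/F permission checks and the handling of the
separately-cached flag bits are elided):

    if (write_type != CPSRWriteRaw &&
        ((env->uncached_cpsr ^ val) & mask & CPSR_M)) {
        /* Only non-raw writes do the architectural checks and switch
         * the banked registers when the mode field changes.
         */
        switch_mode(env, val & CPSR_M);
    }
    env->uncached_cpsr = (env->uncached_cpsr & ~mask) | (val & mask);
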
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1455556977-3644-4-git-send-email-peter.maydell@linaro.org
Add an argument to cpsr_write() to indicate what kind of CPSR
write is being requested, since the exact behaviour should
differ for the different cases.
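
The new argument's type, roughly as assumed in the sketches above:

    typedef enum CPSRWriteType {
        CPSRWriteByInstr = 0,         /* from guest MSR or CPS */
        CPSRWriteExceptionReturn = 1, /* from guest exception return */
        CPSRWriteRaw = 2,             /* trusted write: no checks */
        CPSRWriteByGDBStub = 3,       /* from the gdbstub */
    } CPSRWriteType;

    void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
                    CPSRWriteType write_type);
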
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1455556977-3644-3-git-send-email-peter.maydell@linaro.org
The Linux kernel accesses this register early in its setup.
Signed-off-by: Christopher Covington <christopher.covington@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: b30d536cb16ec57b4412172bb6dbc3f00d293e7d.1455060548.git.alistair.francis@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Tested-by: Nathan Rossi <nathan@nathanrossi.com>
Message-id: da0563119a9f56fd5fbdc26e7ed19a8a8457c5b9.1455060548.git.alistair.francis@xilinx.com
[PMM: Use 0 for PMCEID0 values for A15 and A57 since our PMU
does not currently implement any events.]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move bank_number()'s implementation into internals.h, so
it's available in the user-mode-only compile as well.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Move get/set_r13_banked() from helper.c to op_helper.c. This will
let us add exception-raising code to them, and also puts them
in the same file as get/set_user_reg(), which makes some conceptual
sense.
(The original reason for the helper.c/op_helper.c split was that
only op_helper.c had access to the CPU env pointer; this distinction
has not been true for a long time, though, and so the split is
now rather arbitrary.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
If access to FPEXC32_EL2 is trapped by CPTR_EL2.TFP or CPTR_EL3.TFP,
this should be reported with a syndrome register indicating an
FP access trap, not one indicating a system register access trap.
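
A sketch of an access function reporting the FP trap flavour, assuming
the CPTR_TFP bit and the CP_ACCESS_TRAP_FP_EL2/EL3 result codes:

    static CPAccessResult fpexc32_access(CPUARMState *env,
                                         const ARMCPRegInfo *ri,
                                         bool isread)
    {
        if (arm_current_el(env) == 2
            && (env->cp15.cptr_el[2] & CPTR_TFP)) {
            return CP_ACCESS_TRAP_FP_EL2;
        }
        if (env->cp15.cptr_el[3] & CPTR_TFP) {
            return CP_ACCESS_TRAP_FP_EL3;
        }
        return CP_ACCESS_OK;
    }
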
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Implement the debug register traps controlled by MDCR_EL2.TDA
and MDCR_EL3.TDA.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Implement trapping of the "debug ROM" registers, which are controlled
by MDCR_EL2.TDRA for EL2 but by the more general MDCR_EL3.TDA for EL3.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Implement the traps to EL2 and EL3 controlled by the MDCR_EL2.TDOSA
and MDCR_EL3.TDOSA bits, which can configurably trap accesses to the
"powerdown debug" registers.
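
A sketch of the TDOSA check, assuming an MDCR_TDOSA bit definition and
QEMU's CPAccessFn shape; conditions beyond the two TDOSA bits are
ignored in this sketch:

    static CPAccessResult access_tdosa(CPUARMState *env,
                                       const ARMCPRegInfo *ri,
                                       bool isread)
    {
        int el = arm_current_el(env);

        if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDOSA)) {
            return CP_ACCESS_TRAP_EL2;
        }
        if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDOSA)) {
            return CP_ACCESS_TRAP_EL3;
        }
        return CP_ACCESS_OK;
    }
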
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Correct some corner cases we were getting wrong for
CNTFRQ access rights:
* should UNDEF from 32-bit Secure EL1
* only writable from the highest implemented exception level,
which might not be EL1 now
To clarify the code, provide a new utility function
arm_highest_el() which returns the highest implemented
exception level.
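
A sketch of the helper:

    /* Return the highest implemented exception level */
    static inline int arm_highest_el(CPUARMState *env)
    {
        if (arm_feature(env, ARM_FEATURE_EL3)) {
            return 3;
        }
        if (arm_feature(env, ARM_FEATURE_EL2)) {
            return 2;
        }
        return 1;
    }
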
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Implement some corner cases of the behaviour of the NSACR
register on ARMv8:
* if EL3 is AArch64 then accessing the NSACR from Secure EL1
with AArch32 should trap to EL3
* if EL3 is not present or is AArch64 then reads from NS EL1 and
NS EL2 return constant 0xc00
It would in theory be possible to implement all these with
a single reginfo definition, but for clarity we use three
separate definitions for the three cases and install the
right one based on the CPU feature flags.
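
For example, the variant installed when EL3 is not implemented can be
described along these lines, assuming the usual cp15 NSACR encoding:

    /* Sketch: NS EL1/EL2 reads of NSACR see the constant 0xc00 */
    { .name = "NSACR", .type = ARM_CP_CONST,
      .cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2,
      .access = PL1_RW, .resetvalue = 0xc00 },
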
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1454506721-11843-7-git-send-email-peter.maydell@linaro.org
System registers might have access requirements which need to
be described via a CPAccessFn and which differ for reads and
writes. For this to be possible we need to pass the access
function a parameter to tell it whether the access being checked
is a read or a write.
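
Concretely, the access hook's prototype grows a flag, roughly:

    /* Sketch of the updated hook type: isread is true for reads,
     * false for writes.
     */
    typedef CPAccessResult CPAccessFn(CPUARMState *env,
                                      const ARMCPRegInfo *ri,
                                      bool isread);
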
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1454506721-11843-6-git-send-email-peter.maydell@linaro.org
The registers MVBAR and SCR should have the behaviour of trapping to
EL3 if accessed from Secure EL1, but we were incorrectly implementing
them to UNDEF (which would trap to EL1). Fix this by using the new
access_trap_aa32s_el1() access function.
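
A sketch of such an access function, assuming arm_is_secure_below_el3()
and the CP_ACCESS_TRAP_* result codes:

    static CPAccessResult access_trap_aa32s_el1(CPUARMState *env,
                                                const ARMCPRegInfo *ri,
                                                bool isread)
    {
        if (arm_current_el(env) == 3) {
            return CP_ACCESS_OK;
        }
        if (arm_is_secure_below_el3(env)) {
            return CP_ACCESS_TRAP_EL3;
        }
        /* Accesses from elsewhere simply UNDEF */
        return CP_ACCESS_TRAP_UNCATEGORIZED;
    }
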
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1454506721-11843-4-git-send-email-peter.maydell@linaro.org
Implement the MDCR_EL3 register (which is SDCR for AArch32).
For the moment we implement it as reads-as-written.
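
A sketch of a reads-as-written definition backed by a new
cp15.mdcr_el3 field (encoding as assumed here):

    { .name = "MDCR_EL3", .state = ARM_CP_STATE_AA64,
      .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 3, .opc2 = 1,
      .access = PL3_RW, .resetvalue = 0,
      .fieldoffset = offsetof(CPUARMState, cp15.mdcr_el3) },
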
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1454506721-11843-3-git-send-email-peter.maydell@linaro.org
Implement the inputsize > pamax check for Stage 2 translations.
This is CONSTRAINED UNPREDICTABLE and we choose to fault.
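
In outline (local variable names are hypothetical; the AArch32 vs
AArch64 refinements are elided):

    /* An S2 input address size larger than the implemented PA size
     * is a CONSTRAINED UNPREDICTABLE configuration; we choose to
     * fault.
     */
    if (inputsize > pamax) {
        return false;   /* caller turns this into a translation fault */
    }
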
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1453932970-14576-4-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename check_s2_startlevel to check_s2_mmu_setup in preparation
for additional checks.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1453932970-14576-3-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The S2 starting level table size check applies to both AArch32
and AArch64. Move it to common code.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1453932970-14576-2-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The AArch64 system registers DACR32_EL2, IFSR32_EL2, SPSR_IRQ,
SPSR_ABT, SPSR_UND and SPSR_FIQ are visible and fully functional from
EL3 even if the CPU has no EL2 (unlike some others which are RES0
from EL3 in that configuration). Move them from el2_cp_reginfo[] to
v8_cp_reginfo[] so they are always present.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1453227802-9991-1-git-send-email-peter.maydell@linaro.org
The AArch64 FPEXC32_EL2 system register is visible at EL2 and EL3,
and allows those exception levels to read and write the FPEXC
register for a lower exception level that is using AArch32.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-id: 1453132414-8127-1-git-send-email-peter.maydell@linaro.org
The entry offset when taking an exception to AArch64 from a lower
exception level may be 0x400 or 0x600. 0x400 is used if the
implemented exception level immediately lower than the target level
is using AArch64, and 0x600 if it is using AArch32. We were
incorrectly implementing this as checking the exception level
that the exception was taken from. (The two can be different if
for example we take an exception from EL0 to AArch64 EL3; we should
in this case be checking EL2 if EL2 is implemented, and EL1 if
EL2 is not implemented.)
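
A sketch of the corrected computation (names hypothetical; the
target-EL1 case, where the level below is EL0, is elided):

    if (cur_el < new_el) {
        /* The vector offset depends on the register width of the
         * implemented EL immediately below the target EL.
         */
        int lower_el = new_el - 1;
        if (lower_el == 2 && !arm_feature(env, ARM_FEATURE_EL2)) {
            lower_el = 1;
        }
        addr += arm_el_is_aa64(env, lower_el) ? 0x400 : 0x600;
    }
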
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Handling of semihosting calls should depend on the register width
of the calling code, not on that of any higher exception level,
so we need to identify and handle semihosting calls before we
decide whether to deliver the exception as an entry to AArch32
or AArch64. (EXCP_SEMIHOST is also an "internal exception" so
it has no target exception level in the first place.)
This will allow AArch32 EL1 code to use semihosting calls when
running under an AArch64 EL3.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If EL2 or EL3 is present on an AArch64 CPU, then exceptions can be
taken to an exception level which is running AArch32 (if only EL0
and EL1 are present then EL1 must be AArch64 and all exceptions are
taken to AArch64). To support this we need to have a single
implementation of the CPU do_interrupt() method which can handle both
32 and 64 bit exception entry.
Pull the common parts of aarch64_cpu_do_interrupt() and
arm_cpu_do_interrupt() out into a new function which calls
either the AArch32 or AArch64 specific entry code once it has
worked out which one is needed.
We temporarily special-case the handling of EXCP_SEMIHOST to
avoid an assertion in arm_el_is_aa64(); the next patch will
pull all the semihosting handling out to the arm_cpu_do_interrupt()
level (since semihosting semantics depend on the register width
of the calling code, not on that of any higher EL).
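
In outline, the common entry point ends up dispatching on the target
EL's register width (function names as assumed here):

    void arm_cpu_do_interrupt(CPUState *cs)
    {
        ARMCPU *cpu = ARM_CPU(cs);
        CPUARMState *env = &cpu->env;
        unsigned int new_el = env->exception.target_el;

        /* (semihosting special case and logging elided) */
        if (arm_el_is_aa64(env, new_el)) {
            arm_cpu_do_interrupt_aarch64(cs);
        } else {
            arm_cpu_do_interrupt_aarch32(cs);
        }
        /* (interrupt-request bookkeeping elided) */
    }
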
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Move the aarch64_cpu_do_interrupt() function to helper.c. We want
to be able to call this from code that isn't AArch64-only, and
the move allows us to avoid awkward #ifdeffery at the callsite.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
If we have a secure address space, use it in page table walks: when
doing the physical accesses to read descriptors, perform them
through the correct address space.
(The descriptor reads are the only direct physical accesses
made in target-arm/ for CPUs which might have TrustZone.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Implement cpu_get_phys_page_attrs_debug instead of cpu_get_phys_page_debug.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 1449505425-32022-3-git-send-email-peter.maydell@linaro.org
arm_regime_using_lpae_format checks whether the LPAE extension is used
for stage 1 translation regimes. MMU indexes which are not purely
stage 1 do not work with this function.
In the case of ARMMMUIdx_S12NSE0 or ARMMMUIdx_S12NSE1, offset these
values by ARMMMUIdx_S1NSE0 to get the right index indicating a stage 1
translation regime.
Also rename the function to arm_s1_regime_using_lpae_format and update
the comments to reflect the change.
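
That is, roughly (a sketch based on the description above):

    bool arm_s1_regime_using_lpae_format(CPUARMState *env,
                                         ARMMMUIdx mmu_idx)
    {
        if (mmu_idx == ARMMMUIdx_S12NSE0
            || mmu_idx == ARMMMUIdx_S12NSE1) {
            /* Map the combined stage 1+2 indexes onto their
             * stage 1-only equivalents.
             */
            mmu_idx += ARMMMUIdx_S1NSE0;
        }
        return regime_using_lpae_format(env, mmu_idx);
    }
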
Signed-off-by: Alvise Rigo <a.rigo@virtualopensystems.com>
Message-id: 1452854262-19550-1-git-send-email-a.rigo@virtualopensystems.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
QEMU does not generally perform alignment checks. However, the ARM ARM
requires implementation of alignment exceptions for a number of cases
including LDREX, and Windows-on-ARM relies on this.
This change adds the plumbing to enable alignment checks on loads using
MO_ALIGN, adds a do_unaligned_access hook to raise the exception (a
data abort), and uses the new aligned loads in LDREX (for all but
single-byte loads).
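
For example (a sketch; tmp, addr and the DisasContext s are whatever
the surrounding translate.c code provides), an aligned word load is
requested by OR-ing MO_ALIGN into the memory op:

    tcg_gen_qemu_ld_i32(tmp, addr, get_mem_index(s), MO_TEUL | MO_ALIGN);
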
Signed-off-by: Andrew Baumann <Andrew.Baumann@microsoft.com>
Message-id: 1449167808-5656-1-git-send-email-Andrew.Baumann@microsoft.com
[PMM: set WnR bits in syndrome and FSR as appropriate]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In an LPAE format descriptor in ARMv8 the address field extends
up to bit 47, not just bit 39. Correct the masking so we don't
give incorrect results if the output address size is greater
than 40 bits, as it can be for AArch64.
(Note that we don't yet support the new-in-v8 Address Size fault which
should be generated if any translation table entry or TTBR contains
an address with non-zero bits above the most significant bit of the
maximum output address size.)
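
The fix is essentially a wider mask when extracting the table/output
address from the descriptor (local names as assumed here):

    /* Keep bits [47:12] rather than [39:12] of the descriptor */
    descaddr = descriptor & 0x0000fffffffff000ULL;
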
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1448029971-9875-1-git-send-email-peter.maydell@linaro.org
Add BANK_<cpumode> #defines to index banked registers.
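
A sketch of the new indexes (values assumed to match bank_number()'s
existing mapping):

    #define BANK_USRSYS 0
    #define BANK_SVC    1
    #define BANK_ABT    2
    #define BANK_UND    3
    #define BANK_IRQ    4
    #define BANK_FIQ    5
    #define BANK_HYP    6
    #define BANK_MON    7

so that e.g. env->banked_spsr[BANK_SVC] replaces env->banked_spsr[1].
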
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for applying S2 translation to 32-bit S1
page-table walks.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-13-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for applying S2 translation to 64-bit S1
page-table walks.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-12-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Introduce ARMMMUFaultInfo to propagate MMU Fault information
across the MMU translation code path. This is in preparation for
adding Stage-2 translation.
No functional changes.
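
A sketch of the structure (field set as assumed here):

    typedef struct ARMMMUFaultInfo {
        target_ulong s2addr; /* faulting IPA for a stage-2 fault */
        bool stage2;         /* fault occurred in stage-2 translation */
        bool s1ptw;          /* ...while walking the stage-1 tables */
    } ARMMMUFaultInfo;
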
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-11-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Avoid inlining get_phys_addr() to prepare for future recursive use.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-10-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-9-git-send-email-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The starting level for S2 pagetable walks is computed
differently from the S1 starting level. Implement the S2
variant.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1445864527-14520-8-git-send-email-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>