We write idmap_t0sz with SCTLR_EL1.{C,M} clear, but we only have the
guarantee that the kernel Image is clean, not invalidated, in the caches,
and therefore we might read a stale value once the MMU is enabled.
This patch ensures we invalidate the corresponding cacheline after the
write as we do for all other data written before we set SCTLR_EL1.{C,M},
guaranteeing that the value will be visible later. We rely on the DSBs
in __create_page_tables to complete the maintenance.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
After writing the page tables, we use __inval_cache_range to invalidate
any stale cache entries. Strongly Ordered memory accesses are not
ordered w.r.t. cache maintenance instructions, and hence explicit memory
barriers are required to provide this ordering. However,
__inval_cache_range was written to be used on Normal Cacheable memory
once the MMU and caches are on, and does not have any barriers prior to
the DC instructions.
This patch adds a DMB between the page tables being written and the
corresponding cachelines being invalidated, ensuring that the
invalidation makes the new data visible to subsequent cacheable
accesses. A barrier is not required before the prior invalidate as we do
not access the page table memory area prior to this, and earlier
barriers in preserve_boot_args and set_cpu_boot_mode_flag ensure
ordering w.r.t. any stores performed prior to entering Linux.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: c218bca74e ("arm64: Relax the kernel cache requirements for boot")
Signed-off-by: Will Deacon <will.deacon@arm.com>
The page size and the number of translation levels, and hence the supported
virtual address range, are build-time configurables on arm64 whose optimal
values are use case dependent. However, in the current implementation, if
the system's RAM is located at a very high offset, the virtual address range
needs to reflect that merely because the identity mapping, which is only used
to enable or disable the MMU, requires the extended virtual range to map the
physical memory at an equal virtual offset.
This patch relaxes that requirement, by increasing the number of translation
levels for the identity mapping only, and only when actually needed, i.e.,
when system RAM's offset is found to be out of reach at runtime.
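As a rough C sketch of the decision (the real code in head.S does this in
assembly; the function and parameter names below are purely illustrative):

    #include <linux/bitops.h>
    #include <linux/types.h>

    /*
     * Derive the TCR_EL1.T0SZ needed for the identity map to reach the
     * end of the kernel image. 'kernel_phys_end' stands in for
     * __pa(_end); 'va_bits' is the configured VA_BITS.
     */
    static unsigned long compute_idmap_t0sz(u64 kernel_phys_end,
                                            unsigned int va_bits)
    {
        unsigned int bits_needed = fls64(kernel_phys_end - 1);

        if (bits_needed > va_bits)      /* RAM out of reach: shrink T0SZ, */
            return 64 - bits_needed;    /* implying an extra idmap level  */

        return 64 - va_bits;            /* default: same range as TTBR1   */
    }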
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
According to the arm64 boot protocol, registers x1 to x3 should be
zero upon kernel entry, and non-zero values are reserved for future
use. This future use is going to be problematic if we never enforce
the current rules, so start enforcing them now, by emitting a warning
if non-zero values are detected.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This removes the function __calc_phys_offset and all open coded
virtual to physical address translations using the offset kept
in x28.
Instead, just use absolute or PC-relative symbol references as
appropriate when referring to virtual or physical addresses,
respectively.
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Enabling of the MMU is split into two functions, with an align and
a branch in the middle. On arm64, the entire kernel Image is ID mapped
so this is really not necessary, and we can just merge it into a
single function.
This also replaces an open coded adrp/add pair referring to __enable_mmu
with adr_l.
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Replace the confusing virtual/physical address arithmetic with a simple
PC-relative reference.
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This removes the confusing __switch_data object from head.S,
and replaces it with standard PC-relative references to the
various symbols it encapsulates.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The global processor_id is assigned the MIDR_EL1 value of the boot
CPU in the early init code, but is never referenced afterwards.
As the relevance of the MIDR_EL1 value of the boot CPU is debatable
anyway, especially under big.LITTLE, let's remove it before anyone
starts using it.
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
struct cpu_table is an artifact left from the (very) early days of
the arm64 port, and its only real use is to allow the most beautiful
"AArch64 Processor" string to be displayed at boot time.
Really? Yes, really.
Let's get rid of it. In order to avoid another BogoMips-gate, the
aforementioned string is preserved.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 828e9834e9 ("arm64: head: create a new function for setting
the boot_cpu_mode flag") added BOOT_CPU_MODE_EL1, a nonzero value
replacing uses of zero. However it failed to update __boot_cpu_mode
appropriately.
A CPU booted at EL2 writes BOOT_CPU_MODE_EL2 to __boot_cpu_mode[1], and
a CPU booted at EL1 writes BOOT_CPU_MODE_EL1 to __boot_cpu_mode[0].
Later is_hyp_mode_mismatched() determines there to be a mismatch if
__boot_cpu_mode[0] != __boot_cpu_mode[1].
If all CPUs are booted at EL1, __boot_cpu_mode[0] will be set to
BOOT_CPU_MODE_EL1, but __boot_cpu_mode[1] will retain its initial value
of zero, and is_hyp_mode_mismatched will erroneously determine that the
boot modes are mismatched. This hasn't been a problem so far, but later
patches which will make use of is_hyp_mode_mismatched() expect it to
work correctly.
This patch initialises __boot_cpu_mode[1] to BOOT_CPU_MODE_EL1, fixing
the erroneous mismatch detection when all CPUs are booted at EL1.
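In C terms the fix amounts to something like the following sketch (the real
flag is defined in head.S assembly and the constants come from asm/virt.h):

    #include <asm/virt.h>
    #include <linux/types.h>

    u32 __boot_cpu_mode[2] = { BOOT_CPU_MODE_EL2, BOOT_CPU_MODE_EL1 };

    /*
     * Each CPU records its boot mode in one of the two slots; on a
     * homogeneous system the unwritten slot keeps its initial value,
     * so the comparison below now correctly stays false.
     */
    static bool is_hyp_mode_mismatched(void)
    {
        return __boot_cpu_mode[0] != __boot_cpu_mode[1];
    }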
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Another one for the big head.S spring cleaning: the label should
be after the .align or it may point to the padding.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The head.text section is intended to be run at early bootup
before any of the regular kernel mappings have been setup.
Parts of head.text may be freed back into the buddy allocator
due to TEXT_OFFSET, so for security requirements this memory
must not be executable. The suspend/resume/hotplug code path
requires some of these head.S functions to run however which
means they need to be executable. Support these conflicting
requirements by moving the few head.text functions that need
to be executable to the text section which has the appropriate
page table permissions.
Tested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The hyp stub vectors are currently loaded using adr. This
instruction has a +/- 1MB range for the loading address. If
the alignment for sections is changed the address may be more
than 1MB away, resulting in relocation errors. Switch to using
adrp for getting the address to ensure we aren't affected by the
location of __hyp_stub_vectors.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change our PE/COFF header to use the minimum file alignment of
512 bytes (0x200), as mandated by the PE/COFF spec v8.3.
Also update the linker script so that the Image file itself is also a
round multiple of FileAlignment.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Position independent AArch64 code needs to be linked and loaded at the
same relative offset from a 4 KB boundary, or adrp/add and adrp/ldr
pairs will not work correctly. (This is how PC relative symbol
references with a 4 GB reach are emitted)
We need to declare this in the PE/COFF header, otherwise the PE/COFF
loader may load the Image and invoke the stub at an offset which
violates this rule.
Reviewed-by: Roy Franz <roy.franz@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
After the EFI stub has done its business, it jumps into the kernel by
branching to offset #0 of the loaded Image, which is where it expects
to find the header containing a 'branch to stext' instruction.
However, the UEFI spec 2.1.1 states the following regarding PE/COFF
image loading:
"A UEFI image is loaded into memory through the LoadImage() Boot
Service. This service loads an image with a PE32+ format into memory.
This PE32+ loader is required to load all sections of the PE32+ image
into memory."
In other words, it is /not/ required to load parts of the image that are
not covered by a PE/COFF section, so it may not have loaded the header
at the expected offset, as it is not covered by any PE/COFF section.
So instead, jump to 'stext' directly, which is at the base of the
PE/COFF .text section, by supplying a symbol 'stext_offset' to
efi-entry.o which contains the relative offset of stext into the Image.
Also replace other open coded calculations of the same value with a
reference to 'stext_offset'.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The static memory footprint of a kernel Image at boot is larger than the
Image file itself. Things like .bss data and initial page tables are allocated
statically but populated dynamically so their content is not contained in the
Image file.
However, if EFI (or GRUB) has loaded the Image at precisely the desired offset
of base of DRAM + TEXT_OFFSET, the Image will be booted in place, and we have
to make sure that the allocation done by the PE/COFF loader is large enough.
Fix this by growing the PE/COFF .text section to cover the entire static
memory footprint. The part of the section that is not covered by the payload
will be zero initialised by the PE/COFF loader.
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Remove an unused local variable from head.S. It seems this was never
used, even in the initial commit 9703d9d7f7 (arm64: Kernel booting and
initialisation), and is a leftover from a previous implementation of
__calc_phys_offset.
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When booting via UEFI, the kernel Image is loaded at a 4 kB boundary and
the embedded EFI stub is executed in place. The EFI stub relocates the
Image to reside TEXT_OFFSET bytes above a 2 MB boundary, and jumps into
the kernel proper.
In AArch64, PC relative symbol references are emitted using adrp/add or
adrp/ldr pairs, where the offset into a 4 kB page is resolved using a
separate :lo12: relocation. This implicitly assumes that the code will
always be executed at the same relative offset with respect to a 4 kB
boundary, or the references will point to the wrong address.
This means we should link the kernel at a 4 kB aligned base address in
order to remain compatible with the base address the UEFI loader uses
when doing the initial load of Image. So update the code that generates
TEXT_OFFSET to choose a multiple of 4 kB.
At the same time, update the code so it chooses from the interval [0..2MB)
as the author originally intended.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
GICv3 introduces new system registers accessible with the full msr/mrs
syntax (e.g. mrs x0, Sop0_op1_CRn_CRm_op2). However, only recent
binutils understand the new syntax. This patch introduces msr_s/mrs_s
assembly macros which generate the equivalent instructions above and
converts the existing GICv3 code (both drivers/irqchip/ and
arch/arm64/kernel/).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Olof Johansson <olof@lixom.net>
Tested-by: Olof Johansson <olof@lixom.net>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
* tag 'deps-irqchip-gic-3.17' of git://git.infradead.org/users/jcooper/linux:
irqchip: gic-v3: Initial support for GICv3
irqchip: gic: Move some bits of GICv2 to a library-type file
Conflicts:
arch/arm64/Kconfig
This patch allows support for 3 levels of page tables with 64KB page
configuration allowing 48-bit VA space. The pgd is no longer a full
PAGE_SIZE (PTRS_PER_PGD is 64) and (swapper|idmap)_pg_dir are not fully
populated (pgd_alloc falls back to kzalloc).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
This patch adds a create_table_entry macro which is used to populate pgd
and pud entries, also reducing the number of arguments for
create_pgd_entry.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
Rather than having several Kconfig options, define int
ARM64_PGTABLE_LEVELS which will also be useful in converting some of the
pgtable macros.
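The value the option takes follows mechanically from the page size and VA
range; a hedged sketch of that relationship (not the Kconfig logic itself):

    #include <linux/kernel.h>

    /*
     * With 8-byte descriptors each table level resolves (page_shift - 3)
     * bits of VA, so e.g. 4KB/39-bit -> 3, 4KB/48-bit -> 4,
     * 64KB/42-bit -> 2 and 64KB/48-bit -> 3 levels.
     */
    static int pgtable_levels(int va_bits, int page_shift)
    {
        return DIV_ROUND_UP(va_bits - page_shift, page_shift - 3);
    }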
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
This patch implements 4 levels of translation tables since 3 levels
of page tables with 4KB pages cannot support the 40-bit physical address
space described in [1] due to the following issue.
It is a restriction that the kernel logical memory map with 4KB + 3 levels
(0xffffffc000000000-0xffffffffffffffff) cannot cover the RAM region from
544GB to 1024GB in [1]. Specifically, the ARM64 kernel fails to create a
mapping for this region in map_mem since __phys_to_virt for this region
overflows the virtual address range.
If an SoC design follows the document [1], any RAM over 32GB would be placed
from 544GB. Even a 64GB system is supposed to use the region from 544GB
to 576GB for only 32GB of RAM. Naturally, this leads to enabling 4 levels
of page tables to avoid hacking __virt_to_phys and __phys_to_virt.
However, it is recommended that 4 levels of page tables be enabled only
if the memory map is too sparse or there is about 512GB of RAM.
References
----------
[1]: Principles of ARM Memory Maps, White Paper, Issue C
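The overflow can be reproduced with the address arithmetic alone; a small
illustrative sketch using the example layout from [1] (names hypothetical):

    #include <linux/types.h>

    #define PAGE_OFFSET_39BIT  0xffffffc000000000UL  /* 4KB pages, 3 levels */

    static bool linear_map_overflows(void)
    {
        u64 phys_offset = 544ULL << 30;   /* RAM starts at 544GB per [1] */
        u64 ram_end     = 1024ULL << 30;  /* ... and may extend to 1TB   */

        /* __phys_to_virt(x) = x - PHYS_OFFSET + PAGE_OFFSET */
        u64 virt_end = ram_end - phys_offset + PAGE_OFFSET_39BIT;

        /* 480GB of RAM into a 256GB linear map: the VA wraps past 2^64 */
        return virt_end < PAGE_OFFSET_39BIT;
    }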
Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Steve Capper <steve.capper@linaro.org>
[catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
[catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
[catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
The early_ioremap_init() function already handles fixmap pte
initialisation, so upgrade this to cover all of pud/pmd/pte and remove
one page from swapper_pg_dir.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
The arm64 Image header contains a text_offset field which bootloaders
are supposed to read to determine the offset (from a 2MB aligned "start
of memory" per booting.txt) at which to load the kernel. The offset is
not well respected by bootloaders at present, and due to the lack of
variation there is little incentive to support it. This is unfortunate
for the sake of future kernels where we may wish to vary the text offset
(even zeroing it).
This patch adds options to arm64 to enable fuzz-testing of text_offset.
CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random
16-byte aligned value in the range [0..2MB) upon a build of the
kernel. It is recommended that distribution kernels enable randomization
to test bootloaders such that any compliance issues can be fixed early.
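The constraint on the fuzzed value is simple; a sketch of the computation
(the kernel derives it at build time in the arm64 Makefile, so the helper
below is purely illustrative):

    #include <linux/sizes.h>
    #include <linux/types.h>

    /* Pick a 16-byte aligned TEXT_OFFSET in [0, 2MB) from a random seed. */
    static u32 randomize_text_offset(u32 seed)
    {
        return (seed % (SZ_2M / 16)) * 16;
    }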
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently the kernel Image is stripped of everything past the initial
stack, and at runtime the memory is initialised and used by the kernel.
This makes the effective minimum memory footprint of the kernel larger
than the size of the loaded binary, though bootloaders have no mechanism
to identify how large this minimum memory footprint is. This makes it
difficult to choose safe locations to place both the kernel and other
binaries required at boot (DTB, initrd, etc), such that the kernel won't
clobber said binaries or other reserved memory during initialisation.
Additionally when big endian support was added the image load offset was
overlooked, and is currently of an arbitrary endianness, which makes it
difficult for bootloaders to make use of it. It seems that bootloaders
aren't respecting the image load offset at present anyway, and are
assuming that offset 0x80000 will always be correct.
This patch adds an effective image size to the kernel header which
describes the amount of memory from the start of the kernel Image binary
which the kernel expects to use before detecting memory and handling any
memory reservations. This can be used by bootloaders to choose suitable
locations to load the kernel and/or other binaries such that the kernel
will not clobber any memory unexpectedly. As before, memory reservations
are required to prevent the kernel from clobbering these locations
later.
Both the image load offset and the effective image size are forced to be
little-endian regardless of the native endianness of the kernel to
enable bootloaders to load a kernel of arbitrary endianness. Bootloaders
which wish to make use of the load offset can inspect the effective
image size field for a non-zero value to determine if the offset is of a
known endianness. To enable software to determine the endianness of the
kernel as may be required for certain use-cases, a new flags field (also
little-endian) is added to the kernel header to export this information.
The documentation is updated to clarify these details. To discourage
future assumptions regarding the value of text_offset, the value at this
point in time is removed from the main flow of the documentation (though
kept as a compatibility note). Some minor formatting issues in the
documentation are also corrected.
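For reference, the 64-byte Image header this commit extends looks roughly as
follows (field names are illustrative; Documentation/arm64/booting.txt
remains the authoritative description):

    #include <linux/types.h>

    struct arm64_image_header {
        __le32  code0;          /* executable code                      */
        __le32  code1;
        __le64  text_offset;    /* image load offset, little-endian     */
        __le64  image_size;     /* effective Image size, little-endian  */
        __le64  flags;          /* bit 0: kernel endianness (1 == BE)   */
        __le64  res2, res3, res4;   /* reserved                         */
        __le32  magic;          /* "ARM\x64"                            */
        __le32  res5;           /* reserved                             */
    };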
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Kevin Hilman <kevin.hilman@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently we place swapper_pg_dir and idmap_pg_dir below the kernel
image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However,
bootloaders may use portions of this memory below the kernel and we do
not parse the memory reservation list until after the MMU has been
enabled. As such we may clobber some memory a bootloader wishes to have
preserved.
To enable the use of all of this memory by bootloaders (when the
required memory reservations are communicated to the kernel) it is
necessary to move our initial page tables elsewhere. As we currently
have an effectively unbound requirement for memory at the end of the
kernel image for .bss, we can place the page tables here.
This patch moves the initial page table to the end of the kernel image,
after the BSS. As they do not consist of any initialised data they will
be stripped from the kernel Image as with the BSS. The BSS clearing
routine is updated to stop at __bss_stop rather than _end so as to not
clobber the page tables, and memory reservations made redundant by the
new organisation are removed.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently __turn_mmu_on is aligned to 64 bytes to ensure that it doesn't
span any page boundary, which simplifies the idmap and spares us
requiring an additional page table to map half of the function. In
keeping with other important requirements in architecture code, this
fact is undocumented.
Additionally, as the function consists of three instructions totalling
12 bytes with no literal pool data, a smaller alignment of 16 bytes
would be sufficient.
This patch reduces the alignment to 16 bytes and documents the
underlying reason for the alignment. This reduces the required alignment
of the entire .head.text section from 64 bytes to 16 bytes, though it
may still be aligned to a larger value depending on TEXT_OFFSET.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The Generic Interrupt Controller (version 3) offers services that are
similar to GICv2, with a number of additional features:
- Affinity routing based on the CPU MPIDR (ARE)
- System register for the CPU interfaces (SRE)
- Support for more than 8 CPUs
- Locality-specific Peripheral Interrupts (LPIs)
- Interrupt Translation Services (ITS)
This patch adds preliminary support for GICv3 with ARE and SRE,
non-secure mode only. It relies on higher exception levels to grant ARE
and SRE access.
Support for LPI and ITS will be added at a later time.
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Reviewed-by: Zi Shen Lim <zlim@broadcom.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Reviewed-by: Yun Wu <wuyun.wu@huawei.com>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Tested-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Tested-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Acked-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/1404140510-5382-3-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
The CurrentEL system register reports the Current Exception Level
of the CPU. It doesn't say anything about the stack handling, and
yet we compare it to PSR_MODE_EL2t and PSR_MODE_EL2h.
It works by chance because PSR_MODE_EL2t happens to match the right
bits, but that's otherwise a very bad idea. Just check for the EL
value instead.
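In C terms the corrected check boils down to comparing the EL field alone; a
hedged sketch (the affected call sites are in assembly, and the helper name
is illustrative):

    #define CurrentEL_EL1   (1 << 2)    /* CurrentEL.EL lives in bits [3:2] */
    #define CurrentEL_EL2   (2 << 2)

    static inline int booted_at_el2(void)
    {
        unsigned long el;

        asm("mrs %0, CurrentEL" : "=r" (el));
        return (el & 0xc) == CurrentEL_EL2;     /* EL only, no SP selection */
    }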
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: fixed arch/arm64/kernel/efi-entry.S]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux into next
Pull arm64 updates from Catalin Marinas:
- Optimised assembly string/memory routines (based on the AArch64
Cortex Strings library contributed to glibc but re-licensed under
GPLv2)
- Optimised crypto algorithms making use of the ARMv8 crypto extensions
(together with kernel API for using FPSIMD instructions in interrupt
context)
- Ftrace support
- CPU topology parsing from DT
- ESR_EL1 (Exception Syndrome Register) exposed to user space signal
handlers for SIGSEGV/SIGBUS (useful to emulation tools like Qemu)
- 1GB section linear mapping if applicable
- Barriers usage clean-up
- Default pgprot clean-up
Conflicts as per Catalin.
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (57 commits)
arm64: kernel: initialize broadcast hrtimer based clock event device
arm64: ftrace: Add system call tracepoint
arm64: ftrace: Add CALLER_ADDRx macros
arm64: ftrace: Add dynamic ftrace support
arm64: Add ftrace support
ftrace: Add arm64 support to recordmcount
arm64: Add 'notrace' attribute to unwind_frame() for ftrace
arm64: add __ASSEMBLY__ in asm/insn.h
arm64: Fix linker script entry point
arm64: lib: Implement optimized string length routines
arm64: lib: Implement optimized string compare routines
arm64: lib: Implement optimized memcmp routine
arm64: lib: Implement optimized memset routine
arm64: lib: Implement optimized memmove routine
arm64: lib: Implement optimized memcpy routine
arm64: defconfig: enable a few more common/useful options in defconfig
ftrace: Make CALLER_ADDRx macros more generic
arm64: Fix deadlock scenario with smp_send_stop()
arm64: Fix machine_shutdown() definition
arm64: Support arch_irq_work_raise() via self IPIs
...
set_cpu_boot_mode_flag is used to identify which exception levels are
encountered across the system by CPUs trying to enter the kernel. The
basic algorithm is: if a CPU is booting at EL2, it will set a flag at
an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable.
Otherwise, a flag is set at an offset of zero into the same cacheline.
This enables us to check that all CPUs booted at the same exception
level.
This cacheline is written with the stage-1 MMU off (that is, via a
strongly-ordered mapping) and will bypass any clean lines in the cache,
leading to potential coherence problems when the variable is later
checked via the normal, cacheable mapping of the kernel image.
This patch reworks the broken flushing code so that we:
(1) Use a DMB to order the strongly-ordered write of the cacheline
against the subsequent cache-maintenance operation (by-VA
operations only hazard against normal, cacheable accesses).
(2) Use a single dc ivac instruction to invalidate any clean lines
containing a stale copy of the line after it has been updated.
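Expressed as a C-level sketch (the real sequence is assembly in head.S; the
helper name and arguments are illustrative):

    #include <linux/types.h>

    static void publish_boot_mode(u32 *flag, u32 mode)
    {
        *flag = mode;                             /* strongly-ordered store */
        asm volatile("dmb sy" ::: "memory");      /* (1) order store vs. DC */
        asm volatile("dc ivac, %0"                /* (2) drop clean copies  */
                     :: "r" (flag) : "memory");
        asm volatile("dsb sy" ::: "memory");      /* wait for completion    */
    }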
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds PE/COFF header fields to the start of the kernel
Image so that it appears as an EFI application to UEFI firmware.
An EFI stub is included to allow direct booting of the kernel
Image.
Signed-off-by: Mark Salter <msalter@redhat.com>
[Add support in PE/COFF header for signed images]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull second set of arm64 updates from Catalin Marinas:
"A second pull request for this merging window, mainly with fixes and
docs clarification:
- Documentation clarification on CPU topology and booting
requirements
- Additional cache flushing during boot (needed in the presence of
external caches or under virtualisation)
- DMA range invalidation fix for non cache line aligned buffers
- Build failure fix with !COMPAT
- Kconfig update for STRICT_DEVMEM"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: Fix DMA range invalidation for cache line unaligned buffers
arm64: Add missing Kconfig for CONFIG_STRICT_DEVMEM
arm64: fix !CONFIG_COMPAT build failures
Revert "arm64: virt: ensure visibility of __boot_cpu_mode"
arm64: Relax the kernel cache requirements for boot
arm64: Update the TCR_EL1 translation granule definitions for 16K pages
ARM: topology: Make it clear that all CPUs need to be described
Add support for early IO or memory mappings which are needed before the
normal ioremap() is usable. This also adds fixmap support for permanent
fixed mappings such as that used by the earlyprintk device register
region.
Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
With system caches for the host OS or architected caches for guest OS we
cannot easily guarantee that there are no dirty or stale cache lines for
the areas of memory written by the kernel during boot with the MMU off
(therefore non-cacheable accesses).
This patch adds the necessary cache maintenance during boot and relaxes
the booting requirements.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch changes the idmap page table creation during boot to cover
the whole kernel image, allowing functions like cpu_reset() to be safely
called with the physical address.
This patch also simplifies the create_block_map asm macro to no longer
take an idmap argument and always use the phys/virt/end parameters. For
the idmap case, phys == virt.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The __data_loc variable is an unused leftover from the 32 bit arm implementation.
Remove that variable and adjust the __mmap_switched startup routine accordingly.
Signed-off-by: Geoff Levand <geoff@infradead.org> for Huawei, Linaro
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The refactoring of el2_setup split the code setting up EL2 and detecting the
CPU boot mode into separate chunks. This allows the code that sets up EL2 to
run in an endian independent way - i.e. before the endianness is set up in
the respective sctlr registers.
This patch brings secondary_entry up-to-date so that CPUs entering the
kernel through this code path set up EL2 and the CPU boot mode properly.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Mark Rutland <mark.rutand@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The endianness of memory accesses at EL2 and EL1 are configured by
SCTLR_EL2.EE and SCTLR_EL1.EE respectively. When the kernel is booted,
the state of SCTLR_EL{2,1}.EE is unknown, and thus the kernel must
ensure that they are set before performing any memory accesses.
This patch ensures that SCTLR_EL{2,1} are configured appropriately at
boot for kernels of either endianness.
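The controls in question are single bits of the SCTLRs; bit positions per the
ARMv8 ARM, with illustrative macro names:

    #define SCTLR_ELx_EE   (1UL << 25)  /* endianness of EL2/EL1 data accesses */
    #define SCTLR_EL1_E0E  (1UL << 24)  /* endianness of EL0 data accesses     */

    /* A big-endian kernel must set EE before its first memory access at
     * the exception level it was entered in; a little-endian kernel must
     * ensure it is clear. */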
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
[catalin.marinas@arm.com: fix SCTLR_EL1.E0E bit setting in head.S]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently, the code for setting the __boot_cpu_mode flag is munged in
with el2_setup. This makes things difficult on a BE bringup as a
memory access has to have occurred before el2_setup, which is the place
where we'd like to set the endianness on the current EL.
Create a new function for setting __boot_cpu_mode and have el2_setup
return the mode the CPU booted in. Also define a new constant in virt.h,
BOOT_CPU_MODE_EL1, for readability.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The arm64 kernel has an internal holding pen, which is necessary for
some systems where we can't bring CPUs online individually and must hold
multiple CPUs in a safe area until the kernel is able to handle them.
The current SMP infrastructure for arm64 is closely coupled to this
holding pen, and alternative boot methods must launch CPUs into the pen,
where they sit before they are launched into the kernel proper.
With PSCI (and possibly other future boot methods), we can bring CPUs
online individually, and need not perform the secondary_holding_pen
dance. Instead, this patch factors the holding pen management code out
to the spin-table boot method code, as it is the only boot method
requiring the pen.
A new entry point for secondaries, secondary_entry is added for other
boot methods to use, which bypasses the holding pen and its associated
overhead when bringing CPUs online. The smp.pen.text section is also
removed, as the pen can live in head.text without problem.
The cpu_operations structure is extended with two new functions,
cpu_boot and cpu_postboot, for bringing a cpu into the kernel and
performing any post-boot cleanup required by a bootmethod (e.g.
resetting the secondary_holding_pen_release to INVALID_HWID).
Documentation is added for cpu_operations.
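A trimmed, hedged sketch of the extended structure (only the hooks discussed
above are shown; the full definition and its documentation are in the patch):

    struct cpu_operations {
        const char  *name;
        /* ... existing init/prepare hooks ... */
        int         (*cpu_boot)(unsigned int cpu);  /* bring a CPU into the kernel   */
        void        (*cpu_postboot)(void);          /* post-boot cleanup on that CPU */
    };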
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Expand the arm64 image header to allow for co-existence with the
PE/COFF header required by the EFI stub. The PE/COFF format
requires the "MZ" header to be at offset 0, and the offset
to the PE/COFF header to be at offset 0x3c. The image
header is expanded to allow 2 instructions at the beginning
to accommodate a benign instruction at offset 0 that includes
the "MZ" header, a magic number, and the offset to the PE/COFF
header.
Signed-off-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The reg property of the cpu nodes in the DT now contains all the
affinity levels (MPIDR[39:32] and MPIDR[23:0]) and that's what
boot_secondary() writes in the pen, so increase the mask in
secondary_holding_pen accordingly.
Signed-off-by: Javi Merino <javi.merino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds support for "earlyprintk=" parameter on the kernel
command line. The format is:
earlyprintk=<name>[,<addr>][,<options>]
where <name> is the name of the (UART) device, e.g. "pl011", <addr> is
the I/O address. The <options> aren't currently used.
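A typical invocation, with a purely illustrative UART base address, would be:

    earlyprintk=pl011,0x9000000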
The mapping of the earlyprintk device is done very early during kernel
boot and there are restrictions on which functions it can call. A
special early_io_map() function is added which creates the mapping from
the pre-defined EARLY_IOBASE to the device I/O address passed via the
kernel parameter. The pgd entry corresponding to EARLY_IOBASE is
pre-populated in head.S during kernel boot.
Only PL011 is currently supported and it is assumed that the interface
is already initialised by the boot loader before the kernel is started.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
The architecture doesn't mandate any reset value for vttbr_el2.
Better set it to a known value before some HYP code gets confused.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
If booted in EL2, install a dummy hypervisor whose only purpose
is to be replaced by a full-fledged one.
A minimal API allows us to:
- obtain the current HYP vectors (__hyp_get_vectors)
- set new HYP vectors (__hyp_set_vectors)
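In C, the stub's interface amounts to two calls; prototypes sketched here
(both are backed by HVC calls into the stub's vectors):

    #include <linux/types.h>

    phys_addr_t __hyp_get_vectors(void);             /* return the current VBAR_EL2 */
    void __hyp_set_vectors(phys_addr_t vectors_pa);  /* install new EL2 vectors     */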
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>