Commit Graph

106 Commits

Author SHA1 Message Date
Will Deacon 4b3dc9679c arm64: force CONFIG_SMP=y and remove redundant #ifdefs
Nobody seems to be producing !SMP systems anymore, so this is just
becoming a source of kernel bugs, particularly if people want to use
coherent DMA with non-shared pages.

This patch forces CONFIG_SMP=y for arm64, removing a modest amount of
code in the process.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:40 +01:00
Ard Biesheuvel 5dfe9d7d23 arm64: reduce ID map to a single page
Commit ea8c2e1124 ("arm64: Extend the idmap to the whole kernel
image") changed the early page table code so that the entire kernel
Image is covered by the identity map. This allows functions that
need to enable or disable the MMU to reside anywhere in the kernel
Image.

However, this change has the unfortunate side effect that the Image
cannot cross a physical 512 MB alignment boundary anymore, since the
early page table code cannot deal with the Image crossing a /virtual/
512 MB alignment boundary.

So instead, reduce the ID map to a single page, which is populated by
the contents of the .idmap.text section. Only three functions reside
there at the moment: __enable_mmu(), cpu_resume_mmu() and cpu_reset().
If new code is introduced that needs to manipulate the MMU state, it
should be added to this section as well.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-06-02 17:44:51 +01:00
Ard Biesheuvel 61bd93ce80 arm64: use fixmap region for permanent FDT mapping
Currently, the FDT blob needs to be in the same 512 MB region as
the kernel, so that it can be mapped into the kernel virtual memory
space very early on using a minimal set of statically allocated
translation tables.

Now that we have early fixmap support, we can relax this restriction,
by moving the permanent FDT mapping to the fixmap region instead.
This way, the FDT blob may be anywhere in memory.

This also moves the vetting of the FDT to mmu.c, since the early
init code in head.S does not handle mapping of the FDT anymore.
At the same time, fix up some comments in head.S that have gone stale.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-06-02 16:31:33 +01:00
Mark Rutland 0c20856c26 arm64: head.S: ensure idmap_t0sz is visible
We write idmap_t0sz with SCTLR_EL1.{C,M} clear, but we only have the
guarantee that the kernel Image is clean in the caches, but not invalid, and
therefore we might read a stale value once the MMU is enabled.

This patch ensures we invalidate the corresponding cacheline after the
write, as we do for all other data written before we set SCTLR_EL1.{C,M},
guaranteeing that the value will be visible later. We rely on the DSBs
in __create_page_tables to complete the maintenance.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-24 15:13:58 +00:00
Mark Rutland 91d57155dc arm64: head.S: ensure visibility of page tables
After writing the page tables, we use __inval_cache_range to invalidate
any stale cache entries. Strongly Ordered memory accesses are not
ordered w.r.t. cache maintenance instructions, and hence explicit memory
barriers are required to provide this ordering. However,
__inval_cache_range was written to be used on Normal Cacheable memory
once the MMU and caches are on, and does not have any barriers prior to
the DC instructions.

This patch adds a DMB between the page tables being written and the
corresponding cachelines being invalidated, ensuring that the
invalidation makes the new data visible to subsequent cacheable
accesses. A barrier is not required before the prior invalidate as we do
not access the page table memory area prior to this, and earlier
barriers in preserve_boot_args and set_cpu_boot_mode_flag ensure
ordering w.r.t. any stores performed prior to entering Linux.
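
As a rough C rendering of the ordering described above (the kernel does
this in head.S assembly; the sketch assumes an AArch64 compiler and is
illustrative only):

  #include <stdint.h>

  /* Publish one early page table entry written with the MMU off, so
   * that a later cacheable access observes the new value. */
  static void publish_table_entry(uint64_t *slot, uint64_t entry)
  {
          *slot = entry;                        /* strongly-ordered write */
          asm volatile("dmb sy" ::: "memory");  /* order write w.r.t. DC op */
          /* invalidate the possibly-stale line by VA to the PoC */
          asm volatile("dc ivac, %0" :: "r"(slot) : "memory");
          asm volatile("dsb sy" ::: "memory");  /* complete the maintenance */
  }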

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: c218bca74e ("arm64: Relax the kernel cache requirements for boot")
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-24 14:36:35 +00:00
Ard Biesheuvel dd006da216 arm64: mm: increase VA range of identity map
The page size and the number of translation levels, and hence the supported
virtual address range, are build-time configurables on arm64 whose optimal
values are use case dependent. However, in the current implementation, if
the system's RAM is located at a very high offset, the virtual address range
needs to reflect that merely because the identity mapping, which is only used
to enable or disable the MMU, requires the extended virtual range to map the
physical memory at an equal virtual offset.

This patch relaxes that requirement, by increasing the number of translation
levels for the identity mapping only, and only when actually needed, i.e.,
when system RAM's offset is found to be out of reach at runtime.
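
For illustration: with a 39-bit VA (T0SZ == 25) and 4KB pages, the ID
map can only reach 512 GB, so a kernel loaded at, say, 544 GB (a
hypothetical offset) would lower the idmap T0SZ to 24 (40 bits of VA)
and add one extra translation level, for the identity mapping only.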

Tested-by: Laura Abbott <lauraa@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-23 11:35:29 +00:00
Ard Biesheuvel da9c177de8 arm64: enforce x1|x2|x3 == 0 upon kernel entry as per boot protocol
According to the arm64 boot protocol, registers x1 to x3 should be
zero upon kernel entry, and non-zero values are reserved for future
use. This future use is going to be problematic if we never enforce
the current rules, so start enforcing them now, by emitting a warning
if non-zero values are detected.
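
A minimal sketch of the check this enables (boot_args[] is filled in by
the early assembly; the message text here is illustrative):

  extern unsigned long long boot_args[4];  /* original x0-x3, saved in head.S */

  static void verify_boot_args(void)
  {
          if (boot_args[1] || boot_args[2] || boot_args[3])
                  pr_err("WARNING: x1-x3 nonzero in violation of boot protocol\n");
  }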

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:02 +00:00
Ard Biesheuvel 6f4d57fa70 arm64: remove __calc_phys_offset
This removes the function __calc_phys_offset and all open coded
virtual to physical address translations using the offset kept
in x28.

Instead, just use absolute or PC-relative symbol references as
appropriate when referring to virtual or physical addresses,
respectively.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:02 +00:00
Ard Biesheuvel 8b0a95753a arm64: merge __enable_mmu and __turn_mmu_on
Enabling of the MMU is split into two functions, with an align and
a branch in the middle. On arm64, the entire kernel Image is ID mapped
so this is really not necessary, and we can just merge it into a
single function.

Also replace an open-coded adrp/add reference to __enable_mmu with
adr_l.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:02 +00:00
Ard Biesheuvel b1c98297fe arm64: use PC-relative reference for secondary_holding_pen_release
Replace the confusing virtual/physical address arithmetic with a simple
PC-relative reference.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:01 +00:00
Ard Biesheuvel a871d354f7 arm64: remove __switch_data object from head.S
This removes the confusing __switch_data object from head.S,
and replaces it with standard PC-relative references to the
various symbols it encapsulates.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:01 +00:00
Ard Biesheuvel a44ef51799 arm64: remove processor_id
The global processor_id is assigned the MIDR_EL1 value of the boot
CPU in the early init code, but is never referenced afterwards.

As the relevance of the MIDR_EL1 value of the boot CPU is debatable
anyway, especially under big.LITTLE, let's remove it before anyone
starts using it.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:01 +00:00
Marc Zyngier a591ede4cd arm64: Get rid of struct cpu_table
struct cpu_table is an artifact left from the (very) early days of
the arm64 port, and its only real use is to allow the most beautiful
"AArch64 Processor" string to be displayed at boot time.

Really? Yes, really.

Let's get rid of it. In order to avoid another BogoMips-gate, the
aforementioned string is preserved.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:00 +00:00
Mark Rutland 424a383824 arm64: fix hyp mode mismatch detection
Commit 828e9834e9 ("arm64: head: create a new function for setting
the boot_cpu_mode flag") added BOOT_CPU_MODE_EL1, a nonzero value
replacing uses of zero. However it failed to update __boot_cpu_mode
appropriately.

A CPU booted at EL2 writes BOOT_CPU_MODE_EL2 to __boot_cpu_mode[1], and
a CPU booted at EL1 writes BOOT_CPU_MODE_EL1 to __boot_cpu_mode[0].
Later, is_hyp_mode_mismatched() determines there to be a mismatch if
__boot_cpu_mode[0] != __boot_cpu_mode[1].

If all CPUs are booted at EL1, __boot_cpu_mode[0] will be set to
BOOT_CPU_MODE_EL1, but __boot_cpu_mode[1] will retain its initial value
of zero, and is_hyp_mode_mismatched() will erroneously determine that the
boot modes are mismatched. This hasn't been a problem so far, but later
patches which will make use of is_hyp_mode_mismatched() expect it to
work correctly.

This patch initialises __boot_cpu_mode[1] to BOOT_CPU_MODE_EL1, fixing
the erroneous mismatch detection when all CPUs are booted at EL1.
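
In C terms, the flag words and the check look roughly like this (a
sketch; the flags themselves are written from head.S with the MMU off):

  #include <stdbool.h>
  #include <stdint.h>

  #define BOOT_CPU_MODE_EL1       (0xe11)
  #define BOOT_CPU_MODE_EL2       (0xe12)

  /* [0] is written by EL1-booted CPUs, [1] by EL2-booted CPUs; the
   * initialiser reflects this fix. */
  uint32_t __boot_cpu_mode[2] = { BOOT_CPU_MODE_EL2, BOOT_CPU_MODE_EL1 };

  static inline bool is_hyp_mode_mismatched(void)
  {
          return __boot_cpu_mode[0] != __boot_cpu_mode[1];
  }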

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-17 16:58:55 +00:00
Ard Biesheuvel 947bb7587f arm64: put __boot_cpu_mode label after alignment instead of before
Another one for the big head.S spring cleaning: the label should
be after the .align or it may point to the padding.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-03-14 11:02:26 +00:00
Laura Abbott 034edabe6c arm64: Move some head.text functions to executable section
The head.text section is intended to be run at early bootup
before any of the regular kernel mappings have been set up.
Parts of head.text may be freed back into the buddy allocator
due to TEXT_OFFSET, so for security requirements this memory
must not be executable. The suspend/resume/hotplug code path,
however, requires some of these head.S functions to run, which
means they need to be executable. Support these conflicting
requirements by moving the few head.text functions that need
to be executable to the text section, which has the appropriate
page table permissions.

Tested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-26 17:19:47 +00:00
Laura Abbott ac2dec5f6c arm64: Switch to adrp for loading the stub vectors
The hyp stub vectors are currently loaded using adr. This
instruction has a +/- 1MB range for the load address. If
the alignment of sections is changed, the address may be more
than 1MB away, resulting in relocation errors. Switch to using
adrp to get the address, ensuring we aren't affected by the
location of __hyp_stub_vectors.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-25 15:56:44 +00:00
Ard Biesheuvel a352ea3e19 arm64/efi: set PE/COFF file alignment to 512 bytes
Change our PE/COFF header to use the minimum file alignment of
512 bytes (0x200), as mandated by the PE/COFF spec v8.3.

Also update the linker script so that the Image file itself is a
round multiple of FileAlignment.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2014-11-05 09:03:09 +01:00
Ard Biesheuvel ea6bc80d18 arm64/efi: set PE/COFF section alignment to 4 KB
Position independent AArch64 code needs to be linked and loaded at the
same relative offset from a 4 KB boundary, or adrp/add and adrp/ldr
pairs will not work correctly. (This is how PC-relative symbol
references with a 4 GB reach are emitted.)

We need to declare this in the PE/COFF header, otherwise the PE/COFF
loader may load the Image and invoke the stub at an offset which
violates this rule.

Reviewed-by: Roy Franz <roy.franz@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2014-11-05 09:03:05 +01:00
Ard Biesheuvel 95b395963f arm64/efi: efistub: jump to 'stext' directly, not through the header
After the EFI stub has done its business, it jumps into the kernel by
branching to offset #0 of the loaded Image, which is where it expects
to find the header containing a 'branch to stext' instruction.

However, the UEFI spec 2.1.1 states the following regarding PE/COFF
image loading:
"A UEFI image is loaded into memory through the LoadImage() Boot
Service. This service loads an image with a PE32+ format into memory.
This PE32+ loader is required to load all sections of the PE32+ image
into memory."

In other words, it is /not/ required to load parts of the image that are
not covered by a PE/COFF section, so it may not have loaded the header
at the expected offset, as it is not covered by any PE/COFF section.

So instead, jump to 'stext' directly, which is at the base of the
PE/COFF .text section, by supplying a symbol 'stext_offset' to
efi-entry.o which contains the relative offset of stext into the Image.
Also replace other open-coded calculations of the same value with a
reference to 'stext_offset'.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2014-11-05 09:02:59 +01:00
Ard Biesheuvel c16173fa56 arm64/efi: efistub: cover entire static mem footprint in PE/COFF .text
The static memory footprint of a kernel Image at boot is larger than the
Image file itself. Things like .bss data and initial page tables are allocated
statically but populated dynamically so their content is not contained in the
Image file.

However, if EFI (or GRUB) has loaded the Image at precisely the desired offset
of base of DRAM + TEXT_OFFSET, the Image will be booted in place, and we have
to make sure that the allocation done by the PE/COFF loader is large enough.

Fix this by growing the PE/COFF .text section to cover the entire static
memory footprint. The part of the section that is not covered by the payload
will be zero initialised by the PE/COFF loader.
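
(In PE/COFF terms: the .text section's VirtualSize grows to cover the
region up to _end, while SizeOfRawData still matches the file contents;
a PE loader zero-fills the part of a section not backed by raw data.)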

Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:18 +01:00
Geoff Levand 5843be2279 arm64: Remove unused variable in head.S
Remove an unused local variable from head.S. It seems this was never
used, even in the initial commit 9703d9d7f7 ("arm64: Kernel booting and
initialisation"), and is a leftover from a previous implementation of
__calc_phys_offset.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-26 19:24:00 +01:00
Ard Biesheuvel 4190312beb arm64: align randomized TEXT_OFFSET on 4 kB boundary
When booting via UEFI, the kernel Image is loaded at a 4 kB boundary and
the embedded EFI stub is executed in place. The EFI stub relocates the
Image to reside TEXT_OFFSET bytes above a 2 MB boundary, and jumps into
the kernel proper.

In AArch64, PC relative symbol references are emitted using adrp/add or
adrp/ldr pairs, where the offset into a 4 kB page is resolved using a
separate :lo12: relocation. This implicitly assumes that the code will
always be executed at the same relative offset with respect to a 4 kB
boundary, or the references will point to the wrong address.

This means we should link the kernel at a 4 kB aligned base address in
order to remain compatible with the base address the UEFI loader uses
when doing the initial load of Image. So update the code that generates
TEXT_OFFSET to choose a multiple of 4 kB.

At the same time, update the code so it chooses from the interval [0..2MB)
as the author originally intended.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-19 19:26:09 +01:00
Catalin Marinas 72c5839515 arm64: gicv3: Allow GICv3 compilation with older binutils
GICv3 introduces new system registers accessible with the full msr/mrs
syntax (e.g. mrs x0, Sop0_op1_CRm_CRn_op2). However, only recent
binutils understand the new syntax. This patch introduces msr_s/mrs_s
assembly macros which generate the equivalent instructions above and
converts the existing GICv3 code (both drivers/irqchip/ and
arch/arm64/kernel/).
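
As an illustrative sketch (not the kernel's actual macro text), the
instruction word such macros emit can be computed from the system
register move encoding:

  #include <stdint.h>

  /* Word that "mrs xRt, S<op0>_<op1>_C<CRn>_C<CRm>_<op2>" assembles
   * to (op0 is 2 or 3), suitable for emission via .inst; the MSR
   * (write) base is 0xd5000000 instead. */
  static inline uint32_t aarch64_mrs_insn(uint32_t op0, uint32_t op1,
                                          uint32_t crn, uint32_t crm,
                                          uint32_t op2, uint32_t rt)
  {
          return 0xd5200000u | (op0 << 19) | (op1 << 16) |
                 (crn << 12) | (crm << 8) | (op2 << 5) | rt;
  }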

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Olof Johansson <olof@lixom.net>
Tested-by: Olof Johansson <olof@lixom.net>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
2014-07-25 13:12:15 +01:00
Catalin Marinas ecb3c2bbf2 Merge tag 'deps-irqchip-gic-3.17' of git://git.infradead.org/users/jcooper/linux
* tag 'deps-irqchip-gic-3.17' of git://git.infradead.org/users/jcooper/linux:
  irqchip: gic-v3: Initial support for GICv3
  irqchip: gic: Move some bits of GICv2 to a library-type file

Conflicts:
	arch/arm64/Kconfig
2014-07-25 13:03:22 +01:00
Catalin Marinas 383c279911 arm64: Add support for 48-bit VA space with 64KB page configuration
This patch allows support for 3 levels of page tables with 64KB page
configuration allowing 48-bit VA space. The pgd is no longer a full
PAGE_SIZE (PTRS_PER_PGD is 64) and (swapper|idmap)_pg_dir are not fully
populated (pgd_alloc falls back to kzalloc).
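
The arithmetic, for illustration: a 64KB granule leaves a 16-bit page
offset and resolves 13 bits per level (8192 eight-byte entries per
table), so a 48-bit VA needs 48 - 16 - 2*13 = 6 bits from the top
level, i.e. 2^6 = 64 pgd entries occupying 512 bytes rather than a
full page.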

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:28:15 +01:00
Catalin Marinas b4a0d8b377 arm64: Clean up the initial page table creation in head.S
This patch adds a create_table_entry macro which is used to populate pgd
and pud entries, also reducing the number of arguments for
create_pgd_entry.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:28:01 +01:00
Catalin Marinas abe669d7e1 arm64: Convert bool ARM64_x_LEVELS to int ARM64_PGTABLE_LEVELS
Rather than having several Kconfig options, define int
ARM64_PGTABLE_LEVELS which will be also useful in converting some of the
pgtable macros.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:27:46 +01:00
Jungseok Lee c79b954bf6 arm64: mm: Implement 4 levels of translation tables
This patch implements 4 levels of translation tables, since 3 levels
of page tables with 4KB pages cannot support the 40-bit physical
address space described in [1], due to the following issue.

The kernel logical memory map with 4KB pages and 3 levels
(0xffffffc000000000-0xffffffffffffffff) cannot cover the RAM region
from 544GB to 1024GB described in [1]. Specifically, the ARM64 kernel
fails to create the mapping for this region in the map_mem function,
since __phys_to_virt for this region overflows the address space.

If an SoC design follows the document [1], any RAM beyond 32GB would be
placed starting at 544GB; even a 64GB system is supposed to use the
region from 544GB to 576GB for only 32GB of RAM. The natural solution
is to enable 4 levels of page tables, rather than hacking
__virt_to_phys and __phys_to_virt.

However, it is recommended that 4 levels of page tables be enabled only
if the memory map is too sparse or there is about 512GB of RAM.
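
To illustrate the overflow: with PAGE_OFFSET = 0xffffffc000000000 the
linear map spans just 2^38 bytes (256GB), and 0xffffffc000000000 +
0x4000000000 already wraps past 2^64, so RAM more than 256GB above
PHYS_OFFSET is unreachable with 3 levels.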

References
----------
[1]: Principles of ARM Memory Maps, White Paper, Issue C

Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Steve Capper <steve.capper@linaro.org>
[catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
[catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
[catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:27:40 +01:00
Catalin Marinas 7edd88ad7e arm64: Do not initialise the fixmap page tables in head.S
The early_ioremap_init() function already handles fixmap pte
initialisation, so upgrade this to cover all of pud/pmd/pte and remove
one page from swapper_pg_dir.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:27:00 +01:00
Mark Rutland da57a369d3 arm64: Enable TEXT_OFFSET fuzzing
The arm64 Image header contains a text_offset field which bootloaders
are supposed to read to determine the offset (from a 2MB aligned "start
of memory" per booting.txt) at which to load the kernel. The offset is
not well respected by bootloaders at present, and due to the lack of
variation there is little incentive to support it. This is unfortunate
for the sake of future kernels where we may wish to vary the text offset
(even zeroing it).

This patch adds options to arm64 to enable fuzz-testing of text_offset.
CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random
16-byte aligned value in the range [0..2MB) upon a build of the
kernel. It is recommended that distribution kernels enable randomization
to test bootloaders such that any compliance issues can be fixed early.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-10 12:36:58 +01:00
Mark Rutland a2c1d73b94 arm64: Update the Image header
Currently the kernel Image is stripped of everything past the initial
stack, and at runtime the memory is initialised and used by the kernel.
This makes the effective minimum memory footprint of the kernel larger
than the size of the loaded binary, though bootloaders have no mechanism
to identify how large this minimum memory footprint is. This makes it
difficult to choose safe locations to place both the kernel and other
binaries required at boot (DTB, initrd, etc), such that the kernel won't
clobber said binaries or other reserved memory during initialisation.

Additionally when big endian support was added the image load offset was
overlooked, and is currently of an arbitrary endianness, which makes it
difficult for bootloaders to make use of it. It seems that bootloaders
aren't respecting the image load offset at present anyway, and are
assuming that offset 0x80000 will always be correct.

This patch adds an effective image size to the kernel header which
describes the amount of memory from the start of the kernel Image binary
which the kernel expects to use before detecting memory and handling any
memory reservations. This can be used by bootloaders to choose suitable
locations to load the kernel and/or other binaries such that the kernel
will not clobber any memory unexpectedly. As before, memory reservations
are required to prevent the kernel from clobbering these locations
later.

Both the image load offset and the effective image size are forced to be
little-endian regardless of the native endianness of the kernel to
enable bootloaders to load a kernel of arbitrary endianness. Bootloaders
which wish to make use of the load offset can inspect the effective
image size field for a non-zero value to determine if the offset is of a
known endianness. To enable software to determine the endianness of the
kernel as may be required for certain use-cases, a new flags field (also
little-endian) is added to the kernel header to export this information.
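
After this change the 64-byte header looks roughly as follows (a sketch
in C; all multi-byte fields are little-endian, and booting.txt remains
the authoritative layout):

  #include <stdint.h>

  struct arm64_image_header {
          uint32_t code0;        /* executable code */
          uint32_t code1;        /* executable code */
          uint64_t text_offset;  /* image load offset */
          uint64_t image_size;   /* effective image size */
          uint64_t flags;        /* kernel flags; bit 0: 0 = LE, 1 = BE */
          uint64_t res2;         /* reserved */
          uint64_t res3;         /* reserved */
          uint64_t res4;         /* reserved */
          uint32_t magic;        /* 0x644d5241, "ARM\x64" */
          uint32_t res5;         /* reserved */
  };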

The documentation is updated to clarify these details. To discourage
future assumptions regarding the value of text_offset, the value at this
point in time is removed from the main flow of the documentation (though
kept as a compatibility note). Some minor formatting issues in the
documentation are also corrected.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Kevin Hilman <kevin.hilman@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-10 12:36:40 +01:00
Mark Rutland bd00cd5f8c arm64: place initial page tables above the kernel
Currently we place swapper_pg_dir and idmap_pg_dir below the kernel
image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However,
bootloaders may use portions of this memory below the kernel and we do
not parse the memory reservation list until after the MMU has been
enabled. As such we may clobber some memory a bootloader wishes to have
preserved.

To enable the use of all of this memory by bootloaders (when the
required memory reservations are communicated to the kernel) it is
necessary to move our initial page tables elsewhere. As we currently
have an effectively unbounded requirement for memory at the end of the
kernel image for .bss, we can place the page tables here.

This patch moves the initial page table to the end of the kernel image,
after the BSS. As they do not consist of any initialised data they will
be stripped from the kernel Image as with the BSS. The BSS clearing
routine is updated to stop at __bss_stop rather than _end so as to not
clobber the page tables, and memory reservations made redundant by the
new organisation are removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-10 12:36:12 +01:00
Mark Rutland 909a4069da arm64: head.S: remove unnecessary function alignment
Currently __turn_mmu_on is aligned to 64 bytes to ensure that it doesn't
span any page boundary, which simplifies the idmap and spares us from
requiring an additional page table to map half of the function. In
keeping with other important requirements in architecture code, this
fact is undocumented.

Additionally, as the function consists of three instructions totalling
12 bytes with no literal pool data, a smaller alignment of 16 bytes
would be sufficient.

This patch reduces the alignment to 16 bytes and documents the
underlying reason for the alignment. This reduces the required alignment
of the entire .head.text section from 64 bytes to 16 bytes, though it
may still be aligned to a larger value depending on TEXT_OFFSET.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-10 12:35:56 +01:00
Marc Zyngier 021f653791 irqchip: gic-v3: Initial support for GICv3
The Generic Interrupt Controller (version 3) offers services that are
similar to GICv2, with a number of additional features:
- Affinity routing based on the CPU MPIDR (ARE)
- System register for the CPU interfaces (SRE)
- Support for more than 8 CPUs
- Locality-specific Peripheral Interrupts (LPIs)
- Interrupt Translation Services (ITS)

This patch adds preliminary support for GICv3 with ARE and SRE,
non-secure mode only. It relies on higher exception levels to grant ARE
and SRE access.

Support for LPI and ITS will be added at a later time.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Reviewed-by: Zi Shen Lim <zlim@broadcom.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Reviewed-by: Yun Wu <wuyun.wu@huawei.com>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Tested-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Tested-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Acked-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/1404140510-5382-3-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
2014-07-08 22:11:47 +00:00
Marc Zyngier 974c8e450b arm64: fix el2_setup check of CurrentEL
The CurrentEL system register reports the Current Exception Level
of the CPU. It doesn't say anything about the stack handling, and
yet we compare it to PSR_MODE_EL2t and PSR_MODE_EL2h.

It works by chance because PSR_MODE_EL2t happens to match the right
bits, but that's otherwise a very bad idea. Just check for the EL
value instead.
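
(For reference: CurrentEL reports the exception level in bits [3:2], so
EL2 reads back as 0x8, which merely coincides with PSR_MODE_EL2t (0x8);
PSR_MODE_EL2h is 0x9. Comparing against the EL field value for EL2,
i.e. 2 << 2, is what is architecturally meant.)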

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: fixed arch/arm64/kernel/efi-entry.S]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-04 16:16:52 +01:00
Linus Torvalds cc07aabc53 Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux into next

Pull arm64 updates from Catalin Marinas:
 - Optimised assembly string/memory routines (based on the AArch64
   Cortex Strings library contributed to glibc but re-licensed under
   GPLv2)
 - Optimised crypto algorithms making use of the ARMv8 crypto extensions
   (together with kernel API for using FPSIMD instructions in interrupt
   context)
 - Ftrace support
 - CPU topology parsing from DT
 - ESR_EL1 (Exception Syndrome Register) exposed to user space signal
   handlers for SIGSEGV/SIGBUS (useful to emulation tools like Qemu)
 - 1GB section linear mapping if applicable
 - Barriers usage clean-up
 - Default pgprot clean-up

Conflicts as per Catalin.

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (57 commits)
  arm64: kernel: initialize broadcast hrtimer based clock event device
  arm64: ftrace: Add system call tracepoint
  arm64: ftrace: Add CALLER_ADDRx macros
  arm64: ftrace: Add dynamic ftrace support
  arm64: Add ftrace support
  ftrace: Add arm64 support to recordmcount
  arm64: Add 'notrace' attribute to unwind_frame() for ftrace
  arm64: add __ASSEMBLY__ in asm/insn.h
  arm64: Fix linker script entry point
  arm64: lib: Implement optimized string length routines
  arm64: lib: Implement optimized string compare routines
  arm64: lib: Implement optimized memcmp routine
  arm64: lib: Implement optimized memset routine
  arm64: lib: Implement optimized memmove routine
  arm64: lib: Implement optimized memcpy routine
  arm64: defconfig: enable a few more common/useful options in defconfig
  ftrace: Make CALLER_ADDRx macros more generic
  arm64: Fix deadlock scenario with smp_send_stop()
  arm64: Fix machine_shutdown() definition
  arm64: Support arch_irq_work_raise() via self IPIs
  ...
2014-06-06 10:43:28 -07:00
Will Deacon d0488597a1 arm64: head: fix cache flushing and barriers in set_cpu_boot_mode_flag
set_cpu_boot_mode_flag is used to identify which exception levels are
encountered across the system by CPUs trying to enter the kernel. The
basic algorithm is: if a CPU is booting at EL2, it will set a flag at
an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable.
Otherwise, a flag is set at an offset of zero into the same cacheline.
This enables us to check that all CPUs booted at the same exception
level.

This cacheline is written with the stage-1 MMU off (that is, via a
strongly-ordered mapping) and will bypass any clean lines in the cache,
leading to potential coherence problems when the variable is later
checked via the normal, cacheable mapping of the kernel image.

This patch reworks the broken flushing code so that we:

  (1) Use a DMB to order the strongly-ordered write of the cacheline
      against the subsequent cache-maintenance operation (by-VA
      operations only hazard against normal, cacheable accesses).

  (2) Use a single dc ivac instruction to invalidate any clean lines
      containing a stale copy of the line after it has been updated.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09 17:04:12 +01:00
Mark Salter 3c7f255039 arm64: efi: add EFI stub
This patch adds PE/COFF header fields to the start of the kernel
Image so that it appears as an EFI application to UEFI firmware.
An EFI stub is included to allow direct booting of the kernel
Image.
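
(The header's first instruction doubles as the PE/COFF signature here:
"add x13, x18, #0x16" assembles to 0x91005a4d, whose first two
little-endian bytes are 0x4d 0x5a, i.e. "MZ", so offset 0 holds a
benign instruction and a valid signature at once.)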

Signed-off-by: Mark Salter <msalter@redhat.com>
[Add support in PE/COFF header for signed images]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-04-30 19:57:04 +01:00
Linus Torvalds e4f30545a2 Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull second set of arm64 updates from Catalin Marinas:
 "A second pull request for this merging window, mainly with fixes and
  docs clarification:

   - Documentation clarification on CPU topology and booting
     requirements
   - Additional cache flushing during boot (needed in the presence of
     external caches or under virtualisation)
   - DMA range invalidation fix for non cache line aligned buffers
   - Build failure fix with !COMPAT
   - Kconfig update for STRICT_DEVMEM"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: Fix DMA range invalidation for cache line unaligned buffers
  arm64: Add missing Kconfig for CONFIG_STRICT_DEVMEM
  arm64: fix !CONFIG_COMPAT build failures
  Revert "arm64: virt: ensure visibility of __boot_cpu_mode"
  arm64: Relax the kernel cache requirements for boot
  arm64: Update the TCR_EL1 translation granule definitions for 16K pages
  ARM: topology: Make it clear that all CPUs need to be described
2014-04-08 12:06:03 -07:00
Mark Salter bf4b558eba arm64: add early_ioremap support
Add support for early IO or memory mappings which are needed before the
normal ioremap() is usable.  This also adds fixmap support for permanent
fixed mappings such as that used by the earlyprintk device register
region.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-04-07 16:36:15 -07:00
Catalin Marinas c218bca74e arm64: Relax the kernel cache requirements for boot
With system caches for the host OS or architected caches for guest OS we
cannot easily guarantee that there are no dirty or stale cache lines for
the areas of memory written by the kernel during boot with the MMU off
(therefore non-cacheable accesses).

This patch adds the necessary cache maintenance during boot and relaxes
the booting requirements.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-05 10:06:18 +01:00
Catalin Marinas ea8c2e1124 arm64: Extend the idmap to the whole kernel image
This patch changes the idmap page table creation during boot to cover
the whole kernel image, allowing functions like cpu_reset() to be safely
called with the physical address.

This patch also simplifies the create_block_map asm macro to no longer
take an idmap argument and always use the phys/virt/end parameters. For
the idmap case, phys == virt.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-02-26 11:16:28 +00:00
Geoff Levand b22cf637bb arm64: Remove unused __data_loc variable
The __data_loc variable is an unused leftover from the 32-bit arm
implementation. Remove that variable and adjust the __mmap_switched
startup routine accordingly.

Signed-off-by: Geoff Levand <geoff@infradead.org> for Huawei, Linaro
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-20 12:04:48 +00:00
Lorenzo Pieralisi 85cc00eaa8 arm64: kernel: add code to set cpu boot mode to secondary_entry shim
The refactoring of el2_setup split the code setting up EL2 and detecting
the CPU boot mode into separate chunks. This allows the code that sets up
EL2 to run in an endian-independent way - i.e. before the endianness is
set up in the respective sctlr registers.

This patch brings secondary_entry up-to-date so that CPUs entering the
kernel through this code path set up EL2 and the CPU boot mode properly.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-06 17:21:51 +00:00
Matthew Leach 9cf7172893 arm64: big-endian: set correct endianess on kernel entry
The endianness of memory accesses at EL2 and EL1 is configured by
SCTLR_EL2.EE and SCTLR_EL1.EE respectively. When the kernel is booted,
the state of SCTLR_EL{2,1}.EE is unknown, and thus the kernel must
ensure that they are set before performing any memory accesses.

This patch ensures that SCTLR_EL{2,1} are configured appropriately at
boot for kernels of either endianness.
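
(For reference, EE is bit 25 of SCTLR_EL2 and SCTLR_EL1, and the
related SCTLR_EL1.E0E, bit 24, controls data endianness for EL0
accesses.)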

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
[catalin.marinas@arm.com: fix SCTLR_EL1.E0E bit setting in head.S]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 15:59:41 +01:00
Matthew Leach 828e9834e9 arm64: head: create a new function for setting the boot_cpu_mode flag
Currently, the code for setting the __boot_cpu_mode flag is munged in
with el2_setup. This makes things difficult on a BE bringup, as a
memory access has to have occurred before el2_setup, which is the place
where we'd like to set the endianness of the current EL.

Create a new function for setting __boot_cpu_mode and have el2_setup
return the boot mode of the CPU. Also define a new constant in virt.h,
BOOT_CPU_MODE_EL1, for readability.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 15:59:39 +01:00
Mark Rutland 652af89979 arm64: factor out spin-table boot method
The arm64 kernel has an internal holding pen, which is necessary for
some systems where we can't bring CPUs online individually and must hold
multiple CPUs in a safe area until the kernel is able to handle them.
The current SMP infrastructure for arm64 is closely coupled to this
holding pen, and alternative boot methods must launch CPUs into the pen,
where they sit before they are launched into the kernel proper.

With PSCI (and possibly other future boot methods), we can bring CPUs
online individually, and need not perform the secondary_holding_pen
dance. Instead, this patch factors the holding pen management code out
to the spin-table boot method code, as it is the only boot method
requiring the pen.

A new entry point for secondaries, secondary_entry, is added for other
boot methods to use, which bypasses the holding pen and its associated
overhead when bringing CPUs online. The smp.pen.text section is also
removed, as the pen can live in head.text without problem.

The cpu_operations structure is extended with two new functions,
cpu_boot and cpu_postboot, for bringing a cpu into the kernel and
performing any post-boot cleanup required by a bootmethod (e.g.
resetting the secondary_holding_pen_release to INVALID_HWID).
Documentation is added for cpu_operations.
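
For reference, the resulting operations table looks roughly like this
(a sketch; later patches extend it further):

  struct cpu_operations {
          const char      *name;
          int             (*cpu_init)(struct device_node *, unsigned int);
          int             (*cpu_prepare)(unsigned int);
          int             (*cpu_boot)(unsigned int);
          void            (*cpu_postboot)(void);
  };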

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25 11:33:20 +01:00
Roy Franz 4370eec05a arm64: Expand arm64 image header
Expand the arm64 image header to allow for co-existence with the
PE/COFF header required by the EFI stub.  The PE/COFF format
requires the "MZ" header to be at offset 0, and the offset
to the PE/COFF header to be at offset 0x3c.  The image
header is expanded to allow 2 instructions at the beginning
to accommodate a benign instruction at offset 0 that includes
the "MZ" header, a magic number, and the offset to the PE/COFF
header.

Signed-off-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-08-22 11:45:04 +01:00
Javi Merino 0359b0e2d0 arm64: head: match all affinity levels in the pen of the secondaries
The reg property of the cpu nodes in the DT now contains all the
affinity levels (MPIDR[39:32] and MPIDR[23:0]), and that's what
boot_secondary() writes in the pen, so increase the mask in
secondary_holding_pen accordingly.
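
(Concretely, the mask used in the pen grows from 0xffffff, covering
Aff0-Aff2, to 0xff00ffffff, which also takes in Aff3 at MPIDR[39:32].)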

Signed-off-by: Javi Merino <javi.merino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-03-20 18:09:42 +00:00
Catalin Marinas 2475ff9d2c arm64: Add simple earlyprintk support
This patch adds support for "earlyprintk=" parameter on the kernel
command line. The format is:

  earlyprintk=<name>[,<addr>][,<options>]

where <name> is the name of the (UART) device, e.g. "pl011", <addr> is
the I/O address. The <options> aren't currently used.
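
For example, assuming a (hypothetical) PL011 UART at physical address
0x9000000:

  earlyprintk=pl011,0x9000000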

The mapping of the earlyprintk device is done very early during kernel
boot and there are restrictions on which functions it can call. A
special early_io_map() function is added which creates the mapping from
the pre-defined EARLY_IOBASE to the device I/O address passed via the
kernel parameter. The pgd entry corresponding to EARLY_IOBASE is
pre-populated in head.S during kernel boot.

Only PL011 is currently supported and it is assumed that the interface
is already initialised by the boot loader before the kernel is started.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
2013-01-22 17:51:01 +00:00
Marc Zyngier 7dbfbe5b2f arm64: hyp: initialize vttbr_el2 to zero
The architecture doesn't mandate any reset value for vttbr_el2.
Better set it to a known value before some HYP code gets confused.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2012-12-05 11:26:50 +00:00
Marc Zyngier 712c6ff4db arm64: add hypervisor stub
If booted in EL2, install a dummy hypervisor whose only purpose
is to be replaced by a full-fledged one.

A minimal API allows us to:
- obtain the current HYP vectors (__hyp_get_vectors)
- set new HYP vectors (__hyp_set_vectors)

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2012-12-05 11:26:49 +00:00
Marc Zyngier f35a92053b arm64: record boot mode when entering the kernel
To be able to signal the availability of EL2 to other parts of
the kernel, record the boot mode.

Once booted, two predicates indicate if HYP mode is available,
and if not, whether this is due to a boot mode mismatch or not.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2012-12-05 11:26:48 +00:00
Will Deacon 1f75ff0a3d arm64: generic timer: use virtual counter instead of physical at EL0
We want to use the virtual counter at EL0, as the physical counter
may not track the current clocksource for guests running under a
hypervisor.

This patch updates the vdso and generic timer driver to use the virtual
counter. The kernel EL2 entry code is also updated to ensure that the
virtual offset is initialised to zero.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2012-12-05 11:20:04 +00:00
Catalin Marinas 9703d9d7f7 arm64: Kernel booting and initialisation
The patch adds the kernel booting and the initial setup code.
Documentation/arm64/booting.txt describes the booting protocol on the
AArch64 Linux kernel. This is subject to change following the work on
boot standardisation and ACPI.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
2012-09-17 10:24:45 +01:00