Commit Graph

789 Commits

Author SHA1 Message Date
Will Deacon ef5e724b25 arm64: alternative: put secondary CPUs into polling loop during patch
When patching the kernel text with alternatives, we may end up patching
parts of the stop_machine state machine (e.g. atomic_dec_and_test in
ack_state) and consequently corrupt the instruction stream of any
secondary CPUs.

This patch passes the cpu_online_mask to stop_machine, forcing all of
the CPUs into our own callback which can place the secondary cores into
a dumb (but safe!) polling loop whilst the patching is carried out.
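
A minimal sketch of the shape this takes (function and variable names
here are illustrative, not lifted verbatim from the patch):

  static int __apply_alternatives_multi_stop(void *unused)
  {
      static int patched;

      if (smp_processor_id()) {
          /* Secondary CPU: spin until the boot CPU has finished. */
          while (!READ_ONCE(patched))
              cpu_relax();
          isb();    /* don't run stale, pre-patch instructions */
      } else {
          /* Boot CPU: patch the kernel text, then release everyone. */
          apply_alternatives_kernel();    /* hypothetical helper */
          WRITE_ONCE(patched, 1);
      }

      return 0;
  }

  void __init apply_alternatives_all(void)
  {
      /* Force every online CPU into the callback above. */
      stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask);
  }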

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-30 19:07:28 +01:00
Will Deacon 484c96dbb2 arm64: lse: fix lse cmpxchg code indentation
For some reason, the ll/sc cmpxchg asm is all off to the left and
awkward to read in conjunction with the following (correctly indented)
LSE version.

This patch shifts the ll/sc code back to where it should be.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-29 18:32:09 +01:00
Jonas Rabenstein 377bcff9a3 arm64: remove dead-code depending on CONFIG_UP_LATE_INIT
Commit 4b3dc9679c ("arm64: force CONFIG_SMP=y and remove redundant
#ifdefs") forces CONFIG_SMP=y, so UP_LATE_INIT (which is only
available on !SMP configurations) can therefore not be selected
anymore.

Remove the dead #ifdef block depending on UP_LATE_INIT in
arch/arm64/kernel/setup.c.

Signed-off-by: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de>
[will: kill do_post_cpus_up_work altogether]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-29 18:32:09 +01:00
Will Deacon 766ffb6980 arm64: pgtable: fix definition of pte_valid
pte_valid should check if the PTE_VALID bit (1 << 0) is set in the pte,
so fix the macro definition to use bitwise & instead of logical &&.
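
The one-character fix in miniature (PTE_VALID is bit 0):

  /* Broken: '&&' only tests that both operands are non-zero, so any
   * non-zero pte looked "valid". */
  #define pte_valid(pte)    (!!(pte_val(pte) && PTE_VALID))

  /* Fixed: '&' actually isolates the PTE_VALID bit. */
  #define pte_valid(pte)    (!!(pte_val(pte) & PTE_VALID))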

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-28 16:14:03 +01:00
Will Deacon c1d7cd228b arm64: spinlock: fix ll/sc unlock on big-endian systems
When unlocking a spinlock, we perform a read-modify-write on the owner
ticket in order to increment it and store it back with release
semantics.

In the LL/SC case, we load the 16-bit ticket using a 32-bit load and
therefore store back the wrong halfword on a big-endian system,
corrupting the lock after the first unlock and killing the system dead.

This patch fixes the unlock code to use 16-bit accessors consistently.
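
A sketch of the fixed LL/SC path (the LSE alternative that shares this
asm is elided for brevity):

  static inline void arch_spin_unlock(arch_spinlock_t *lock)
  {
      unsigned long tmp;

      asm volatile(
      "    ldrh    %w1, %0\n"        /* 16-bit load of the owner ticket */
      "    add     %w1, %w1, #1\n"
      "    stlrh   %w1, %0"          /* 16-bit store-release */
      : "=Q" (lock->owner), "=&r" (tmp)
      :
      : "memory");
  }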

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-28 14:48:00 +01:00
Catalin Marinas 4150e50bf5 arm64: Use last level TLBI for user pte changes
The flush_tlb_page() function is used on user address ranges when PTEs
(or PMDs/PUDs for huge pages) were changed (attributes or clearing). For
such cases, it is more efficient to invalidate only the last level of
the TLB with the "tlbi vale1is" instruction.

In the TLB shoot-down case, the TLB caching of the intermediate page
table levels (pmd, pud, pgd) is handled by __flush_tlb_pgtable() via the
__(pte|pmd|pud)_free_tlb() functions and it is not deferred to
tlb_finish_mmu() (as of commit 285994a62c - "arm64: Invalidate the TLB
corresponding to intermediate page table levels"). The tlb_flush()
function only needs to invalidate the TLB for the last level of page
tables; the __flush_tlb_range() function gains a fourth argument for
last level TLBI.
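
A hedged sketch of the resulting loop (helper and constant names
approximated):

  static inline void __flush_tlb_range(struct vm_area_struct *vma,
                                       unsigned long start, unsigned long end,
                                       bool last_level)
  {
      unsigned long asid = ASID(vma->vm_mm) << 48;
      unsigned long addr;

      start = asid | (start >> 12);
      end = asid | (end >> 12);

      dsb(ishst);
      for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) {
          if (last_level)
              asm("tlbi vale1is, %0" : : "r" (addr));  /* last level only */
          else
              asm("tlbi vae1is, %0" : : "r" (addr));   /* all levels */
      }
      dsb(ish);
  }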

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-28 11:44:01 +01:00
Catalin Marinas da4e73303e arm64: Clean up __flush_tlb(_kernel)_range functions
This patch moves the MAX_TLB_RANGE check into the
flush_tlb(_kernel)_range functions directly to avoid the
underscore-prefixed definitions (and for consistency with a subsequent
patch).
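
The callers then take roughly this shape (sketch; the last_level
argument is only added by the commit above):

  static inline void flush_tlb_range(struct vm_area_struct *vma,
                                     unsigned long start, unsigned long end)
  {
      if ((end - start) > MAX_TLB_RANGE)
          flush_tlb_mm(vma->vm_mm);    /* full flush for very large ranges */
      else
          __flush_tlb_range(vma, start, end, false);
  }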

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-28 11:43:15 +01:00
Will Deacon 6f883d10a1 arm64: debug: rename enum debug_el to avoid symbol collision
lib/list_sort.c defines a 'struct debug_el', where "el" is assumedly a
contraction of "element". This conflicts with 'enum debug_el' in our
asm/debug-monitors.h header file, where "el" stands for Exception Level.

The result is a build failure when targeting allmodconfig, so rename our
enum to 'dbg_active_el' to be slightly more explicit about what it is.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 18:36:54 +01:00
Will Deacon c739dc83a0 arm64: lse: rename ARM64_CPU_FEAT_LSE_ATOMICS for consistency
Other CPU features follow an 'ARM64_HAS_*' naming scheme, so do the same
for the LSE atomics.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:54 +01:00
Will Deacon db26217e6f arm64: atomic64_dec_if_positive: fix incorrect branch condition
If we attempt to atomic64_dec_if_positive on INT_MIN, we will underflow
and incorrectly decide that the original parameter was positive.

This patch fixes the broken condition code so that we handle this
corner case correctly.
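
The corner case in miniature (plain C, just to show the wrap-around):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      int64_t v = INT64_MIN;
      /* Subtracting 1 wraps to INT64_MAX, so a sign-only check (the
       * "mi" condition) believes the result is positive.  The "lt"
       * condition tests N != V, i.e. it also looks at the overflow
       * flag, and so handles this case correctly. */
      int64_t r = (int64_t)((uint64_t)v - 1);

      printf("%lld\n", (long long)r);    /* prints 9223372036854775807 */
      return 0;
  }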

Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:54 +01:00
Will Deacon 6059a7b6e8 arm64: atomics: implement atomic{,64}_cmpxchg using cmpxchg
We don't need duplicate cmpxchg implementations, so use cmpxchg to
implement atomic{,64}_cmpxchg, like we do for xchg already.
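
The result is a pair of one-line definitions along these lines:

  #define atomic_cmpxchg(v, old, new)    cmpxchg(&((v)->counter), (old), (new))
  #define atomic64_cmpxchg(v, old, new)  cmpxchg(&((v)->counter), (old), (new))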

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:53 +01:00
Will Deacon 0ea366f5e1 arm64: atomics: prefetch the destination word for write prior to stxr
The cost of changing a cacheline from shared to exclusive state can be
significant, especially when this is triggered by an exclusive store,
since it may result in having to retry the transaction.

This patch makes use of prfm to prefetch cachelines for write prior to
ldxr/stxr loops when using the ll/sc atomic routines.
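
Applied to a plain LL/SC atomic_add, the pattern looks like this:

  static inline void atomic_add(int i, atomic_t *v)
  {
      unsigned long tmp;
      int result;

      asm volatile("// atomic_add\n"
      "    prfm    pstl1strm, %2\n"    /* prefetch for store, streaming */
      "1:  ldxr    %w0, %2\n"
      "    add     %w0, %w0, %w3\n"
      "    stxr    %w1, %w0, %2\n"
      "    cbnz    %w1, 1b"            /* retry if exclusivity was lost */
      : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
      : "Ir" (i));
  }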

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:53 +01:00
Will Deacon a82e62382f arm64: atomics: tidy up common atomic{,64}_* macros
The common (i.e. identical for ll/sc and lse) atomic macros in atomic.h
are needlessly different for atomic_t and atomic64_t.

This patch tidies up the definitions to make them consistent across the
two atomic types and factors out common code such as the add_unless
implementation based on cmpxchg.
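
For example, add_unless can be written once in terms of cmpxchg and
shared (sketch of the 32-bit flavour):

  static inline int __atomic_add_unless(atomic_t *v, int a, int u)
  {
      int c, old;

      c = atomic_read(v);
      while (c != u && (old = atomic_cmpxchg(v, c, c + a)) != c)
          c = old;

      return c;
  }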

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:53 +01:00
Will Deacon 4e39715f4b arm64: cmpxchg: avoid memory barrier on comparison failure
cmpxchg doesn't require memory barrier semantics when the value
comparison fails, so make the barrier conditional on success.
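
A sketch of the idea for the 32-bit atomic case (a flags-based
comparison is shown here; the commit below removes it):

  static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
  {
      unsigned long tmp;
      int oldval;

      asm volatile("// atomic_cmpxchg\n"
      "1:  ldxr    %w1, %2\n"
      "    cmp     %w1, %w3\n"
      "    b.ne    2f\n"               /* mismatch: skip the barrier */
      "    stlxr   %w0, %w4, %2\n"     /* store-release on success */
      "    cbnz    %w0, 1b\n"
      "    dmb     ish\n"              /* full barrier on success only */
      "2:"
      : "=&r" (tmp), "=&r" (oldval), "+Q" (ptr->counter)
      : "Ir" (old), "r" (new)
      : "cc", "memory");

      return oldval;
  }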

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:52 +01:00
Will Deacon 0bc671d3f4 arm64: cmpxchg: avoid "cc" clobber in ll/sc routines
We can perform the cmpxchg comparison using eor and cbnz which avoids
the "cc" clobber for the ll/sc case and consequently for the LSE case
where we may have to fall back on the ll/sc code at runtime.
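
The comparison from the cmpxchg sketch above then becomes flag-free
(only the marked lines change, and "cc" drops out of the clobbers):

  asm volatile("// atomic_cmpxchg\n"
  "1:  ldxr    %w1, %2\n"
  "    eor     %w0, %w1, %w3\n"    /* non-zero iff the values differ */
  "    cbnz    %w0, 2f\n"          /* branch without touching NZCV */
  "    stlxr   %w0, %w4, %2\n"
  "    cbnz    %w0, 1b\n"
  "    dmb     ish\n"
  "2:"
  : "=&r" (tmp), "=&r" (oldval), "+Q" (ptr->counter)
  : "Ir" (old), "r" (new)
  : "memory");                     /* no "cc" clobber needed */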

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:52 +01:00
Will Deacon e9a4b79565 arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPU
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of our cmpxchg_double primitives
so that the LSE casp instruction is used instead.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:52 +01:00
Will Deacon c342f78217 arm64: cmpxchg: patch in lse instructions when supported by the CPU
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of our cmpxchg primitives so that
the LSE cas instruction is used instead.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:51 +01:00
Will Deacon c8366ba0fb arm64: xchg: patch in lse instructions when supported by the CPU
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of our xchg primitives so that
the LSE swp instruction (yes, you read right!) is used instead.
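
A hedged sketch of the patched 32-bit xchg (both sequences padded to
the same length, as the alternatives framework requires):

  static inline u32 xchg32(u32 *ptr, u32 x)
  {
      u32 ret;
      unsigned long tmp;

      asm volatile(ALTERNATIVE(
      /* LL/SC version */
      "1:  ldxr    %w0, %2\n"
      "    stlxr   %w1, %w3, %2\n"
      "    cbnz    %w1, 1b\n"
      "    dmb     ish",
      /* LSE version: a single swp, padded with nops */
      "    swpal   %w3, %w0, %2\n"
      "    nop\n"
      "    nop\n"
      "    nop",
      ARM64_HAS_LSE_ATOMICS)
      : "=&r" (ret), "=&r" (tmp), "+Q" (*ptr)
      : "r" (x)
      : "memory");

      return ret;
  }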

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:51 +01:00
Will Deacon 084f903727 arm64: bitops: patch in lse instructions when supported by the CPU
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of our bitops functions so that
LSE atomic instructions are used instead.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:51 +01:00
Will Deacon 81bb5c6420 arm64: locks: patch in lse instructions when supported by the CPU
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of our locking functions so that
LSE atomic instructions are used for spinlocks and rwlocks.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:51 +01:00
Will Deacon c09d6a04d1 arm64: atomics: patch in lse instructions when supported by the CPU
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of atomic_t and atomic64_t
routines so that the call-site for the out-of-line ll/sc sequences is
patched with an LSE atomic instruction when we detect that
the CPU supports it.
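
A hedged sketch of the call-site shape for atomic_add (symbol names
illustrative):

  static inline void atomic_add(int i, atomic_t *v)
  {
      register int w0 asm ("w0") = i;
      register atomic_t *x1 asm ("x1") = v;

      asm volatile(ALTERNATIVE(
          "bl      __ll_sc_atomic_add",    /* default: out-of-line LL/SC */
          "stadd   %w[i], %[v]",           /* patched in: LSE atomic add */
          ARM64_HAS_LSE_ATOMICS)
      : [i] "+r" (w0), [v] "+Q" (v->counter)
      : "r" (x1)
      : "x30");    /* the bl clobbers the link register */
  }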

If binutils is not recent enough to assemble the LSE instructions, then
the ll/sc sequences are inlined as though CONFIG_ARM64_LSE_ATOMICS=n.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:50 +01:00
Will Deacon c0385b24af arm64: introduce CONFIG_ARM64_LSE_ATOMICS as fallback to ll/sc atomics
In order to patch in the new atomic instructions at runtime, we need to
generate wrappers around the out-of-line exclusive load/store atomics.

This patch adds a new Kconfig option, CONFIG_ARM64_LSE_ATOMICS, which
causes our atomic functions to branch to the out-of-line ll/sc
implementations. To avoid the register spill overhead of the PCS, the
out-of-line functions are compiled with specific compiler flags to
force out-of-line save/restore of any registers that are usually
caller-saved.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:50 +01:00
Will Deacon d964b7229e arm64: alternatives: add cpu feature for lse atomics
Add a CPU feature for the LSE atomic instructions, so that they can be
patched in at runtime when we detect that they are supported.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 14:34:39 +01:00
Will Deacon c275f76bb4 arm64: atomics: move ll/sc atomics into separate header file
In preparation for the Large System Extension (LSE) atomic instructions
introduced by ARM v8.1, move the current exclusive load/store (LL/SC)
atomics into their own header file.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 14:34:39 +01:00
Will Deacon 144e9697a9 arm64: cpufeature.h: add missing #include of kernel.h
cpufeature.h makes use of DECLARE_BITMAP, which in turn relies on the
BITS_TO_LONGS and DIV_ROUND_UP macros.

This patch includes kernel.h in cpufeature.h to prevent all users having
to do the same thing.
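
The dependency chain, for reference (generic-header definitions):

  #define DECLARE_BITMAP(name, bits)  unsigned long name[BITS_TO_LONGS(bits)]
  #define BITS_TO_LONGS(nr)  DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
  /* DIV_ROUND_UP lives in linux/kernel.h, hence the new #include. */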

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 14:26:34 +01:00
Will Deacon 9511ca19da arm64: rwlocks: don't fail trylock purely due to contention
STXR can fail for a number of reasons, so don't fail an rwlock trylock
operation simply because the STXR reported failure.

I'm not aware of any issues with the current code, but this makes it
consistent with spin_trylock and also other architectures (e.g. arch/arm).
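
Sketch of the fixed write-trylock: a failed STXR now re-reads the lock
instead of reporting failure to the caller:

  static inline int arch_write_trylock(arch_rwlock_t *rw)
  {
      unsigned int tmp;

      asm volatile(
      "1:  ldaxr   %w0, %1\n"
      "    cbnz    %w0, 2f\n"        /* genuinely contended: give up */
      "    stxr    %w0, %w2, %1\n"
      "    cbnz    %w0, 1b\n"        /* spurious STXR failure: retry */
      "2:"
      : "=&r" (tmp), "+Q" (rw->lock)
      : "r" (0x80000000)
      : "memory");

      return !tmp;
  }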

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 14:26:34 +01:00
Will Deacon fc9eb93cd4 Merge branch 'locking/arch-atomic' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into aarch64/for-next/core
Merge in PeterZ's logical atomic ops so that we can implement them in
our subsequent LSE atomics.
2015-07-27 14:21:15 +01:00
Peter Zijlstra e6942b7de2 atomic: Provide atomic_{or,xor,and}
Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.
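
On an architecture with no native implementation, these can be built
from cmpxchg; a sketch of the 'or' case:

  static inline void atomic_or(int i, atomic_t *v)
  {
      int old, c = atomic_read(v);

      while ((old = atomic_cmpxchg(v, c, c | i)) != c)
          c = old;
  }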

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-07-27 14:06:24 +02:00
Peter Zijlstra 22288b40e2 arm64: Provide atomic_{or,xor,and}
Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-07-27 14:06:22 +02:00
Will Deacon 772d68355e arm64: include linux/types.h in asm/spinlock_types.h
Our ticket-based spinlock structures rely on a definition of u16, so
include linux/types.h explicitly to ensure the thing compiles.

Found by a module build failure in -next:

  arch/arm64/include/asm/spinlock_types.h:27:2: error: unknown type name 'u16'
  arch/arm64/include/asm/spinlock_types.h:28:2: error: unknown type name 'u16'
  arch/arm64/include/asm/spinlock_types.h:33:13: error: expected declaration specifiers or '...' before numeric constant
  include/linux/spinlock_types.h:21:2: error: unknown type name 'arch_spinlock_t'
  arch/arm64/include/asm/spinlock.h:34:35: error: unknown type name 'arch_spinlock_t'
  arch/arm64/include/asm/spinlock.h:65:37: error: unknown type name 'arch_spinlock_t'

Reported-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:09:34 +01:00
Dave P Martin 9fb7410f95 arm64/BUG: Use BRK instruction for generic BUG traps
Currently, the minimal default BUG() implementation from asm-
generic is used for arm64.

This patch uses the BRK software breakpoint instruction to generate
a trap instead, similarly to most other arches, with the generic
BUG code generating the dmesg boilerplate.

This allows bug metadata to be moved to a separate table and
reduces the amount of inline code at BUG and WARN sites.  This also
avoids clobbering any registers before they can be dumped.

To mitigate the size of the bug table further, this patch makes
use of the existing infrastructure for encoding addresses within
the bug table as 32-bit offsets instead of absolute pointers.
(Note that this limits the kernel size to 2GB.)
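
A simplified sketch of the technique (the real macro also records
file/line information for verbose mode; names approximate):

  #define BUG()                                                   \
  do {                                                            \
      asm volatile(                                               \
          ".pushsection __bug_table, \"a\"\n"                     \
          ".align  2\n"                                           \
          "0:  .long   1f - 0b\n"    /* 32-bit offset, not a pointer */ \
          ".popsection\n"                                         \
          "1:  brk     %[imm]"                                    \
          : : [imm] "i" (BUG_BRK_IMM));                           \
      unreachable();                                              \
  } while (0)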

Traps are registered at arch_initcall time for aarch64, but BUG
has minimal real dependencies and it is desirable to be able to
generate bug splats as early as possible.  This patch redirects
all debug exceptions caused by BRK directly to bug_handler() until
the full debug exception support has been initialised.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:42 +01:00
Dave P Martin d7a33f4fbd arm64/debug: Add missing #includes
<asm/debug-monitors.h> relies on <asm/ptrace.h>, but doesn't
declare this dependency.  This becomes a problem once
debug-monitors.h starts getting included all over the place to get
the BRK immediates.

The missing include of <asm/memory.h> (for UL()) in <asm/esr.h> is
also added.  The series no longer relies on this, but I spotted it
during development and it may as well get fixed.

No functional change.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:42 +01:00
Dave P Martin c696b93461 arm64/debug: Simplify BRK insn opcode declarations
The way the KGDB_DYN_BRK_INS_BYTEx macros are declared is more
complex than it needs to be.  Also, the macros are only used in one
place, which is arch-specific anyway.

This patch refactors the macros to simplify them, and exposes an
argument so that we can have a single macro instead of 4.

As a side effect, this patch also fixes some anomalous spellings of
"KGDB".

These changes alter the compile-time types of some integer constants;
this is harmless, but triggers truncation warnings in gcc when the
constants are assigned to 32-bit variables.  This patch adds an
explicit cast for the affected cases.
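
The shape of the refactored macro (sketch):

  /* One parameterised macro instead of four KGDB_DYN_BRK_INS_BYTEx
   * definitions; the cast keeps gcc quiet when the result is assigned
   * to a 32-bit variable. */
  #define KGDB_DYN_BRK_INS_BYTE(x) \
      ((u32)((AARCH64_BREAK_KGDB_DYN_DBG >> (8 * (x))) & 0xff))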

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:42 +01:00
Dave P Martin 72d033e80a arm64/debug: Move BRK ESR template macro into <asm/esr.h>
It makes sense to keep all the architectural exception syndrome
definitions in the same place.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:42 +01:00
Dave P Martin c172d994e1 arm64/debug: More consistent naming for the BRK ESR template macro
The naming of DBG_ESR_VAL_BRK is inconsistent with the way other
similar macros are named.

This patch makes the naming more consistent, and appends "64"
as a reminder that this ESR pattern only matches from AArch64
state.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:42 +01:00
Dave P Martin 03923696a9 arm64/debug: Eliminate magic number from ESR template definition
<asm/esr.h> has perfectly good constants for defining ESR values
already.  Let's use them.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:42 +01:00
Dave P Martin dfac68314c arm64/debug: Mask off all reserved bits from generated ESR values
There are only 16 comment bits in a BRK instruction, which
correspond to ESR bits 15:0.  Bits 24:16 of the ESR are RES0,
and might have weird meanings in the future.

This code inserts 16 bits of comment in the ESR value instead of
20 (almost certainly a typo in the original code).
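
Sketch of the corrected template (macro names approximate):

  /* The BRK comment immediate occupies ESR bits 15:0 only. */
  #define ESR_ELx_VAL_BRK64(imm)                                  \
      ((ESR_ELx_EC_BRK64 << ESR_ELx_EC_SHIFT) | ESR_ELx_IL |      \
       ((imm) & 0xffff))    /* previously masked with 20 bits */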

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:42 +01:00
Dave P Martin 951757ae83 arm64/debug: Eliminate magic number for size of BRK instruction
The size of an A64 BRK instruction is the same as the size of all other
A64 instructions, because all A64 instructions are the same size.

BREAK_INSTR_SIZE is retained for readability, but it should not be
an independent constant from AARCH64_INSN_SIZE.
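
So the definition reduces to:

  #define BREAK_INSTR_SIZE    AARCH64_INSN_SIZE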

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:42 +01:00
Will Deacon 77ee306c0a arm64: alternatives: add enable parameter to conditional asm macros
There are cases where we want to compile out both versions of an
alternative code block, so add an enable parameter to the new conditional
alternative assembly macros in the same way as alternative_insn.
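
A usage sketch (capability and config names illustrative); when the
enable expression evaluates to zero, neither version is emitted:

  alternative_if_not ARM64_HAS_PAN, IS_ENABLED(CONFIG_ARM64_PAN)
      nop
  alternative_else
      SET_PSTATE_PAN(1)
  alternative_endif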

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:41 +01:00
James Morse 338d4f49d6 arm64: kernel: Add support for Privileged Access Never
'Privileged Access Never' is a new ARMv8.1 feature which prevents
privileged code from accessing any virtual address where read or write
access is also permitted at EL0.

This patch enables the PAN feature on all CPUs, and modifies the
{get,put}_user helpers to temporarily disable PAN around the user access.

This will catch kernel bugs where user memory is accessed directly.
'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.
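
A sketch of the uaccess bracketing (helper names illustrative; the
patch itself open-codes the alternative inside {get,put}_user):

  /* Drop PAN around the user access, restore it afterwards. */
  #define uaccess_enable()                                        \
      asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,    \
                      CONFIG_ARM64_PAN))
  #define uaccess_disable()                                       \
      asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,    \
                      CONFIG_ARM64_PAN))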

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
[will: use ALTERNATIVE in asm and tidy up pan_enable check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:41 +01:00
Suzuki K. Poulose 9ded63aaf8 arm64: Generalise msr_s/mrs_s operations
The system register encoding generated by sys_reg() works only
for MRS/MSR (register) operations, as we hardcode bit 20 to 1 in the
mrs_s/msr_s mask. This makes it unusable for generating instructions
that access registers with Op0 < 2 (e.g., PSTATE.x with Op0 = 0).

As per ARMv8 ARM, (Ref: ARMv8 ARM, Section: "System instruction class
encoding overview", C5.2, version: ARM DDI 0487A.f), the instruction
encoding reserves bits [20:19] for Op0.

This patch generalises the sys_reg, mrs_s and msr_s macros so that
we can use them to access any of the supported system registers.
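
Sketch of the generalised encoding, with Op0 in bits [20:19] rather
than bit 20 hardcoded to 1:

  #define sys_reg(op0, op1, crn, crm, op2)                        \
      ((((op0) & 3) << 19) | ((op1) << 16) | ((crn) << 12) |      \
       ((crm) << 8) | ((op2) << 5))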

Cc: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:41 +01:00
James Morse 91a5cefa2f arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE()
Some uses of ALTERNATIVE() may depend on a feature that is disabled at
compile time by a Kconfig option. In this case the unused alternative
instructions waste space, and if the original instruction is a nop, it
wastes time and space.

This patch adds an optional 'config' option to ALTERNATIVE() and
alternative_insn that allows the compiler to remove both the original
and alternative instructions if the config option is not defined.
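
Usage sketch: with the optional config argument, the whole block
disappears when CONFIG_ARM64_PAN=n:

  ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)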

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:41 +01:00
James Morse 18ffa046c5 arm64: kernel: Add min_field_value and use '>=' for feature detection
When a new cpu feature is available, the cpu feature bits will have some
initial value, which is incremented when the feature is updated.
This patch changes 'register_value' to be 'min_field_value', and checks
the feature bits value (interpreted as a signed int) is greater than this
minimum.
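
Sketch of the resulting comparison (struct member names illustrative;
the extraction helper comes from the cpuid_feature_extract_field()
commit below):

  static bool
  feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
  {
      int val = cpuid_feature_extract_field(reg, entry->field_pos);

      return val >= entry->min_field_value;
  }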

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:41 +01:00
James Morse 1c0763037f arm64: kernel: Add cpufeature 'enable' callback
This patch adds an 'enable()' callback to cpu capability/feature
detection, allowing features that require some setup or configuration
to get this opportunity once the feature has been detected.
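
Sketch of the shape (member list abridged):

  struct arm64_cpu_capabilities {
      const char *desc;
      u16 capability;
      bool (*matches)(const struct arm64_cpu_capabilities *);
      void (*enable)(void);    /* new: runs once detection succeeds */
      /* ... match criteria: system register, field, min value ... */
  };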

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:41 +01:00
James Morse 870828e57b arm64: kernel: Move config_sctlr_el1
Later patches need config_sctlr_el1 to set/clear bits in the sctlr_el1
register.

This patch moves this function into a header file.
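
A sketch of the helper in its new home:

  static inline void config_sctlr_el1(u32 clear, u32 set)
  {
      u32 val;

      asm volatile("mrs %0, sctlr_el1" : "=r" (val));
      val &= ~clear;
      val |= set;
      asm volatile("msr sctlr_el1, %0" : : "r" (val));
  }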

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:41 +01:00
Daniel Thompson 63e40815f0 arm64: alternative: Provide if/else/endif assembler macros
The existing alternative_insn macro has some limitations that make it
hard to work with. In particular, the fact that it takes instructions
from its own macro arguments means it doesn't play very nicely with C
pre-processor macros, because the macro arguments look like a string to
the C pre-processor. Workarounds are (probably) possible but things start to
look ugly.

Introduce an alternative set of macros that allows instructions to be
presented to the assembler as normal and switch everything over to the
new macros.
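
A usage sketch of the new macros (instructions illustrative):

  alternative_if_not ARM64_HAS_LSE_ATOMICS
      // LL/SC sequence, written as normal instructions
      nop
  alternative_else
      // LSE sequence
      nop
  alternative_endif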

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:40 +01:00
James Morse 79b0e09a3c arm64: kernel: Add cpuid_feature_extract_field() for 4bit sign extension
Based on arch/arm/include/asm/cputype.h, this function does the
shifting and sign extension necessary when accessing cpu feature fields.
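
The helper, assuming the standard 4-bit ID register fields:

  static inline int __attribute_const__
  cpuid_feature_extract_field(u64 features, int field)
  {
      /* Shift the 4-bit field to the top, then arithmetic-shift back
       * down so the result is sign-extended. */
      return (s64)(features << (64 - 4 - field)) >> (64 - 4);
  }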

Signed-off-by: James Morse <james.morse@arm.com>
Suggested-by: Russell King <linux@arm.linux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:40 +01:00
Jisheng Zhang 0a570e7ade arm64: hugetlb: remove paragraph about writing to FSF
Remove paragraph about writing to the Free Software Foundation's
mailing address from GPL notice.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:40 +01:00
Will Deacon 4b3dc9679c arm64: force CONFIG_SMP=y and remove redundant #ifdefs
Nobody seems to be producing !SMP systems anymore, so this is just
becoming a source of kernel bugs, particularly if people want to use
coherent DMA with non-shared pages.

This patch forces CONFIG_SMP=y for arm64, removing a modest amount of
code in the process.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:40 +01:00
Mark Rutland 52da443ec4 arm64: perf: factor out callchain code
We currently bundle the callchain handling code with the PMU code,
despite the fact the two are distinct, and the former can be useful even
in the absence of the latter.

Follow the example of arch/arm and factor the callchain handling into
its own file dependent on CONFIG_PERF_EVENTS rather than
CONFIG_HW_PERF_EVENTS.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:39 +01:00