Commit Graph

Neil Zhang 883c057367 arm64: fix typo in entry.S
Commit 64681787 (arm64: let the core code deal with preempt_count)
changed the code but left the comments unchanged; update them to match.

Signed-off-by: Neil Zhang <zhangwm@marvell.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-01-13 13:55:13 +00:00
Lorenzo Pieralisi 65c021bb49 arm64: kernel: restore HW breakpoint registers in cpu_suspend
When a CPU resumes from low power, it restores the HW breakpoint and
watchpoint slots through a CPU PM notifier. Since we want to enable
debugging as early as possible in the resume path, the mdscr content
is restored along with the general purpose registers in the cpu_suspend API
and debug exceptions are re-enabled when cpu_suspend returns. Since the
CPU PM notifier is run after a CPU has been resumed, we cannot expect the
HW breakpoint registers to contain sane values until the notifier is run,
since the HW breakpoint registers' content is unknown at reset; this means
that the CPU might run with debug exceptions enabled and mdscr restored, but
with the HW breakpoint registers containing junk values that can trigger
spurious debug exceptions.

This patch fixes the current HW breakpoint restore by moving the HW breakpoint
register restoration to the cpu_suspend API, before the debug exceptions are
enabled. This way, as soon as the cpu_suspend function returns the
kernel can resume debugging with sane values in HW breakpoint registers.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-01-10 17:51:35 +00:00
Jiang Liu 9732cafd9d arm64, jump label: optimize jump label implementation
Optimize jump label implementation for ARM64 by dynamically patching
kernel text.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-01-08 15:23:53 +00:00
Jiang Liu 5c5bf25d4f arm64: introduce aarch64_insn_gen_{nop|branch_imm}() helper functions
Introduce aarch64_insn_gen_{nop|branch_imm}() helper functions, which
will be used to implement jump label on ARM64.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-01-08 15:21:29 +00:00
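
As a rough illustration of what such generators boil down to at the encoding
level, here is a standalone user-space sketch: the field layout follows the
ARMv8 A64 encodings for NOP and B/BL, while the function names only mirror,
and are not, the kernel helpers (range checking of the +/-128 MB branch span
is omitted for brevity).

#include <stdint.h>
#include <stdio.h>

/* A64 NOP is a fixed encoding in the hint space. */
static uint32_t gen_nop(void)
{
	return 0xd503201f;
}

/*
 * B/BL carry a signed 26-bit word offset: bits[25:0] = (target - pc) / 4;
 * the top bits select B (0x14000000) or BL (0x94000000).
 */
static uint32_t gen_branch_imm(uint64_t pc, uint64_t target, int link)
{
	int64_t offset = (int64_t)(target - pc) >> 2;

	return (link ? 0x94000000u : 0x14000000u) |
	       ((uint32_t)offset & 0x03ffffff);
}

int main(void)
{
	/* Branch forward by 8 bytes: expect 0x14000002 (B #8). */
	printf("nop:  0x%08x\n", gen_nop());
	printf("b +8: 0x%08x\n", gen_branch_imm(0x1000, 0x1008, 0));
	return 0;
}
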
Jiang Liu c84fced8d9 arm64: move encode_insn_immediate() from module.c to insn.c
Function encode_insn_immediate() will be used by other instruction
manipulation related functions, so move it into insn.c and rename it
as aarch64_insn_encode_immediate().

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-01-08 15:21:29 +00:00
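
The core of such an encoder is just clearing and re-inserting a bitfield in an
existing opcode. A minimal sketch for the B/BL imm26 case only (the field
width and position are the architectural ones; the real helper handles many
more immediate types, and this function name is purely illustrative):

#include <stdint.h>

/* Insert a 26-bit branch immediate into an existing B/BL opcode. */
static uint32_t encode_branch_imm26(uint32_t insn, int64_t byte_offset)
{
	uint32_t imm26 = ((uint64_t)byte_offset >> 2) & 0x03ffffff;

	insn &= ~0x03ffffffu;	/* clear the old immediate field */
	return insn | imm26;	/* splice in the new one */
}
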
Jiang Liu ae16480785 arm64: introduce interfaces to hotpatch kernel and module code
Introduce three interfaces to patch kernel and module code:
aarch64_insn_patch_text_nosync():
	patch code without synchronization; it's the caller's responsibility
	to synchronize all CPUs if needed.
aarch64_insn_patch_text_sync():
	patch code and always synchronize with stop_machine()
aarch64_insn_patch_text():
	patch code and synchronize with stop_machine() if needed

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-01-08 15:21:29 +00:00
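
A hedged sketch of how a caller might pick between the two primitives; the
prototypes and the selection policy shown here are assumptions for
illustration, not the kernel's actual code:

/*
 * Illustrative only: a single hotpatch-safe instruction can be swapped
 * in place, anything else goes through the stop_machine()-based variant.
 * Prototypes are assumed; endianness handling is omitted.
 */
static int patch_one_insn(void *addr, u32 new_insn)
{
	u32 old_insn = *(u32 *)addr;

	if (aarch64_insn_hotpatch_safe(old_insn, new_insn))
		return aarch64_insn_patch_text_nosync(addr, new_insn);

	return aarch64_insn_patch_text_sync(&addr, &new_insn, 1);
}
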
Jiang Liu b11a64a48c arm64: introduce basic aarch64 instruction decoding helpers
Introduce basic aarch64 instruction decoding helpers
aarch64_get_insn_class() and aarch64_insn_hotpatch_safe().

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-01-08 15:21:28 +00:00
Mark Brown 4e5e1eb89f arm64: dts: Reduce size of virtio block device for foundation model
Will Deacon observed that kvmtool uses a size of 0x200 for the virtio
block memory region and that the virtio block spec only uses 31 bytes in
the device-specific region at 0x100, so reduce the region to a less
wasteful 0x200.

Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-20 16:20:21 +00:00
Geoff Levand b22cf637bb arm64: Remove unused __data_loc variable
The __data_loc variable is an unused leftover from the 32-bit ARM implementation.
Remove that variable and adjust the __mmap_switched startup routine accordingly.

Signed-off-by: Geoff Levand <geoff@infradead.org> for Huawei, Linaro
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-20 12:04:48 +00:00
Catalin Marinas 0a5be743e8 Merge tag 'arm64-suspend' of git://linux-arm.org/linux-2.6-lp into upstream
* tag 'arm64-suspend' of git://linux-arm.org/linux-2.6-lp:
  arm64: add CPU power management menu/entries
  arm64: kernel: add PM build infrastructure
  arm64: kernel: add CPU idle call
  arm64: enable generic clockevent broadcast
  arm64: kernel: implement HW breakpoints CPU PM notifier
  arm64: kernel: refactor code to install/uninstall breakpoints
  arm: kvm: implement CPU PM notifier
  arm64: kernel: implement fpsimd CPU PM notifier
  arm64: kernel: cpu_{suspend/resume} implementation
  arm64: kernel: suspend/resume registers save/restore
  arm64: kernel: build MPIDR_EL1 hash function data structure
  arm64: kernel: add MPIDR_EL1 accessors macros

Conflicts:
	arch/arm64/Kconfig
2013-12-19 17:57:51 +00:00
Laura Abbott 6ac2104deb arm64: Enable CMA
arm64 targets need the features CMA provides. Add the appropriate
hooks, header files, and Kconfig to allow this to happen.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:44:09 +00:00
Laura Abbott c666e8d5ca arm64: Warn on NULL device structure for dma APIs
Although parts of the DMA APIs may properly check for NULL devices,
there may be some places that don't. Rather than fix up all the
possible locations, just require a non-NULL device structure to be
used for allocating/freeing.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[catalin.marinas@arm.com: s/WARN/WARN_ONCE/]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:44:08 +00:00
Steve Capper 4bff28ccda arm64: Add hwcaps for crypto and CRC32 extensions.
Advertise the optional cryptographic and CRC32 instructions to
user space where present. Several hwcap bits [3-7] are allocated.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
[bit 2 is taken now so use bits 3-7 instead]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:44:08 +00:00
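
From user space, these capabilities are discovered through the auxiliary
vector. A small standalone check is sketched below; the bit positions follow
the "bits 3-7" allocation described above, but treat them as illustrative
since the authoritative values live in the kernel's uapi hwcap header (the
#ifndef guard avoids clashing with toolchains that already define them).

#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_AES	/* may already be provided on arm64 toolchains */
#define HWCAP_AES	(1 << 3)
#define HWCAP_PMULL	(1 << 4)
#define HWCAP_SHA1	(1 << 5)
#define HWCAP_SHA2	(1 << 6)
#define HWCAP_CRC32	(1 << 7)
#endif

int main(void)
{
	unsigned long hwcap = getauxval(AT_HWCAP);

	printf("aes:   %s\n", (hwcap & HWCAP_AES)   ? "yes" : "no");
	printf("pmull: %s\n", (hwcap & HWCAP_PMULL) ? "yes" : "no");
	printf("sha1:  %s\n", (hwcap & HWCAP_SHA1)  ? "yes" : "no");
	printf("sha2:  %s\n", (hwcap & HWCAP_SHA2)  ? "yes" : "no");
	printf("crc32: %s\n", (hwcap & HWCAP_CRC32) ? "yes" : "no");
	return 0;
}
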
Ard Biesheuvel 148eb0a1db arm64: drop redundant macros from read_cpuid()
asm/cputype.h contains a bunch of #defines for CPU id registers
that essentially map to themselves. Remove the #defines and pass
the tokens directly to the inline asm() that reads the registers.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:44:07 +00:00
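
After the change, the register name is handed straight to the mrs
instruction and the assembler validates the token. A hedged, kernel-context
sketch of the resulting shape (an approximation of the idea, not a copy of
the header):

#define read_cpuid(reg) ({					\
	u64 __val;						\
	asm("mrs %0, " #reg : "=r" (__val));			\
	__val;							\
})

/* e.g. u64 midr = read_cpuid(MIDR_EL1); */
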
Liviu Dudau 81cac69944 arm64: Remove outdated comment
The code referenced in the comment has moved to arch/arm64/kernel/cputable.c.

Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:44:06 +00:00
Mark Hambleton 60010e5081 arm64: cmpxchg: update macros to prevent warnings
Make sure the value we are going to return is referenced in order to
avoid warnings from newer GCCs such as:

arch/arm64/include/asm/cmpxchg.h:162:3: warning: value computed is not used [-Wunused-value]
  ((__typeof__(*(ptr)))__cmpxchg_mb((ptr),   \
   ^
net/netfilter/nf_conntrack_core.c:674:2: note: in expansion of macro ‘cmpxchg’
  cmpxchg(&nf_conntrack_hash_rnd, 0, rand);

[Modified to use the current underlying implementation as current
mainline for both cmpxchg() and cmpxchg_local() does -- broonie]

Signed-off-by: Mark Hambleton <mahamble@broadcom.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:44:05 +00:00
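
The usual cure for this class of warning is to route the result through a GNU
C statement expression: the computed value is assigned to a temporary, and the
caller may still discard it without tripping -Wunused-value the way a bare
cast of the call did. A self-contained illustration of the pattern (the toy
__cmpxchg_impl below is deliberately not atomic and is not the kernel's macro):

#include <stdio.h>

static int __cmpxchg_impl(int *ptr, int old, int new)
{
	/* stand-in for the real atomic; illustration only */
	int cur = *ptr;
	if (cur == old)
		*ptr = new;
	return cur;
}

/* statement expression: the value is __ret, which is always "used" */
#define cmpxchg(ptr, o, n) ({					\
	__typeof__(*(ptr)) __ret;				\
	__ret = __cmpxchg_impl((ptr), (o), (n));		\
	__ret;							\
})

int main(void)
{
	int x = 0;

	cmpxchg(&x, 0, 1);	/* result deliberately ignored, no warning */
	printf("x = %d\n", x);	/* prints 1 */
	return 0;
}
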
Sandeepa Prabhu ee6214cec7 arm64: support single-step and breakpoint handler hooks
AArch64 single-step and breakpoint debug exceptions will be
used by multiple debug frameworks such as kprobes and kgdb.

This patch implements the hooks for those frameworks to register
their own handlers for handling breakpoint and single step events.

Reworked the debug exception handler (do_dbg) in entry.S to route the
software breakpoint (BRK64) exception to do_debug_exception().

Signed-off-by: Sandeepa Prabhu <sandeepa.prabhu@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:43:11 +00:00
Konstantin Khlebnikov 26920dd2da ARM64: fix framepointer check in unwind_frame
We need at least 24 bytes above the frame pointer.

Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:43:10 +00:00
Konstantin Khlebnikov 408c3658b0 ARM64: check stack pointer in get_wchan
get_wchan() is lockless. The task may wake up at any time and change its own
stack, so each subsequent stack frame may be overwritten and filled with
random data.

Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:43:09 +00:00
Will Deacon 50afc33a90 arm64: kconfig: select HAVE_EFFICIENT_UNALIGNED_ACCESS
ARMv8 CPUs can perform efficient unaligned memory accesses in hardware,
and this feature is relied upon by code such as the dcache
word-at-a-time name hashing.

This patch selects HAVE_EFFICIENT_UNALIGNED_ACCESS for arm64.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:43:09 +00:00
Will Deacon 7bc13fd33a arm64: dcache: select DCACHE_WORD_ACCESS for little-endian CPUs
DCACHE_WORD_ACCESS uses the word-at-a-time API for optimised string
comparisons in the vfs layer.

This patch implements support for load_unaligned_zeropad in much the
same way as has been done for ARM, although only little-endian systems
are supported.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:43:08 +00:00
Will Deacon 4da7a56c59 arm64: futex: ensure .fixup entries are sufficiently aligned
AArch64 instructions must be 4-byte aligned, so make sure this is true
for the futex .fixup section.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:43:07 +00:00
Will Deacon 12a0ef7b0a arm64: use generic strnlen_user and strncpy_from_user functions
This patch implements the word-at-a-time interface for arm64 using the
same algorithm as ARM. We use the fls64 macro, which expands to a clz
instruction via a compiler builtin. Big-endian configurations make use
of the implementation from asm-generic.

With this implemented, we can replace our byte-at-a-time strnlen_user
and strncpy_from_user functions with the optimised generic versions.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:43:06 +00:00
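
The core trick behind such word-at-a-time scanning is detecting a zero byte in
a 64-bit word with a couple of arithmetic operations, then locating it with a
count-zeros instruction (the kernel's fls64/clz in the commit above). A
standalone little-endian illustration of the idea, not the kernel's
implementation:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Nonzero iff some byte of x is zero; the lowest set bit marks the first
 * zero byte when scanning in little-endian byte order.
 */
static uint64_t has_zero(uint64_t x)
{
	return (x - 0x0101010101010101ULL) & ~x & 0x8080808080808080ULL;
}

/* Index of the first zero byte, little-endian. */
static unsigned first_zero_byte(uint64_t mask)
{
	return (unsigned)(__builtin_ctzll(mask) >> 3);
}

int main(void)
{
	const char s[8] = "abc";	/* bytes: 'a' 'b' 'c' 0 0 0 0 0 */
	uint64_t w, m;

	memcpy(&w, s, 8);		/* conceptually: one aligned word load */
	m = has_zero(w);
	if (m)
		printf("strlen-like result: %u\n", first_zero_byte(m));
	return 0;
}
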
Will Deacon 7158627686 arm64: percpu: implement optimised pcpu access using tpidr_el1
This patch implements optimised percpu variable accesses using the
el1 r/w thread register (tpidr_el1) along the same lines as arch/arm/.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:43:06 +00:00
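
The gist is to stash the per-CPU offset in an otherwise unused system register
so that this_cpu accesses avoid the generic, slower lookup. A hedged,
kernel-context sketch of the accessors (close in spirit to, but not copied
from, the arm64 header):

static inline void set_my_cpu_offset(unsigned long off)
{
	/* tpidr_el1 is free for the kernel's own use at EL1 */
	asm volatile("msr tpidr_el1, %0" :: "r" (off));
}

static inline unsigned long __my_cpu_offset(void)
{
	unsigned long off;

	asm("mrs %0, tpidr_el1" : "=r" (off));
	return off;
}
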
Vinayak Kale 66aa8d6a14 arm64: perf: add support for percpu pmu interrupt
Add support for IRQ registration when the PMU interrupt is per-CPU.

Signed-off-by: Vinayak Kale <vkale@apm.com>
Signed-off-by: Tuan Phan <tphan@apm.com>
[will: tidied up cross-calling to pass &irq]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:43:05 +00:00
Mark Rutland 67ad461f73 arm64: vmlinux.lds.S: drop redundant .comment
We currently try to emit .comment twice, once in STABS_DEBUG, and once
in the line immediately following it. As the two section definitions are
identical, the latter is redundant and can be dropped.

This patch drops the redundant .comment section definition.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:43:04 +00:00
Mark Hambleton 1bb2cbb6a5 arm64: dts: Add a virtio disk to the RTSM motherboard
Describe the virtio device so we can mount disk images in the simulator.

[Reduced the size of the region based on feedback from review -- broonie]

Signed-off-by: Mark Hambleton <mahamble@broadcom.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:43:03 +00:00
Laura Abbott e26db3f3d9 arm64: Correct virt_addr_valid
The definition of virt_addr_valid is that it should
return true if and only if virt_to_page returns a valid pointer.
The current definition of virt_addr_valid only checks against the
virtual address range. There's no guarantee that just because a
virtual address falls between PAGE_OFFSET and high_memory the
associated physical memory has a valid backing struct page. Follow
the example of other architectures and convert to pfn_valid to
verify that the virtual address is actually valid.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19 17:43:02 +00:00
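
The resulting definition is essentially a pfn_valid() check on the translated
address. A hedged one-line sketch of the idea (kernel-context, illustrative
rather than a verbatim copy of the header):

/* valid iff the backing pfn has a struct page */
#define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
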
Lorenzo Pieralisi 1307220d7b arm64: add CPU power management menu/entries
This patch provides a menu for CPU power management options in the
arm64 Kconfig and adds an entry to enable the generic CPU idle configuration.

Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2013-12-16 17:17:37 +00:00
Lorenzo Pieralisi 166936bace arm64: kernel: add PM build infrastructure
This patch adds the required makefile and kconfig entries to enable PM
for arm64 systems.

The kernel relies on the cpu_{suspend}/{resume} infrastructure to
properly save the context for a CPU and put it to sleep, hence this
patch adds the config option required to enable the cpu_{suspend}/{resume}
API.

In order to rely on the CPU PM implementation for saving and restoring
the state of CPU subsystems like the GIC and PMU, the arch Kconfig must also
be augmented to select the CONFIG_CPU_PM option when the SUSPEND or CPU_IDLE
kernel implementations are selected.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2013-12-16 17:17:36 +00:00
Lorenzo Pieralisi b8824dfe1b arm64: kernel: add CPU idle call
When CPU idle is enabled, the architectural idle call should go through
the idle subsystem to allow CPUs to enter idle states defined
by the platform CPU idle back-end operations.

This patch, mirroring other architectures' behaviour, adds the CPU idle call
to the architectural arch_cpu_idle implementation for arm64.

Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2013-12-16 17:17:35 +00:00
Lorenzo Pieralisi 1f85008e74 arm64: enable generic clockevent broadcast
On platforms with power management capabilities, timers that are shut
down when a CPU enters deep C-states must be emulated using an always-on
timer and a timer IPI to relay the timer IRQ to target CPUs on an SMP
system.

This patch enables the generic clockevents broadcast infrastructure for
arm64, by providing the required Kconfig entries and adding the timer
IPI infrastructure.

Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2013-12-16 17:17:35 +00:00
Lorenzo Pieralisi 60fc6942f6 arm64: kernel: implement HW breakpoints CPU PM notifier
When a CPU is shut down, either through CPU idle or suspend to RAM, the
content of the HW breakpoint registers must be reset or restored to proper
values when the CPU resumes from low-power states. This patch adds debug
register restore operations to the HW breakpoint control function and
implements a CPU PM notifier that restores the content of the HW breakpoint
registers, allowing proper suspend/resume operations.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2013-12-16 17:17:34 +00:00
Lorenzo Pieralisi 2f04304587 arm64: kernel: refactor code to install/uninstall breakpoints
Most of the code executed to install and uninstall breakpoints is
common and can be factored out into a function that, through a runtime
operations type, provides the requested implementation.

This patch creates a common function that can be used to install/uninstall
breakpoints and defines the set of operations that can be carried out
through it.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2013-12-16 17:17:33 +00:00
Lorenzo Pieralisi 1fcf7ce0c6 arm: kvm: implement CPU PM notifier
Upon CPU shutdown and consequent warm reboot, the hypervisor CPU state
must be re-initialized. This patch implements a CPU PM notifier that,
upon warm boot, calls a KVM hook to properly reinitialize the hypervisor
state so that the CPU can be safely resumed.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2013-12-16 17:17:33 +00:00
Lorenzo Pieralisi fb1ab1ab38 arm64: kernel: implement fpsimd CPU PM notifier
When a CPU enters a low power state, its FP register content is lost.
This patch adds a notifier to save the FP context on CPU shutdown
and restore it on CPU resume. The context is saved and restored only
if the suspending thread is not a kernel thread, mirroring the current
context switch behaviour.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2013-12-16 17:17:32 +00:00
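
A hedged sketch of what such a notifier looks like; the hook names and the
kernel-thread test follow the description above, but the exact details
(additional PM events, error handling) are assumptions:

static int fpsimd_cpu_pm_notifier(struct notifier_block *self,
				  unsigned long cmd, void *v)
{
	switch (cmd) {
	case CPU_PM_ENTER:
		if (current->mm)	/* kernel threads carry no user FP state */
			fpsimd_save_state(&current->thread.fpsimd_state);
		break;
	case CPU_PM_EXIT:
		if (current->mm)
			fpsimd_load_state(&current->thread.fpsimd_state);
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block fpsimd_cpu_pm_notifier_block = {
	.notifier_call = fpsimd_cpu_pm_notifier,
};

/* registered once at boot: cpu_pm_register_notifier(&fpsimd_cpu_pm_notifier_block); */
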
Lorenzo Pieralisi 95322526ef arm64: kernel: cpu_{suspend/resume} implementation
Kernel subsystems like CPU idle and suspend to RAM require a generic
mechanism to suspend a processor, save its context and put it into
a quiescent state. The cpu_{suspend}/{resume} implementation provides
such a framework through a kernel interface that allows registers to be
saved/restored, the context to be flushed to DRAM, and the CPU to
suspend/resume to/from low-power states where processor context may be lost.

The CPU suspend implementation relies on the suspend protocol registered
in CPU operations to carry out a suspend request after context is
saved and flushed to DRAM. The cpu_suspend interface:

int cpu_suspend(unsigned long arg);

allows an opaque parameter to be passed that is handed over to the suspend
CPU operations back-end so that it can take action according to the
semantics attached to it. The arg parameter allows suspend to RAM and CPU
idle drivers to communicate with suspend protocol back-ends; it requires
standardization so that the interface can be reused seamlessly across
systems, paving the way for generic drivers.

Context memory is allocated on the stack, whose address is stashed in a
per-cpu variable to keep track of it and passed to core functions that
save/restore the registers required by the architecture.

Even though, upon successful execution, the cpu_suspend function shuts
down the suspending processor, the warm boot resume mechanism, based
on the cpu_resume function, makes the resume path operate as a
cpu_suspend function return, so that cpu_suspend can be treated as a C
function by the caller, which simplifies coding the PM drivers that rely
on the cpu_suspend API.

Upon context save, the minimal amount of memory is flushed to DRAM so
that it can be retrieved when the MMU is off and caches are not searched.

The suspend CPU operation, depending on the required operations (e.g. CPU vs
cluster shutdown), is in charge of flushing the cache hierarchy either
implicitly (by calling firmware implementations like PSCI) or explicitly
by executing the required cache maintenance functions.

Debug exceptions are disabled during cpu_{suspend}/{resume} operations
so that debug registers can be saved and restored properly, preventing
preemption by debug agents enabled in the kernel.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2013-12-16 17:17:31 +00:00
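
From a caller's perspective (e.g. a CPU idle back-end), the contract reads
roughly as sketched below; this is an illustration of the calling convention
described above, with a hypothetical function name, not actual driver code:

static int enter_deep_idle(int index)
{
	int ret;

	/*
	 * cpu_suspend() either fails immediately (the suspend protocol
	 * back-end refused the request) or "returns" on the resume path;
	 * the opaque argument is forwarded to that back-end.
	 */
	ret = cpu_suspend(index);
	if (ret)
		pr_err("suspend to state %d failed: %d\n", index, ret);

	return ret;
}
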
Lorenzo Pieralisi 6732bc65c2 arm64: kernel: suspend/resume registers save/restore
Power management software requires the kernel to save and restore
CPU registers while going through suspend and resume operations
triggered by kernel subsystems like CPU idle and suspend to RAM.

This patch implements code that provides a save and restore mechanism
for the ARMv8 implementation. Memory for the context is passed as a
parameter to both the cpu_do_suspend and cpu_do_resume functions, allowing
the callers to implement context allocation as they deem fit.

The registers that are saved and restored correspond to the register set
actually required by the kernel to be up and running, which represents a
subset of the v8 ISA.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2013-12-16 17:17:31 +00:00
Lorenzo Pieralisi 976d7d3f79 arm64: kernel: build MPIDR_EL1 hash function data structure
On ARM64 SMP systems, cores are identified by their MPIDR_EL1 register.
The MPIDR_EL1 guidelines in the ARM ARM do not provide strict enforcement of
the MPIDR_EL1 layout, only recommendations that, if followed, split the
MPIDR_EL1 on 64-bit ARM platforms into four affinity levels. In multi-cluster
systems like big.LITTLE, if the affinity guidelines are followed, the
MPIDR_EL1 cannot be considered a linear index. This means that the
association between a logical CPU in the kernel and the HW CPU identifier
becomes somewhat more complicated, requiring methods like hashing to
associate a given MPIDR_EL1 with a CPU logical index, in order for the look-up
to be carried out in an efficient and scalable way.

This patch provides a function in the kernel that, starting from the
cpu_logical_map, implements collision-free hashing of MPIDR_EL1 values by
checking all significant bits of the MPIDR_EL1 affinity level bitfields.
The hashing can then be carried out through bit shifting and ORing; the
resulting hash algorithm is a collision-free though not minimal hash that can
be executed with few assembly instructions. The MPIDR_EL1 is filtered through
an MPIDR mask that is built by checking all bits that toggle in the set of
MPIDR_EL1s corresponding to possible CPUs. Bits that do not toggle do not
carry information, so they do not contribute to the resulting hash.

Pseudo code:

/* check all bits that toggle, so they are required */
for (i = 1, mpidr_el1_mask = 0; i < num_possible_cpus(); i++)
	mpidr_el1_mask |= (cpu_logical_map(i) ^ cpu_logical_map(0));

/*
 * Build shifts to be applied to aff0, aff1, aff2, aff3 values to hash the
 * mpidr_el1
 * fls() returns the last bit set in a word, 0 if none
 * ffs() returns the first bit set in a word, 0 if none
 */
fs0 = mpidr_el1_mask[7:0] ? ffs(mpidr_el1_mask[7:0]) - 1 : 0;
fs1 = mpidr_el1_mask[15:8] ? ffs(mpidr_el1_mask[15:8]) - 1 : 0;
fs2 = mpidr_el1_mask[23:16] ? ffs(mpidr_el1_mask[23:16]) - 1 : 0;
fs3 = mpidr_el1_mask[39:32] ? ffs(mpidr_el1_mask[39:32]) - 1 : 0;
ls0 = fls(mpidr_el1_mask[7:0]);
ls1 = fls(mpidr_el1_mask[15:8]);
ls2 = fls(mpidr_el1_mask[23:16]);
ls3 = fls(mpidr_el1_mask[39:32]);
bits0 = ls0 - fs0;
bits1 = ls1 - fs1;
bits2 = ls2 - fs2;
bits3 = ls3 - fs3;
aff0_shift = fs0;
aff1_shift = 8 + fs1 - bits0;
aff2_shift = 16 + fs2 - (bits0 + bits1);
aff3_shift = 32 + fs3 - (bits0 + bits1 + bits2);
u32 hash(u64 mpidr_el1) {
	u32 l[4];
	u64 mpidr_el1_masked = mpidr_el1 & mpidr_el1_mask;
	l[0] = mpidr_el1_masked & 0xff;
	l[1] = mpidr_el1_masked & 0xff00;
	l[2] = mpidr_el1_masked & 0xff0000;
	l[3] = mpidr_el1_masked & 0xff00000000;
	return (l[0] >> aff0_shift | l[1] >> aff1_shift | l[2] >> aff2_shift |
		l[3] >> aff3_shift);
}

The hashing algorithm relies on the inherent properties set in the ARM ARM
recommendations for the MPIDR_EL1. Exotic configurations, where for instance
the MPIDR_EL1 values at a given affinity level have large holes, can end up
requiring big hash tables since the compression of values that can be achieved
through shifting is somewhat crippled when holes are present. The kernel warns if
the number of buckets of the resulting hash table exceeds the number of
possible CPUs by a factor of 4, which is a symptom of a very sparse HW
MPIDR_EL1 configuration.

The hash algorithm is quite simple and can easily be implemented in assembly
code, to be used in code paths where the kernel virtual address space is
not set up (i.e. cpu_resume) and instruction and data fetches are strongly
ordered, so code must be compact and must carry out few data accesses.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2013-12-16 17:17:30 +00:00
Lorenzo Pieralisi b058450f38 arm64: kernel: add MPIDR_EL1 accessors macros
In order to simplify access to the different affinity levels within the
MPIDR_EL1 register values, this patch implements some preprocessor
macros that allow the MPIDR_EL1 affinity level value to be retrieved
according to the level passed as an input parameter.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2013-12-16 17:17:29 +00:00
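
Conceptually the accessor just shifts the requested affinity field down and
masks it (8 bits each; Aff0-Aff2 sit at bits 0/8/16 and Aff3 at bit 32). A
standalone sketch with illustrative macro names, not necessarily the header's
exact definitions:

#include <stdint.h>
#include <stdio.h>

#define MPIDR_LEVEL_BITS	8
#define MPIDR_LEVEL_MASK	((1u << MPIDR_LEVEL_BITS) - 1)
/* levels 0, 1, 2 map to bit 0, 8, 16; level 3 maps to bit 32 */
#define MPIDR_LEVEL_SHIFT(level)	(((1 << (level)) >> 1) << 3)
#define MPIDR_AFFINITY_LEVEL(mpidr, level) \
	(((mpidr) >> MPIDR_LEVEL_SHIFT(level)) & MPIDR_LEVEL_MASK)

int main(void)
{
	uint64_t mpidr = 0x0000000100000201ULL;	/* Aff3=1 Aff2=0 Aff1=2 Aff0=1 */

	for (int level = 0; level < 4; level++)
		printf("Aff%d = %llu\n", level,
		       (unsigned long long)MPIDR_AFFINITY_LEVEL(mpidr, level));
	return 0;
}
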
Linus Torvalds 908bfda754 Merge branch 'x86/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Peter Anvin:
 "This is a pretty small batch:

  The biggest single change is to stop using EFI time services on 32-bit
  platforms.  This matches our current behavior on 64-bit platforms as
  we already had ruled them out there as being too unreliable.  Turns
  out that affects 32-bit platforms, too.

  One NULL pointer fix for SGI UV.

  Two minor build fixes, one of which only affects icc and the other
  which affects icc and future versions or nonstandard default settings
  of gcc"

* 'x86/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86, efi: Don't use (U)EFI time services on 32 bit
  x86, build, icc: Remove uninitialized_var() from compiler-intel.h
  x86/UV: Fix NULL pointer dereference in uv_flush_tlb_others() if the 'nobau' boot option is used
  x86, build: Pass in additional -mno-mmx, -mno-sse options
2013-12-15 11:52:47 -08:00
Linus Torvalds b2077ebc19 Merge branch 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm
Pull ARM fixes from Russell King:
 "This resolves some further issues with the dma mask changes on ARM
  which have been found by TI and others, and also some corner cases
  with the updates to the virtual to physical address translations.

  Konstantin also found some problems with the unwinder, which now
  performs tighter verification that the stack is valid while unwinding"

* 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
  ARM: fix asm/memory.h build error
  ARM: 7917/1: cacheflush: correctly limit range of memory region being flushed
  ARM: 7913/1: fix framepointer check in unwind_frame
  ARM: 7912/1: check stack pointer in get_wchan
  ARM: 7909/1: mm: Call setup_dma_zone() post early_paging_init()
  ARM: 7908/1: mm: Fix the arm_dma_limit calculation
  ARM: another fix for the DMA mapping checks
2013-12-13 16:16:03 -08:00
Linus Torvalds 2430cdd0fe ARC Fixes for 3.13
- Couple of fixes for recently added perf code
 - Build time extable sort
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.12 (GNU/Linux)
 
 iQIcBAABAgAGBQJSqqyMAAoJEGnX8d3iisJeTG4QAMUxMnqDHJL919gukLAoivom
 BdLdyPkHXECnwvu9G4kd4kOHvF37QUDSaIJYlgHNA7+vkZg2O9qPBWBAl5DQQ8BU
 nOQeurxnmNKvhBNcLJzRt+MF6J3TATV28sHB3TF5XSC/JV6yCdwhztBNUjeynRHT
 fDBjVyK5tdRCsdh1lRID/4cQW6SnNG4VPuyQHCRt+PZ84nE7AHKu5eYMkSnIpof5
 x4/y/kEYLtzuOfbjgze+ZZk9QlR+ymEVq+YSQsbGH8dM5curazGMh4lh2isa0nkJ
 G4ptA/E3XSvqNkwgNSYeDss8ugxvwHnjAufgSYOlBSZjfCWxwiA8UC1nA0eks4OW
 MBIwCZe6Qo8HyRfZWQgvNOjP81Q9LWfRNa7UObB2HcNvXDghuTmcmOjZJSheZWip
 KA7fuISnUz24mwdRlSMwfLjG5zh13GKphpb/PL79m+uzrVB8yHfJWg8nBU7y8Tfn
 j8BmyxS9cQQPN6lC2w0ESx4Fp891yR63KNKZq+MLZCj/4iP0h9s2ifL8o/xx03a0
 WgqNZJaXYnssqsZAd1BhnV7Oma/OJmrwm7LgWVxAr01FvjONAh/bd3LJR0G2Nksy
 PMJI0NnVWrHrso9BeWQ4f0L//tamtmkBqrTjXxgrgQisuxntxdhe16xa0FhsrmIi
 B/wfllRLceFyqT78GvNr
 =JA1e
 -----END PGP SIGNATURE-----

Merge tag 'arc-fixes-for-3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc

Pull ARC fixes from Vineet Gupta:
 "These are couple of weeks old already, but I just couldn't get them to
  you earlier.

   - couple of fixes for recently added perf code
   - build time extable sort"

* tag 'arc-fixes-for-3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
  ARC: [perf] Fix a few thinkos
  ARC: Add guard macro to uapi/asm/unistd.h
  ARC: extable: Enable sorting at build time
2013-12-13 16:14:39 -08:00
Russell King b713aa0b15 ARM: fix asm/memory.h build error
Jason Gunthorpe reports a build failure when ARM_PATCH_PHYS_VIRT is
not defined:

In file included from arch/arm/include/asm/page.h:163:0,
                 from include/linux/mm_types.h:16,
                 from include/linux/sched.h:24,
                 from arch/arm/kernel/asm-offsets.c:13:
arch/arm/include/asm/memory.h: In function '__virt_to_phys':
arch/arm/include/asm/memory.h:244:40: error: 'PHYS_OFFSET' undeclared (first use in this function)
arch/arm/include/asm/memory.h:244:40: note: each undeclared identifier is reported only once for each function it appears in
arch/arm/include/asm/memory.h: In function '__phys_to_virt':
arch/arm/include/asm/memory.h:249:13: error: 'PHYS_OFFSET' undeclared (first use in this function)

Fixes: ca5a45c06c ("ARM: mm: use phys_addr_t appropriately in p2v and v2p conversions")
Tested-By: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-12-13 20:25:30 +00:00
Linus Torvalds 54fb723cc4 Four security fixes for KVM on x86. Thanks to Andrew Honig and Lars Bull
from Google for reporting them.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQIcBAABAgAGBQJSqi7GAAoJEBvWZb6bTYbyy94P/jdBo/J+4zxujJNDfw9D15xP
 81/ByzZ1qxAZhrKKCqlOMWEYIOhEV6sjoJayMMIPkV0i9aYfOl3N4OUTGx8xuDhl
 eIIQDRQdnFmqi69R2inBTxFYb8uGngsJwGF0iuiIImg/gJvoIAfywFADFPPUbtRP
 BQQ69IHSCR/rblGVW3hyio7Y/dFtE4dqNYKTH7pamkSVdCz4j3FdVPz+COcXMsc+
 wOhphbe0zRnrq8MmwsqMXKefSJtihD34wx+M85tiltGKXx4Jumi3eQcfFTnMCbH1
 loA6fGLztXuyul5kpkaLdvoYgvxZDueZ7pO0OO1Wqh60T6OyDRqc/jKohdbzI/g3
 /2OCZ7P8yHgxJb1tLAZBr3aWwCQtRhlF8O6eP+bBPQo8Di5Z6xYHDVggvLpHCE7f
 KRQy1V1ooXbZ1UoytqA0QauCXURUb1jC+tzuZvZzcJN6oFojY8ojL1oVLlW0iDt6
 WYzS6YAmIo5jeJ2qvP42dLG8n4kijkQ1gQgBsI8rfsDOYGXJe8TWu7O2aD1rs8Jz
 d7aPgL+zz8K7wwZgG+U2PTjzkDOuyjRbhNEi7jrCVio6hxvvdQARiLsi+0Q+QUjF
 Xk0iiSsseCBcFWj6sDnTPn10YnnXyIj6eDM1OImdd+/2VVqnUIiqzwpUsNr3yzVc
 a+bZbYEsCUP0MwqlmCcA
 =auYv
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm fixes from Paolo Bonzini:
 "Four security fixes for KVM on x86.  Thanks to Andrew Honig and Lars
  Bull from Google for reporting them"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: x86: fix guest-initiated crash with x2apic (CVE-2013-6376)
  KVM: x86: Convert vapic synchronization to _cached functions (CVE-2013-6368)
  KVM: x86: Fix potential divide by 0 in lapic (CVE-2013-6367)
  KVM: Improve create VCPU parameter (CVE-2013-4587)
2013-12-12 15:46:06 -08:00
Linus Torvalds ea1e61cbb9 ARM: SoC fixes for 3.13-rc
Another week, another batch of fixes.
 
 Again, OMAP regressions due to move to DT is the bulk of the changes here,
 but this should be the last of it for 3.13. There are also a handful of
 OMAP hwmod changes (power management, reset handling) for USB on OMAP3
 that fixes some longish-standing bugs around USB resets.
 
 There are a couple of other changes that also add up line count a bit:
 One is a long-standing bug with the keyboard layout on one of the
 PXA platforms. The other is a fix for highbank that moves their
 power-off/reset button handling to be done in-kernel since relying on
 userspace to handle it was fragile and awkward.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.11 (GNU/Linux)
 
 iQIcBAABAgAGBQJSqRaIAAoJEIwa5zzehBx36e0P/35YmpsILX1FR1MvAv2DjB1m
 cRrUrdGI2hPkT7aIE35QPJF3GEvNm//4hHOK/r9+BkQSRecDmJaY6E6NvEo4mPRz
 at+R9JpddInbthMc1AylnlRJOxl5TbFFx0MJr/cXB8KAXN0iYu9h3brZyDHz7Wkg
 3hqQ+4ZuQwXQmmNJEftPxnXCQAZLiU3hSMYPCmJ71YEB9oKBJoNsJNsMNDRQWu/d
 VCYbGlnzCuVaOvHm0/KHUQHKOS2K28vT9goCyh3f+Vbt5n4HNb6SicXTo2f3pY30
 N1ThifxRuGEYhQMzlq6AWnFaLkDqivBq6V2P0tG+JJOY1Z4HkqGsAr2SDylOkTy0
 rqMCRA12PPw79W57LwMT8Fzokuq5CLBT+sahqUuuV4l7C9Sdnzr8ZbQn7s2YeW52
 lqG4rk+t2muMdqHmTmglb86nMiBr6raOLwDGVt8Ttgnryjl/au64FoZx7UsWMEDg
 /ppkKvAmeL+f8Fde+JQTFpJdxaUHKc/NgjHYFxpt8Ef46CRiOcCAh08VD2oUWjso
 JKwb03axdHaJVgFm/KKwQ8uoNG0ouxkw9aCLjkrMYda7MRCpBAYPdQPBaMXziaAU
 acjouVrxxNfKLH2h3+brmFhMO5yUZYUbQs4BOi+Z0w1BJLxouEMrJ97ZxeFxBRug
 J4i4tk7d//YDriS5HoYx
 =WRn8
 -----END PGP SIGNATURE-----

Merge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC fixes from Olof Johansson:
 "Another week, another batch of fixes.

  Again, OMAP regressions due to move to DT is the bulk of the changes
  here, but this should be the last of it for 3.13.  There are also a
  handful of OMAP hwmod changes (power management, reset handling) for
  USB on OMAP3 that fixes some longish-standing bugs around USB resets.

  There are a couple of other changes that also add up line count a bit:
  One is a long-standing bug with the keyboard layout on one of the PXA
  platforms.  The other is a fix for highbank that moves their
  power-off/reset button handling to be done in-kernel since relying on
  userspace to handle it was fragile and awkward"

* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
  ARM: sun6i: dt: Fix interrupt trigger types
  ARM: sun7i: dt: Fix interrupt trigger types
  MAINTAINERS: merge IMX6 entry into IMX
  ARM: tegra: add missing break to fuse initialization code
  ARM: pxa: prevent PXA270 occasional reboot freezes
  ARM: pxa: tosa: fix keys mapping
  ARM: OMAP2+: omap_device: add fail hook for runtime_pm when bad data is detected
  ARM: OMAP2+: hwmod: Fix usage of invalid iclk / oclk when clock node is not present
  ARM: OMAP3: hwmod data: Don't prevent RESET of USB Host module
  ARM: OMAP2+: hwmod: Fix SOFTRESET logic
  ARM: OMAP4+: hwmod data: Don't prevent RESET of USB Host module
  ARM: dts: Fix booting for secure omaps
  ARM: OMAP2+: Fix the machine entry for am3517
  ARM: dts: Fix missing entries for am3517
  ARM: OMAP2+: Fix overwriting hwmod data with data from device tree
  ARM: davinci: Fix McASP mem resource names
  ARM: highbank: handle soft poweroff and reset key events
  ARM: davinci: fix number of resources passed to davinci_gpio_register()
  gpio: davinci: fix check for unbanked gpio
2013-12-12 15:45:03 -08:00
Gleb Natapov 17d68b763f KVM: x86: fix guest-initiated crash with x2apic (CVE-2013-6376)
A guest can cause a BUG_ON() leading to a host kernel crash.
When the guest writes to the ICR to request an IPI while in x2apic
mode, the destination is read from ICR2, which is a register that the
guest can control.

kvm_irq_delivery_to_apic_fast uses the high 16 bits of ICR2 as the
cluster id.  A BUG_ON is triggered, which is a protection against
accessing map->logical_map with an out-of-bounds index, and it manages
to prevent anything really unsafe from occurring.

The logic in the code is correct from a real HW point of view. The problem
is that KVM supports only one cluster with ID 0 in clustered mode, but
the code that has the bug does not take this into account.

Reported-by: Lars Bull <larsbull@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-12-12 22:46:18 +01:00
Andy Honig fda4e2e855 KVM: x86: Convert vapic synchronization to _cached functions (CVE-2013-6368)
In kvm_lapic_sync_from_vapic and kvm_lapic_sync_to_vapic there is the
potential to corrupt kernel memory if userspace provides an address that
is at the end of a page.  This patch converts those functions to use
kvm_write_guest_cached and kvm_read_guest_cached.  It also checks the
vapic_address specified by userspace during ioctl processing and returns
an error to userspace if the address is not a valid GPA.

This is generally not guest triggerable, because the required write is
done by firmware that runs before the guest.  Also, it only affects AMD
processors and oldish Intel that do not have the FlexPriority feature
(unless you disable FlexPriority, of course; then newer processors are
also affected).

Fixes: b93463aa59 ('KVM: Accelerated apic support')

Reported-by: Andrew Honig <ahonig@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Honig <ahonig@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-12-12 22:39:46 +01:00
Andy Honig b963a22e6d KVM: x86: Fix potential divide by 0 in lapic (CVE-2013-6367)
Under guest-controllable circumstances apic_get_tmcct will execute a
divide by zero and cause a crash.  If the guest CPUID supports
TSC deadline timers and the guest performs the following sequence of
requests, the host will crash.
- Set the mode to periodic
- Set the TMICT to 0
- Set the mode bits to 11 (neither periodic, nor one shot, nor tsc deadline)
- Set the TMICT to non-zero.
Then the lapic_timer.period will be 0, but the TMICT will not be.  If the
guest then reads from the TMCCT then the host will perform a divide by 0.

This patch ensures that if the lapic_timer.period is 0, then the division
does not occur.

Reported-by: Andrew Honig <ahonig@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Honig <ahonig@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-12-12 22:39:45 +01:00
Maxime Ripard 6f97dc8d46 ARM: sun6i: dt: Fix interrupt trigger types
The Allwinner A31 uses the ARM GIC as its internal interrupt controller. The
GIC supports several interrupt trigger types, and the A31 DT was setting it up
to use a rising-edge trigger, while the interrupt is actually level-high,
leading to some interrupts being completely ignored if the edge was missed.

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Hans de Goede <hdegoede@redhat.com>
Cc: stable@vger.kernel.org # 3.12+
Signed-off-by: Olof Johansson <olof@lixom.net>
2013-12-11 17:15:24 -08:00