On revisions of the Cortex-A9 prior to r2p0, the Store Buffer does not
have any automatic draining mechanism and therefore a livelock may occur
if an external agent continuously polls a memory location waiting to
observe an update.
This workaround defines cpu_relax() as smp_mb(), so that correctly
written polling loops can no longer prevent their pending updates from
becoming visible to memory.
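As an illustration only (this is not code from the patch and the
variable names are made up), consider a CPU that posts a request and
then spins waiting for an acknowledgement from the external agent:

  static volatile int request, ack;    /* shared with the external agent */

  static void post_and_wait(void)
  {
          request = 1;                  /* store may sit in the Store Buffer */
          while (!ack)
                  cpu_relax();          /* now smp_mb(): drains the buffer   */
  }

Without the barrier, nothing in the loop ever forces the earlier store
to drain, so the agent polling 'request' may never observe the update.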
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
PTRACE_SINGLESTEP is a ptrace request designed to offer single-stepping
support to userspace when the underlying architecture has hardware
support for this operation.
On ARM, we define arch_has_single_step() to 1 and attempt to emulate
hardware single-stepping by disassembling the current instruction to
determine the next pc and placing a software breakpoint at that location.
Unfortunately this has the following problems:
1.) Only a subset of the ARMv7 instruction set is supported
2.) Thumb-2 is unsupported
3.) The code is not SMP safe
We could try to fix this code, but it turns out that because of the above
issues it is rarely used in practice. GDB, for example, uses PTRACE_POKETEXT
and PTRACE_PEEKTEXT to manage breakpoints itself and does not require any
kernel assistance.
This patch removes the single-step emulation code from ptrace, meaning that
the PTRACE_SINGLESTEP request will return -EIO on ARM. Portable code must
check the return value from a ptrace call and handle the failure gracefully.
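For example, a portable debugger might probe for single-step support
like this (hedged sketch; the helper name is made up):

  #include <errno.h>
  #include <sys/ptrace.h>
  #include <sys/types.h>

  /* Returns 0 if the single-step request was accepted, 1 if it is
   * unsupported (as on ARM, where PTRACE_SINGLESTEP now yields EIO) and
   * the caller should fall back to software breakpoints, -1 on error. */
  static int try_singlestep(pid_t child)
  {
          if (ptrace(PTRACE_SINGLESTEP, child, 0, 0) == 0)
                  return 0;
          return errno == EIO ? 1 : -1;
  }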
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
For debuggers to take advantage of the hw-breakpoint framework in the kernel,
it is necessary to expose the API calls via a ptrace interface.
This patch exposes the hardware breakpoints framework as a collection of
virtual registers, accessible using PTRACE_SETHBPREGS and PTRACE_GETHBPREGS
requests. The breakpoints are stored in the debug_info struct of the running
thread.
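A hedged sketch of how a debugger might use the new requests follows;
the convention assumed here (the ptrace 'addr' argument selects a
virtual register number) should be checked against the patch itself:

  #include <sys/ptrace.h>
  #include <sys/types.h>

  #ifndef PTRACE_GETHBPREGS
  #define PTRACE_GETHBPREGS 29    /* request number from ARM asm/ptrace.h */
  #endif

  /* Read one virtual hw-breakpoint register of a traced thread. */
  static long read_hbp_reg(pid_t child, long regno, unsigned long *val)
  {
          return ptrace(PTRACE_GETHBPREGS, child,
                        (void *)regno, (void *)val);
  }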
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: S. Karthikeyan <informkarthik@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Linux expects that if a CPU modifies a memory location, then that
modification will eventually become visible to other CPUs in the system.
On an ARM11MPCore processor, loads are prioritised over stores so it is
possible for a store operation to be postponed if a polling loop immediately
follows it. If the variable being polled indirectly depends on the outstanding
store [for example, another CPU may be polling the variable that is pending
modification] then there is the potential for deadlock if interrupts are
disabled. This deadlock occurs in the KGDB testsuite when executing on an
SMP ARM11MPCore configuration.
This patch changes the definition of cpu_relax() to smp_mb() for ARMv6 cores,
forcing a flushing of the write buffer on SMP systems before the next load
takes place. If the kernel is not compiled for SMP support, this will expand
to a barrier() as before.
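A minimal sketch of the resulting definition in asm/processor.h
(reconstructed, so the exact conditionals may differ from the patch):

  #if __LINUX_ARM_ARCH__ == 6
  #define cpu_relax()     smp_mb()       /* drains the write buffer on SMP */
  #else
  #define cpu_relax()     barrier()
  #endif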
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Starting with ARMv6, the CPUs support the BE-8 variant of big-endian
(byte-invariant). This patch adds the core support:
- setting of the BE-8 mode via the CPSR.E bit for both kernel and
user threads
- big-endian page table walking
- REV used to byte-swap instructions read from memory during fault
processing, as they are still in little-endian format (see the sketch
after this list)
- Kconfig and Makefile support for BE-8. The --be8 option must be passed
to the final linking stage to convert the instructions to
little-endian
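A hedged sketch of the instruction fix-up idea (the helper name is made
up and the config symbol is assumed from this series; swab32() compiles
down to a REV instruction on ARMv6 and later):

  #include <linux/swab.h>
  #include <linux/types.h>

  /* In BE-8 mode data accesses are big-endian but instructions stay
   * little-endian in memory, so an opcode fetched with an ordinary data
   * load must be byte-reversed before it is decoded. */
  static inline u32 read_insn_be8(const u32 *pc)
  {
  #ifdef CONFIG_CPU_ENDIAN_BE8
          return swab32(*pc);
  #else
          return *pc;
  #endif
  }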
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit 8ec53663d2 ("[ARM] Improve
non-executable support") added support for detecting non-executable
stack binaries. One of the things it does is to set READ_IMPLIES_EXEC
in ->personality if we are running on a CPU that doesn't support
the XN ("Execute Never") page table bit or if we are running a binary
that needs an executable stack.
This exposed a latent bug in ARM's asm/processor.h due to which we'll
end up placing the stack at a very low address, where it will bump into
the heap on any application that uses a significant amount of stack,
heap, or both, causing many interesting crashes.
Fix this by testing the ADDR_LIMIT_32BIT bit in ->personality instead
of testing for equality against PER_LINUX_32BIT.
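The change amounts to something like this (a reconstructed sketch of
the STACK_TOP definition in asm/processor.h, not a verbatim diff):

  /* Previously: (current->personality == PER_LINUX_32BIT), which fails
   * as soon as any extra flag such as READ_IMPLIES_EXEC is set and then
   * silently selects the low 26-bit stack top. */
  #define STACK_TOP       ((current->personality & ADDR_LIMIT_32BIT) ? \
                           TASK_SIZE : TASK_SIZE_26)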
Reviewed-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As suggested by Andrew Morton, remove memzero() - it's not supported
on other architectures so use of it is a potential build-breaking bug.
Since the compiler optimizes memset(x,0,n) to __memzero() perfectly
well, we don't miss out on the underlying benefits of memzero().
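The conversion for callers is mechanical; an illustrative example (the
function and buffer names are made up):

  #include <linux/string.h>

  static void clear_buffer(char *buf, size_t len)
  {
          memset(buf, 0, len);    /* was: memzero(buf, len); */
  }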
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
With gcc 4.3 and later, a pointer that has already been dereferenced is
assumed not to be null since it should have caused a segmentation fault
otherwise, hence any subsequent test against NULL is optimized away.
The current inline asm constraint used in the implementation of prefetch()
makes gcc believe that the pointer is dereferenced, even though the PLD
instruction does not load any data and does not cause a segmentation
fault on null pointers. This causes all sorts of interesting results
when reaching the end of a linked list, for example.
Let's use a better constraint to properly represent the actual usage of
the pointer value.
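A hedged reconstruction of the fix (the exact operand template should
be checked against the patch itself):

  /* The old operand, "o" (*(char *)ptr), told gcc that *ptr is read, so
   * a later NULL test on ptr could be optimized away.  The "p"
   * constraint with the %a0 address template only tells gcc the value
   * is used as an address. */
  static inline void prefetch(const void *ptr)
  {
          __asm__ __volatile__("pld\t%a0" :: "p" (ptr));
  }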
Problem reported by Chris Steel.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move platform-independent header files to arch/arm/include/asm, leaving
those in asm/arch* and asm/plat* alone.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>