Commit Graph

10 Commits

Sarbojit Ganguly e8973a889e ARM: 8443/1: Adding support for atomic half word exchange
Support for half-word atomic exchange was missing, and qspinlock on ARM
requires it, so __xchg() is modified to add it. ARMv6 and lower do not
support ldrex{b,h}, so guard code is added to prevent build breaks
(a sketch of the new case follows this entry).

Signed-off-by: Sarbojit Ganguly <ganguly.s@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-10-09 16:22:54 +01:00
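
A minimal sketch of the new case, paraphrased from the patch. The helper
name xchg_u16 is illustrative (in the kernel this body sits inside the
size switch in __xchg()); it assumes ARMv6K or later, where the
half-word exclusives exist.

#if !defined(CONFIG_CPU_V6)	/* MIN ARCH >= V6K: plain V6 lacks ldrexh/strexh */
static inline unsigned short xchg_u16(volatile unsigned short *ptr,
				      unsigned short x)
{
	unsigned short ret;
	unsigned long tmp;	/* strexh status flag */

	asm volatile("@ __xchg2\n"
	"1:	ldrexh	%0, [%3]\n"	/* ret = *ptr, load-exclusive     */
	"	strexh	%1, %2, [%3]\n"	/* try *ptr = x, store-exclusive  */
	"	teq	%1, #0\n"	/* did the store succeed?         */
	"	bne	1b"		/* no: another CPU raced us, retry */
		: "=&r" (ret), "=&r" (tmp)
		: "r" (x), "r" (ptr)
		: "memory", "cc");

	return ret;
}
#endif
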
Will Deacon 0ca326de7a locking, ARM, atomics: Define our SMP atomics in terms of _relaxed() operations
By defining our SMP atomics in terms of relaxed operations, we gain
a small reduction in code size and have acquire/release/fence variants
generated automatically by the core code.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman.Long@hp.com
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1438880084-18856-9-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-12 11:59:10 +02:00
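
The generation scheme this relies on, as it appears (simplified) in
include/linux/atomic.h of this era: when an architecture supplies only
the _relaxed form of an operation, the core code wraps it to produce
the acquire, release and fully ordered variants.

#define __atomic_op_acquire(op, args...)				\
({									\
	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
	smp_mb__after_atomic();						\
	__ret;								\
})

#define __atomic_op_release(op, args...)				\
({									\
	smp_mb__before_atomic();					\
	op##_relaxed(args);						\
})

#define __atomic_op_fence(op, args...)					\
({									\
	typeof(op##_relaxed(args)) __ret;				\
	smp_mb__before_atomic();					\
	__ret = op##_relaxed(args);					\
	smp_mb__after_atomic();						\
	__ret;								\
})

/* e.g. when the arch defines only atomic_add_return_relaxed(): */
#define atomic_add_return(...)						\
	__atomic_op_fence(atomic_add_return, __VA_ARGS__)
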
Russell King e001bbae71 ARM: cmpxchg: avoid warnings from macro-ized cmpxchg() implementations
A recent change in kernel/acct.c added a new warning for many
configurations on ARM:

kernel/acct.c: In function 'acct_pin_kill':
arch/arm/include/asm/cmpxchg.h:122:3: warning: value computed is not used [-Wunused-value]

The code is in fact correct: it is simply a cmpxchg() call that
intentionally ignores the result, and no other caller does that, which
is why the warning had not appeared before. The warning does not show
up on x86 because of the way its cmpxchg() macro is written. This
changes the ARM implementation to use a similar construct, a compound
(statement) expression instead of a typecast, so the compiler does not
complain about an unused result (see the before/after sketch below).

Fix the other macros in this file in a similar way, and place them
just below their function implementations.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-06-02 09:58:20 +01:00
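
The shape of the fix, paraphrased from the patch: the bare cast
expression becomes a GNU statement expression, whose unused value GCC
does not warn about.

/* Before: a bare cast expression; if the caller discards the result,
 * GCC emits "value computed is not used" (-Wunused-value). */
#define cmpxchg(ptr, o, n)						\
	((__typeof__(*(ptr)))__cmpxchg_mb((ptr), (unsigned long)(o),	\
					  (unsigned long)(n),		\
					  sizeof(*(ptr))))

#undef cmpxchg

/* After: the same cast wrapped in a statement expression ({ ... });
 * an unused statement-expression value does not trigger the warning. */
#define cmpxchg(ptr, o, n) ({						\
	(__typeof__(*(ptr)))__cmpxchg_mb((ptr), (unsigned long)(o),	\
					 (unsigned long)(n),		\
					 sizeof(*(ptr)));		\
})
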
Russell King 31cd08c3a1 ARM: remove __bad_xchg definition
We want a link error if xchg() is called on a variable whose size we do
not support (a sketch of the technique follows this entry).

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-06-02 09:58:19 +01:00
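
The technique, sketched outside the kernel: __bad_xchg() keeps its
declaration but loses its definition, so a call to it survives only on
the unsupported-size path and fails at the link stage rather than at
run time. __xchg_sketch and the builtin stand-in are illustrative, not
the kernel's code.

/* Declared, but defined nowhere. */
extern void __bad_xchg(volatile void *ptr, int size);

static inline unsigned long __xchg_sketch(unsigned long x,
					  volatile void *ptr, int size)
{
	unsigned long ret;

	switch (size) {
	case 4:
		/* the real code uses an ldrex/strex loop here */
		ret = __sync_lock_test_and_set(
				(volatile unsigned long *)ptr, x);
		break;
	default:
		/* only reached for an unsupported size; the call then
		 * survives into the object file and the build fails at
		 * link time with "undefined reference to __bad_xchg" */
		__bad_xchg(ptr, size), ret = 0;
		break;
	}

	return ret;
}
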
Will Deacon c32ffce0f6 ARM: 7984/1: prefetch: add prefetchw invocations for barriered atomics
After a bunch of benchmarking on the interaction between dmb and pldw,
it turns out that issuing the pldw *after* the dmb instruction can
give modest performance gains (~3% atomic_add_return improvement on a
dual A15).

This patch adds prefetchw invocations to our barriered atomic
operations, including cmpxchg, test_and_xxx and futexes.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-02-25 11:30:20 +00:00
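
A sketch of the resulting ordering, close to the ARM atomic_add_return()
of this period (needs <linux/prefetch.h> for prefetchw()): the write
prefetch is issued after the barrier, so the cache line is being fetched
for exclusive ownership while the ldrex/strex loop is entered.

static inline int atomic_add_return(int i, atomic_t *v)
{
	unsigned long tmp;
	int result;

	smp_mb();			/* dmb */
	prefetchw(&v->counter);		/* pldw, issued after the dmb */

	__asm__ __volatile__("@ atomic_add_return\n"
"1:	ldrex	%0, [%3]\n"		/* result = v->counter         */
"	add	%0, %0, %4\n"		/* result += i                 */
"	strex	%1, %0, [%3]\n"		/* try v->counter = result     */
"	teq	%1, #0\n"
"	bne	1b"			/* retry if the store failed   */
	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
	: "r" (&v->counter), "Ir" (i)
	: "cc");

	smp_mb();			/* dmb */

	return result;
}
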
Will Deacon 775ebcc16b ARM: 7853/1: cmpxchg: implement cmpxchg64_relaxed
This patch introduces cmpxchg64_relaxed for arm, which performs a 64-bit
cmpxchg operation without barrier semantics. cmpxchg64_local is updated
to use the new operation.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-29 11:06:09 +00:00
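
Paraphrasing the definitions this adds: cmpxchg64_relaxed() invokes the
barrier-less __cmpxchg64() (introduced by 7852/1 below) directly, and
cmpxchg64_local() becomes a trivial alias for it.

#define cmpxchg64_relaxed(ptr, o, n)					\
	((__typeof__(*(ptr)))__cmpxchg64((ptr),				\
					 (unsigned long long)(o),	\
					 (unsigned long long)(n)))

#define cmpxchg64_local(ptr, o, n)	cmpxchg64_relaxed((ptr), (o), (n))
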
Will Deacon 2523c67bb6 ARM: 7852/1: cmpxchg: implement barrier-less cmpxchg64_local
Our cmpxchg64 macros are wrappers around atomic64_cmpxchg. Whilst this is
great for code re-use, there is a case for barrier-less cmpxchg where it
is known to be safe (for example cmpxchg64_local and cmpxchg-based
lockrefs).

This patch introduces a 64-bit cmpxchg implementation specifically
for the cmpxchg64_* macros, so that it can be later used by the lockref
code.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-29 11:06:06 +00:00
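
The barrier-less 64-bit primitive, close to what the patch adds: an
ldrexd/strexd loop with no dmb on either side, so callers that need
ordering must wrap it in barriers themselves.

static inline unsigned long long __cmpxchg64(unsigned long long *ptr,
					     unsigned long long old,
					     unsigned long long new)
{
	unsigned long long oldval;
	unsigned long res;

	__asm__ __volatile__(
"1:	ldrexd		%1, %H1, [%3]\n"	/* oldval = *ptr (64-bit)  */
"	teq		%1, %4\n"		/* low words equal?        */
"	teqeq		%H1, %H4\n"		/* ...and high words?      */
"	bne		2f\n"			/* no: fail, return oldval */
"	strexd		%0, %5, %H5, [%3]\n"	/* try *ptr = new          */
"	teq		%0, #0\n"
"	bne		1b\n"			/* store lost a race: retry */
"2:"
	: "=&r" (res), "=&r" (oldval), "+Qo" (*ptr)
	: "r" (ptr), "r" (old), "r" (new)
	: "cc");

	return oldval;
}
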
Jaccon Bastiaansen 6eabb3301b ARM: 7720/1: ARM v6/v7 cmpxchg64 shouldn't clear upper 32 bits of the old/new value
The implementation of cmpxchg64() for the ARM v6 and v7 architectures
casts parameters 2 and 3 (the old and new 64-bit values) to an unsigned
long before calling the atomic64_cmpxchg() function. This clears
the top 32 bits of the old and new values, resulting in the wrong
values being compare-exchanged. Luckily, this only appears to be used
for the 64-bit sched_clock, which we don't (yet) have on ARM.

This bug was introduced by commit 3e0f5a15f5 ("ARM: 7404/1: cmpxchg64:
use atomic64 and local64 routines for cmpxchg64").

Cc: <stable@vger.kernel.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jaccon Bastiaansen <jaccon.bastiaansen@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-05-13 23:42:24 +01:00
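
A standalone (userspace) illustration of the truncation, since the
mechanism is plain C: on a 32-bit target unsigned long is 32 bits wide,
so the cast folds distinct 64-bit values together before they ever
reach the compare-exchange.

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	uint64_t old = 0x100000001ULL;		/* upper word set */
	uint32_t narrowed = (uint32_t)old;	/* what the (unsigned long)
						   cast did on 32-bit ARM */

	printf("old      = 0x%016" PRIx64 "\n", old);
	printf("narrowed = 0x%08" PRIx32 "\n", narrowed);
	/* prints 0x00000001: the upper word is silently gone */
	return 0;
}
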
Will Deacon 3e0f5a15f5 ARM: 7404/1: cmpxchg64: use atomic64 and local64 routines for cmpxchg64
The cmpxchg64 routines for ARMv6+ CPUs replicate inline assembly that
already exists for the atomic64 operations. Furthermore, the cmpxchg64
code uses a blanket "memory" clobber rather than identifying the region
of memory that is actually modified.

This patch replaces the ARMv6+ cmpxchg64 code with macros that expand to
the atomic64_ and local64_ variants, casting the pointer parameter to
the appropriate container type.

Cc: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-04-28 17:32:44 +01:00
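
The macros as introduced, paraphrased: container_of() reinterprets the
bare 64-bit pointer as a pointer to the atomic64_t or local64_t that
contains it, so the existing atomic64_/local64_ bodies are reused. Note
that the (unsigned long) value casts shown here are the ones later
corrected by 7720/1 above.

#define cmpxchg64(ptr, o, n)						\
	((__typeof__(*(ptr)))atomic64_cmpxchg(				\
		container_of((ptr), atomic64_t, counter),		\
		(unsigned long)(o), (unsigned long)(n)))

#define cmpxchg64_local(ptr, o, n)					\
	((__typeof__(*(ptr)))local64_cmpxchg(				\
		container_of((ptr), local64_t, a),			\
		(unsigned long)(o), (unsigned long)(n)))
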
David Howells 9f97da78bf Disintegrate asm/system.h for ARM
Disintegrate asm/system.h for ARM, splitting its contents out into
smaller single-topic headers (this is where asm/cmpxchg.h was created
as a standalone file).

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Russell King <linux@arm.linux.org.uk>
cc: linux-arm-kernel@lists.infradead.org
2012-03-28 18:30:01 +01:00