linux/arch/arm/common/vlock.S

ARM: mcpm: Add baremetal voting mutexes (committed 2012-08-17 23:07:01 +08:00)

This patch adds a simple low-level voting mutex implementation to be used
to arbitrate during first man selection when no load/store exclusive
instructions are usable.

For want of a better name, these are called "vlocks".  (I was tempted to
call them ballot locks, but "block" is way too confusing an
abbreviation...)

There is no function to wait for the lock to be released, and no
vlock_lock() function since we don't need these at the moment.  These
could straightforwardly be added if vlocks get used for other purposes.

For architectural correctness even Strongly-Ordered memory accesses
require barriers in order to guarantee that multiple CPUs have a coherent
view of the ordering of memory accesses.  Whether or not this matters
depends on hardware implementation details of the memory system.  Since
the purpose of this code is to provide a clean, generic locking mechanism
with no platform-specific dependencies, the barriers should be present to
avoid unpleasant surprises on future platforms.

Note:

  * When taking the lock, we don't care about implicit background memory
    operations and other signalling which may be pending, because those
    are not part of the critical section anyway.  A DMB is sufficient to
    ensure correctly observed ordering of the explicit memory accesses in
    vlock_trylock.

  * No barrier is required after checking the election result, because the
    result is determined by the store to VLOCK_OWNER_OFFSET and is already
    globally observed due to the barriers in voting_end.  This means that
    global agreement on the winner is guaranteed, even before the winner
    is known locally.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
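
For orientation, a hedged C-level sketch of the election described above
follows.  Every name in it (struct vlock, dmb(), dsb(), sev(), wfe(),
any_voting_byte_set(), NR_CLUSTER_CPUS) is invented for illustration, and
the field layout is simplified; the assembly below is the authoritative
implementation.

	struct vlock {
		unsigned char owner;                   /* VLOCK_OWNER_NONE when free */
		unsigned char voting[NR_CLUSTER_CPUS]; /* nonzero while CPU votes */
	};

	int vlock_trylock_sketch(struct vlock *v, unsigned int cpu)
	{
		unsigned char me = cpu + VLOCK_VOTING_OFFSET; /* never OWNER_NONE */

		v->voting[cpu] = 1;			/* voting_begin */
		dmb();

		if (v->owner != VLOCK_OWNER_NONE) {	/* lock already held */
			dmb();
			v->voting[cpu] = 0;		/* voting_end */
			dsb();
			sev();
			return 1;			/* nonzero: lost */
		}

		v->owner = me;				/* submit my vote */
		dmb();
		v->voting[cpu] = 0;			/* voting_end */
		dsb();
		sev();

		while (any_voting_byte_set(v))		/* wait for the round */
			wfe();

		dmb();
		return v->owner != me;			/* zero iff this CPU won */
	}
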
/*
* vlock.S - simple voting lock implementation for ARM
*
* Created by: Dave Martin, 2012-08-16
* Copyright: (C) 2012-2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*
* This algorithm is described in more detail in
* Documentation/arm/vlocks.txt.
*/
#include <linux/linkage.h>
#include "vlock.h"

/* Select different code if voting flags can fit in a single word. */
#if VLOCK_VOTING_SIZE > 4
#define FEW(x...)
#define MANY(x...) x
#else
#define FEW(x...) x
#define MANY(x...)
#endif
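
/*
 * Illustrative expansion (editorial note, not from the original source):
 * when the voting flags fit in one 32-bit word, only the FEW() lines
 * survive preprocessing, so the wait loop in vlock_trylock polls with a
 * single load:
 *
 *	ldr	r2, [r0, #VLOCK_VOTING_OFFSET]
 *
 * On clusters with more voting bytes the MANY() lines survive instead,
 * and the loop steps r3 through the voting area one word at a time.
 */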

@ voting lock for first-man coordination

.macro voting_begin rbase:req, rcpu:req, rscratch:req
	mov	\rscratch, #1
	strb	\rscratch, [\rbase, \rcpu]
	dmb
.endm

.macro voting_end rbase:req, rcpu:req, rscratch:req
	dmb
	mov	\rscratch, #0
	strb	\rscratch, [\rbase, \rcpu]
	dsb	st
	sev
.endm
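
/*
 * Editorial note: the DSB ST in voting_end drains the flag-clearing store
 * before the SEV is issued, so a CPU sleeping in WFE in the wait loop
 * below cannot be woken without being able to observe the cleared flag.
 */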
/*
* The vlock structure must reside in Strongly-Ordered or Device memory.
* This implementation deliberately eliminates most of the barriers which
* would be required for other memory types, and assumes that independent
* writes to neighbouring locations within a cacheline do not interfere
* with one another.
*/
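
/*
 * Illustrative layout (hypothetical offsets, for orientation only): with
 * VLOCK_OWNER_OFFSET == 0 and VLOCK_VOTING_OFFSET == 4, a vlock for a
 * 4-CPU cluster occupies 8 bytes: byte 0 holds the owner (VLOCK_OWNER_NONE
 * when free) and bytes 4-7 hold one voting flag per CPU.  The real offsets
 * are defined in "vlock.h".
 */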
@ r0: lock structure base
@ r1: CPU ID (0-based index within cluster)
ENTRY(vlock_trylock)
	add	r1, r1, #VLOCK_VOTING_OFFSET

	voting_begin	r0, r1, r2

	ldrb	r2, [r0, #VLOCK_OWNER_OFFSET]	@ check whether lock is held
	cmp	r2, #VLOCK_OWNER_NONE
	bne	trylock_fail			@ fail if so

	@ Control dependency implies strb not observable before previous ldrb.

	strb	r1, [r0, #VLOCK_OWNER_OFFSET]	@ submit my vote

	voting_end	r0, r1, r2		@ implies DMB

	@ Wait for the current round of voting to finish:

 MANY(	mov	r3, #VLOCK_VOTING_OFFSET			)
0:
 MANY(	ldr	r2, [r0, r3]					)
 FEW(	ldr	r2, [r0, #VLOCK_VOTING_OFFSET]			)
	cmp	r2, #0
	wfene
	bne	0b
 MANY(	add	r3, r3, #4					)
 MANY(	cmp	r3, #VLOCK_VOTING_OFFSET + VLOCK_VOTING_SIZE	)
 MANY(	bne	0b						)

	@ Check who won:

	dmb
	ldrb	r2, [r0, #VLOCK_OWNER_OFFSET]
	eor	r0, r1, r2			@ zero if I won, else nonzero
	bx	lr

trylock_fail:
	voting_end	r0, r1, r2
	mov	r0, #1				@ nonzero indicates that I lost
	bx	lr

ENDPROC(vlock_trylock)

@ r0: lock structure base
ENTRY(vlock_unlock)
	dmb
	mov	r1, #VLOCK_OWNER_NONE
	strb	r1, [r0, #VLOCK_OWNER_OFFSET]
	dsb	st
	sev
	bx	lr
ENDPROC(vlock_unlock)
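
/*
 * Hypothetical C-side pairing of the two entry points (sketch only; the
 * names lock_base/this_cpu/do_first_man_setup are invented, and the real
 * callers invoke these routines from early assembly):
 *
 *	if (vlock_trylock(lock_base, this_cpu) == 0) {
 *		do_first_man_setup();		// this CPU won the election
 *		vlock_unlock(lock_base);	// release once setup is done
 *	}
 *	// nonzero return: another CPU won; this one lost the election
 */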