arm64: spinlock: order spin_{is_locked,unlock_wait} against local locks
spin_is_locked has grown two very different use-cases:

(1) [The sane case] API functions may require a certain lock to be
    held by the caller and can therefore use spin_is_locked as part
    of an assert statement in order to verify that the lock is indeed
    held. For example, usage of assert_spin_locked.

(2) [The insane case] There are two locks, where a CPU takes one of
    the locks and then checks whether or not the other one is held
    before accessing some shared state. For example, the "optimized
    locking" in ipc/sem.c.

In the latter case, the sequence looks like:

  spin_lock(&sem->lock);
  if (!spin_is_locked(&sma->sem_perm.lock))
    /* Access shared state */

and requires that the spin_is_locked check is ordered after taking the
sem->lock. Unfortunately, since our spinlocks are implemented using a
LDAXR/STXR sequence, the read of &sma->sem_perm.lock can be speculated
before the STXR and consequently return a stale value.

Whilst this hasn't been seen to cause issues in practice, PowerPC fixed
the same issue in 51d7d5205d ("powerpc: Add smp_mb() to
arch_spin_is_locked()") and, although we did something similar for
spin_unlock_wait in d86b8da04d ("arm64: spinlock: serialise
spin_unlock_wait against concurrent lockers"), that doesn't actually
take care of ordering against local acquisition of a different lock.

This patch adds an smp_mb() to the start of our arch_spin_is_locked and
arch_spin_unlock_wait routines to ensure that the lock value is always
loaded after any other locks have been taken by the current CPU.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
commit 38b850a730
parent f7a6c1492a
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -31,6 +31,12 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 	unsigned int tmp;
 	arch_spinlock_t lockval;
 
+	/*
+	 * Ensure prior spin_lock operations to other locks have completed
+	 * on this CPU before we test whether "lock" is locked.
+	 */
+	smp_mb();
+
 	asm volatile(
 "	sevl\n"
 "1:	wfe\n"
@@ -148,6 +154,7 @@ static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
+	smp_mb(); /* See arch_spin_unlock_wait */
 	return !arch_spin_value_unlocked(READ_ONCE(*lock));
 }
 
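For completeness, the "sane" use-case from the message above can be
sketched as follows (update_stat_locked is a hypothetical helper;
assert_spin_locked() is the stock kernel macro, which reduces to
BUG_ON(!spin_is_locked(lock))):

  #include <linux/spinlock.h>

  static void update_stat_locked(spinlock_t *lock, unsigned long *stat)
  {
          /* Caller must already hold "lock"; this is a debug check only. */
          assert_spin_locked(lock);
          (*stat)++;
  }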