Mirror of https://github.com/torvalds/linux.git (synced 2024-11-28 15:11:31 +00:00)
arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()
smp_mb__before_spinlock() is intended to upgrade a spin_lock() operation to a full barrier, such that prior stores are ordered with respect to loads and stores occurring inside the critical section. Unfortunately, the core code defines the barrier as smp_wmb(), which is insufficient to provide the required ordering guarantees when used in conjunction with our load-acquire-based spinlock implementation.

This patch overrides the arm64 definition of smp_mb__before_spinlock() to map to a full smp_mb().

Cc: <stable@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Reported-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
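To make the ordering requirement concrete, here is a minimal, hedged sketch in kernel-style C; the names wait_lock, cond, sleeper_asleep and waker() are invented for illustration and are not part of the patch. With an acquire-only spin_lock(), a store issued before the lock can slip into the critical section and be reordered against the loads performed there; smp_wmb() would only order that store against later stores, so a full smp_mb() is needed.

#include <linux/compiler.h>
#include <linux/spinlock.h>

/* Invented names, illustration only; not taken from the kernel source. */
static DEFINE_SPINLOCK(wait_lock);
static int cond;		/* condition the sleeper is waiting for    */
static int sleeper_asleep;	/* set by the sleeper, read under the lock */

/*
 * Waker side: the store to 'cond' must be ordered before the load of
 * 'sleeper_asleep' inside the critical section.  The load-acquire in
 * spin_lock() does not order accesses that appear before it, and
 * smp_wmb() only orders stores against stores, so a full barrier is
 * required; that is what smp_mb__before_spinlock() provides on arm64
 * after this patch.
 */
static void waker(void)
{
	WRITE_ONCE(cond, 1);
	smp_mb__before_spinlock();	/* full smp_mb() on arm64 */
	spin_lock(&wait_lock);
	if (READ_ONCE(sleeper_asleep)) {
		/* issue the wakeup here */
	}
	spin_unlock(&wait_lock);
}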
This commit is contained in:
parent c6935931c1
commit 872c63fbf9
@@ -363,4 +363,14 @@ static inline int arch_read_trylock(arch_rwlock_t *rw)
 #define arch_read_relax(lock)	cpu_relax()
 #define arch_write_relax(lock)	cpu_relax()
 
+/*
+ * Accesses appearing in program order before a spin_lock() operation
+ * can be reordered with accesses inside the critical section, by virtue
+ * of arch_spin_lock being constructed using acquire semantics.
+ *
+ * In cases where this is problematic (e.g. try_to_wake_up), an
+ * smp_mb__before_spinlock() can restore the required ordering.
+ */
+#define smp_mb__before_spinlock()	smp_mb()
+
 #endif /* __ASM_SPINLOCK_H */
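For context, here is a hedged sketch of the kind of caller the new comment refers to, loosely modeled on try_to_wake_up(); the function name try_wake() and its exact structure are illustrative, not copied from the kernel source.

#include <linux/sched.h>
#include <linux/spinlock.h>

/* Illustrative only; loosely modeled on try_to_wake_up(). */
static int try_wake(struct task_struct *p, unsigned int state)
{
	unsigned long flags;
	int woken = 0;

	/*
	 * Stores made before this point (e.g. to the condition the sleeper
	 * checks before scheduling out) must be ordered against the load of
	 * p->state below; the acquire in the lock alone does not guarantee
	 * that, hence the full barrier.
	 */
	smp_mb__before_spinlock();
	raw_spin_lock_irqsave(&p->pi_lock, flags);
	if (p->state & state) {
		/* ... actually queue the wakeup ... */
		woken = 1;
	}
	raw_spin_unlock_irqrestore(&p->pi_lock, flags);

	return woken;
}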