Mirror of https://github.com/torvalds/linux.git, synced 2024-11-01 17:51:43 +00:00
a76d7bd96d
The open-coded mutex implementation for ARMv6+ cores suffers from a severe lack of barriers, so in the uncontended case we don't actually protect any accesses performed during the critical section.

Furthermore, the code is largely a duplication of the ARMv6+ atomic_dec code but optimised to remove a branch instruction, as the mutex fastpath was previously inlined. Now that this is executed out-of-line, we can reuse the atomic access code for the locking (in fact, we use the xchg code as this produces shorter critical sections).

This patch uses the generic xchg-based implementation for mutexes on ARMv6+, which introduces barriers to the lock/unlock operations and also has the benefit of removing a fair amount of inline assembly code.

Cc: <stable@vger.kernel.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Nicolas Pitre <nico@linaro.org>
Reported-by: Shan Kang <kangshan0910@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
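For illustration, here is a minimal userspace sketch of the xchg-based fastpath idea that asm-generic/mutex-xchg.h provides, written with C11 atomics rather than the kernel's atomic_t API. The struct and function names are made up for this example, and a real mutex would sleep in a slowpath (and track contention) instead of spinning; the point is only that a single atomic exchange both takes the lock and supplies the ordering the old open-coded ARM fastpath lacked.

#include <stdatomic.h>
#include <stdio.h>

/*
 * Illustrative only: a toy model of the xchg-based mutex fastpath.
 * count == 1 means unlocked, 0 means locked. The kernel version calls
 * out to a slowpath instead of spinning and also tracks a "contended"
 * state; both are omitted here for brevity.
 */
struct toy_mutex {
	atomic_int count;
};

static void toy_mutex_init(struct toy_mutex *m)
{
	atomic_init(&m->count, 1);
}

static void toy_mutex_lock(struct toy_mutex *m)
{
	/*
	 * Fastpath: swap in 0; if the old value was 1 we now own the lock.
	 * atomic_exchange is sequentially consistent, so accesses inside
	 * the critical section cannot leak out of it.
	 */
	while (atomic_exchange(&m->count, 0) != 1)
		;	/* a real mutex would sleep in the slowpath here */
}

static void toy_mutex_unlock(struct toy_mutex *m)
{
	/* Fastpath: restore 1 so the next locker's exchange sees "unlocked". */
	atomic_exchange(&m->count, 1);
}

int main(void)
{
	struct toy_mutex m;

	toy_mutex_init(&m);
	toy_mutex_lock(&m);
	/* Critical-section accesses are ordered by the exchanges above/below. */
	toy_mutex_unlock(&m);
	printf("locked and unlocked once\n");
	return 0;
}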
17 lines
411 B
C
/*
 * arch/arm/include/asm/mutex.h
 *
 * ARM optimized mutex locking primitives
 *
 * Please look into asm-generic/mutex-xchg.h for a formal definition.
 */
#ifndef _ASM_MUTEX_H
#define _ASM_MUTEX_H
/*
 * On pre-ARMv6 hardware this results in a swp-based implementation,
 * which is the most efficient. For ARMv6+, we emit a pair of exclusive
 * accesses instead.
 */
#include <asm-generic/mutex-xchg.h>
#endif
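The header comment notes that ARMv6+ builds the exchange from a pair of exclusive accesses. As a rough, hypothetical sketch (the names here are invented, it is not the kernel's actual __xchg, and the "dmb ish" barrier assumes ARMv7; ARMv6 would use a CP15 operation instead), an ldrex/strex-based 32-bit exchange with barriers could look like this:

/*
 * Illustrative only: roughly what a 32-bit exchange built from ARMv6+
 * exclusive accesses looks like. Names are made up for this sketch;
 * the real code lives under arch/arm/include/asm/.
 */
#define sketch_smp_mb()	__asm__ __volatile__("dmb ish" ::: "memory")

static inline unsigned int sketch_xchg32(volatile unsigned int *ptr,
					 unsigned int newval)
{
	unsigned int old, tmp;

	sketch_smp_mb();			/* barrier before the exchange */
	__asm__ __volatile__(
	"1:	ldrex	%0, [%3]\n"		/* load-exclusive the old value */
	"	strex	%1, %2, [%3]\n"		/* try to store the new value   */
	"	teq	%1, #0\n"		/* lost exclusivity?            */
	"	bne	1b\n"			/* ...then retry                */
		: "=&r" (old), "=&r" (tmp)
		: "r" (newval), "r" (ptr)
		: "memory", "cc");
	sketch_smp_mb();			/* barrier after the exchange */

	return old;
}

The two barriers around the exclusive-access loop provide exactly the ordering that, per the commit message, the old open-coded mutex fastpath was missing.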