locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath()

Queued spinlocks are not used by DEC Alpha, and furthermore operations
such as READ_ONCE() and release/relaxed RMW atomics are being changed
to imply smp_read_barrier_depends().  This commit therefore removes the
now-redundant smp_read_barrier_depends() from queued_spin_lock_slowpath(),
and adjusts the comments accordingly.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Author: Paul E. McKenney  2017-10-09 11:22:50 -07:00
Commit: 548095dea6 (parent 5c6338b487)

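To illustrate the pattern the commit message relies on, here is a minimal standalone sketch using C11 atomics. It is not kernel code; the names blob, published, writer and reader are made up for the example. A pointer is published with a release store and then dereferenced on the reader side, so the load and the dereference are linked by an address dependency; READ_ONCE() now supplies the ordering that dependency needs on Alpha, which is why the explicit smp_read_barrier_depends() removed below becomes redundant.

#include <stdatomic.h>
#include <stdio.h>

struct blob {
	int data;
};

static struct blob the_blob;
static _Atomic(struct blob *) published;

/* Writer: initialize the payload, then publish the pointer with release ordering. */
static void writer(void)
{
	the_blob.data = 42;
	atomic_store_explicit(&published, &the_blob, memory_order_release);
}

/*
 * Reader: the dereference p->data depends on the value loaded into p, so the
 * two accesses are linked by an address dependency.  memory_order_consume is
 * the closest portable analogue of the dependency ordering that the kernel's
 * READ_ONCE() now guarantees (folding in smp_read_barrier_depends() on Alpha);
 * most compilers simply promote consume to acquire.
 */
static void reader(void)
{
	struct blob *p = atomic_load_explicit(&published, memory_order_consume);

	if (p)
		printf("data = %d\n", p->data);
}

int main(void)
{
	writer();
	reader();
	return 0;
}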

@@ -170,7 +170,7 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
  * @tail : The new queue tail code word
  * Return: The previous queue tail code word
  *
- * xchg(lock, tail)
+ * xchg(lock, tail), which heads an address dependency
  *
  * p,*,* -> n,*,* ; prev = xchg(lock, node)
  */
@@ -409,13 +409,11 @@ queue:
 	if (old & _Q_TAIL_MASK) {
 		prev = decode_tail(old);
 		/*
-		 * The above xchg_tail() is also a load of @lock which generates,
-		 * through decode_tail(), a pointer.
-		 *
-		 * The address dependency matches the RELEASE of xchg_tail()
-		 * such that the access to @prev must happen after.
+		 * The above xchg_tail() is also a load of @lock which
+		 * generates, through decode_tail(), a pointer.  The address
+		 * dependency matches the RELEASE of xchg_tail() such that
+		 * the subsequent access to @prev happens after.
 		 */
-		smp_read_barrier_depends();
 		WRITE_ONCE(prev->next, node);

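For reference, the handoff that the removed barrier used to guard can be sketched outside the kernel as follows. This is only an illustration under simplified assumptions (a fixed-size node array, made-up enqueue() logic, and C11 atomics standing in for the kernel's xchg_tail(), encode_tail(), decode_tail() and WRITE_ONCE()); it is not the qspinlock implementation.

#include <stdatomic.h>
#include <stddef.h>

struct qnode {
	struct qnode *next;
};

static struct qnode nodes[4];		/* simplified stand-in for the per-CPU MCS nodes */
static _Atomic unsigned int tail;	/* encoded tail; 0 means the queue is empty */

static unsigned int encode_tail(int cpu)
{
	return (unsigned int)cpu + 1;
}

static struct qnode *decode_tail(unsigned int t)
{
	return &nodes[t - 1];
}

static void enqueue(int cpu)
{
	struct qnode *node = &nodes[cpu];
	unsigned int old;

	node->next = NULL;

	/* Release-ordered exchange, standing in for xchg_tail()'s RELEASE. */
	old = atomic_exchange_explicit(&tail, encode_tail(cpu),
				       memory_order_release);
	if (old) {
		/*
		 * prev is computed from the value the exchange returned, so
		 * the store below carries an address dependency on that load;
		 * once the RMW atomic itself implies
		 * smp_read_barrier_depends(), no separate barrier is needed
		 * before the store.
		 */
		struct qnode *prev = decode_tail(old);

		prev->next = node;	/* stands in for WRITE_ONCE(prev->next, node) */
	}
}

int main(void)
{
	enqueue(0);
	enqueue(1);
	return 0;
}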