xen: use stronger barrier after unlocking lock
We need a stronger barrier between releasing the lock and checking for any waiting spinners. A compiler barrier is not sufficient because the CPU's ordering rules do not prevent the read of xl->spinners from happening before the unlock assignment, as they are different memory locations. We need an explicit barrier to enforce the write-read ordering to different memory locations.

Without this barrier, I can't bring up more than 4 HVM guests on one SMP machine.

[ Code and commit comments expanded -J ]

[ Impact: avoid deadlock when using Xen PV spinlocks ]

Signed-off-by: Yang Xiaowei <xiaowei.yang@intel.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
parent 4d576b57b5
commit 2496afbf1e
@@ -326,8 +326,13 @@ static void xen_spin_unlock(struct raw_spinlock *lock)
 	smp_wmb();		/* make sure no writes get moved after unlock */
 	xl->lock = 0;		/* release lock */
 
-	/* make sure unlock happens before kick */
-	barrier();
+	/*
+	 * Make sure unlock happens before checking for waiting
+	 * spinners.  We need a strong barrier to enforce the
+	 * write-read ordering to different memory locations, as the
+	 * CPU makes no implied guarantees about their ordering.
+	 */
+	mb();
 
 	if (unlikely(xl->spinners))
 		xen_spin_unlock_slow(xl);