locking/rtmutex: Prevent lockdep false positive with PI futexes

On PREEMPT_RT the futex hashbucket spinlock becomes 'sleeping' and rtmutex
based. That causes a lockdep false positive because some of the futex
functions invoke spin_unlock(&hb->lock) with the wait_lock of the rtmutex
associated to the pi_futex held. spin_unlock() in turn takes wait_lock of
the rtmutex on which the spinlock is based, which makes lockdep notice a
lock recursion.

Give the futex/rtmutex wait_lock a separate key.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210815211305.750701219@linutronix.de
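The lock recursion described above, and the class-key remedy this commit applies, can be sketched as follows. This is an illustrative kernel-code fragment, not buildable outside the kernel tree; the call chain in the comment is a simplified rendering of the paths named in the commit message, and only `lock_class_key`, `lockdep_set_class()`, `pi_futex_key`, and the `lock`/`hb` identifiers come from the patch itself:

```c
/*
 * On PREEMPT_RT, spinlock_t is backed by an rtmutex, so both locks
 * below bottom out in an rtmutex wait_lock of the same lock class:
 *
 *   raw_spin_lock_irq(&pi_mutex->wait_lock);  // pi_futex rtmutex wait_lock held
 *   spin_unlock(&hb->lock);                   // 'sleeping' spinlock on RT
 *     -> takes the wait_lock of the rtmutex backing hb->lock
 *        -- same lockdep class as the wait_lock already held,
 *           so lockdep reports (false) recursion.
 *
 * The fix assigns the pi_futex rtmutex wait_lock its own lockdep class,
 * so the two wait_lock acquisitions no longer look recursive:
 */
static struct lock_class_key pi_futex_key;

lockdep_set_class(&lock->wait_lock, &pi_futex_key);
```

The key is declared `static` inside the init function so that every proxy-locked pi_futex rtmutex shares one distinct class, rather than one class per call site.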
parent 07d91ef510
commit 51711e825a
@@ -214,7 +214,19 @@ EXPORT_SYMBOL_GPL(__rt_mutex_init);
 void __sched rt_mutex_init_proxy_locked(struct rt_mutex_base *lock,
 					struct task_struct *proxy_owner)
 {
+	static struct lock_class_key pi_futex_key;
+
 	__rt_mutex_base_init(lock);
+	/*
+	 * On PREEMPT_RT the futex hashbucket spinlock becomes 'sleeping'
+	 * and rtmutex based. That causes a lockdep false positive, because
+	 * some of the futex functions invoke spin_unlock(&hb->lock) with
+	 * the wait_lock of the rtmutex associated to the pi_futex held.
+	 * spin_unlock() in turn takes wait_lock of the rtmutex on which
+	 * the spinlock is based, which makes lockdep notice a lock
+	 * recursion. Give the futex/rtmutex wait_lock a separate key.
+	 */
+	lockdep_set_class(&lock->wait_lock, &pi_futex_key);
 	rt_mutex_set_owner(lock, proxy_owner);
 }