rtmutex: Drop rt_mutex::wait_lock before scheduling

rt_mutex_handle_deadlock() is called with rt_mutex::wait_lock held. In the
good case it returns with the lock held and in the deadlock case it emits a
warning and goes into an endless scheduling loop with the lock held, which
triggers the 'scheduling while atomic' warning.

Unlock rt_mutex::wait_lock in the deadlock case before issuing the warning
and dropping into the schedule-forever loop.

[ tglx: Moved unlock before the WARN(), removed the pointless comment,
massaged changelog, added Fixes tag ]

Fixes: 3d5c9340d194 ("rtmutex: Handle deadlock detection smarter")
Signed-off-by: Roland Xu <mu001999@outlook.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/ME0P300MB063599BEF0743B8FA339C2CECC802@ME0P300MB0635.AUSP300.PROD.OUTLOOK.COM

authored by Roland Xu and committed by Thomas Gleixner d33d2603 7c626ce4

+5 -4
kernel/locking/rtmutex.c

--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1644,6 +1644,7 @@
 }
 
 static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock,
+					     struct rt_mutex_base *lock,
 					     struct rt_mutex_waiter *w)
 {
 	/*
@@ -1656,10 +1657,10 @@
 	if (build_ww_mutex() && w->ww_ctx)
 		return;
 
-	/*
-	 * Yell loudly and stop the task right here.
-	 */
+	raw_spin_unlock_irq(&lock->wait_lock);
+
 	WARN(1, "rtmutex deadlock detected\n");
+
 	while (1) {
 		set_current_state(TASK_INTERRUPTIBLE);
 		rt_mutex_schedule();
@@ -1713,7 +1714,7 @@
 	} else {
 		__set_current_state(TASK_RUNNING);
 		remove_waiter(lock, waiter);
-		rt_mutex_handle_deadlock(ret, chwalk, waiter);
+		rt_mutex_handle_deadlock(ret, chwalk, lock, waiter);
 	}
 
 	/*