locking/rwsem: Fix (possible) missed wakeup

Because wake_q_add() can imply an immediate wakeup (cmpxchg failure
case), we must not rely on the wakeup being delayed. However, commit:

e38513905eea ("locking/rwsem: Rework zeroing reader waiter->task")

relies on exactly that behaviour in that the wakeup must not happen
until after we clear waiter->task.

[ peterz: Added changelog. ]

Signed-off-by: Xie Yongji <xieyongji@baidu.com>
Signed-off-by: Zhang Yu <zhangyu31@baidu.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: e38513905eea ("locking/rwsem: Rework zeroing reader waiter->task")
Link: https://lkml.kernel.org/r/1543495830-2644-1-git-send-email-xieyongji@baidu.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
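
To make the ordering problem the changelog describes concrete, the sketch
below shows the reader-side wait loop (condensed from the
rwsem_down_read_failed() of that era; not the verbatim kernel source). If the
target task is already queued on somebody else's wake_q, the cmpxchg() inside
wake_q_add() fails and that other wake_up_q() may deliver the wakeup
immediately; with the old ordering the reader then re-checks waiter.task
before the waker has cleared it, goes back to sleep, and no further wakeup
ever arrives because the waker's own wake_up_q() never actually queued the
task.

        /*
         * Condensed sketch of the reader-side wait in
         * rwsem_down_read_failed(); not the verbatim kernel code.
         */
        while (true) {
                set_current_state(TASK_UNINTERRUPTIBLE);
                if (!waiter.task)       /* cleared by __rwsem_mark_wake() */
                        break;
                schedule();             /* a stray early wakeup lands here ... */
        }                               /* ... and the loop just sleeps again  */
        __set_current_state(TASK_RUNNING);

Clearing waiter->task first (under a temporary get_task_struct() reference, so
the do_exit() race closed by e38513905eea stays closed) makes any wakeup
ordering safe: whenever the reader is woken, it observes waiter.task == NULL
and breaks out of the loop.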

Changed files:
  kernel/locking/rwsem-xadd.c (+9, -2)
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -198,15 +198,22 @@
                 woken++;
                 tsk = waiter->task;
 
-                wake_q_add(wake_q, tsk);
+                get_task_struct(tsk);
                 list_del(&waiter->list);
                 /*
-                 * Ensure that the last operation is setting the reader
+                 * Ensure calling get_task_struct() before setting the reader
                  * waiter to nil such that rwsem_down_read_failed() cannot
                  * race with do_exit() by always holding a reference count
                  * to the task to wakeup.
                  */
                 smp_store_release(&waiter->task, NULL);
+                /*
+                 * Ensure issuing the wakeup (either by us or someone else)
+                 * after setting the reader waiter to nil.
+                 */
+                wake_q_add(wake_q, tsk);
+                /* wake_q_add() already take the task ref */
+                put_task_struct(tsk);
         }
 
         adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
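
For reference, the reason the cmpxchg() failure case implies an immediate
wakeup, and why the patch can drop its temporary reference right after
wake_q_add(), is visible in the wake_q_add() of kernels around this release.
The following is a simplified sketch from memory, not the verbatim
implementation:

/* Simplified sketch of wake_q_add() around this kernel version. */
void wake_q_add(struct wake_q_head *head, struct task_struct *task)
{
        struct wake_q_node *node = &task->wake_q;

        /*
         * If node->next is already non-NULL, the task is queued on some
         * other wake_q and will be woken by that wake_up_q() -- possibly
         * before our own wake_up_q() runs, and possibly before we have
         * cleared waiter->task.  This is the "immediate wakeup" case the
         * changelog refers to.
         */
        if (cmpxchg(&node->next, NULL, WAKE_Q_TAIL))
                return;

        /*
         * Queued by us: take a reference so the task cannot exit before
         * wake_up_q() wakes it and drops this reference again.  This is
         * what the "wake_q_add() already take the task ref" comment in
         * the patch relies on.
         */
        get_task_struct(task);

        *head->lastp = node;
        head->lastp = &node->next;
}

Because wake_q_add() pins the task itself on the success path, the patch's
get_task_struct()/put_task_struct() pair only needs to cover the window
between clearing waiter->task and calling wake_q_add().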