Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

io_uring/poll: unify poll waitqueue entry and list removal

In some cases the order in which the waitq entry is unlinked from the list
and the head pointer is cleared matters; in others it doesn't. Either way,
having these steps open-coded in several places is confusing.

Abstract the well-documented code in io_pollfree_wake() into a helper and
use that helper consistently, rather than having other call sites manually
repeat the same steps. While at it, correct a function name referenced in
a comment.

Signed-off-by: Jens Axboe <axboe@kernel.dk>

+22 -21
io_uring/poll.c
@@ -138,14 +138,32 @@
 	init_waitqueue_func_entry(&poll->wait, io_poll_wake);
 }
 
+static void io_poll_remove_waitq(struct io_poll *poll)
+{
+	/*
+	 * If the waitqueue is being freed early but someone already holds
+	 * ownership over it, we have to tear down the request as best we can.
+	 * That means immediately removing the request from its waitqueue and
+	 * preventing all further accesses to the waitqueue via the request.
+	 */
+	list_del_init(&poll->wait.entry);
+
+	/*
+	 * Careful: this *must* be the last step, since as soon as req->head is
+	 * NULL'ed out, the request can be completed and freed, since
+	 * io_poll_remove_entry() will no longer need to take the waitqueue
+	 * lock.
+	 */
+	smp_store_release(&poll->head, NULL);
+}
+
 static inline void io_poll_remove_entry(struct io_poll *poll)
 {
 	struct wait_queue_head *head = smp_load_acquire(&poll->head);
 
 	if (head) {
 		spin_lock_irq(&head->lock);
-		list_del_init(&poll->wait.entry);
-		poll->head = NULL;
+		io_poll_remove_waitq(poll);
 		spin_unlock_irq(&head->lock);
 	}
 }
@@ -386,23 +368,7 @@
 		io_poll_mark_cancelled(req);
 		/* we have to kick tw in case it's not already */
 		io_poll_execute(req, 0);
-
-		/*
-		 * If the waitqueue is being freed early but someone is already
-		 * holds ownership over it, we have to tear down the request as
-		 * best we can. That means immediately removing the request from
-		 * its waitqueue and preventing all further accesses to the
-		 * waitqueue via the request.
-		 */
-		list_del_init(&poll->wait.entry);
-
-		/*
-		 * Careful: this *must* be the last step, since as soon
-		 * as req->head is NULL'ed out, the request can be
-		 * completed and freed, since aio_poll_complete_work()
-		 * will no longer need to take the waitqueue lock.
-		 */
-		smp_store_release(&poll->head, NULL);
+		io_poll_remove_waitq(poll);
 		return 1;
 	}
 
@@ -415,8 +413,7 @@
 
 	/* optional, saves extra locking for removal in tw handler */
 	if (mask && poll->events & EPOLLONESHOT) {
-		list_del_init(&poll->wait.entry);
-		poll->head = NULL;
+		io_poll_remove_waitq(poll);
 		if (wqe_is_double(wait))
 			req->flags &= ~REQ_F_DOUBLE_POLL;
 		else