io_uring/poll: serialize poll linked timer start with poll removal
We selectively grab the ctx->uring_lock for poll update/removal, but
we really should grab it from the start to fully synchronize with
linked timeouts. Normally this is indeed the case, but if requests
are forced async by the application, we don't fully cover removal
and timer disarm within the uring_lock.

Make this simpler by having consistent locking state for poll removal.

Cc: [email protected] # 6.1+
Reported-by: Querijn Voet <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
axboe committed Jun 18, 2023
1 parent adeaa3f commit ef7dfac
Showing 1 changed file with 4 additions and 5 deletions.
diff --git a/io_uring/poll.c b/io_uring/poll.c
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -977,8 +977,9 @@ int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
         struct io_hash_bucket *bucket;
         struct io_kiocb *preq;
         int ret2, ret = 0;
-        struct io_tw_state ts = {};
+        struct io_tw_state ts = { .locked = true };
 
+        io_ring_submit_lock(ctx, issue_flags);
         preq = io_poll_find(ctx, true, &cd, &ctx->cancel_table, &bucket);
         ret2 = io_poll_disarm(preq);
         if (bucket)
@@ -990,12 +991,10 @@ int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
                 goto out;
         }
 
-        io_ring_submit_lock(ctx, issue_flags);
         preq = io_poll_find(ctx, true, &cd, &ctx->cancel_table_locked, &bucket);
         ret2 = io_poll_disarm(preq);
         if (bucket)
                 spin_unlock(&bucket->lock);
-        io_ring_submit_unlock(ctx, issue_flags);
         if (ret2) {
                 ret = ret2;
                 goto out;
@@ -1019,17 +1018,17 @@ int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
                 if (poll_update->update_user_data)
                         preq->cqe.user_data = poll_update->new_user_data;
 
-                ret2 = io_poll_add(preq, issue_flags);
+                ret2 = io_poll_add(preq, issue_flags & ~IO_URING_F_UNLOCKED);
                 /* successfully updated, don't complete poll request */
                 if (!ret2 || ret2 == -EIOCBQUEUED)
                         goto out;
         }
 
         req_set_fail(preq);
         io_req_set_res(preq, -ECANCELED, 0);
-        ts.locked = !(issue_flags & IO_URING_F_UNLOCKED);
         io_req_task_complete(preq, &ts);
 out:
+        io_ring_submit_unlock(ctx, issue_flags);
         if (ret < 0) {
                 req_set_fail(req);
                 return ret;
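For context, here is a minimal standalone sketch of the locking pattern the patch moves to. It is not kernel code: submit_lock()/submit_unlock() are invented stand-ins for io_ring_submit_lock()/io_ring_submit_unlock(), find_and_disarm() stands in for the io_poll_find() + io_poll_disarm() pair, and a plain pthread mutex takes the place of ctx->uring_lock. The only point it illustrates is that a single lock/unlock pair now brackets both table lookups, the disarm, and the final completion.

/*
 * Standalone sketch (not kernel code) of the locking scope after the patch.
 * All helper names here are invented stand-ins for the real io_uring helpers.
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t uring_lock = PTHREAD_MUTEX_INITIALIZER;

static void submit_lock(void)
{
        pthread_mutex_lock(&uring_lock);
        puts("uring_lock held");
}

static void submit_unlock(void)
{
        puts("uring_lock released");
        pthread_mutex_unlock(&uring_lock);
}

/* Pretend to look up and disarm a poll request; always succeeds in this stub. */
static int find_and_disarm(const char *table)
{
        printf("disarm poll request found in %s\n", table);
        return 0;
}

/*
 * After the patch: one lock/unlock pair covers both lookups, the disarm,
 * and the completion, rather than only the locked-table lookup.
 */
static int poll_remove_sketch(void)
{
        int ret;

        submit_lock();
        /* Try the plain cancel table first, then the uring_lock protected one. */
        ret = find_and_disarm("cancel_table");
        if (ret == -ENOENT)
                ret = find_and_disarm("cancel_table_locked");
        if (ret)
                goto out;
        puts("complete the disarmed request while still holding the lock");
out:
        submit_unlock();
        return ret;
}

int main(void)
{
        return poll_remove_sketch();
}

In the pre-patch code, as the diff shows, the equivalent of this lock/unlock pair only wrapped the cancel_table_locked lookup inside io_poll_remove(), which is the inconsistent locking state the commit message describes.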
