epoll: eliminate unnecessary lock for zero timeout

We call ep_events_available() under the lock when the timeout is 0, and
then call it without the lock in the loop for the other cases.

Instead, call ep_events_available() without the lock in all cases.  For
non-zero timeouts, we will recheck after adding the thread to the wait
queue.  For the zero-timeout case, the user is by definition
opportunistically polling and will have to call epoll_wait() again in the
future.
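
To make the zero-timeout semantics concrete, here is a minimal user-space
sketch (an illustration only, not part of this patch): a caller passing a
zero timeout is doing an opportunistic, non-blocking poll, so a racily
missed event costs at most one empty return before the next call picks it
up.

#include <stdio.h>
#include <sys/epoll.h>

/* timeout == 0: return immediately, with or without events */
static int poll_once(int epfd, struct epoll_event *evs, int maxevents)
{
	return epoll_wait(epfd, evs, maxevents, 0);
}

int main(void)
{
	int epfd = epoll_create1(0);
	struct epoll_event evs[16];

	if (epfd < 0) {
		perror("epoll_create1");
		return 1;
	}

	/* Opportunistic polling loop: a missed event here just means one
	 * n == 0 result; the next call observes the event. */
	for (int i = 0; i < 3; i++) {
		int n = poll_once(epfd, evs, 16);
		printf("epoll_wait(timeout=0) returned %d\n", n);
	}
	return 0;
}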

Note that this lock was kept in c5a282e963 because the whole loop was
historically under lock.

This patch results in a 1% CPU/RPC reduction in RPC benchmarks.

Link: https://lkml.kernel.org/r/20201106231635.3528496-9-soheil.kdev@gmail.com
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Suggested-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Khazhismel Kumykov <khazhy@google.com>
Cc: Guantao Liu <guantaol@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

@@ -1743,7 +1743,7 @@ static inline struct timespec64 ep_set_mstimeout(long ms)
 static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		   int maxevents, long timeout)
 {
-	int res, eavail = 0, timed_out = 0;
+	int res, eavail, timed_out = 0;
 	u64 slack = 0;
 	wait_queue_entry_t wait;
 	ktime_t expires, *to = NULL;
@@ -1759,18 +1759,21 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 	} else if (timeout == 0) {
 		/*
 		 * Avoid the unnecessary trip to the wait queue loop, if the
-		 * caller specified a non blocking operation. We still need
-		 * lock because we could race and not see an epi being added
-		 * to the ready list while in irq callback. Thus incorrectly
-		 * returning 0 back to userspace.
+		 * caller specified a non blocking operation.
 		 */
 		timed_out = 1;
-
-		write_lock_irq(&ep->lock);
-		eavail = ep_events_available(ep);
-		write_unlock_irq(&ep->lock);
 	}
 
+	/*
+	 * This call is racy: we may or may not see events that are being added
+	 * to the ready list under the lock (e.g., in IRQ callbacks). For cases
+	 * with a non-zero timeout, this thread will check the ready list under
+	 * the lock and will be added to the wait queue. For cases with a zero
+	 * timeout, the user by definition should not care and will have to
+	 * recheck again.
+	 */
+	eavail = ep_events_available(ep);
+
 	while (1) {
 		if (eavail) {
 			/*
@@ -1786,10 +1789,6 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		if (timed_out)
 			return 0;
 
-		eavail = ep_events_available(ep);
-		if (eavail)
-			continue;
-
 		eavail = ep_busy_loop(ep, timed_out);
 		if (eavail)
 			continue;
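
For readers who want the synchronization pattern in isolation, below is a
small user-space model (an analogy, not kernel code) of what the new
comment in ep_poll() describes: the non-blocking path reads the ready
state without the lock, while the blocking path enqueues itself and
re-checks under the lock so a racing producer cannot be lost. The pthread
mutex and condition variable stand in for ep->lock and the wait queue.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static atomic_bool ready;	/* stands in for "ep->rdllist is non-empty" */

/* Zero-timeout path: a racy, lockless check, as in the patched ep_poll().
 * A concurrent producer may be missed; the caller is expected to retry. */
static _Bool poll_nonblocking(void)
{
	return atomic_load_explicit(&ready, memory_order_relaxed);
}

/* Blocking path: take the lock and re-check before sleeping, so a racing
 * producer cannot slip in between the check and the wait. */
static void wait_blocking(void)
{
	pthread_mutex_lock(&lock);
	while (!atomic_load(&ready))
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
}

static void *producer(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	atomic_store(&ready, 1);
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	printf("lockless poll before event: %d\n", poll_nonblocking());
	pthread_create(&t, NULL, producer, NULL);
	wait_blocking();	/* never misses: re-checks under the lock */
	pthread_join(t, NULL);
	printf("lockless poll after event: %d\n", poll_nonblocking());
	return 0;
}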