rcu/nocb: Perform deferred wake up before last idle's need_resched() check

Entering RCU idle mode may cause a deferred wake up of an RCU NOCB_GP
kthread (rcuog) to be serviced.
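
For context, the wakeup is deferred in the first place because
call_rcu() on a NOCB CPU may need to kick rcuog from a context where an
immediate wakeup is unsafe (e.g. with scheduler locks held, see the
Fixes commit below). A simplified sketch of that deferral, with
simplified/made-up helper names, not the actual kernel code:

	/* Simplified sketch, not the actual kernel code. */
	static void nocb_enqueue_sketch(struct rcu_data *rdp)
	{
		if (safe_to_do_wakeup())	/* hypothetical helper */
			wake_nocb_gp(rdp);	/* wake rcuog immediately (args trimmed) */
		else
			rdp->nocb_defer_wakeup = RCU_NOCB_WAKE;	/* record it; must be
								 * flushed later from a
								 * safe context */
	}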

Usually a local wakeup that happens while running the idle task is
handled by one of the need_resched() checks carefully placed within the
idle loop, which can break out to the scheduler.

Unfortunately, the call to rcu_idle_enter() is already beyond the last
generic need_resched() check, so we may halt the CPU with a resched
request unhandled, leaving the task hanging.
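
To illustrate, here is a simplified sketch of the pre-fix idle path
(heavily trimmed, not the literal kernel sources; the function name is
made up), showing where the deferred wakeup used to land relative to
that last check:

	/* Simplified sketch of the old ordering; most of the real code is elided. */
	static void idle_loop_sketch_before(void)
	{
		while (!need_resched()) {
			arch_cpu_idle_enter();
			if (!need_resched()) {		/* last generic check */
				rcu_idle_enter();	/* used to run do_nocb_deferred_wakeup(),
							 * which may wake rcuog and thus raise a
							 * resched request on this CPU ... */
				arch_cpu_idle();	/* ... but the CPU halts here regardless,
							 * leaving that request unhandled */
				rcu_idle_exit();
			}
			arch_cpu_idle_exit();
		}
	}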

Fix this by splitting the rcuog wakeup handling out of rcu_idle_enter()
and placing it before the last generic need_resched() check in the idle
loop. It is then assumed that no call to call_rcu() will be performed
after that point in the idle loop until the CPU is put in low power
mode.
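
A matching sketch of the ordering after this change (again not the
literal kernel code; the function name is made up):

	/* Simplified sketch of the new ordering. */
	static void idle_loop_sketch_after(void)
	{
		while (!need_resched()) {
			arch_cpu_idle_enter();
			rcu_nocb_flush_deferred_wakeup();	/* new: wake rcuog now if needed ... */
			if (!need_resched()) {			/* ... so the last generic check still
								 * catches the resulting resched request */
				rcu_idle_enter();		/* no longer performs the wakeup */
				arch_cpu_idle();
				rcu_idle_exit();
			}
			arch_cpu_idle_exit();
		}
	}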

Fixes: 96d3fd0d31 ("rcu: Break call_rcu() deadlock involving scheduler and perf")
Reported-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20210131230548.32970-3-frederic@kernel.org

--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -110,8 +110,10 @@ static inline void rcu_user_exit(void) { }
 #ifdef CONFIG_RCU_NOCB_CPU
 void rcu_init_nohz(void);
+void rcu_nocb_flush_deferred_wakeup(void);
 #else /* #ifdef CONFIG_RCU_NOCB_CPU */
 static inline void rcu_init_nohz(void) { }
+static inline void rcu_nocb_flush_deferred_wakeup(void) { }
 #endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
 
 /**

--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -671,10 +671,7 @@ static noinstr void rcu_eqs_enter(bool user)
  */
 void rcu_idle_enter(void)
 {
-	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
-
 	lockdep_assert_irqs_disabled();
-	do_nocb_deferred_wakeup(rdp);
 	rcu_eqs_enter(false);
 }
 EXPORT_SYMBOL_GPL(rcu_idle_enter);

--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -2187,6 +2187,11 @@ static void do_nocb_deferred_wakeup(struct rcu_data *rdp)
 		do_nocb_deferred_wakeup_common(rdp);
 }
 
+void rcu_nocb_flush_deferred_wakeup(void)
+{
+	do_nocb_deferred_wakeup(this_cpu_ptr(&rcu_data));
+}
+
 void __init rcu_init_nohz(void)
 {
 	int cpu;

--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -285,6 +285,7 @@ static void do_idle(void)
 		}
 
 		arch_cpu_idle_enter();
+		rcu_nocb_flush_deferred_wakeup();
 
 		/*
 		 * In poll mode we reenable interrupts and spin. Also if we