linux/kernel/sched
Jason Low 52a08ef1f1 sched: Fix the rq->next_balance logic in rebalance_domains() and idle_balance()
Currently, in idle_balance(), we update rq->next_balance only when we pull
tasks. However, it is also important to update it in the !pulled_task case.

When the CPU is "busy" (the CPU isn't idle), rq->next_balance gets computed
using sd->busy_factor (so we increase the balance interval when the CPU is
busy). However, when the CPU goes idle, rq->next_balance could still be set
to a large value that was computed with sd->busy_factor; for example, an
8ms balance_interval scaled by a busy_factor of 32 pushes the next balance
out by 256ms.

Thus, we also need to update rq->next_balance in idle_balance() in the
!pulled_task case, so that rq->next_balance gets recomputed without the
busy_factor when the CPU is about to go idle.

This patch makes rq->next_balance get updated independently of whether or
not we pulled a task. It also adds logic to ensure that we always traverse
at least one sched domain, so that we get a proper next_balance value for
updating rq->next_balance.
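
For illustration, a minimal sketch of the shape this takes at the end of
idle_balance() (this_rq, this_cpu, curr_cost and pulled_task are assumed
from the surrounding fair.c code; this is a sketch, not the literal patch):

        unsigned long next_balance = jiffies + HZ;
        struct sched_domain *sd;

        rcu_read_lock();
        for_each_domain(this_cpu, sd) {
                /* interval without busy_factor: this CPU is about to go idle */
                unsigned long interval = msecs_to_jiffies(sd->balance_interval);
                unsigned long next = sd->last_balance + interval;

                if (time_after(next_balance, next))
                        next_balance = next;

                /*
                 * Even if we bail out here (e.g. avg_idle is too small),
                 * at least one domain has contributed to next_balance.
                 */
                if (this_rq->avg_idle < curr_cost + sd->max_newidle_lb_cost)
                        break;

                /* ... newidle balancing here, possibly setting pulled_task ... */
        }
        rcu_read_unlock();

        /* Applied whether or not we pulled a task. */
        if (time_after(this_rq->next_balance, next_balance))
                this_rq->next_balance = next_balance;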

Additionally, since load_balance() modifies sd->balance_interval, we need
to re-read the sched domain's interval after the call to load_balance() in
rebalance_domains(), before we update rq->next_balance.
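
In rebalance_domains() this amounts to re-reading the interval right after
load_balance(), roughly as follows (a sketch; cpu, rq, idle, next_balance,
continue_balancing and update_next_balance are locals of the existing
function, and get_sd_balance_interval() is the helper described below):

        interval = get_sd_balance_interval(sd, idle != CPU_IDLE);

        if (time_after_eq(jiffies, sd->last_balance + interval)) {
                if (load_balance(cpu, rq, sd, idle, &continue_balancing)) {
                        /* we may now be idle (or busy) after pulling tasks */
                        idle = idle_cpu(cpu) ? CPU_IDLE : CPU_NOT_IDLE;
                }
                sd->last_balance = jiffies;
                /* load_balance() may have changed sd->balance_interval */
                interval = get_sd_balance_interval(sd, idle != CPU_IDLE);
        }

        if (time_after(next_balance, sd->last_balance + interval)) {
                next_balance = sd->last_balance + interval;
                update_next_balance = 1;
        }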

This patch adds and uses two new helper functions, update_next_balance()
and get_sd_balance_interval(), to update next_balance and to obtain the
sched domain's balance_interval.
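
The two helpers look roughly like this (a sketch based on the description
above; max_load_balance_interval is the existing clamp limit in fair.c):

static inline unsigned long
get_sd_balance_interval(struct sched_domain *sd, int cpu_busy)
{
        unsigned long interval = sd->balance_interval;

        if (cpu_busy)
                interval *= sd->busy_factor;

        /* scale the domain's ms interval to jiffies and clamp it */
        interval = msecs_to_jiffies(interval);
        interval = clamp(interval, 1UL, max_load_balance_interval);

        return interval;
}

static inline void
update_next_balance(struct sched_domain *sd, int cpu_busy,
                    unsigned long *next_balance)
{
        unsigned long interval = get_sd_balance_interval(sd, cpu_busy);
        unsigned long next = sd->last_balance + interval;

        if (time_after(*next_balance, next))
                *next_balance = next;
}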

Signed-off-by: Jason Low <jason.low2@hp.com>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: daniel.lezcano@linaro.org
Cc: alex.shi@linaro.org
Cc: efault@gmx.de
Cc: vincent.guittot@linaro.org
Cc: morten.rasmussen@arm.com
Cc: aswin@hp.com
Link: http://lkml.kernel.org/r/1399596562.2200.7.camel@j-VirtualBox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-05-22 11:16:32 +02:00
Makefile sched/idle: Move cpu/idle.c to sched/idle.c 2014-02-11 09:58:30 +01:00
auto_group.c sched: Replace hardcoding of -20 and 19 with MIN_NICE and MAX_NICE 2014-02-22 18:15:54 +01:00
auto_group.h Revert "sched/autogroup: Fix crash on reboot when autogroup is disabled" 2012-12-11 10:23:45 +01:00
clock.c kernel: use macros from compiler.h instead of __attribute__((...)) 2014-04-07 16:36:11 -07:00
completion.c sched: Move completion code from core.c to completion.c 2013-11-06 07:49:19 +01:00
core.c sched: Use clamp() and clamp_val() to make sys_nice() more readable 2014-05-22 11:16:31 +02:00
cpuacct.c cgroup: clean up cgroup_subsys names and initialization 2014-02-08 10:36:58 -05:00
cpuacct.h sched/cpuacct: Initialize root cpuacct earlier 2013-04-10 13:54:20 +02:00
cpudeadline.c sched/deadline: Replace NR_CPUS arrays 2014-05-22 10:21:28 +02:00
cpudeadline.h sched/deadline: Replace NR_CPUS arrays 2014-05-22 10:21:28 +02:00
cpupri.c sched/cpupri: Replace NR_CPUS arrays 2014-05-22 10:21:29 +02:00
cpupri.h sched/cpupri: Replace NR_CPUS arrays 2014-05-22 10:21:29 +02:00
cputime.c sched: Sanitize irq accounting madness 2014-05-07 11:51:30 +02:00
deadline.c sched/deadline: Fix sched_yield() behavior 2014-05-07 11:51:31 +02:00
debug.c Merge branch 'for-3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup 2014-04-03 13:05:42 -07:00
fair.c sched: Fix the rq->next_balance logic in rebalance_domains() and idle_balance() 2014-05-22 11:16:32 +02:00
features.h sched/numa: Resist moving tasks towards nodes with fewer hinting faults 2013-10-09 12:40:27 +02:00
idle.c Merge branch 'pm-cpuidle' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm into sched/core 2014-05-22 10:37:06 +02:00
idle_task.c sched/fair: Push down check for high priority class task into idle_balance() 2014-03-11 12:05:37 +01:00
proc.c sched: Change get_rq_runnable_load() to static and inline 2013-06-27 10:07:44 +02:00
rt.c sched: Revert commit 4c6c4e38c4 ("sched/core: Fix endless loop in pick_next_task()") 2014-04-18 12:07:29 +02:00
sched.h sched: Revert commit 4c6c4e38c4 ("sched/core: Fix endless loop in pick_next_task()") 2014-04-18 12:07:29 +02:00
stats.c kernel: audit/fix non-modular users of module_init in core code 2014-04-03 16:21:07 -07:00
stats.h sched: Micro-optimize by dropping unnecessary task_rq() calls 2013-09-25 13:51:06 +02:00
stop_task.c sched: Fix hotplug task migration 2014-02-21 21:43:18 +01:00
wait.c sched: Move wait code from core.c to wait.c 2013-11-06 07:49:18 +01:00