/*
 * linux/mm/oom_kill.c
 *
 * Copyright (C) 1998,2000  Rik van Riel
 *	Thanks go out to Claus Fischer for some serious inspiration and
 *	for goading me into coding this file...
oom: badness heuristic rewrite
This is a complete rewrite of the oom killer's badness() heuristic, which is
used to determine which task to kill in oom conditions. The goal is to
make it as simple and predictable as possible so the results are better
understood and we end up killing the task which will lead to the most
memory freeing while still respecting the fine-tuning from userspace.
Instead of basing the heuristic on mm->total_vm for each task, the task's
rss and swap space are used. This is a better indication of the
amount of memory that will be freeable if the oom killed task is chosen
and subsequently exits. This helps specifically in cases where KDE or
GNOME is chosen for oom kill on desktop systems instead of a memory
hogging task.
The baseline for the heuristic is the proportion of memory plus swap that
each task is currently using, compared to the amount of "allowable"
memory. "Allowable," in this sense, means the system-wide resources for
unconstrained oom conditions, the set of mempolicy nodes, the mems
attached to current's cpuset, or a memory controller's limit. The
proportion is given on a scale of 0 (never kill) to 1000 (always kill),
roughly meaning that if a task has a badness() score of 500, the task
consumes approximately 50% of allowable memory resident in RAM or in swap
space.
The proportion is always relative to the amount of "allowable" memory and
not the total amount of RAM systemwide so that mempolicies and cpusets may
operate in isolation; they shall not need to know the true size of the
machine on which they are running if they are bound to a specific set of
nodes or mems, respectively.
Root tasks are given 3% extra memory, just like __vm_enough_memory()
provides in LSMs. In the event of two tasks consuming similar amounts of
memory, it is generally better to save root's task.
Because of the change in the badness() heuristic's baseline, it is also
necessary to introduce a new user interface to tune it. It's not possible
to redefine the meaning of /proc/pid/oom_adj with a new scale since the
ABI cannot be changed for backward compatibility. Instead, a new tunable,
/proc/pid/oom_score_adj, is added that ranges from -1000 to +1000. It may
be used to polarize the heuristic such that certain tasks are never
considered for oom kill while others may always be considered. The value
is added directly into the badness() score, so a value of -500, for
example, means to discount 50% of its memory consumption in comparison to
other tasks either on the system, bound to the mempolicy, in the cpuset,
or sharing the same memory controller.
/proc/pid/oom_adj is changed so that its meaning is rescaled into the
units used by /proc/pid/oom_score_adj, and vice versa. Changing one of
these per-task tunables will rescale the value of the other to an
equivalent meaning. Although /proc/pid/oom_adj was originally defined as
a bitshift on the badness score, it now shares the same linear growth as
/proc/pid/oom_score_adj but with different granularity. This is required
so the ABI is not broken with userspace applications and allows oom_adj to
be deprecated for future removal.
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
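As a rough illustration of the rescaled heuristic described in the commit
message above (a minimal sketch under the stated assumptions, not the
kernel's actual implementation; all identifiers below are hypothetical):

/*
 * Hypothetical sketch of the rewritten heuristic: the score is the
 * task's rss + swap as a proportion of "allowable" memory on a 0..1000
 * scale, with a 3% root discount and a direct oom_score_adj offset.
 */
static long sketch_badness(long rss_pages, long swap_pages,
			   long totalpages, long oom_score_adj, bool is_root)
{
	long points = (rss_pages + swap_pages) * 1000 / totalpages;

	if (is_root)
		points -= 30;		/* 3% of the 0..1000 scale */

	points += oom_score_adj;	/* -1000 (never) .. +1000 (always) */

	return points < 0 ? 0 : points;
}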
 *
 * Copyright (C)  2010  Google, Inc.
 *	Rewritten by David Rientjes
 *
 * The routines in this file are used to kill a process when
[PATCH] cpusets: oom_kill tweaks
This patch series extends the use of the cpuset attribute 'mem_exclusive'
to support cpuset configurations that:
1) allow GFP_KERNEL allocations to come from a potentially larger
   set of memory nodes than GFP_USER allocations, and
2) can constrain the oom killer to tasks running in cpusets in
   a specified subtree of the cpuset hierarchy.
Here's an example usage scenario. For a few hours or more, a large NUMA
system at a University is to be divided in two halves, with a bunch of student
jobs running in half the system under some form of batch manager, and with a
big research project running in the other half. Each of the student jobs is
placed in a small cpuset, but should share the classic Unix time share
facilities, such as buffered pages of files in /bin and /usr/lib. The big
research project wants no interference whatsoever from the student jobs, and
has highly tuned, unusual memory and i/o patterns that intend to make full use
of all the main memory on the nodes available to it.
In this example, we have two big sibling cpusets, one of which is further
divided into a more dynamic set of child cpusets.
We want kernel memory allocations constrained by the two big cpusets, and user
allocations constrained by the smaller child cpusets where present. And we
require that the oom killer not operate across the two halves of this system,
or else the first time a student job runs amuck, the big research project will
likely be first in line to get shot.
Tweaking /proc/<pid>/oom_adj is not ideal -- if the big research project
really does run amuck allocating memory, it should be shot, not some other
task outside the research project's mem_exclusive cpuset.
I propose to extend the use of the 'mem_exclusive' flag of cpusets to manage
such scenarios. Let memory allocations for user space (GFP_USER) be
constrained by a task's current cpuset, but let memory allocations for kernel
space (GFP_KERNEL) be constrained by the nearest mem_exclusive ancestor of the
current cpuset, even though kernel space allocations will still _prefer_ to
remain within the current task's cpuset, if memory is easily available (see
the sketch after this message).
Let the oom killer be constrained to consider only tasks that are in
overlapping mem_exclusive cpusets (it won't help much to kill a task that
normally cannot allocate memory on any of the same nodes as the ones on which
the current task can allocate.)
The current constraints imposed on setting mem_exclusive are unchanged. A
cpuset may only be mem_exclusive if its parent is also mem_exclusive, and a
mem_exclusive cpuset may not overlap any of its siblings' memory nodes.
This patch was presented on linux-mm in early July 2005, though it did not
generate much feedback at that time. It has been built for a variety of
arch's using cross tools, and built, booted and tested for function on SN2
(ia64).
There are 4 patches in this set:
1) Some minor cleanup, and some improvements to the code layout
   of one routine to make subsequent patches cleaner.
2) Add another GFP flag - __GFP_HARDWALL. It marks memory
   requests for USER space, which are tightly confined by the
   current task's cpuset.
3) Now memory requests (such as KERNEL) that are not marked HARDWALL can,
   if short on memory, look in the potentially larger pool of memory
   defined by the nearest mem_exclusive ancestor cpuset of the current
   task's cpuset.
4) Finally, modify the oom killer to skip any task whose mem_exclusive
   cpuset doesn't overlap ours.
Patch (1), the one time I looked on an SN2 (ia64) build, actually saved 32
bytes of kernel text space. Patch (2) has no effect on the size of kernel
text space (it just adds a preprocessor flag). Patches (3) and (4) added
about 600 bytes each of kernel text space, mostly in kernel/cpuset.c, which
matters only if CONFIG_CPUSET is enabled.
This patch:
This patch applies a few comment and code cleanups to mm/oom_kill.c prior to
applying a few small patches to improve cpuset management of memory placement.
The comment changed in oom_kill.c was seriously misleading. The code layout
change in select_bad_process() makes room for adding another condition on
which a process can be spared the oom killer (see the subsequent
cpuset_nodes_overlap patch for this addition).
Also fixed a couple of typos and spellos that bugged me while I was here.
This patch should have no material effect.
Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
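A minimal sketch of the allocation rule proposed above, under the semantics
stated in this message (the node_in_* helper names below are placeholders,
not the real cpuset API in kernel/cpuset.c):

/*
 * Hypothetical sketch: GFP_USER allocations are hardwalled to the
 * current task's cpuset, while GFP_KERNEL allocations may fall back to
 * the nearest mem_exclusive ancestor cpuset when memory is tight.
 */
static bool sketch_node_allowed(int node, gfp_t gfp_mask)
{
	if (node_in_current_cpuset(node))	/* placeholder helper */
		return true;

	if (gfp_mask & __GFP_HARDWALL)		/* GFP_USER request */
		return false;

	/* GFP_KERNEL: may borrow from the nearest mem_exclusive ancestor */
	return node_in_nearest_mem_exclusive_ancestor(node); /* placeholder */
}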
 * we're seriously out of memory. This gets called from __alloc_pages()
 * in mm/page_alloc.c when we really run out of memory.
 *
 * Since we won't call these routines often (on a well-configured
 * machine) this file will double as a 'coding guide' and a signpost
 * for newbie kernel hackers. It features several pointers to major
 * kernel subsystems and hints as to where to find out what things do.
 */

#include <linux/oom.h>
#include <linux/mm.h>
#include <linux/err.h>
include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h, making everything defined by the two files
universally available and complicating inclusion dependencies.
The percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability. As this conversion
needs to touch a large number of source files, the following script is
used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following:
* Scan files for gfp and slab usages and update includes such that
  only the necessary includes are there. ie. if only gfp is used,
  gfp.h, if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
  blocks and tries to put the new include such that its order conforms
  to its surroundings. It's put in the include block which contains
  core kernel includes, in the same order that the rest are ordered -
  alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
  doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
  because the file doesn't have a fitting include block), it prints out
  an error message indicating which .h file needs to be added to the
  file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
   over 4000 files, deleting around 700 includes and adding ~480 gfp.h
   and ~3000 slab.h inclusions. The script emitted errors for ~400
   files.
2. Each error was manually checked. Some didn't need the inclusion,
   some needed manual addition while adding it to implementation .h or
   embedding .c file was more appropriate for others. This step added
   inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
   from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
   e.g. lib/decompress_*.c used malloc/free() wrappers around slab
   APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
   editing them as sprinkling gfp.h and slab.h inclusions around .h
   files could easily lead to inclusion dependency hell. Most gfp.h
   inclusion directives were ignored as stuff from gfp.h was usually
   widely available and often used in preprocessor macros. Each
   slab.h inclusion directive was examined and added manually as
   necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
   were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
   distributed build env didn't work with gcov compiles) and a few
   more options had to be turned off depending on archs to make things
   build (like ipr on powerpc/64 which failed due to missing writeq).
   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
   a separate patch and serve as a bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of the
specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

#include <linux/gfp.h>
#include <linux/sched.h>
#include <linux/swap.h>
#include <linux/timex.h>
#include <linux/jiffies.h>
#include <linux/cpuset.h>
#include <linux/export.h>
#include <linux/notifier.h>
#include <linux/memcontrol.h>
#include <linux/mempolicy.h>
security: Fix setting of PF_SUPERPRIV by __capable()
Fix the setting of PF_SUPERPRIV by __capable() as it could corrupt the flags
of the target process if that is not the current process and it is trying to
change its own flags in a different way at the same time.
__capable() is using neither atomic ops nor locking to protect t->flags. This
patch removes __capable() and introduces has_capability() that doesn't set
PF_SUPERPRIV on the process being queried.
This patch further splits security_ptrace() in two:
(1) security_ptrace_may_access(). This passes judgement on whether one
    process may access another only (PTRACE_MODE_ATTACH for ptrace() and
    PTRACE_MODE_READ for /proc), and takes a pointer to the child process.
    current is the parent.
(2) security_ptrace_traceme(). This passes judgement on PTRACE_TRACEME only,
    and takes only a pointer to the parent process. current is the child.
    In Smack and commoncap, this uses has_capability() to determine whether
    the parent will be permitted to use PTRACE_ATTACH if normal checks fail.
    This does not set PF_SUPERPRIV.
Two of the instances of __capable() actually only act on current, and so have
been changed to calls to capable().
Of the places that were using __capable():
(1) The OOM killer calls __capable() thrice when weighing the killability of a
    process. All of these now use has_capability().
(2) cap_ptrace() and smack_ptrace() were using __capable() to check to see
    whether the parent was allowed to trace any process. As mentioned above,
    these have been split. For PTRACE_ATTACH and /proc, capable() is now
    used, and for PTRACE_TRACEME, has_capability() is used.
(3) cap_safe_nice() only ever saw current, so now uses capable().
(4) smack_setprocattr() rejected accesses to tasks other than current just
    after calling __capable(), so the order of these two tests has been
    switched and capable() is used instead.
(5) In smack_file_send_sigiotask(), we need to allow privileged processes to
    receive SIGIO on files they're manipulating.
(6) In smack_task_wait(), we let a process wait for a privileged process,
    whether or not the process doing the waiting is privileged.
I've tested this with the LTP SELinux and syscalls test scripts.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Serge Hallyn <serue@us.ibm.com>
Acked-by: Casey Schaufler <casey@schaufler-ca.com>
Acked-by: Andrew G. Morgan <morgan@kernel.org>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: James Morris <jmorris@namei.org>
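For illustration, a hedged sketch of the resulting calling convention:
has_capability() and capable() are the real kernel helpers described above,
while the surrounding wrapper is hypothetical:

/*
 * capable() acts on current and may set PF_SUPERPRIV; has_capability()
 * queries an arbitrary task without touching its flags, so it is safe
 * when weighing another task (as the OOM killer does).
 */
static bool sketch_task_is_privileged(struct task_struct *p)
{
	if (p == current)
		return capable(CAP_SYS_ADMIN);

	return has_capability(p, CAP_SYS_ADMIN);
}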
#include <linux/security.h>
#include <linux/ptrace.h>
#include <linux/freezer.h>
#include <linux/ftrace.h>
#include <linux/ratelimit.h>
#include <linux/kthread.h>
#include <linux/init.h>
#include <asm/tlb.h>
#include "internal.h"

#define CREATE_TRACE_POINTS
#include <trace/events/oom.h>

int sysctl_panic_on_oom;
int sysctl_oom_kill_allocating_task;
int sysctl_oom_dump_tasks = 1;

DEFINE_MUTEX(oom_lock);

#ifdef CONFIG_NUMA
/**
 * has_intersects_mems_allowed() - check task eligibility for kill
 * @start: task struct of which task to consider
 * @mask: nodemask passed to page allocator for mempolicy ooms
 *
 * Task eligibility is determined by whether or not a candidate task, @start,
 * shares the same mempolicy nodes as current if it is bound by such a policy
 * and whether or not it has the same set of allowed cpuset nodes.
 */
static bool has_intersects_mems_allowed(struct task_struct *start,
					const nodemask_t *mask)
{
	struct task_struct *tsk;
	bool ret = false;

	rcu_read_lock();
	for_each_thread(start, tsk) {
		if (mask) {
			/*
			 * If this is a mempolicy constrained oom, tsk's
			 * cpuset is irrelevant.  Only return true if its
			 * mempolicy intersects current, otherwise it may be
			 * needlessly killed.
			 */
			ret = mempolicy_nodemask_intersects(tsk, mask);
		} else {
			/*
			 * This is not a mempolicy constrained oom, so only
			 * check the mems of tsk's cpuset.
			 */
			ret = cpuset_mems_allowed_intersects(current, tsk);
		}
		if (ret)
			break;
	}
	rcu_read_unlock();

	return ret;
}
#else
static bool has_intersects_mems_allowed(struct task_struct *tsk,
					const nodemask_t *mask)
{
	return true;
}
#endif /* CONFIG_NUMA */
oom: introduce find_lock_task_mm() to fix !mm false positives
Almost all ->mm == NULL checks in oom_kill.c are wrong.
The current code assumes that the task without ->mm has already released
its memory and ignores the process. However this is not necessarily true
when this process is multithreaded; other live sub-threads can use this
->mm.
- Remove the "if (!p->mm)" check in select_bad_process(), it is
  just wrong.
- Add the new helper, find_lock_task_mm(), which finds the live
  thread which uses the memory and takes task_lock() to pin ->mm.
- Change oom_badness() to use this helper instead of just checking
  ->mm != NULL.
- As David pointed out, select_bad_process() must never choose the
  task without ->mm, but no matter what oom_badness() returns the
  task can be chosen if nothing else has been found yet.
  Change oom_badness() to return int, change it to return -1 if
  find_lock_task_mm() fails, and change select_bad_process() to
  check points >= 0.
Note! This patch is not enough, we need more changes.
- oom_badness() was fixed, but oom_kill_task() still ignores
  the task without ->mm.
- oom_forkbomb_penalty() should use find_lock_task_mm() too,
  and it also needs other changes to actually find the
  first-descendant children.
This will be addressed later.
[kosaki.motohiro@jp.fujitsu.com: use in badness(), __oom_kill_task()]
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

/*
 * The process p may have detached its own ->mm while exiting or through
 * use_mm(), but one or more of its subthreads may still have a valid
 * pointer.  Return p, or any of its subthreads with a valid ->mm, with
 * task_lock() held.
 */
struct task_struct *find_lock_task_mm(struct task_struct *p)
{
	struct task_struct *t;

	rcu_read_lock();

	for_each_thread(p, t) {
		task_lock(t);
		if (likely(t->mm))
			goto found;
		task_unlock(t);
	}
	t = NULL;
found:
	rcu_read_unlock();

	return t;
}
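For context, a hedged sketch of how a caller might use find_lock_task_mm()
(the wrapper below is hypothetical; only find_lock_task_mm(), get_mm_rss()
and task_unlock() are real kernel helpers):

/*
 * find_lock_task_mm() returns a thread of the victim with a valid ->mm
 * and task_lock() held, so the ->mm can be examined safely before the
 * caller drops the lock.
 */
static unsigned long sketch_victim_rss(struct task_struct *victim)
{
	struct task_struct *t = find_lock_task_mm(victim);
	unsigned long rss = 0;

	if (t) {
		rss = get_mm_rss(t->mm);	/* ->mm pinned by task_lock() */
		task_unlock(t);
	}
	return rss;
}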

/*
 * order == -1 means the oom kill is required by sysrq, otherwise only
 * for display purposes.
 */
static inline bool is_sysrq_oom(struct oom_control *oc)
{
	return oc->order == -1;
}

/* return true if the task is not adequate as candidate victim task. */
static bool oom_unkillable_task(struct task_struct *p,
		struct mem_cgroup *memcg, const nodemask_t *nodemask)
{
	if (is_global_init(p))
		return true;
	if (p->flags & PF_KTHREAD)
		return true;

	/*
	 * When mem_cgroup_out_of_memory() is called and p is not a member
	 * of the group, p cannot be chosen as the victim.
	 */
	if (memcg && !task_in_mem_cgroup(p, memcg))
		return true;

	/* p may not have freeable memory in nodemask */
	if (!has_intersects_mems_allowed(p, nodemask))
		return true;

	return false;
}

/**
 * oom_badness - heuristic function to determine which candidate task to kill
 * @p: task struct of which task we should calculate
 * @totalpages: total present RAM allowed for page allocation
 *
 * The heuristic for determining which task to kill is made to be as simple and
 * predictable as possible.  The goal is to return the highest value for the
 * task consuming the most memory to avoid subsequent oom failures.
 */
unsigned long oom_badness(struct task_struct *p, struct mem_cgroup *memcg,
			  const nodemask_t *nodemask, unsigned long totalpages)
{
	long points;
	long adj;
oom: move oom_adj value from task_struct to signal_struct
Currently, the OOM logic callflow is:
__out_of_memory()
    select_bad_process()    for each task
        badness()           calculate badness of one task
    oom_kill_process()      search child
        oom_kill_task()     kill target task and mm shared tasks with it
For example, process-A has two threads, thread-A and thread-B, it has very
fat memory, and the threads have the following oom_adj and oom_score:
thread-A: oom_adj = OOM_DISABLE, oom_score = 0
thread-B: oom_adj = 0, oom_score = very-high
Then, select_bad_process() selects thread-B, but oom_kill_task() refuses to
kill the task because thread-A has OOM_DISABLE. Thus __out_of_memory()
calls select_bad_process() again, but select_bad_process() selects the same
task. This means the kernel falls into a livelock.
The fact is, select_bad_process() must select a killable task; otherwise
the OOM logic goes into a livelock.
And the root cause is that oom_adj shouldn't be a per-thread value. It
should be a per-process value because the OOM killer kills a process, not a
thread. Thus this patch moves oomkilladj (now more appropriately named
oom_adj) from struct task_struct to struct signal_struct. That naturally
prevents select_bad_process() from choosing the wrong task.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Paul Menage <menage@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

	if (oom_unkillable_task(p, memcg, nodemask))
		return 0;

	p = find_lock_task_mm(p);
	if (!p)
		return 0;

	/*
	 * Do not even consider tasks which are explicitly marked oom
	 * unkillable or have already been oom reaped or are in the middle
	 * of a vfork.
	 */
	adj = (long)p->signal->oom_score_adj;
	if (adj == OOM_SCORE_ADJ_MIN ||
			test_bit(MMF_OOM_REAPED, &p->mm->flags) ||
			in_vfork(p)) {
		task_unlock(p);
		return 0;
	}

	/*
	 * The baseline for the badness score is the proportion of RAM that each
	 * task's rss, pagetable and swap space use.
	 */
mm: account pmd page tables to the process
Dave noticed that an unprivileged process can allocate a significant amount
of memory -- >500 MiB on x86_64 -- and stay unnoticed by the oom-killer and
memory cgroup. The trick is to allocate a lot of PMD page tables. The Linux
kernel doesn't account PMD tables to the process, only PTE.
The use-case below uses a few tricks to allocate a lot of PMD page tables
while keeping VmRSS and VmPTE low. oom_score for the process will be 0.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/prctl.h>

#define PUD_SIZE (1UL << 30)
#define PMD_SIZE (1UL << 21)

#define NR_PUD 130000

int main(void)
{
	char *addr = NULL;
	unsigned long i;

	prctl(PR_SET_THP_DISABLE);
	for (i = 0; i < NR_PUD ; i++) {
		addr = mmap(addr + PUD_SIZE, PUD_SIZE, PROT_WRITE|PROT_READ,
				MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
		if (addr == MAP_FAILED) {
			perror("mmap");
			break;
		}
		*addr = 'x';
		munmap(addr, PMD_SIZE);
		addr = mmap(addr, PMD_SIZE, PROT_WRITE|PROT_READ,
				MAP_ANONYMOUS|MAP_PRIVATE|MAP_FIXED, -1, 0);
		if (addr == MAP_FAILED)
			perror("re-mmap"), exit(1);
	}
	printf("PID %d consumed %lu KiB in PMD page tables\n",
			getpid(), i * 4096 >> 10);
	return pause();
}

The patch addresses the issue by accounting PMD tables to the process the
same way we account PTE.
The main places where PMD tables are accounted are __pmd_alloc() and
free_pmd_range(). But there are a few corner cases:
- HugeTLB can share PMD page tables. The patch handles this by accounting
  the table to all processes who share it.
- x86 PAE pre-allocates a few PMD tables on fork.
- Architectures with FIRST_USER_ADDRESS > 0: we need to adjust the sanity
  check on exit(2).
Accounting only happens on configurations where the PMD page table level is
present (PMD is not folded). As with nr_ptes we use a per-mm counter. The
counter value is used to calculate the baseline for the badness score by
the oom-killer.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Reviewed-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: David Rientjes <rientjes@google.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
	points = get_mm_rss(p->mm) + get_mm_counter(p->mm, MM_SWAPENTS) +
		atomic_long_read(&p->mm->nr_ptes) + mm_nr_pmds(p->mm);
oom: badness heuristic rewrite
This a complete rewrite of the oom killer's badness() heuristic which is
used to determine which task to kill in oom conditions. The goal is to
make it as simple and predictable as possible so the results are better
understood and we end up killing the task which will lead to the most
memory freeing while still respecting the fine-tuning from userspace.
Instead of basing the heuristic on mm->total_vm for each task, the task's
rss and swap space is used instead. This is a better indication of the
amount of memory that will be freeable if the oom killed task is chosen
and subsequently exits. This helps specifically in cases where KDE or
GNOME is chosen for oom kill on desktop systems instead of a memory
hogging task.
The baseline for the heuristic is a proportion of memory that each task is
currently using in memory plus swap compared to the amount of "allowable"
memory. "Allowable," in this sense, means the system-wide resources for
unconstrained oom conditions, the set of mempolicy nodes, the mems
attached to current's cpuset, or a memory controller's limit. The
proportion is given on a scale of 0 (never kill) to 1000 (always kill),
roughly meaning that if a task has a badness() score of 500 that the task
consumes approximately 50% of allowable memory resident in RAM or in swap
space.
The proportion is always relative to the amount of "allowable" memory and
not the total amount of RAM systemwide so that mempolicies and cpusets may
operate in isolation; they shall not need to know the true size of the
machine on which they are running if they are bound to a specific set of
nodes or mems, respectively.
Root tasks are given 3% extra memory just like __vm_enough_memory()
provides in LSMs. In the event of two tasks consuming similar amounts of
memory, it is generally better to save root's task.
Because of the change in the badness() heuristic's baseline, it is also
necessary to introduce a new user interface to tune it. It's not possible
to redefine the meaning of /proc/pid/oom_adj with a new scale since the
ABI cannot be changed for backward compatability. Instead, a new tunable,
/proc/pid/oom_score_adj, is added that ranges from -1000 to +1000. It may
be used to polarize the heuristic such that certain tasks are never
considered for oom kill while others may always be considered. The value
is added directly into the badness() score so a value of -500, for
example, means to discount 50% of its memory consumption in comparison to
other tasks either on the system, bound to the mempolicy, in the cpuset,
or sharing the same memory controller.
/proc/pid/oom_adj is changed so that its meaning is rescaled into the
units used by /proc/pid/oom_score_adj, and vice versa. Changing one of
these per-task tunables will rescale the value of the other to an
equivalent meaning. Although /proc/pid/oom_adj was originally defined as
a bitshift on the badness score, it now shares the same linear growth as
/proc/pid/oom_score_adj but with different granularity. This is required
so the ABI is not broken with userspace applications and allows oom_adj to
be deprecated for future removal.
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-08-10 08:19:46 +08:00
|
|
|
task_unlock(p);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
/*
|
oom: badness heuristic rewrite
2010-08-10 08:19:46 +08:00
|
|
|
* Root processes get 3% bonus, just like the __vm_enough_memory()
|
|
|
|
* implementation used by LSMs.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
oom: badness heuristic rewrite
2010-08-10 08:19:46 +08:00
|
|
|
if (has_capability_noaudit(p, CAP_SYS_ADMIN))
|
mm, oom: base root bonus on current usage
A bonus of 3% of system memory is sometimes excessive in comparison to
other processes.
With commit a63d83f427fb ("oom: badness heuristic rewrite"), the OOM
killer tries to avoid killing privileged tasks by subtracting 3% of
overall memory (system or cgroup) from their per-task consumption. But
as a result, all root tasks that consume less than 3% of overall memory
are considered equal, and so it only takes 33+ privileged tasks pushing
the system out of memory for the OOM killer to do something stupid and
kill dhclient or other root-owned processes. For example, on a 32G
machine it can't tell the difference between the 1M agetty and the 10G
fork bomb member.
The changelog describes this 3% boost as the equivalent of the global
overcommit limit being 3% higher for privileged tasks, but this is not
the same as discounting 3% of overall memory from _every privileged task
individually_ during OOM selection.
Replace the bonus of 3% of system memory with a bonus of 3% of current
memory usage.
By giving root tasks a bonus that is proportional to their actual size,
they remain comparable even when relatively small. In the example
above, the OOM killer will discount the 1M agetty's 256 badness points
down to 179, and the 10G fork bomb's 262144 points down to 183500 points
and make the right choice, instead of discounting both to 0 and killing
agetty because it's first in the task list.
Signed-off-by: David Rientjes <rientjes@google.com>
Reported-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-01-31 07:46:11 +08:00
|
|
|
points -= (points * 3) / 100;
|
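To make the bonus change concrete, here is a minimal sketch contrasting the two discounts described in the changelog above; the helper names are hypothetical, and only the proportional expression mirrors the in-tree line shown here.

/*
 * Sketch only: the old behavior discounted 3% of *allowable* memory
 * (totalpages) from every privileged task, while the new behavior
 * discounts 3% of the task's own badness points. Helper names are
 * invented for illustration.
 */
static unsigned long root_bonus_old(unsigned long points,
				    unsigned long totalpages)
{
	unsigned long bonus = totalpages * 3 / 100;

	return points > bonus ? points - bonus : 0;
}

static unsigned long root_bonus_new(unsigned long points)
{
	return points - (points * 3) / 100;
}

With a 32G machine (roughly 8M 4K-pages), root_bonus_old() zeroes out any root task using less than about 250K pages, which is exactly the indistinguishability problem the changelog describes; root_bonus_new() keeps small tasks comparable.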
2005-04-17 06:20:36 +08:00
|
|
|
|
2012-06-21 03:52:58 +08:00
|
|
|
/* Normalize to oom_score_adj units */
|
|
|
|
adj *= totalpages / 1000;
|
|
|
|
points += adj;
|
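A small worked example of the scaling above, with illustrative numbers that are not taken from the file:

/*
 * Illustration: with 1,000,000 allowable pages, one oom_score_adj
 * unit is worth totalpages / 1000 = 1000 points, so an
 * oom_score_adj of -500 subtracts 500,000 points -- that is, it
 * discounts half of the allowable memory from this task's score.
 */
unsigned long totalpages = 1000000;	/* assumed for the example */
long adj = -500;			/* the task's oom_score_adj */

adj *= totalpages / 1000;		/* adj is now -500000 */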
2005-04-17 06:20:36 +08:00
|
|
|
|
2010-09-23 04:04:52 +08:00
|
|
|
/*
|
2012-05-30 06:06:47 +08:00
|
|
|
* Never return 0 for an eligible task regardless of the root bonus and
|
|
|
|
* oom_score_adj (oom_score_adj can't be OOM_SCORE_ADJ_MIN here).
|
2010-09-23 04:04:52 +08:00
|
|
|
*/
|
2012-06-09 04:21:26 +08:00
|
|
|
return points > 0 ? points : 1;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2006-02-21 10:27:52 +08:00
|
|
|
/*
|
|
|
|
* Determine the type of allocation constraint.
|
|
|
|
*/
|
|
|
|
#ifdef CONFIG_NUMA
|
2015-09-09 06:00:36 +08:00
|
|
|
static enum oom_constraint constrained_alloc(struct oom_control *oc,
|
|
|
|
unsigned long *totalpages)
|
2009-12-16 08:45:33 +08:00
|
|
|
{
|
2008-04-28 17:12:16 +08:00
|
|
|
struct zone *zone;
|
2008-04-28 17:12:17 +08:00
|
|
|
struct zoneref *z;
|
2015-09-09 06:00:36 +08:00
|
|
|
enum zone_type high_zoneidx = gfp_zone(oc->gfp_mask);
|
oom: badness heuristic rewrite
2010-08-10 08:19:46 +08:00
|
|
|
bool cpuset_limited = false;
|
|
|
|
int nid;
|
2006-02-21 10:27:52 +08:00
|
|
|
|
oom: badness heuristic rewrite
2010-08-10 08:19:46 +08:00
|
|
|
/* Default to all available memory */
|
|
|
|
*totalpages = totalram_pages + total_swap_pages;
|
|
|
|
|
2015-09-09 06:00:36 +08:00
|
|
|
if (!oc->zonelist)
|
oom: badness heuristic rewrite
2010-08-10 08:19:46 +08:00
|
|
|
return CONSTRAINT_NONE;
|
2009-12-16 08:45:33 +08:00
|
|
|
/*
|
|
|
|
* We reach here only when __GFP_NOFAIL is used, so we should avoid
|
|
|
|
* killing current. We have to kill a random task in this case.
|
|
|
|
* Hopefully, CONSTRAINT_THISNODE... but there is no way to handle it now.
|
|
|
|
*/
|
2015-09-09 06:00:36 +08:00
|
|
|
if (oc->gfp_mask & __GFP_THISNODE)
|
2009-12-16 08:45:33 +08:00
|
|
|
return CONSTRAINT_NONE;
|
2006-02-21 10:27:52 +08:00
|
|
|
|
2009-12-16 08:45:33 +08:00
|
|
|
/*
|
oom: badness heuristic rewrite
2010-08-10 08:19:46 +08:00
|
|
|
* This is not a __GFP_THISNODE allocation, so a truncated nodemask in
|
|
|
|
* the page allocator means a mempolicy is in effect. Cpuset policy
|
|
|
|
* is enforced in get_page_from_freelist().
|
2009-12-16 08:45:33 +08:00
|
|
|
*/
|
2015-09-09 06:00:36 +08:00
|
|
|
if (oc->nodemask &&
|
|
|
|
!nodes_subset(node_states[N_MEMORY], *oc->nodemask)) {
|
oom: badness heuristic rewrite
2010-08-10 08:19:46 +08:00
|
|
|
*totalpages = total_swap_pages;
|
2015-09-09 06:00:36 +08:00
|
|
|
for_each_node_mask(nid, *oc->nodemask)
|
oom: badness heuristic rewrite
2010-08-10 08:19:46 +08:00
|
|
|
*totalpages += node_spanned_pages(nid);
|
2006-02-21 10:27:52 +08:00
|
|
|
return CONSTRAINT_MEMORY_POLICY;
|
oom: badness heuristic rewrite
2010-08-10 08:19:46 +08:00
|
|
|
}
|
2009-12-16 08:45:33 +08:00
|
|
|
|
|
|
|
/* Check whether this allocation failure is caused by the cpuset's wall function */
|
2015-09-09 06:00:36 +08:00
|
|
|
for_each_zone_zonelist_nodemask(zone, z, oc->zonelist,
|
|
|
|
high_zoneidx, oc->nodemask)
|
|
|
|
if (!cpuset_zone_allowed(zone, oc->gfp_mask))
|
oom: badness heuristic rewrite
2010-08-10 08:19:46 +08:00
|
|
|
cpuset_limited = true;
|
2006-02-21 10:27:52 +08:00
|
|
|
|
oom: badness heuristic rewrite
2010-08-10 08:19:46 +08:00
|
|
|
if (cpuset_limited) {
|
|
|
|
*totalpages = total_swap_pages;
|
|
|
|
for_each_node_mask(nid, cpuset_current_mems_allowed)
|
|
|
|
*totalpages += node_spanned_pages(nid);
|
|
|
|
return CONSTRAINT_CPUSET;
|
|
|
|
}
|
2006-02-21 10:27:52 +08:00
|
|
|
return CONSTRAINT_NONE;
|
|
|
|
}
|
2009-12-16 08:45:33 +08:00
|
|
|
#else
|
2015-09-09 06:00:36 +08:00
|
|
|
static enum oom_constraint constrained_alloc(struct oom_control *oc,
|
|
|
|
unsigned long *totalpages)
|
2009-12-16 08:45:33 +08:00
|
|
|
{
|
oom: badness heuristic rewrite
2010-08-10 08:19:46 +08:00
|
|
|
*totalpages = totalram_pages + total_swap_pages;
|
2009-12-16 08:45:33 +08:00
|
|
|
return CONSTRAINT_NONE;
|
|
|
|
}
|
|
|
|
#endif
|
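For reference, the values returned by both versions of constrained_alloc() come from enum oom_constraint; a sketch of the definition in include/linux/oom.h of this era, reconstructed from the usage above rather than quoted from the header:

enum oom_constraint {
	CONSTRAINT_NONE,		/* unconstrained: all of RAM plus swap */
	CONSTRAINT_CPUSET,		/* limited by a cpuset's mems */
	CONSTRAINT_MEMORY_POLICY,	/* limited by a mempolicy nodemask */
	CONSTRAINT_MEMCG,		/* limited by a memory controller
					   (used by the memcg oom path,
					   not by this function) */
};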
2006-02-21 10:27:52 +08:00
|
|
|
|
2015-09-09 06:00:36 +08:00
|
|
|
enum oom_scan_t oom_scan_process_thread(struct oom_control *oc,
|
2016-07-27 06:24:39 +08:00
|
|
|
struct task_struct *task)
|
2012-08-01 07:43:40 +08:00
|
|
|
{
|
2015-09-09 06:00:36 +08:00
|
|
|
if (oom_unkillable_task(task, NULL, oc->nodemask))
|
2012-08-01 07:43:40 +08:00
|
|
|
return OOM_SCAN_CONTINUE;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This task already has access to memory reserves and is being killed.
|
2016-07-29 06:45:01 +08:00
|
|
|
* Don't allow any other task to have access to the reserves unless
|
|
|
|
* the task has MMF_OOM_REAPED because chances that it would release
|
|
|
|
* any memory are quite low.
|
2012-08-01 07:43:40 +08:00
|
|
|
*/
|
2016-07-29 06:45:01 +08:00
|
|
|
if (!is_sysrq_oom(oc) && atomic_read(&task->signal->oom_victims)) {
|
|
|
|
struct task_struct *p = find_lock_task_mm(task);
|
|
|
|
enum oom_scan_t ret = OOM_SCAN_ABORT;
|
|
|
|
|
|
|
|
if (p) {
|
|
|
|
if (test_bit(MMF_OOM_REAPED, &p->mm->flags))
|
|
|
|
ret = OOM_SCAN_CONTINUE;
|
|
|
|
task_unlock(p);
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
2012-08-01 07:43:40 +08:00
|
|
|
|
2012-12-12 08:02:56 +08:00
|
|
|
/*
|
|
|
|
* If task is allocating a lot of memory and has been marked to be
|
|
|
|
* killed first if it triggers an oom, then select it.
|
|
|
|
*/
|
|
|
|
if (oom_task_origin(task))
|
|
|
|
return OOM_SCAN_SELECT;
|
|
|
|
|
2012-08-01 07:43:40 +08:00
|
|
|
return OOM_SCAN_OK;
|
|
|
|
}
|
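For reference, the scan verdicts used above; a sketch of enum oom_scan_t as defined in include/linux/oom.h at the time, reconstructed from how select_bad_process() below handles each case:

enum oom_scan_t {
	OOM_SCAN_OK,		/* scan the task and compute its badness */
	OOM_SCAN_CONTINUE,	/* task is ineligible, skip it */
	OOM_SCAN_ABORT,		/* abort the tasklist scan entirely */
	OOM_SCAN_SELECT,	/* always select this task first */
};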
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* Simple selection loop. We choose the process with the highest
|
2013-07-15 09:54:08 +08:00
|
|
|
* number of 'points'. Returns -1 on scan abort.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2015-09-09 06:00:36 +08:00
|
|
|
static struct task_struct *select_bad_process(struct oom_control *oc,
|
|
|
|
unsigned int *ppoints, unsigned long totalpages)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2016-05-21 07:57:27 +08:00
|
|
|
struct task_struct *p;
|
2005-04-17 06:20:36 +08:00
|
|
|
struct task_struct *chosen = NULL;
|
2012-05-30 06:06:47 +08:00
|
|
|
unsigned long chosen_points = 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2012-08-01 07:43:45 +08:00
|
|
|
rcu_read_lock();
|
2016-05-21 07:57:27 +08:00
|
|
|
for_each_process(p) {
|
oom: badness heuristic rewrite
2010-08-10 08:19:46 +08:00
|
|
|
unsigned int points;
|
[PATCH] cpusets: oom_kill tweaks
This patch series extends the use of the cpuset attribute 'mem_exclusive'
to support cpuset configurations that:
1) allow GFP_KERNEL allocations to come from a potentially larger
set of memory nodes than GFP_USER allocations, and
2) can constrain the oom killer to tasks running in cpusets in
a specified subtree of the cpuset hierarchy.
Here's an example usage scenario. For a few hours or more, a large NUMA
system at a university is to be divided into two halves, with a bunch of student
jobs running in half the system under some form of batch manager, and with a
big research project running in the other half. Each of the student jobs is
placed in a small cpuset, but should share the classic Unix time share
facilities, such as buffered pages of files in /bin and /usr/lib. The big
research project wants no interference whatsoever from the student jobs, and
has highly tuned, unusual memory and i/o patterns that intend to make full use
of all the main memory on the nodes available to it.
In this example, we have two big sibling cpusets, one of which is further
divided into a more dynamic set of child cpusets.
We want kernel memory allocations constrained by the two big cpusets, and user
allocations constrained by the smaller child cpusets where present. And we
require that the oom killer not operate across the two halves of this system,
or else the first time a student job runs amok, the big research project will
likely be first in line to get shot.
Tweaking /proc/<pid>/oom_adj is not ideal -- if the big research project
really does run amok allocating memory, it should be shot, not some other
task outside the research project's mem_exclusive cpuset.
I propose to extend the use of the 'mem_exclusive' flag of cpusets to manage
such scenarios. Let memory allocations for user space (GFP_USER) be
constrained by a task's current cpuset, but memory allocations for kernel space
(GFP_KERNEL) be constrained by the nearest mem_exclusive ancestor of the
current cpuset, even though kernel space allocations will still _prefer_ to
remain within the current task's cpuset, if memory is easily available.
Let the oom killer be constrained to consider only tasks that are in
overlapping mem_exclusive cpusets (it won't help much to kill a task that
normally cannot allocate memory on any of the same nodes as the ones on which
the current task can allocate.)
The current constraints imposed on setting mem_exclusive are unchanged. A
cpuset may only be mem_exclusive if its parent is also mem_exclusive, and a
mem_exclusive cpuset may not overlap any of its siblings' memory nodes.
This patch was presented on linux-mm in early July 2005, though it did not
generate much feedback at that time. It has been built for a variety of
arch's using cross tools, and built, booted and tested for function on SN2
(ia64).
There are 4 patches in this set:
1) Some minor cleanup, and some improvements to the code layout
of one routine to make subsequent patches cleaner.
2) Add another GFP flag - __GFP_HARDWALL. It marks memory
requests for USER space, which are tightly confined by the
current task's cpuset.
3) Now memory requests (such as KERNEL) that are not marked HARDWALL can,
if short on memory, look in the potentially larger pool of memory
defined by the nearest mem_exclusive ancestor cpuset of the current
task's cpuset.
4) Finally, modify the oom killer to skip any task whose mem_exclusive
cpuset doesn't overlap ours.
Patch (1), the one time I looked at an SN2 (ia64) build, actually saved 32
bytes of kernel text space. Patch (2) has no effect on the size of kernel
text space (it just adds a preprocessor flag). Patches (3) and (4) added
about 600 bytes each of kernel text space, mostly in kernel/cpuset.c, which
matters only if CONFIG_CPUSET is enabled.
This patch:
This patch applies a few comment and code cleanups to mm/oom_kill.c prior to
applying a few small patches to improve cpuset management of memory placement.
The comment changed in oom_kill.c was seriously misleading. The code layout
change in select_bad_process() makes room for adding another condition on
which a process can be spared the oom killer (see the subsequent
cpuset_nodes_overlap patch for this addition).
Also fixed a couple of typos and misspellings that bugged me while I was here.
This patch should have no material effect.
Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-07 06:18:09 +08:00
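To make the final check in patch (4) concrete, here is a toy userspace model of
the node-overlap test: each mem_exclusive cpuset's memory nodes are modeled as a
bitmask, and a candidate is only worth killing if its exclusive nodes intersect
ours. All names here are illustrative, not kernel API.
#include <stdbool.h>
#include <stdio.h>

typedef unsigned long nodemask_t;	/* toy stand-in for the kernel's nodemask */

/* Killing a task whose exclusive nodes are disjoint from ours frees no
 * memory we can actually allocate from. */
static bool excl_nodes_overlap(nodemask_t ours, nodemask_t theirs)
{
	return (ours & theirs) != 0;
}

int main(void)
{
	nodemask_t research = 0x0fUL;	/* nodes 0-3 */
	nodemask_t students = 0xf0UL;	/* nodes 4-7 */

	/* disjoint halves: the research project is never shot for a student job */
	printf("overlap? %s\n",
	       excl_nodes_overlap(research, students) ? "yes" : "no");
	return 0;
}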
|
|
|
|
2016-07-27 06:24:39 +08:00
|
|
|
switch (oom_scan_process_thread(oc, p)) {
|
2012-08-01 07:43:40 +08:00
|
|
|
case OOM_SCAN_SELECT:
|
|
|
|
chosen = p;
|
|
|
|
chosen_points = ULONG_MAX;
|
|
|
|
/* fall through */
|
|
|
|
case OOM_SCAN_CONTINUE:
|
2011-07-30 22:35:02 +08:00
|
|
|
continue;
|
2012-08-01 07:43:40 +08:00
|
|
|
case OOM_SCAN_ABORT:
|
2012-08-01 07:43:45 +08:00
|
|
|
rcu_read_unlock();
|
2013-07-15 09:54:08 +08:00
|
|
|
return (struct task_struct *)(-1UL);
|
2012-08-01 07:43:40 +08:00
|
|
|
case OOM_SCAN_OK:
|
|
|
|
break;
|
|
|
|
}
|
2015-09-09 06:00:36 +08:00
|
|
|
points = oom_badness(p, NULL, oc->nodemask, totalpages);
|
2014-01-24 07:53:34 +08:00
|
|
|
if (!points || points < chosen_points)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
chosen = p;
|
|
|
|
chosen_points = points;
|
2014-01-22 07:49:58 +08:00
|
|
|
}
|
2012-08-01 07:43:45 +08:00
|
|
|
if (chosen)
|
|
|
|
get_task_struct(chosen);
|
|
|
|
rcu_read_unlock();
|
2006-09-29 17:01:12 +08:00
|
|
|
|
2012-05-30 06:06:47 +08:00
|
|
|
*ppoints = chosen_points * 1000 / totalpages;
|
2005-04-17 06:20:36 +08:00
|
|
|
return chosen;
|
|
|
|
}
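The final line above maps the winner's raw page count onto the 0-1000 badness
scale. A worked example of that arithmetic, with made-up page counts: a task
holding half of the allowable memory scores 500.
#include <stdio.h>

int main(void)
{
	unsigned long totalpages = 4194304;	/* 16 GiB of allowable memory in 4 KiB pages */
	unsigned long chosen_points = 2097152;	/* victim's rss + swap: half of the above */

	/* same normalization as *ppoints above: prints 500 */
	printf("badness = %lu\n", chosen_points * 1000 / totalpages);
	return 0;
}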
|
|
|
|
|
oom: add sysctl to enable task memory dump
Adds a new sysctl, 'oom_dump_tasks', that enables the kernel to produce a
dump of all system tasks (excluding kernel threads) when performing an
OOM kill. Information includes pid, uid, tgid, vm size, rss, cpu,
oom_adj score, and name.
This is helpful for determining why there was an OOM condition and which
rogue task caused it.
It is configurable so that large systems, such as those with several
thousand tasks, do not incur a performance penalty associated with dumping
data they may not desire.
If an OOM was triggered as a result of a memory controller, the tasklist
shall be filtered to exclude tasks that are not a member of the same
cgroup.
Cc: Andrea Arcangeli <andrea@suse.de>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-02-07 16:14:07 +08:00
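For reference, a small userspace helper that turns the new sysctl on; the /proc
path matches the tunable described above and writing it requires root. Write 0
to turn the dump back off.
#include <stdio.h>

int main(void)
{
	/* enable the per-task dump on every OOM kill */
	FILE *f = fopen("/proc/sys/vm/oom_dump_tasks", "w");

	if (!f) {
		perror("oom_dump_tasks");
		return 1;
	}
	fputs("1\n", f);
	return fclose(f) ? 1 : 0;
}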
|
|
|
/**
|
2008-03-20 08:00:42 +08:00
|
|
|
* dump_tasks - dump current memory state of all system tasks
|
2012-06-21 03:53:01 +08:00
|
|
|
* @memcg: current's memory controller, if constrained
|
2010-09-23 04:05:10 +08:00
|
|
|
* @nodemask: nodemask passed to page allocator for mempolicy ooms
|
2008-03-20 08:00:42 +08:00
|
|
|
*
|
2010-09-23 04:05:10 +08:00
|
|
|
* Dumps the current memory state of all eligible tasks. Tasks not in the same
|
|
|
|
* memcg, not in the same cpuset, or bound to a disjoint set of mempolicy nodes
|
|
|
|
* are not shown.
|
2012-08-01 07:42:56 +08:00
|
|
|
* State information includes task's pid, uid, tgid, vm size, rss, nr_ptes,
|
|
|
|
* swapents, oom_score_adj value, and name.
|
2008-02-07 16:14:07 +08:00
|
|
|
*/
|
2014-12-11 07:44:33 +08:00
|
|
|
static void dump_tasks(struct mem_cgroup *memcg, const nodemask_t *nodemask)
|
2008-02-07 16:14:07 +08:00
|
|
|
{
|
2010-08-10 08:18:46 +08:00
|
|
|
struct task_struct *p;
|
|
|
|
struct task_struct *task;
|
2008-02-07 16:14:07 +08:00
|
|
|
|
mm: account pmd page tables to the process
Dave noticed that an unprivileged process can allocate a significant amount of
memory -- >500 MiB on x86_64 -- and stay unnoticed by the oom-killer and
memory cgroup. The trick is to allocate a lot of PMD page tables. The Linux
kernel doesn't account PMD tables to the process, only PTE tables.
The use case below uses a few tricks to allocate a lot of PMD page tables
while keeping VmRSS and VmPTE low. oom_score for the process will be 0.
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/prctl.h>

#define PUD_SIZE (1UL << 30)
#define PMD_SIZE (1UL << 21)
#define NR_PUD 130000

int main(void)
{
	char *addr = NULL;
	unsigned long i;

	prctl(PR_SET_THP_DISABLE);
	for (i = 0; i < NR_PUD; i++) {
		addr = mmap(addr + PUD_SIZE, PUD_SIZE, PROT_WRITE|PROT_READ,
				MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
		if (addr == MAP_FAILED) {
			perror("mmap");
			break;
		}
		/* touch one page: allocates a PMD table, a PTE table, a page */
		*addr = 'x';
		/* drop the page and PTE table; the PMD table stays behind */
		munmap(addr, PMD_SIZE);
		if (mmap(addr, PMD_SIZE, PROT_WRITE|PROT_READ,
				MAP_ANONYMOUS|MAP_PRIVATE|MAP_FIXED, -1, 0) == MAP_FAILED) {
			perror("re-mmap");
			exit(1);
		}
	}
	printf("PID %d consumed %lu KiB in PMD page tables\n",
			getpid(), i * 4096 >> 10);
	return pause();
}
The patch addresses the issue by accounting PMD tables to the process the
same way we account PTE tables.
The main places where PMD tables are accounted are __pmd_alloc() and
free_pmd_range(). But there are a few corner cases:
- HugeTLB can share PMD page tables. The patch handles this by accounting
the table to all processes who share it.
- x86 PAE pre-allocates a few PMD tables on fork.
- Architectures with FIRST_USER_ADDRESS > 0. We need to adjust the sanity
check on exit(2).
Accounting only happens on configurations where the PMD page table level is
present (PMD is not folded). As with nr_ptes we use a per-mm counter. The
counter value is used to calculate the baseline for the badness score by the
oom-killer.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Reviewed-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: David Rientjes <rientjes@google.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-02-12 07:26:50 +08:00
|
|
|
pr_info("[ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name\n");
|
2012-08-01 07:43:45 +08:00
|
|
|
rcu_read_lock();
|
2010-08-10 08:18:46 +08:00
|
|
|
for_each_process(p) {
|
2012-01-13 09:18:32 +08:00
|
|
|
if (oom_unkillable_task(p, memcg, nodemask))
|
2008-11-07 04:53:29 +08:00
|
|
|
continue;
|
2008-02-07 16:14:07 +08:00
|
|
|
|
2010-08-10 08:18:46 +08:00
|
|
|
task = find_lock_task_mm(p);
|
|
|
|
if (!task) {
|
2009-05-29 05:34:19 +08:00
|
|
|
/*
|
2010-08-10 08:18:46 +08:00
|
|
|
* This is a kthread or all of p's threads have already
|
|
|
|
* detached their mm's. There's no need to report
|
2010-08-10 08:18:46 +08:00
|
|
|
* them; they can't be oom killed anyway.
|
2009-05-29 05:34:19 +08:00
|
|
|
*/
|
|
|
|
continue;
|
|
|
|
}
|
2010-08-10 08:18:46 +08:00
|
|
|
|
2015-02-12 07:26:50 +08:00
|
|
|
pr_info("[%5d] %5d %5d %8lu %8lu %7ld %7ld %8lu %5hd %s\n",
|
2012-02-08 23:00:08 +08:00
|
|
|
task->pid, from_kuid(&init_user_ns, task_uid(task)),
|
|
|
|
task->tgid, task->mm->total_vm, get_mm_rss(task->mm),
|
2013-11-15 06:30:48 +08:00
|
|
|
atomic_long_read(&task->mm->nr_ptes),
|
2015-02-12 07:26:50 +08:00
|
|
|
mm_nr_pmds(task->mm),
|
2012-08-01 07:42:56 +08:00
|
|
|
get_mm_counter(task->mm, MM_SWAPENTS),
|
2010-08-10 08:19:46 +08:00
|
|
|
task->signal->oom_score_adj, task->comm);
|
2010-08-10 08:18:46 +08:00
|
|
|
task_unlock(task);
|
|
|
|
}
|
2012-08-01 07:43:45 +08:00
|
|
|
rcu_read_unlock();
|
2008-02-07 16:14:07 +08:00
|
|
|
}
|
|
|
|
|
2016-07-27 06:22:33 +08:00
|
|
|
static void dump_header(struct oom_control *oc, struct task_struct *p)
|
2009-12-15 09:57:47 +08:00
|
|
|
{
|
2016-03-18 05:19:47 +08:00
|
|
|
pr_warn("%s invoked oom-killer: gfp_mask=%#x(%pGg), order=%d, oom_score_adj=%hd\n",
|
2016-03-16 05:56:05 +08:00
|
|
|
current->comm, oc->gfp_mask, &oc->gfp_mask, oc->order,
|
2010-08-10 08:19:46 +08:00
|
|
|
current->signal->oom_score_adj);
|
2016-03-16 05:56:05 +08:00
|
|
|
|
2015-11-06 10:48:05 +08:00
|
|
|
cpuset_print_current_mems_allowed();
|
2009-12-15 09:57:47 +08:00
|
|
|
dump_stack();
|
2016-07-27 06:22:33 +08:00
|
|
|
if (oc->memcg)
|
|
|
|
mem_cgroup_print_oom_info(oc->memcg, p);
|
memcg, oom: provide more precise dump info while memcg oom happening
Currently, when a memcg oom is happening, the oom dump messages still show
global state and provide little useful info for users. This patch prints
more targeted memcg page statistics for memcg ooms and takes the hierarchy
into consideration:
Based on Michal's advice, we take the hierarchy into consideration: suppose
we trigger an OOM on A's limit
root_memcg
|
A (use_hierarchy=1)
/ \
B C
|
D
then the printed info will be:
Memory cgroup stats for /A:...
Memory cgroup stats for /A/B:...
Memory cgroup stats for /A/C:...
Memory cgroup stats for /A/B/D:...
Following are samples of oom output:
(1) Before change:
mal-80 invoked oom-killer:gfp_mask=0xd0, order=0, oom_score_adj=0
mal-80 cpuset=/ mems_allowed=0
Pid: 2976, comm: mal-80 Not tainted 3.7.0+ #10
Call Trace:
[<ffffffff8167fbfb>] dump_header+0x83/0x1ca
..... (call trace)
[<ffffffff8168a818>] page_fault+0x28/0x30
<<<<<<<<<<<<<<<<<<<<< memcg specific information
Task in /A/B/D killed as a result of limit of /A
memory: usage 101376kB, limit 101376kB, failcnt 57
memory+swap: usage 101376kB, limit 101376kB, failcnt 0
kmem: usage 0kB, limit 9007199254740991kB, failcnt 0
<<<<<<<<<<<<<<<<<<<<< print per cpu pageset stat
Mem-Info:
Node 0 DMA per-cpu:
CPU 0: hi: 0, btch: 1 usd: 0
......
CPU 3: hi: 0, btch: 1 usd: 0
Node 0 DMA32 per-cpu:
CPU 0: hi: 186, btch: 31 usd: 173
......
CPU 3: hi: 186, btch: 31 usd: 130
<<<<<<<<<<<<<<<<<<<<< print global page state
active_anon:92963 inactive_anon:40777 isolated_anon:0
active_file:33027 inactive_file:51718 isolated_file:0
unevictable:0 dirty:3 writeback:0 unstable:0
free:729995 slab_reclaimable:6897 slab_unreclaimable:6263
mapped:20278 shmem:35971 pagetables:5885 bounce:0
free_cma:0
<<<<<<<<<<<<<<<<<<<<< print per zone page state
Node 0 DMA free:15836kB ... all_unreclaimable? no
lowmem_reserve[]: 0 3175 3899 3899
Node 0 DMA32 free:2888564kB ... all_unreclaimable? no
lowmem_reserve[]: 0 0 724 724
lowmem_reserve[]: 0 0 0 0
Node 0 DMA: 1*4kB (U) ... 3*4096kB (M) = 15836kB
Node 0 DMA32: 41*4kB (UM) ... 702*4096kB (MR) = 2888316kB
120710 total pagecache pages
0 pages in swap cache
<<<<<<<<<<<<<<<<<<<<< print global swap cache stat
Swap cache stats: add 0, delete 0, find 0/0
Free swap = 499708kB
Total swap = 499708kB
1040368 pages RAM
58678 pages reserved
169065 pages shared
173632 pages non-shared
[ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
[ 2693] 0 2693 6005 1324 17 0 0 god
[ 2754] 0 2754 6003 1320 16 0 0 god
[ 2811] 0 2811 5992 1304 18 0 0 god
[ 2874] 0 2874 6005 1323 18 0 0 god
[ 2935] 0 2935 8720 7742 21 0 0 mal-30
[ 2976] 0 2976 21520 17577 42 0 0 mal-80
Memory cgroup out of memory: Kill process 2976 (mal-80) score 665 or sacrifice child
Killed process 2976 (mal-80) total-vm:86080kB, anon-rss:69964kB, file-rss:344kB
We can see that the messages dumped by show_free_areas() are long-winded and
provide very limited info for the memcg that has just hit oom.
(2) After change
mal-80 invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=0
mal-80 cpuset=/ mems_allowed=0
Pid: 2704, comm: mal-80 Not tainted 3.7.0+ #10
Call Trace:
[<ffffffff8167fd0b>] dump_header+0x83/0x1d1
.......(call trace)
[<ffffffff8168a918>] page_fault+0x28/0x30
Task in /A/B/D killed as a result of limit of /A
<<<<<<<<<<<<<<<<<<<<< memcg specific information
memory: usage 102400kB, limit 102400kB, failcnt 140
memory+swap: usage 102400kB, limit 102400kB, failcnt 0
kmem: usage 0kB, limit 9007199254740991kB, failcnt 0
Memory cgroup stats for /A: cache:32KB rss:30984KB mapped_file:0KB swap:0KB inactive_anon:6912KB active_anon:24072KB inactive_file:32KB active_file:0KB unevictable:0KB
Memory cgroup stats for /A/B: cache:0KB rss:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Memory cgroup stats for /A/C: cache:0KB rss:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Memory cgroup stats for /A/B/D: cache:32KB rss:71352KB mapped_file:0KB swap:0KB inactive_anon:6656KB active_anon:64696KB inactive_file:16KB active_file:16KB unevictable:0KB
[ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
[ 2260] 0 2260 6006 1325 18 0 0 god
[ 2383] 0 2383 6003 1319 17 0 0 god
[ 2503] 0 2503 6004 1321 18 0 0 god
[ 2622] 0 2622 6004 1321 16 0 0 god
[ 2695] 0 2695 8720 7741 22 0 0 mal-30
[ 2704] 0 2704 21520 17839 43 0 0 mal-80
Memory cgroup out of memory: Kill process 2704 (mal-80) score 669 or sacrifice child
Killed process 2704 (mal-80) total-vm:86080kB, anon-rss:71016kB, file-rss:340kB
This version provides more targeted info for the memcg in the "Memory cgroup
stats for XXX" section.
Signed-off-by: Sha Zhengju <handai.szj@taobao.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-02-23 08:32:05 +08:00
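The per-group "Memory cgroup stats" lines are derived from the same counters a
user can read from memory.stat. A small reader as a sketch, assuming a
cgroup-v1 memory controller mounted at the conventional path and a group named
/A as in the example above:
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/fs/cgroup/memory/A/memory.stat";
	char line[256];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	/* cache, rss, mapped_file, swap, (in)active_anon/file, ... */
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}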
|
|
|
else
|
|
|
|
show_mem(SHOW_MEM_FILTER_NODES);
|
2009-12-15 09:57:47 +08:00
|
|
|
if (sysctl_oom_dump_tasks)
|
2016-07-27 06:22:33 +08:00
|
|
|
dump_tasks(oc->memcg, oc->nodemask);
|
2009-12-15 09:57:47 +08:00
|
|
|
}
|
|
|
|
|
2014-10-21 00:12:32 +08:00
|
|
|
/*
|
2015-02-12 07:26:24 +08:00
|
|
|
* Number of OOM victims in flight
|
2014-10-21 00:12:32 +08:00
|
|
|
*/
|
2015-02-12 07:26:24 +08:00
|
|
|
static atomic_t oom_victims = ATOMIC_INIT(0);
|
|
|
|
static DECLARE_WAIT_QUEUE_HEAD(oom_victims_wait);
|
2014-10-21 00:12:32 +08:00
|
|
|
|
2015-02-12 07:26:24 +08:00
|
|
|
bool oom_killer_disabled __read_mostly;
|
2014-10-21 00:12:32 +08:00
|
|
|
|
2016-03-26 05:20:30 +08:00
|
|
|
#define K(x) ((x) << (PAGE_SHIFT-10))
|
|
|
|
|
2016-05-20 08:13:12 +08:00
|
|
|
/*
|
|
|
|
* task->mm can be NULL if the task is the exited group leader. So to
|
|
|
|
* determine whether the task is using a particular mm, we examine all the
|
|
|
|
* task's threads: if one of those is using this mm then this task was also
|
|
|
|
* using it.
|
|
|
|
*/
|
2016-07-29 06:44:43 +08:00
|
|
|
bool process_shares_mm(struct task_struct *p, struct mm_struct *mm)
|
2016-05-20 08:13:12 +08:00
|
|
|
{
|
|
|
|
struct task_struct *t;
|
|
|
|
|
|
|
|
for_each_thread(p, t) {
|
|
|
|
struct mm_struct *t_mm = READ_ONCE(t->mm);
|
|
|
|
if (t_mm)
|
|
|
|
return t_mm == mm;
|
|
|
|
}
|
|
|
|
return false;
|
|
|
|
}
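A minimal userspace rendering of the rule the comment above spells out, with
toy structures of our own standing in for task_struct and mm_struct: the first
thread in the group that still holds an mm decides for the whole group.
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct thread {
	void *mm;		/* NULL once the thread has detached its mm */
	struct thread *next;
};

struct process {
	struct thread *threads;
};

static bool toy_shares_mm(struct process *p, void *mm)
{
	for (struct thread *t = p->threads; t; t = t->next)
		if (t->mm)
			return t->mm == mm;	/* first live mm decides */
	return false;
}

int main(void)
{
	int space;	/* stands in for an mm_struct */
	struct thread t2 = { .mm = &space, .next = NULL };
	struct thread t1 = { .mm = NULL, .next = &t2 };	/* exited group leader */
	struct process p = { .threads = &t1 };

	printf("%s\n", toy_shares_mm(&p, &space) ? "shares" : "does not share");
	return 0;
}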
|
|
|
|
|
|
|
|
|
2016-03-26 05:20:24 +08:00
|
|
|
#ifdef CONFIG_MMU
|
|
|
|
/*
|
|
|
|
* OOM Reaper kernel thread which tries to reap the memory used by the OOM
|
|
|
|
* victim (if that is possible) to help the OOM killer to move on.
|
|
|
|
*/
|
|
|
|
static struct task_struct *oom_reaper_th;
|
|
|
|
static DECLARE_WAIT_QUEUE_HEAD(oom_reaper_wait);
|
2016-03-26 05:20:39 +08:00
|
|
|
static struct task_struct *oom_reaper_list;
|
2016-03-26 05:20:33 +08:00
|
|
|
static DEFINE_SPINLOCK(oom_reaper_lock);
|
|
|
|
|
2016-03-26 05:20:27 +08:00
|
|
|
static bool __oom_reap_task(struct task_struct *tsk)
|
2016-03-26 05:20:24 +08:00
|
|
|
{
|
|
|
|
struct mmu_gather tlb;
|
|
|
|
struct vm_area_struct *vma;
|
oom_reaper: close race with exiting task
Tetsuo has reported:
Out of memory: Kill process 443 (oleg's-test) score 855 or sacrifice child
Killed process 443 (oleg's-test) total-vm:493248kB, anon-rss:423880kB, file-rss:4kB, shmem-rss:0kB
sh invoked oom-killer: gfp_mask=0x24201ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD), order=0, oom_score_adj=0
sh cpuset=/ mems_allowed=0
CPU: 2 PID: 1 Comm: sh Not tainted 4.6.0-rc7+ #51
Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 07/31/2013
Call Trace:
dump_stack+0x85/0xc8
dump_header+0x5b/0x394
oom_reaper: reaped process 443 (oleg's-test), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
In other words:
__oom_reap_task exit_mm
atomic_inc_not_zero
tsk->mm = NULL
mmput
atomic_dec_and_test # > 0
exit_oom_victim # New victim will be
# selected
<OOM killer invoked>
# no TIF_MEMDIE task so we can select a new one
unmap_page_range # to release the memory
The race exists even without the oom_reaper because anybody who pins the
address space and gets preempted might race with exit_mm but oom_reaper
made this race more probable.
We can address the oom_reaper part by using oom_lock for __oom_reap_task
because this would guarantee that a new oom victim will not be selected
if the oom reaper might race with the exit path. This doesn't solve the
original issue, though, because somebody else still might be pinning
mm_users and so __mmput won't be called to release the memory, but that
is not really reliably solvable because the task will get out of the oom
killer's sight as soon as it is unhashed from the task_list, and so we cannot
guarantee a new victim won't be selected.
[akpm@linux-foundation.org: fix use of unused `mm', Per Stephen]
[akpm@linux-foundation.org: coding-style fixes]
Fixes: aac453635549 ("mm, oom: introduce oom reaper")
Link: http://lkml.kernel.org/r/1464271493-20008-1-git-send-email-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-05-28 05:27:35 +08:00
|
|
|
struct mm_struct *mm = NULL;
|
2016-03-26 05:20:27 +08:00
|
|
|
struct task_struct *p;
|
2016-03-26 05:20:24 +08:00
|
|
|
struct zap_details details = {.check_swap_entries = true,
|
|
|
|
.ignore_dirty = true};
|
|
|
|
bool ret = true;
|
|
|
|
|
2016-05-28 05:27:35 +08:00
|
|
|
/*
|
|
|
|
* We have to make sure to not race with the victim exit path
|
|
|
|
* and cause premature new oom victim selection:
|
|
|
|
* __oom_reap_task exit_mm
|
2016-07-27 06:24:50 +08:00
|
|
|
* mmget_not_zero
|
2016-05-28 05:27:35 +08:00
|
|
|
* mmput
|
|
|
|
* atomic_dec_and_test
|
|
|
|
* exit_oom_victim
|
|
|
|
* [...]
|
|
|
|
* out_of_memory
|
|
|
|
* select_bad_process
|
|
|
|
* # no TIF_MEMDIE task selects new victim
|
|
|
|
* unmap_page_range # frees some memory
|
|
|
|
*/
|
|
|
|
mutex_lock(&oom_lock);
|
|
|
|
|
2016-03-26 05:20:27 +08:00
|
|
|
/*
|
|
|
|
* Make sure we find the associated mm_struct even when the particular
|
|
|
|
* thread has already terminated and cleared its mm.
|
|
|
|
* We might race with the exit path so consider our work done if there
|
|
|
|
* is no mm.
|
|
|
|
*/
|
|
|
|
p = find_lock_task_mm(tsk);
|
|
|
|
if (!p)
|
2016-05-28 05:27:35 +08:00
|
|
|
goto unlock_oom;
|
2016-03-26 05:20:27 +08:00
|
|
|
mm = p->mm;
|
2016-07-27 06:24:50 +08:00
|
|
|
atomic_inc(&mm->mm_count);
|
2016-03-26 05:20:27 +08:00
|
|
|
task_unlock(p);
|
2016-03-26 05:20:24 +08:00
|
|
|
|
|
|
|
if (!down_read_trylock(&mm->mmap_sem)) {
|
|
|
|
ret = false;
|
2016-07-27 06:24:50 +08:00
|
|
|
goto mm_drop;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* increase mm_users only after we know we will reap something so
|
|
|
|
* that the mmput_async is called only when we have reaped something
|
|
|
|
* and delayed __mmput doesn't matter that much
|
|
|
|
*/
|
|
|
|
if (!mmget_not_zero(mm)) {
|
|
|
|
up_read(&mm->mmap_sem);
|
|
|
|
goto mm_drop;
|
2016-03-26 05:20:24 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
tlb_gather_mmu(&tlb, mm, 0, -1);
|
|
|
|
for (vma = mm->mmap ; vma; vma = vma->vm_next) {
|
|
|
|
if (is_vm_hugetlb_page(vma))
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* mlocked VMAs require explicit munlocking before unmap.
|
|
|
|
* Let's keep it simple here and skip such VMAs.
|
|
|
|
*/
|
|
|
|
if (vma->vm_flags & VM_LOCKED)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Only anonymous pages have a good chance to be dropped
|
|
|
|
* without additional steps which we cannot afford as we
|
|
|
|
* are OOM already.
|
|
|
|
*
|
|
|
|
* We do not even care about fs backed pages because all
|
|
|
|
* which are reclaimable have already been reclaimed and
|
|
|
|
* we do not want to block exit_mmap by keeping mm ref
|
|
|
|
* count elevated without a good reason.
|
|
|
|
*/
|
|
|
|
if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED))
|
|
|
|
unmap_page_range(&tlb, vma, vma->vm_start, vma->vm_end,
|
|
|
|
&details);
|
|
|
|
}
|
|
|
|
tlb_finish_mmu(&tlb, 0, -1);
|
2016-03-26 05:20:30 +08:00
|
|
|
pr_info("oom_reaper: reaped process %d (%s), now anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n",
|
|
|
|
task_pid_nr(tsk), tsk->comm,
|
|
|
|
K(get_mm_counter(mm, MM_ANONPAGES)),
|
|
|
|
K(get_mm_counter(mm, MM_FILEPAGES)),
|
|
|
|
K(get_mm_counter(mm, MM_SHMEMPAGES)));
|
2016-03-26 05:20:24 +08:00
|
|
|
up_read(&mm->mmap_sem);
|
2016-03-26 05:20:27 +08:00
|
|
|
|
|
|
|
/*
|
2016-05-20 08:13:15 +08:00
|
|
|
* This task can be safely ignored because we cannot do much more
|
|
|
|
* to release its memory.
|
2016-03-26 05:20:27 +08:00
|
|
|
*/
|
2016-05-21 07:57:18 +08:00
|
|
|
set_bit(MMF_OOM_REAPED, &mm->flags);
|
2016-05-21 07:57:21 +08:00
|
|
|
/*
|
|
|
|
* Drop our reference but make sure the mmput slow path is called from a
|
|
|
|
* different context because we shouldn't risk getting stuck there and
|
|
|
|
* putting the oom_reaper out of commission.
|
|
|
|
*/
|
2016-07-27 06:24:50 +08:00
|
|
|
mmput_async(mm);
|
|
|
|
mm_drop:
|
|
|
|
mmdrop(mm);
|
|
|
|
unlock_oom:
|
|
|
|
mutex_unlock(&oom_lock);
|
2016-03-26 05:20:24 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2016-03-26 05:20:30 +08:00
|
|
|
#define MAX_OOM_REAP_RETRIES 10
|
2016-03-26 05:20:27 +08:00
|
|
|
static void oom_reap_task(struct task_struct *tsk)
|
2016-03-26 05:20:24 +08:00
|
|
|
{
|
|
|
|
int attempts = 0;
|
|
|
|
|
|
|
|
/* Retry the down_read_trylock(mmap_sem) a few times */
|
2016-03-26 05:20:30 +08:00
|
|
|
while (attempts++ < MAX_OOM_REAP_RETRIES && !__oom_reap_task(tsk))
|
2016-03-26 05:20:24 +08:00
|
|
|
schedule_timeout_idle(HZ/10);
|
|
|
|
|
2016-03-26 05:20:30 +08:00
|
|
|
if (attempts > MAX_OOM_REAP_RETRIES) {
|
2016-07-29 06:44:58 +08:00
|
|
|
struct task_struct *p;
|
|
|
|
|
2016-03-26 05:20:30 +08:00
|
|
|
pr_info("oom_reaper: unable to reap pid:%d (%s)\n",
|
|
|
|
task_pid_nr(tsk), tsk->comm);
|
2016-07-29 06:44:58 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* If we've already tried to reap this task in the past and
|
|
|
|
* failed, it probably doesn't make much sense to try yet again
|
|
|
|
* so hide the mm from the oom killer so that it can move on
|
|
|
|
* to another task with a different mm struct.
|
|
|
|
*/
|
|
|
|
p = find_lock_task_mm(tsk);
|
|
|
|
if (p) {
|
|
|
|
if (test_and_set_bit(MMF_OOM_NOT_REAPABLE, &p->mm->flags)) {
|
|
|
|
pr_info("oom_reaper: giving up pid:%d (%s)\n",
|
|
|
|
task_pid_nr(tsk), tsk->comm);
|
|
|
|
set_bit(MMF_OOM_REAPED, &p->mm->flags);
|
|
|
|
}
|
|
|
|
task_unlock(p);
|
|
|
|
}
|
|
|
|
|
2016-03-26 05:20:30 +08:00
|
|
|
debug_show_all_locks();
|
|
|
|
}
|
|
|
|
|
2016-05-20 08:13:15 +08:00
|
|
|
/*
|
|
|
|
* Clear TIF_MEMDIE because the task shouldn't be sitting on
|
|
|
|
* reasonably reclaimable memory anymore, or it is not a good candidate
|
|
|
|
* for an oom victim right now because its memory cannot be released
|
|
|
|
* either by itself or by the oom reaper.
|
|
|
|
*/
|
|
|
|
tsk->oom_reaper_list = NULL;
|
|
|
|
exit_oom_victim(tsk);
|
|
|
|
|
2016-03-26 05:20:24 +08:00
|
|
|
/* Drop a reference taken by wake_oom_reaper */
|
2016-03-26 05:20:27 +08:00
|
|
|
put_task_struct(tsk);
|
2016-03-26 05:20:24 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static int oom_reaper(void *unused)
|
|
|
|
{
|
2016-03-26 05:20:41 +08:00
|
|
|
set_freezable();
|
|
|
|
|
2016-03-26 05:20:24 +08:00
|
|
|
while (true) {
|
2016-03-26 05:20:33 +08:00
|
|
|
struct task_struct *tsk = NULL;
|
2016-03-26 05:20:24 +08:00
|
|
|
|
2016-03-26 05:20:39 +08:00
|
|
|
wait_event_freezable(oom_reaper_wait, oom_reaper_list != NULL);
|
2016-03-26 05:20:33 +08:00
|
|
|
spin_lock(&oom_reaper_lock);
|
2016-03-26 05:20:39 +08:00
|
|
|
if (oom_reaper_list != NULL) {
|
|
|
|
tsk = oom_reaper_list;
|
|
|
|
oom_reaper_list = tsk->oom_reaper_list;
|
2016-03-26 05:20:33 +08:00
|
|
|
}
|
|
|
|
spin_unlock(&oom_reaper_lock);
|
|
|
|
|
|
|
|
if (tsk)
|
|
|
|
oom_reap_task(tsk);
|
2016-03-26 05:20:24 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2016-07-29 06:44:52 +08:00
|
|
|
void wake_oom_reaper(struct task_struct *tsk)
|
2016-03-26 05:20:24 +08:00
|
|
|
{
|
2016-04-02 05:31:34 +08:00
|
|
|
if (!oom_reaper_th)
|
|
|
|
return;
|
|
|
|
|
|
|
|
/* tsk is already queued? */
|
|
|
|
if (tsk == oom_reaper_list || tsk->oom_reaper_list)
|
2016-03-26 05:20:24 +08:00
|
|
|
return;
|
|
|
|
|
2016-03-26 05:20:27 +08:00
|
|
|
get_task_struct(tsk);
|
2016-03-26 05:20:24 +08:00
|
|
|
|
2016-03-26 05:20:33 +08:00
|
|
|
spin_lock(&oom_reaper_lock);
|
2016-03-26 05:20:39 +08:00
|
|
|
tsk->oom_reaper_list = oom_reaper_list;
|
|
|
|
oom_reaper_list = tsk;
|
2016-03-26 05:20:33 +08:00
|
|
|
spin_unlock(&oom_reaper_lock);
|
|
|
|
wake_up(&oom_reaper_wait);
|
2016-03-26 05:20:24 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static int __init oom_init(void)
|
|
|
|
{
|
|
|
|
oom_reaper_th = kthread_run(oom_reaper, NULL, "oom_reaper");
|
|
|
|
if (IS_ERR(oom_reaper_th)) {
|
|
|
|
pr_err("Unable to start OOM reaper %ld. Continuing regardless\n",
|
|
|
|
PTR_ERR(oom_reaper_th));
|
|
|
|
oom_reaper_th = NULL;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
subsys_initcall(oom_init)
|
|
|
|
#endif
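The queue behind wake_oom_reaper() and oom_reaper() is just an intrusive LIFO
list under a spinlock plus a wait queue. A compact userspace model of the same
shape, with a mutex and condition variable standing in for oom_reaper_lock and
oom_reaper_wait (all names are ours):
#include <pthread.h>
#include <stdio.h>

struct task {
	int pid;
	struct task *oom_reaper_list;	/* intrusive next pointer */
};

static struct task *reaper_list;
static pthread_mutex_t reaper_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t reaper_wait = PTHREAD_COND_INITIALIZER;

static void wake_reaper(struct task *tsk)
{
	if (tsk == reaper_list || tsk->oom_reaper_list)
		return;			/* already queued */
	pthread_mutex_lock(&reaper_lock);
	tsk->oom_reaper_list = reaper_list;	/* push, LIFO like the kernel's */
	reaper_list = tsk;
	pthread_mutex_unlock(&reaper_lock);
	pthread_cond_signal(&reaper_wait);
}

static struct task *reap_one(void)
{
	pthread_mutex_lock(&reaper_lock);
	while (!reaper_list)
		pthread_cond_wait(&reaper_wait, &reaper_lock);
	struct task *tsk = reaper_list;
	reaper_list = tsk->oom_reaper_list;	/* pop */
	pthread_mutex_unlock(&reaper_lock);
	return tsk;
}

int main(void)
{
	struct task a = { .pid = 42 };

	wake_reaper(&a);
	printf("reaped pid %d\n", reap_one()->pid);
	return 0;
}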
|
|
|
|
|
2015-02-12 07:26:12 +08:00
|
|
|
/**
|
2015-06-25 07:57:07 +08:00
|
|
|
* mark_oom_victim - mark the given task as OOM victim
|
2015-02-12 07:26:12 +08:00
|
|
|
* @tsk: task to mark
|
2015-02-12 07:26:24 +08:00
|
|
|
*
|
2015-06-25 07:57:19 +08:00
|
|
|
* Has to be called with oom_lock held and never after
|
2015-02-12 07:26:24 +08:00
|
|
|
* oom has been disabled already.
|
2015-02-12 07:26:12 +08:00
|
|
|
*/
|
2015-06-25 07:57:07 +08:00
|
|
|
void mark_oom_victim(struct task_struct *tsk)
|
2015-02-12 07:26:12 +08:00
|
|
|
{
|
2015-02-12 07:26:24 +08:00
|
|
|
WARN_ON(oom_killer_disabled);
|
|
|
|
/* OOM killer might race with memcg OOM */
|
|
|
|
if (test_and_set_tsk_thread_flag(tsk, TIF_MEMDIE))
|
|
|
|
return;
|
2016-05-21 07:57:27 +08:00
|
|
|
atomic_inc(&tsk->signal->oom_victims);
|
2015-02-12 07:26:15 +08:00
|
|
|
/*
|
|
|
|
* Make sure that the task is woken up from uninterruptible sleep
|
|
|
|
* if it is frozen, because the OOM killer wouldn't be able to free
|
|
|
|
* any memory and would livelock. freezing_slow_path will tell the freezer
|
|
|
|
* that TIF_MEMDIE tasks should be ignored.
|
|
|
|
*/
|
|
|
|
__thaw_task(tsk);
|
2015-02-12 07:26:24 +08:00
|
|
|
atomic_inc(&oom_victims);
|
2015-02-12 07:26:12 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
2015-06-25 07:57:07 +08:00
|
|
|
* exit_oom_victim - note the exit of an OOM victim
|
2015-02-12 07:26:12 +08:00
|
|
|
*/
|
2016-03-26 05:20:27 +08:00
|
|
|
void exit_oom_victim(struct task_struct *tsk)
|
2015-02-12 07:26:12 +08:00
|
|
|
{
|
2016-03-26 05:20:27 +08:00
|
|
|
if (!test_and_clear_tsk_thread_flag(tsk, TIF_MEMDIE))
|
|
|
|
return;
|
2016-05-21 07:57:27 +08:00
|
|
|
atomic_dec(&tsk->signal->oom_victims);
|
2015-02-12 07:26:24 +08:00
|
|
|
|
2015-06-25 07:57:13 +08:00
|
|
|
if (!atomic_dec_return(&oom_victims))
|
2015-02-12 07:26:24 +08:00
|
|
|
wake_up_all(&oom_victims_wait);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* oom_killer_disable - disable OOM killer
|
|
|
|
*
|
|
|
|
* Forces all page allocations to fail rather than trigger OOM killer.
|
|
|
|
* Will block and wait until all OOM victims are killed.
|
|
|
|
*
|
|
|
|
* The function cannot be called when there are runnable user tasks because
|
|
|
|
* userspace would see unexpected allocation failures as a result. Any
|
|
|
|
* new usage of this function should be discussed with the MM people.
|
|
|
|
*
|
|
|
|
* Returns true if successful and false if the OOM killer cannot be
|
|
|
|
* disabled.
|
|
|
|
*/
|
|
|
|
bool oom_killer_disable(void)
|
|
|
|
{
|
|
|
|
/*
|
2016-03-18 05:20:45 +08:00
|
|
|
* Make sure to not race with an ongoing OOM killer. Check that the
|
|
|
|
* current is not killed (possibly due to sharing the victim's memory).
|
2015-02-12 07:26:24 +08:00
|
|
|
*/
|
2016-03-18 05:20:45 +08:00
|
|
|
if (mutex_lock_killable(&oom_lock))
|
2015-02-12 07:26:24 +08:00
|
|
|
return false;
|
|
|
|
oom_killer_disabled = true;
|
2015-06-25 07:57:19 +08:00
|
|
|
mutex_unlock(&oom_lock);
|
2015-02-12 07:26:24 +08:00
|
|
|
|
|
|
|
wait_event(oom_victims_wait, !atomic_read(&oom_victims));
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
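oom_killer_disable() blocks until the in-flight victim count drains to zero,
and the wake-up condition is exactly the final decrement in exit_oom_victim().
A tiny model of that countdown using C11 atomics (names are ours):
#include <stdatomic.h>
#include <stdio.h>

static atomic_int oom_victims;

/* Model of exit_oom_victim(): returns nonzero when this caller was the
 * last in-flight victim, i.e. the moment wake_up_all() would fire and
 * oom_killer_disable() could stop waiting. */
static int toy_exit_victim(void)
{
	return atomic_fetch_sub(&oom_victims, 1) == 1;
}

int main(void)
{
	atomic_fetch_add(&oom_victims, 2);	/* two victims marked */
	printf("%d\n", toy_exit_victim());	/* 0: one still in flight */
	printf("%d\n", toy_exit_victim());	/* 1: waiters may proceed  */
	return 0;
}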
|
|
|
|
|
|
|
|
/**
|
|
|
|
* oom_killer_enable - enable OOM killer
|
|
|
|
*/
|
|
|
|
void oom_killer_enable(void)
|
|
|
|
{
|
|
|
|
oom_killer_disabled = false;
|
2015-02-12 07:26:12 +08:00
|
|
|
}
|
|
|
|
|
2016-07-29 06:44:52 +08:00
|
|
|
static inline bool __task_will_free_mem(struct task_struct *task)
|
|
|
|
{
|
|
|
|
struct signal_struct *sig = task->signal;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* A coredumping process may sleep for an extended period in exit_mm(),
|
|
|
|
* so the oom killer cannot assume that the process will promptly exit
|
|
|
|
* and release memory.
|
|
|
|
*/
|
|
|
|
if (sig->flags & SIGNAL_GROUP_COREDUMP)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
if (sig->flags & SIGNAL_GROUP_EXIT)
|
|
|
|
return true;
|
|
|
|
|
|
|
|
if (thread_group_empty(task) && (task->flags & PF_EXITING))
|
|
|
|
return true;
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
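The three checks above compose simply: a coredump vetoes everything, a
group-wide exit is a certain yes, and a lone exiting thread also counts. A toy
truth table; the flag values are placeholders, not the kernel's:
#include <stdbool.h>
#include <stdio.h>

#define SIGNAL_GROUP_COREDUMP	0x1
#define SIGNAL_GROUP_EXIT	0x2
#define PF_EXITING		0x4

static bool toy_will_free(unsigned int sig_flags, unsigned int task_flags,
			  bool group_empty)
{
	if (sig_flags & SIGNAL_GROUP_COREDUMP)
		return false;	/* coredump may sleep a long time in exit_mm() */
	if (sig_flags & SIGNAL_GROUP_EXIT)
		return true;
	return group_empty && (task_flags & PF_EXITING);
}

int main(void)
{
	printf("%d\n", toy_will_free(SIGNAL_GROUP_COREDUMP | SIGNAL_GROUP_EXIT,
				     0, false));			/* 0 */
	printf("%d\n", toy_will_free(SIGNAL_GROUP_EXIT, 0, false));	/* 1 */
	printf("%d\n", toy_will_free(0, PF_EXITING, true));		/* 1 */
	return 0;
}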
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Checks whether the given task is dying or exiting and likely to
|
|
|
|
* release its address space. This means that all threads and processes
|
|
|
|
* sharing the same mm have to be killed or exiting.
|
2016-07-29 06:45:04 +08:00
|
|
|
* The caller has to make sure that task->mm is stable (hold task_lock or
|
|
|
|
* operate on current).
|
2016-07-29 06:44:52 +08:00
|
|
|
*/
|
|
|
|
bool task_will_free_mem(struct task_struct *task)
|
|
|
|
{
|
2016-07-29 06:45:04 +08:00
|
|
|
struct mm_struct *mm = task->mm;
|
2016-07-29 06:44:52 +08:00
|
|
|
struct task_struct *p;
|
2016-08-12 06:33:09 +08:00
|
|
|
bool ret = true;
|
2016-07-29 06:44:52 +08:00
|
|
|
|
|
|
|
/*
|
2016-07-29 06:45:04 +08:00
|
|
|
* Skip tasks without an mm because they might have passed exit_mm and
|
|
|
|
* exit_oom_victim already. The oom_reaper could have rescued that, but do not
|
|
|
|
* rely on that for now. We can consider find_lock_task_mm in the future.
|
2016-07-29 06:44:52 +08:00
|
|
|
*/
|
2016-07-29 06:45:04 +08:00
|
|
|
if (!mm)
|
2016-07-29 06:44:52 +08:00
|
|
|
return false;
|
|
|
|
|
2016-07-29 06:45:04 +08:00
|
|
|
if (!__task_will_free_mem(task))
|
|
|
|
return false;
|
mm, oom: task_will_free_mem should skip oom_reaped tasks
The 0-day robot has encountered the following:
Out of memory: Kill process 3914 (trinity-c0) score 167 or sacrifice child
Killed process 3914 (trinity-c0) total-vm:55864kB, anon-rss:1512kB, file-rss:1088kB, shmem-rss:25616kB
oom_reaper: reaped process 3914 (trinity-c0), now anon-rss:0kB, file-rss:0kB, shmem-rss:26488kB
oom_reaper: reaped process 3914 (trinity-c0), now anon-rss:0kB, file-rss:0kB, shmem-rss:26900kB
oom_reaper: reaped process 3914 (trinity-c0), now anon-rss:0kB, file-rss:0kB, shmem-rss:26900kB
oom_reaper: reaped process 3914 (trinity-c0), now anon-rss:0kB, file-rss:0kB, shmem-rss:27296kB
oom_reaper: reaped process 3914 (trinity-c0), now anon-rss:0kB, file-rss:0kB, shmem-rss:28148kB
oom_reaper is trying to reap the same task again and again.
This is possible only when the oom killer is bypassed by
task_will_free_mem, because we skip over tasks with MMF_OOM_REAPED
already set during select_bad_process. Teach task_will_free_mem to skip
over MMF_OOM_REAPED tasks as well because they are unlikely to free
anything more.
Analyzed by Tetsuo Handa.
Link: http://lkml.kernel.org/r/1466426628-15074-9-git-send-email-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-29 06:44:55 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* This task has already been drained by the oom reaper so there is
|
|
|
|
* only a small chance it will free any more.
|
|
|
|
*/
|
2016-07-29 06:45:04 +08:00
|
|
|
if (test_bit(MMF_OOM_REAPED, &mm->flags))
|
2016-07-29 06:44:55 +08:00
|
|
|
return false;
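
	/* If nothing else uses this mm, the task's exit will free all of it. */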
	if (atomic_read(&mm->mm_users) <= 1)
		return true;

	/*
	 * This is really pessimistic, but we do not have any reliable way
	 * to check whether external processes share our mm.
	 */
	rcu_read_lock();
	for_each_process(p) {
		if (!process_shares_mm(p, mm))
			continue;
		if (same_thread_group(task, p))
			continue;
		ret = __task_will_free_mem(p);
		if (!ret)
			break;
	}
	rcu_read_unlock();

	return ret;
}

/*
 * Must be called while holding a reference to p, which will be released upon
 * returning.
 */
void oom_kill_process(struct oom_control *oc, struct task_struct *p,
		      unsigned int points, unsigned long totalpages,
		      const char *message)
{
	struct task_struct *victim = p;
	struct task_struct *child;
	struct task_struct *t;
	struct mm_struct *mm;
	unsigned int victim_points = 0;
	static DEFINE_RATELIMIT_STATE(oom_rs, DEFAULT_RATELIMIT_INTERVAL,
				      DEFAULT_RATELIMIT_BURST);
	bool can_oom_reap = true;

	/*
	 * If the task is already exiting, don't alarm the sysadmin or kill
	 * its children or threads, just set TIF_MEMDIE so it can die quickly
	 */
	task_lock(p);
	if (task_will_free_mem(p)) {
		mark_oom_victim(p);
		wake_oom_reaper(p);
		task_unlock(p);
		put_task_struct(p);
		return;
	}
	task_unlock(p);

	if (__ratelimit(&oom_rs))
		dump_header(oc, p);

	pr_err("%s: Kill process %d (%s) score %u or sacrifice child\n",
		message, task_pid_nr(p), p->comm, points);

	/*
	 * If any of p's children has a different mm and is eligible for kill,
	 * the one with the highest oom_badness() score is sacrificed for its
	 * parent. This attempts to lose the minimal amount of work done while
	 * still freeing memory.
	 */
	read_lock(&tasklist_lock);
	for_each_thread(p, t) {
		list_for_each_entry(child, &t->children, sibling) {
			unsigned int child_points;

			if (process_shares_mm(child, p->mm))
				continue;
			/*
			 * oom_badness() returns 0 if the thread is unkillable
			 */
			child_points = oom_badness(child,
					oc->memcg, oc->nodemask, totalpages);
			if (child_points > victim_points) {
				put_task_struct(victim);
				victim = child;
				victim_points = child_points;
				get_task_struct(victim);
			}
		}
	}
	read_unlock(&tasklist_lock);
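
	/*
	 * find_lock_task_mm() returns the first live thread of the group
	 * still using the mm, with task_lock() held to pin ->mm; the group
	 * leader may already have released its memory.
	 */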
	p = find_lock_task_mm(victim);
	if (!p) {
		put_task_struct(victim);
		return;
	} else if (victim != p) {
		get_task_struct(p);
		put_task_struct(victim);
		victim = p;
	}

	/* Get a reference to safely compare mm after task_unlock(victim) */
	mm = victim->mm;
	atomic_inc(&mm->mm_count);
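	/*
	 * The mm_count pin keeps the mm_struct itself alive until the
	 * mmdrop() below; it does not pin the address space, so the victim
	 * can still exit and tear down its mappings.
	 */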
	/*
	 * We should send SIGKILL before setting TIF_MEMDIE in order to prevent
	 * the OOM victim from depleting the memory reserves from the user
	 * space under its control.
	 */
	do_send_sig_info(SIGKILL, SEND_SIG_FORCED, victim, true);
	mark_oom_victim(victim);
	pr_err("Killed process %d (%s) total-vm:%lukB, anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n",
		task_pid_nr(victim), victim->comm, K(victim->mm->total_vm),
		K(get_mm_counter(victim->mm, MM_ANONPAGES)),
		K(get_mm_counter(victim->mm, MM_FILEPAGES)),
		K(get_mm_counter(victim->mm, MM_SHMEMPAGES)));
	task_unlock(victim);

	/*
	 * Kill all user processes sharing victim->mm in other thread groups,
	 * if any. They don't get access to memory reserves, though, to avoid
	 * depletion of all memory. This prevents mm->mmap_sem livelock when
	 * an oom killed thread cannot exit because it requires the semaphore
	 * and it's contended by another thread trying to allocate memory
	 * itself. That thread will now get access to memory reserves since
	 * it has a pending fatal signal.
	 */
	rcu_read_lock();
	for_each_process(p) {
		if (!process_shares_mm(p, mm))
			continue;
		if (same_thread_group(p, victim))
			continue;
		if (unlikely(p->flags & PF_KTHREAD) || is_global_init(p)) {
			/*
			 * We cannot use oom_reaper for the mm shared by this
			 * process because it wouldn't get killed and so the
			 * memory might be still used. Hide the mm from the
			 * oom killer to guarantee OOM forward progress.
			 */
			can_oom_reap = false;
			set_bit(MMF_OOM_REAPED, &mm->flags);
			pr_info("oom killer %d (%s) has mm pinned by %d (%s)\n",
					task_pid_nr(victim), victim->comm,
					task_pid_nr(p), p->comm);
			continue;
		}
		do_send_sig_info(SIGKILL, SEND_SIG_FORCED, p, true);
	}
	rcu_read_unlock();

	if (can_oom_reap)
		wake_oom_reaper(victim);

	mmdrop(mm);
	put_task_struct(victim);
}
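
/* K(), defined earlier in this file, converted page counts to kB above. */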
#undef K

/*
 * Determines whether the kernel must panic because of the panic_on_oom sysctl.
 */
void check_panic_on_oom(struct oom_control *oc, enum oom_constraint constraint)
{
	if (likely(!sysctl_panic_on_oom))
		return;
	if (sysctl_panic_on_oom != 2) {
		/*
		 * panic_on_oom == 1 only affects CONSTRAINT_NONE, the kernel
		 * does not panic for cpuset, mempolicy, or memcg allocation
		 * failures.
		 */
		if (constraint != CONSTRAINT_NONE)
			return;
	}
	/* Do not panic for oom kills triggered by sysrq */
	if (is_sysrq_oom(oc))
		return;
	dump_header(oc, NULL);
	panic("Out of memory: %s panic_on_oom is enabled\n",
		sysctl_panic_on_oom == 2 ? "compulsory" : "system-wide");
}
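
/*
 * Quick reference for the tunable (a sketch; see Documentation/sysctl/vm.txt):
 *
 *	echo 0 > /proc/sys/vm/panic_on_oom	# never panic (default)
 *	echo 1 > /proc/sys/vm/panic_on_oom	# panic on system-wide OOM only
 *	echo 2 > /proc/sys/vm/panic_on_oom	# panic on any OOM, even constrained
 */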

static BLOCKING_NOTIFIER_HEAD(oom_notify_list);

int register_oom_notifier(struct notifier_block *nb)
{
	return blocking_notifier_chain_register(&oom_notify_list, nb);
}
EXPORT_SYMBOL_GPL(register_oom_notifier);

int unregister_oom_notifier(struct notifier_block *nb)
{
	return blocking_notifier_chain_unregister(&oom_notify_list, nb);
}
EXPORT_SYMBOL_GPL(unregister_oom_notifier);
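
/*
 * A minimal usage sketch (not part of this file): a driver that can give
 * memory back under OOM pressure registers a callback on oom_notify_list.
 * out_of_memory() invokes the chain with a pointer to a "freed" page count
 * before selecting a victim. The my_* names below are hypothetical.
 *
 *	static int my_oom_notify(struct notifier_block *nb,
 *				 unsigned long unused, void *parm)
 *	{
 *		unsigned long *freed = parm;
 *
 *		*freed += my_release_cached_pages();
 *		return NOTIFY_OK;
 *	}
 *
 *	static struct notifier_block my_oom_nb = {
 *		.notifier_call = my_oom_notify,
 *	};
 *
 *	register_oom_notifier(&my_oom_nb);
 */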

/**
 * out_of_memory - kill the "best" process when we run out of memory
 * @oc: pointer to struct oom_control
 *
 * If we run out of memory, we have the choice between either
 * killing a random task (bad), letting the system crash (worse)
 * OR trying to be smart about which process to kill. Note that we
 * don't have to be perfect here, we just have to be good.
 */
bool out_of_memory(struct oom_control *oc)
{
	struct task_struct *p;
	unsigned long totalpages;
	unsigned long freed = 0;
	unsigned int uninitialized_var(points);
	enum oom_constraint constraint = CONSTRAINT_NONE;

	if (oom_killer_disabled)
		return false;

	blocking_notifier_call_chain(&oom_notify_list, 0, &freed);
	if (freed > 0)
		/* Got some memory back in the last second. */
		return true;

	/*
	 * If current has a pending SIGKILL or is exiting, then automatically
	 * select it. The goal is to allow it to allocate so that it may
	 * quickly exit and free its memory.
	 */
	if (task_will_free_mem(current)) {
		mark_oom_victim(current);
		wake_oom_reaper(current);
		return true;
	}

	/*
	 * The OOM killer does not compensate for IO-less reclaim.
	 * pagefault_out_of_memory lost its gfp context, so we have to
	 * make sure to exclude the 0 mask - all other users should have
	 * at least ___GFP_DIRECT_RECLAIM to get here.
	 */
	if (oc->gfp_mask && !(oc->gfp_mask & (__GFP_FS|__GFP_NOFAIL)))
		return true;

	/*
	 * Check if there were limitations on the allocation (only relevant for
	 * NUMA) that may require different handling.
	 */
	constraint = constrained_alloc(oc, &totalpages);
	if (constraint != CONSTRAINT_MEMORY_POLICY)
		oc->nodemask = NULL;
	check_panic_on_oom(oc, constraint);
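
	/*
	 * With sysctl_oom_kill_allocating_task set, kill the allocating task
	 * itself (if killable) instead of scanning the whole tasklist for
	 * the "best" victim; see Documentation/sysctl/vm.txt.
	 */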
	if (sysctl_oom_kill_allocating_task && current->mm &&
	    !oom_unkillable_task(current, NULL, oc->nodemask) &&
	    current->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
		get_task_struct(current);
		oom_kill_process(oc, current, 0, totalpages,
				 "Out of memory (oom_kill_allocating_task)");
		return true;
	}

	p = select_bad_process(oc, &points, totalpages);
	/* Found nothing?!?! Either we hang forever, or we panic. */
	if (!p && !is_sysrq_oom(oc)) {
		dump_header(oc, NULL);
		panic("Out of memory and no killable processes...\n");
	}
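	/*
	 * select_bad_process() returns (void *)-1UL when it finds a task
	 * that is already an OOM victim; no new kill is needed then and we
	 * return so the allocation can simply be retried.
	 */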
	if (p && p != (void *)-1UL) {
		oom_kill_process(oc, p, points, totalpages, "Out of memory");
		/*
		 * Give the killed process a good chance to exit before trying
		 * to allocate memory again.
		 */
		schedule_timeout_killable(1);
	}
	return true;
}

/*
 * The pagefault handler calls here because it is out of memory, so kill a
 * memory-hogging task. If oom_lock is held by somebody else, a parallel oom
 * killing is already in progress so do nothing.
 */
void pagefault_out_of_memory(void)
{
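	/*
	 * The fault path has lost its gfp and placement context, so build a
	 * zeroed oom_control: out_of_memory() treats the 0 gfp_mask and NULL
	 * nodemask as an unconstrained, system-wide OOM.
	 */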
	struct oom_control oc = {
		.zonelist = NULL,
		.nodemask = NULL,
		.memcg = NULL,
		.gfp_mask = 0,
		.order = 0,
	};

	if (mem_cgroup_oom_synchronize(true))
		return;

	if (!mutex_trylock(&oom_lock))
		return;

	if (!out_of_memory(&oc)) {
		/*
		 * There shouldn't be any user tasks runnable while the
		 * OOM killer is disabled, so the current task has to
		 * be a racing OOM victim that oom_killer_disable()
		 * is waiting for.
		 */
		WARN_ON(test_thread_flag(TIF_MEMDIE));
	}

	mutex_unlock(&oom_lock);
}