Documentation: update cgroup documentation paths

The cgroup documentation directory has been renamed to "cgroup-v1"; update references to it accordingly.

Signed-off-by: seokhoon.yoon <iamyooon@gmail.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
Author: seokhoon.yoon (2016-08-02 23:23:57 +09:00), committed by Jonathan Corbet
Parent: d9a77fe243
Commit: 09c3bcce7c
14 changed files with 19 additions and 19 deletions
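For reference, a mechanical rename like this could also be scripted. The following is a minimal sketch only, not part of the commit itself; the pathspec and the assumption that every stale reference uses the literal prefix "Documentation/cgroups" are illustrative, and any matches would still need review by hand.

    # Hypothetical helper (not used by this commit): rewrite stale
    # references to the old directory name under Documentation/.
    # Requires GNU sed for -i; review the resulting diff before committing.
    git grep -l 'Documentation/cgroups' -- Documentation \
        | xargs sed -i 's|Documentation/cgroups|Documentation/cgroup-v1|g'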


@@ -2,7 +2,7 @@
 -------
 Written by Paul Menage <menage@google.com> based on
-Documentation/cgroups/cpusets.txt
+Documentation/cgroup-v1/cpusets.txt
 Original copyright statements from cpusets.txt:
 Portions Copyright (C) 2004 BULL SA.
@@ -72,7 +72,7 @@ On their own, the only use for cgroups is for simple job
 tracking. The intention is that other subsystems hook into the generic
 cgroup support to provide new attributes for cgroups, such as
 accounting/limiting the resources which processes in a cgroup can
-access. For example, cpusets (see Documentation/cgroups/cpusets.txt) allow
+access. For example, cpusets (see Documentation/cgroup-v1/cpusets.txt) allow
 you to associate a set of CPUs and a set of memory nodes with the
 tasks in each cgroup.


@@ -48,7 +48,7 @@ hooks, beyond what is already present, required to manage dynamic
 job placement on large systems.
 Cpusets use the generic cgroup subsystem described in
-Documentation/cgroups/cgroups.txt.
+Documentation/cgroup-v1/cgroups.txt.
 Requests by a task, using the sched_setaffinity(2) system call to
 include CPUs in its CPU affinity mask, and using the mbind(2) and


@@ -6,7 +6,7 @@ Because VM is getting complex (one of reasons is memcg...), memcg's behavior
 is complex. This is a document for memcg's internal behavior.
 Please note that implementation details can be changed.
-(*) Topics on API should be in Documentation/cgroups/memory.txt)
+(*) Topics on API should be in Documentation/cgroup-v1/memory.txt)
 0. How to record usage ?
 2 objects are used.
@@ -256,7 +256,7 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.
 You can see charges have been moved by reading *.usage_in_bytes or
 memory.stat of both A and B.
-See 8.2 of Documentation/cgroups/memory.txt to see what value should be
+See 8.2 of Documentation/cgroup-v1/memory.txt to see what value should be
 written to move_charge_at_immigrate.
 9.10 Memory thresholds


@@ -98,7 +98,7 @@ A memory policy with a valid NodeList will be saved, as specified, for
 use at file creation time. When a task allocates a file in the file
 system, the mount option memory policy will be applied with a NodeList,
 if any, modified by the calling task's cpuset constraints
-[See Documentation/cgroups/cpusets.txt] and any optional flags, listed
+[See Documentation/cgroup-v1/cpusets.txt] and any optional flags, listed
 below. If the resulting NodeLists is the empty set, the effective memory
 policy for the file will revert to "default" policy.


@@ -3547,7 +3547,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 relax_domain_level=
 [KNL, SMP] Set scheduler's default relax_domain_level.
-See Documentation/cgroups/cpusets.txt.
+See Documentation/cgroup-v1/cpusets.txt.
 relative_sleep_states=
 [SUSPEND] Use sleep state labeling where the deepest
@@ -3867,7 +3867,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 swapaccount=[0|1]
 [KNL] Enable accounting of swap in memory resource
 controller if no parameter or 1 is given or disable
-it if 0 is given (See Documentation/cgroups/memory.txt)
+it if 0 is given (See Documentation/cgroup-v1/memory.txt)
 swiotlb= [ARM,IA-64,PPC,MIPS,X86]
 Format: { <int> | force }


@@ -10,7 +10,7 @@ REFERENCES
 o Documentation/IRQ-affinity.txt: Binding interrupts to sets of CPUs.
-o Documentation/cgroups: Using cgroups to bind tasks to sets of CPUs.
+o Documentation/cgroup-v1: Using cgroups to bind tasks to sets of CPUs.
 o man taskset: Using the taskset command to bind tasks to sets
 of CPUs.


@@ -431,7 +431,7 @@ CONTENTS
 -deadline tasks cannot have an affinity mask smaller that the entire
 root_domain they are created on. However, affinities can be specified
-through the cpuset facility (Documentation/cgroups/cpusets.txt).
+through the cpuset facility (Documentation/cgroup-v1/cpusets.txt).
 5.1 SCHED_DEADLINE and cpusets HOWTO
 ------------------------------------


@@ -215,7 +215,7 @@ SCHED_BATCH) tasks.
 These options need CONFIG_CGROUPS to be defined, and let the administrator
 create arbitrary groups of tasks, using the "cgroup" pseudo filesystem. See
-Documentation/cgroups/cgroups.txt for more information about this filesystem.
+Documentation/cgroup-v1/cgroups.txt for more information about this filesystem.
 When CONFIG_FAIR_GROUP_SCHED is defined, a "cpu.shares" file is created for each
 group created using the pseudo filesystem. See example steps below to create


@@ -133,7 +133,7 @@ This uses the cgroup virtual file system and "<cgroup>/cpu.rt_runtime_us"
 to control the CPU time reserved for each control group.
 For more information on working with control groups, you should read
-Documentation/cgroups/cgroups.txt as well.
+Documentation/cgroup-v1/cgroups.txt as well.
 Group settings are checked against the following limits in order to keep the
 configuration schedulable:


@@ -63,7 +63,7 @@ nodes. Each emulated node will manage a fraction of the underlying cells'
 physical memory. NUMA emluation is useful for testing NUMA kernel and
 application features on non-NUMA platforms, and as a sort of memory resource
 management mechanism when used together with cpusets.
-[see Documentation/cgroups/cpusets.txt]
+[see Documentation/cgroup-v1/cpusets.txt]
 For each node with memory, Linux constructs an independent memory management
 subsystem, complete with its own free page lists, in-use page lists, usage
@@ -113,7 +113,7 @@ allocation behavior using Linux NUMA memory policy.
 System administrators can restrict the CPUs and nodes' memories that a non-
 privileged user can specify in the scheduling or NUMA commands and functions
-using control groups and CPUsets. [see Documentation/cgroups/cpusets.txt]
+using control groups and CPUsets. [see Documentation/cgroup-v1/cpusets.txt]
 On architectures that do not hide memoryless nodes, Linux will include only
 zones [nodes] with memory in the zonelists. This means that for a memoryless


@@ -9,7 +9,7 @@ document attempts to describe the concepts and APIs of the 2.6 memory policy
 support.
 Memory policies should not be confused with cpusets
-(Documentation/cgroups/cpusets.txt)
+(Documentation/cgroup-v1/cpusets.txt)
 which is an administrative mechanism for restricting the nodes from which
 memory may be allocated by a set of processes. Memory policies are a
 programming interface that a NUMA-aware application can take advantage of. When


@@ -38,7 +38,7 @@ locations.
 Larger installations usually partition the system using cpusets into
 sections of nodes. Paul Jackson has equipped cpusets with the ability to
 move pages when a task is moved to another cpuset (See
-Documentation/cgroups/cpusets.txt).
+Documentation/cgroup-v1/cpusets.txt).
 Cpusets allows the automation of process locality. If a task is moved to
 a new cpuset then also all its pages are moved with it so that the
 performance of the process does not sink dramatically. Also the pages


@@ -122,7 +122,7 @@ MEMORY CONTROL GROUP INTERACTION
 --------------------------------
 The unevictable LRU facility interacts with the memory control group [aka
-memory controller; see Documentation/cgroups/memory.txt] by extending the
+memory controller; see Documentation/cgroup-v1/memory.txt] by extending the
 lru_list enum.
 The memory controller data structure automatically gets a per-zone unevictable


@@ -8,7 +8,7 @@ assign them to cpusets and their attached tasks. This is a way of limiting the
 amount of system memory that are available to a certain class of tasks.
 For more information on the features of cpusets, see
-Documentation/cgroups/cpusets.txt.
+Documentation/cgroup-v1/cpusets.txt.
 There are a number of different configurations you can use for your needs. For
 more information on the numa=fake command line option and its various ways of
 configuring fake nodes, see Documentation/x86/x86_64/boot-options.txt.
@@ -33,7 +33,7 @@ A machine may be split as follows with "numa=fake=4*512," as reported by dmesg:
 On node 3 totalpages: 131072
 Now following the instructions for mounting the cpusets filesystem from
-Documentation/cgroups/cpusets.txt, you can assign fake nodes (i.e. contiguous memory
+Documentation/cgroup-v1/cpusets.txt, you can assign fake nodes (i.e. contiguous memory
 address spaces) to individual cpusets:
 [root@xroads /]# mkdir exampleset