mirror of https://gitee.com/openkylin/linux.git
Merge branch 'for-3.10' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup
Pull cgroup updates from Tejun Heo:

 - Fixes and a lot of cleanups.  Locking cleanup is finally complete.
   cgroup_mutex is no longer exposed to individual controllers which
   used to cause nasty deadlock issues.  Li fixed and cleaned up quite a
   bit including long standing ones like racy cgroup_path().

 - device cgroup now supports proper hierarchy thanks to Aristeu.

 - perf_event cgroup now supports proper hierarchy.

 - A new mount option "__DEVEL__sane_behavior" is added.  As indicated
   by the name, this option is to be used for development only at this
   point and generates a warning message when used.  Unfortunately, the
   cgroup interface currently has too many breakages and inconsistencies
   to implement a consistent and unified hierarchy on top.  The new flag
   is used to collect the behavior changes which are necessary to
   implement a consistent unified hierarchy.  It's likely that this flag
   won't be used verbatim when it becomes ready but will be enabled
   implicitly along with the unified hierarchy.

   The option currently disables some of the broken behaviors in cgroup
   core and also the .use_hierarchy switch in memcg (will be routed
   through -mm), which can be used to make a very unusual hierarchy
   where nesting is partially honored.  It will also be used to
   implement hierarchy support for blk-throttle, which would otherwise
   be impossible without introducing a full separate set of control
   knobs.

   This is essentially versioning of the interface, which isn't very
   nice, but at this point I can't see any other option that would
   allow keeping the interface the same while moving towards hierarchy
   behavior that is at least somewhat sane.  The planned unified
   hierarchy is likely to require some level of adaptation from
   userland anyway, so I think it'd be best to take the chance and
   update the interface such that it's supportable in the long term.

   Maintaining the existing interface does complicate cgroup core, but
   it shouldn't put too much strain on individual controllers and I
   think it'd be manageable for the foreseeable future.  Maybe we'll be
   able to drop it in a decade.

Fix up conflicts (including a semantic one adding a new #include to ppc
that was uncovered by the header file changes) as per Tejun.

* 'for-3.10' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup: (45 commits)
  cpuset: fix compile warning when CONFIG_SMP=n
  cpuset: fix cpu hotplug vs rebuild_sched_domains() race
  cpuset: use rebuild_sched_domains() in cpuset_hotplug_workfn()
  cgroup: restore the call to eventfd->poll()
  cgroup: fix use-after-free when umounting cgroupfs
  cgroup: fix broken file xattrs
  devcg: remove parent_cgroup.
  memcg: force use_hierarchy if sane_behavior
  cgroup: remove cgrp->top_cgroup
  cgroup: introduce sane_behavior mount option
  move cgroupfs_root to include/linux/cgroup.h
  cgroup: convert cgroupfs_root flag bits to masks and add CGRP_ prefix
  cgroup: make cgroup_path() not print double slashes
  Revert "cgroup: remove bind() method from cgroup_subsys."
  perf: make perf_event cgroup hierarchical
  cgroup: implement cgroup_is_descendant()
  cgroup: make sure parent won't be destroyed before its children
  cgroup: remove bind() method from cgroup_subsys.
  devcg: remove broken_hierarchy tag
  cgroup: remove cgroup_lock_is_held()
  ...
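As a quick illustration of the new option (a sketch only: the mount point
below is made up, and the option is explicitly development-only):

	# mount -t cgroup -o __DEVEL__sane_behavior,memory none /mnt/test

The mount logs a warning, rejects the "noprefix" and "clone_children"
options, and forces memcg's use_hierarchy behavior on, per the
CGRP_ROOT_SANE_BEHAVIOR comment in the diff below.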
This commit is contained in: commit 191a712090
Documentation/cgroups/cgroups.txt
@@ -442,7 +442,7 @@ You can attach the current shell task by echoing 0:
 You can use the cgroup.procs file instead of the tasks file to move all
 threads in a threadgroup at once. Echoing the PID of any task in a
 threadgroup to cgroup.procs causes all tasks in that threadgroup to be
-be attached to the cgroup. Writing 0 to cgroup.procs moves all tasks
+attached to the cgroup. Writing 0 to cgroup.procs moves all tasks
 in the writing task's threadgroup.
 
 Note: Since every task is always a member of exactly one cgroup in each
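The cgroup.procs semantics above can be exercised with (a sketch; the
hierarchy mount point and group name are assumed):

	# echo 0 > /sys/fs/cgroup/cpu/grp1/cgroup.procs

which attaches the writing task and every other thread in its
threadgroup to grp1.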
@@ -580,6 +580,7 @@ propagation along the hierarchy. See the comment on
 cgroup_for_each_descendant_pre() for details.
 
 void css_offline(struct cgroup *cgrp);
+(cgroup_mutex held by caller)
 
 This is the counterpart of css_online() and called iff css_online()
 has succeeded on @cgrp. This signifies the beginning of the end of
Documentation/cgroups/devices.txt
@@ -13,9 +13,7 @@ either an integer or * for all. Access is a composition of r
 The root device cgroup starts with rwm to 'all'. A child device
 cgroup gets a copy of the parent. Administrators can then remove
 devices from the whitelist or add new entries. A child cgroup can
-never receive a device access which is denied by its parent. However
-when a device access is removed from a parent it will not also be
-removed from the child(ren).
+never receive a device access which is denied by its parent.
 
 2. User Interface
 
@@ -50,3 +48,69 @@ task to a new cgroup. (Again we'll probably want to change that).
 
 A cgroup may not be granted more permissions than the cgroup's
 parent has.
+
+4. Hierarchy
+
+device cgroups maintain hierarchy by making sure a cgroup never has more
+access permissions than its parent. Every time an entry is written to
+a cgroup's devices.deny file, all its children will have that entry removed
+from their whitelist and all the locally set whitelist entries will be
+re-evaluated. In case one of the locally set whitelist entries would provide
+more access than the cgroup's parent, it'll be removed from the whitelist.
+
+Example:
+      A
+     / \
+        B
+
+    group        behavior        exceptions
+    A            allow           "b 8:* rwm", "c 116:1 rw"
+    B            deny            "c 1:3 rwm", "c 116:2 rwm", "b 3:* rwm"
+
+If a device is denied in group A:
+	# echo "c 116:* r" > A/devices.deny
+it'll propagate down and after revalidating B's entries, the whitelist entry
+"c 116:2 rwm" will be removed:
+
+    group        whitelist entries                  denied devices
+    A            all                                "b 8:* rwm", "c 116:* rw"
+    B            "c 1:3 rwm", "b 3:* rwm"           all the rest
+
+In case parent's exceptions change and local exceptions are not allowed
+anymore, they'll be deleted.
+
+Notice that new whitelist entries will not be propagated:
+      A
+     / \
+        B
+
+    group        whitelist entries                  denied devices
+    A            "c 1:3 rwm", "c 1:5 r"             all the rest
+    B            "c 1:3 rwm", "c 1:5 r"             all the rest
+
+when adding "c *:3 rwm":
+	# echo "c *:3 rwm" >A/devices.allow
+
+the result:
+    group        whitelist entries                  denied devices
+    A            "c *:3 rwm", "c 1:5 r"             all the rest
+    B            "c 1:3 rwm", "c 1:5 r"             all the rest
+
+but now it'll be possible to add new entries to B:
+	# echo "c 2:3 rwm" >B/devices.allow
+	# echo "c 50:3 r" >B/devices.allow
+or even
+	# echo "c *:3 rwm" >B/devices.allow
+
+Allowing or denying all by writing 'a' to devices.allow or devices.deny will
+not be possible once the device cgroups has children.
+
+4.1 Hierarchy (internal implementation)
+
+device cgroups is implemented internally using a behavior (ALLOW, DENY) and a
+list of exceptions. The internal state is controlled using the same user
+interface to preserve compatibility with the previous whitelist-only
+implementation. Removal or addition of exceptions that will reduce the access
+to devices will be propagated down the hierarchy.
+For every propagated exception, the effective rules will be re-evaluated based
+on current parent's access rules.
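The propagation described above can be observed directly (a sketch; the
mount point and group names are assumed):

	# cd /sys/fs/cgroup/devices
	# mkdir A A/B
	# echo "c 116:* r" > A/devices.deny
	# cat A/B/devices.list

After the write to A/devices.deny, B's effective exceptions have been
re-evaluated against its parent, as described in section 4.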
arch/powerpc/mm/numa.c
@@ -22,6 +22,7 @@
 #include <linux/pfn.h>
 #include <linux/cpuset.h>
 #include <linux/node.h>
+#include <linux/slab.h>
 #include <asm/sparsemem.h>
 #include <asm/prom.h>
 #include <asm/smp.h>
block/blk-cgroup.h
@@ -247,9 +247,7 @@ static inline int blkg_path(struct blkcg_gq *blkg, char *buf, int buflen)
 {
 	int ret;
 
-	rcu_read_lock();
 	ret = cgroup_path(blkg->blkcg->css.cgroup, buf, buflen);
-	rcu_read_unlock();
 	if (ret)
 		strncpy(buf, "<unavailable>", buflen);
 	return ret;
include/linux/cgroup.h
@@ -19,6 +19,7 @@
 #include <linux/idr.h>
 #include <linux/workqueue.h>
 #include <linux/xattr.h>
+#include <linux/fs.h>
 
 #ifdef CONFIG_CGROUPS
 
@@ -30,10 +31,6 @@ struct css_id;
 
 extern int cgroup_init_early(void);
 extern int cgroup_init(void);
-extern void cgroup_lock(void);
-extern int cgroup_lock_is_held(void);
-extern bool cgroup_lock_live_group(struct cgroup *cgrp);
-extern void cgroup_unlock(void);
 extern void cgroup_fork(struct task_struct *p);
 extern void cgroup_post_fork(struct task_struct *p);
 extern void cgroup_exit(struct task_struct *p, int run_callbacks);
@@ -44,14 +41,25 @@ extern void cgroup_unload_subsys(struct cgroup_subsys *ss);
 
 extern const struct file_operations proc_cgroup_operations;
 
-/* Define the enumeration of all builtin cgroup subsystems */
+/*
+ * Define the enumeration of all cgroup subsystems.
+ *
+ * We define ids for builtin subsystems and then modular ones.
+ */
 #define SUBSYS(_x) _x ## _subsys_id,
-#define IS_SUBSYS_ENABLED(option) IS_ENABLED(option)
 enum cgroup_subsys_id {
+#define IS_SUBSYS_ENABLED(option) IS_BUILTIN(option)
 #include <linux/cgroup_subsys.h>
+#undef IS_SUBSYS_ENABLED
+	CGROUP_BUILTIN_SUBSYS_COUNT,
+
+	__CGROUP_SUBSYS_TEMP_PLACEHOLDER = CGROUP_BUILTIN_SUBSYS_COUNT - 1,
+
+#define IS_SUBSYS_ENABLED(option) IS_MODULE(option)
+#include <linux/cgroup_subsys.h>
+#undef IS_SUBSYS_ENABLED
 	CGROUP_SUBSYS_COUNT,
 };
-#undef IS_SUBSYS_ENABLED
 #undef SUBSYS
 
 /* Per-subsystem/per-cgroup state maintained by the system. */
@@ -148,6 +156,13 @@ enum {
 	 * specified at mount time and thus is implemented here.
 	 */
 	CGRP_CPUSET_CLONE_CHILDREN,
+	/* see the comment above CGRP_ROOT_SANE_BEHAVIOR for details */
+	CGRP_SANE_BEHAVIOR,
 };
 
+struct cgroup_name {
+	struct rcu_head rcu_head;
+	char name[];
+};
+
 struct cgroup {
@@ -172,11 +187,23 @@ struct cgroup {
 	struct cgroup *parent;	/* my parent */
 	struct dentry *dentry;	/* cgroup fs entry, RCU protected */
 
+	/*
+	 * This is a copy of dentry->d_name, and it's needed because
+	 * we can't use dentry->d_name in cgroup_path().
+	 *
+	 * You must acquire rcu_read_lock() to access cgrp->name, and
+	 * the only place that can change it is rename(), which is
+	 * protected by parent dir's i_mutex.
+	 *
+	 * Normally you should use cgroup_name() wrapper rather than
+	 * access it directly.
+	 */
+	struct cgroup_name __rcu *name;
+
 	/* Private pointers for each registered subsystem */
 	struct cgroup_subsys_state *subsys[CGROUP_SUBSYS_COUNT];
 
 	struct cgroupfs_root *root;
-	struct cgroup *top_cgroup;
 
 	/*
 	 * List of cg_cgroup_links pointing at css_sets with
@@ -213,6 +240,96 @@ struct cgroup {
 	struct simple_xattrs xattrs;
 };
 
+#define MAX_CGROUP_ROOT_NAMELEN 64
+
+/* cgroupfs_root->flags */
+enum {
+	/*
+	 * Unfortunately, cgroup core and various controllers are riddled
+	 * with idiosyncrasies and pointless options. The following flag,
+	 * when set, will force sane behavior - some options are forced on,
+	 * others are disallowed, and some controllers will change their
+	 * hierarchical or other behaviors.
+	 *
+	 * The set of behaviors affected by this flag are still being
+	 * determined and developed and the mount option for this flag is
+	 * prefixed with __DEVEL__. The prefix will be dropped once we
+	 * reach the point where all behaviors are compatible with the
+	 * planned unified hierarchy, which will automatically turn on this
+	 * flag.
+	 *
+	 * The followings are the behaviors currently affected this flag.
+	 *
+	 * - Mount options "noprefix" and "clone_children" are disallowed.
+	 *   Also, cgroupfs file cgroup.clone_children is not created.
+	 *
+	 * - When mounting an existing superblock, mount options should
+	 *   match.
+	 *
+	 * - Remount is disallowed.
+	 *
+	 * - memcg: use_hierarchy is on by default and the cgroup file for
+	 *   the flag is not created.
+	 *
+	 * The followings are planned changes.
+	 *
+	 * - release_agent will be disallowed once replacement notification
+	 *   mechanism is implemented.
+	 */
+	CGRP_ROOT_SANE_BEHAVIOR = (1 << 0),
+
+	CGRP_ROOT_NOPREFIX = (1 << 1), /* mounted subsystems have no named prefix */
+	CGRP_ROOT_XATTR = (1 << 2), /* supports extended attributes */
+};
+
+/*
+ * A cgroupfs_root represents the root of a cgroup hierarchy, and may be
+ * associated with a superblock to form an active hierarchy. This is
+ * internal to cgroup core. Don't access directly from controllers.
+ */
+struct cgroupfs_root {
+	struct super_block *sb;
+
+	/*
+	 * The bitmask of subsystems intended to be attached to this
+	 * hierarchy
+	 */
+	unsigned long subsys_mask;
+
+	/* Unique id for this hierarchy. */
+	int hierarchy_id;
+
+	/* The bitmask of subsystems currently attached to this hierarchy */
+	unsigned long actual_subsys_mask;
+
+	/* A list running through the attached subsystems */
+	struct list_head subsys_list;
+
+	/* The root cgroup for this hierarchy */
+	struct cgroup top_cgroup;
+
+	/* Tracks how many cgroups are currently defined in hierarchy.*/
+	int number_of_cgroups;
+
+	/* A list running through the active hierarchies */
+	struct list_head root_list;
+
+	/* All cgroups on this root, cgroup_mutex protected */
+	struct list_head allcg_list;
+
+	/* Hierarchy-specific flags */
+	unsigned long flags;
+
+	/* IDs for cgroups in this hierarchy */
+	struct ida cgroup_ida;
+
+	/* The path to use for release notifications. */
+	char release_agent_path[PATH_MAX];
+
+	/* The name for this hierarchy - may be empty */
+	char name[MAX_CGROUP_ROOT_NAMELEN];
+};
+
 /*
  * A css_set is a structure holding pointers to a set of
  * cgroup_subsys_state objects. This saves space in the task struct
@@ -278,6 +395,7 @@ struct cgroup_map_cb {
 /* cftype->flags */
 #define CFTYPE_ONLY_ON_ROOT	(1U << 0)	/* only create on root cg */
 #define CFTYPE_NOT_ON_ROOT	(1U << 1)	/* don't create on root cg */
+#define CFTYPE_INSANE		(1U << 2)	/* don't create if sane_behavior */
 
 #define MAX_CFTYPE_NAME		64
 
@@ -304,9 +422,6 @@ struct cftype {
 	/* CFTYPE_* flags */
 	unsigned int flags;
 
-	/* file xattrs */
-	struct simple_xattrs xattrs;
-
 	int (*open)(struct inode *inode, struct file *file);
 	ssize_t (*read)(struct cgroup *cgrp, struct cftype *cft,
 			struct file *file,
@@ -404,18 +519,31 @@ struct cgroup_scanner {
 	void *data;
 };
 
+/*
+ * See the comment above CGRP_ROOT_SANE_BEHAVIOR for details. This
+ * function can be called as long as @cgrp is accessible.
+ */
+static inline bool cgroup_sane_behavior(const struct cgroup *cgrp)
+{
+	return cgrp->root->flags & CGRP_ROOT_SANE_BEHAVIOR;
+}
+
+/* Caller should hold rcu_read_lock() */
+static inline const char *cgroup_name(const struct cgroup *cgrp)
+{
+	return rcu_dereference(cgrp->name)->name;
+}
+
 int cgroup_add_cftypes(struct cgroup_subsys *ss, struct cftype *cfts);
 int cgroup_rm_cftypes(struct cgroup_subsys *ss, struct cftype *cfts);
 
 int cgroup_is_removed(const struct cgroup *cgrp);
+bool cgroup_is_descendant(struct cgroup *cgrp, struct cgroup *ancestor);
 
 int cgroup_path(const struct cgroup *cgrp, char *buf, int buflen);
 
 int cgroup_task_count(const struct cgroup *cgrp);
 
-/* Return true if cgrp is a descendant of the task's cgroup */
-int cgroup_is_descendant(const struct cgroup *cgrp, struct task_struct *task);
-
 /*
  * Control Group taskset, used to pass around set of tasks to cgroup_subsys
  * methods.
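The cgroup_name() helper added above must be called under
rcu_read_lock(); a minimal caller sketch (the surrounding code is
illustrative, not part of this diff):

	rcu_read_lock();
	pr_info("cgroup: %s\n", cgroup_name(cgrp));	/* name is RCU protected */
	rcu_read_unlock();

cpuset_print_task_mems_allowed() and kmem_cache_dup() further down in
this merge follow exactly this pattern.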
@@ -523,10 +651,16 @@ static inline struct cgroup_subsys_state *cgroup_subsys_state(
  * rcu_dereference_check() conditions, such as locks used during the
  * cgroup_subsys::attach() methods.
  */
+#ifdef CONFIG_PROVE_RCU
+extern struct mutex cgroup_mutex;
 #define task_subsys_state_check(task, subsys_id, __c)			\
-	rcu_dereference_check(task->cgroups->subsys[subsys_id],		\
-			      lockdep_is_held(&task->alloc_lock) ||	\
-			      cgroup_lock_is_held() || (__c))
+	rcu_dereference_check((task)->cgroups->subsys[(subsys_id)],	\
+			      lockdep_is_held(&(task)->alloc_lock) ||	\
+			      lockdep_is_held(&cgroup_mutex) || (__c))
+#else
+#define task_subsys_state_check(task, subsys_id, __c)			\
+	rcu_dereference((task)->cgroups->subsys[(subsys_id)])
+#endif
 
 static inline struct cgroup_subsys_state *
 task_subsys_state(struct task_struct *task, int subsys_id)
@@ -661,8 +795,8 @@ struct task_struct *cgroup_iter_next(struct cgroup *cgrp,
 					struct cgroup_iter *it);
 void cgroup_iter_end(struct cgroup *cgrp, struct cgroup_iter *it);
 int cgroup_scan_tasks(struct cgroup_scanner *scan);
-int cgroup_attach_task(struct cgroup *, struct task_struct *);
 int cgroup_attach_task_all(struct task_struct *from, struct task_struct *);
+int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from);
 
 /*
  * CSS ID is ID for cgroup_subsys_state structs under subsys. This only works
include/linux/cpuset.h
@@ -11,7 +11,6 @@
 #include <linux/sched.h>
 #include <linux/cpumask.h>
 #include <linux/nodemask.h>
-#include <linux/cgroup.h>
 #include <linux/mm.h>
 
 #ifdef CONFIG_CPUSETS
include/linux/res_counter.h
@@ -13,7 +13,7 @@
  * info about what this counter is.
  */
 
-#include <linux/cgroup.h>
+#include <linux/spinlock.h>
 #include <linux/errno.h>
 
 /*
kernel/cgroup.c (724 changes)
File diff suppressed because it is too large

kernel/cpuset.c (115 changes)
@@ -264,17 +264,6 @@ static struct cpuset top_cpuset = {
 static DEFINE_MUTEX(cpuset_mutex);
 static DEFINE_MUTEX(callback_mutex);
 
-/*
- * cpuset_buffer_lock protects both the cpuset_name and cpuset_nodelist
- * buffers.  They are statically allocated to prevent using excess stack
- * when calling cpuset_print_task_mems_allowed().
- */
-#define CPUSET_NAME_LEN		(128)
-#define CPUSET_NODELIST_LEN	(256)
-static char cpuset_name[CPUSET_NAME_LEN];
-static char cpuset_nodelist[CPUSET_NODELIST_LEN];
-static DEFINE_SPINLOCK(cpuset_buffer_lock);
-
 /*
  * CPU / memory hotplug is handled asynchronously.
  */
@@ -780,25 +769,26 @@ static void rebuild_sched_domains_locked(void)
 	lockdep_assert_held(&cpuset_mutex);
 	get_online_cpus();
 
+	/*
+	 * We have raced with CPU hotplug. Don't do anything to avoid
+	 * passing doms with offlined cpu to partition_sched_domains().
+	 * Anyways, hotplug work item will rebuild sched domains.
+	 */
+	if (!cpumask_equal(top_cpuset.cpus_allowed, cpu_active_mask))
+		goto out;
+
 	/* Generate domain masks and attrs */
 	ndoms = generate_sched_domains(&doms, &attr);
 
 	/* Have scheduler rebuild the domains */
 	partition_sched_domains(ndoms, doms, attr);
-
+out:
 	put_online_cpus();
 }
 #else /* !CONFIG_SMP */
 static void rebuild_sched_domains_locked(void)
 {
 }
-
-static int generate_sched_domains(cpumask_var_t **domains,
-			struct sched_domain_attr **attributes)
-{
-	*domains = NULL;
-	return 1;
-}
 #endif /* CONFIG_SMP */
 
 void rebuild_sched_domains(void)
@@ -2005,50 +1995,6 @@ int __init cpuset_init(void)
 	return 0;
 }
 
-/**
- * cpuset_do_move_task - move a given task to another cpuset
- * @tsk: pointer to task_struct the task to move
- * @scan: struct cgroup_scanner contained in its struct cpuset_hotplug_scanner
- *
- * Called by cgroup_scan_tasks() for each task in a cgroup.
- * Return nonzero to stop the walk through the tasks.
- */
-static void cpuset_do_move_task(struct task_struct *tsk,
-				struct cgroup_scanner *scan)
-{
-	struct cgroup *new_cgroup = scan->data;
-
-	cgroup_lock();
-	cgroup_attach_task(new_cgroup, tsk);
-	cgroup_unlock();
-}
-
-/**
- * move_member_tasks_to_cpuset - move tasks from one cpuset to another
- * @from: cpuset in which the tasks currently reside
- * @to: cpuset to which the tasks will be moved
- *
- * Called with cpuset_mutex held
- * callback_mutex must not be held, as cpuset_attach() will take it.
- *
- * The cgroup_scan_tasks() function will scan all the tasks in a cgroup,
- * calling callback functions for each.
- */
-static void move_member_tasks_to_cpuset(struct cpuset *from, struct cpuset *to)
-{
-	struct cgroup_scanner scan;
-
-	scan.cg = from->css.cgroup;
-	scan.test_task = NULL; /* select all tasks in cgroup */
-	scan.process_task = cpuset_do_move_task;
-	scan.heap = NULL;
-	scan.data = to->css.cgroup;
-
-	if (cgroup_scan_tasks(&scan))
-		printk(KERN_ERR "move_member_tasks_to_cpuset: "
-				"cgroup_scan_tasks failed\n");
-}
-
 /*
  * If CPU and/or memory hotplug handlers, below, unplug any CPUs
  * or memory nodes, we need to walk over the cpuset hierarchy,
@@ -2069,7 +2015,12 @@ static void remove_tasks_in_empty_cpuset(struct cpuset *cs)
 	       nodes_empty(parent->mems_allowed))
 		parent = parent_cs(parent);
 
-	move_member_tasks_to_cpuset(cs, parent);
+	if (cgroup_transfer_tasks(parent->css.cgroup, cs->css.cgroup)) {
+		rcu_read_lock();
+		printk(KERN_ERR "cpuset: failed to transfer tasks out of empty cpuset %s\n",
+		       cgroup_name(cs->css.cgroup));
+		rcu_read_unlock();
+	}
 }
 
 /**
@@ -2222,17 +2173,8 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
 	flush_workqueue(cpuset_propagate_hotplug_wq);
 
 	/* rebuild sched domains if cpus_allowed has changed */
-	if (cpus_updated) {
-		struct sched_domain_attr *attr;
-		cpumask_var_t *doms;
-		int ndoms;
-
-		mutex_lock(&cpuset_mutex);
-		ndoms = generate_sched_domains(&doms, &attr);
-		mutex_unlock(&cpuset_mutex);
-
-		partition_sched_domains(ndoms, doms, attr);
-	}
+	if (cpus_updated)
+		rebuild_sched_domains();
 }
 
 void cpuset_update_active_cpus(bool cpu_online)
@@ -2594,6 +2536,8 @@ int cpuset_mems_allowed_intersects(const struct task_struct *tsk1,
 	return nodes_intersects(tsk1->mems_allowed, tsk2->mems_allowed);
 }
 
+#define CPUSET_NODELIST_LEN	(256)
+
 /**
  * cpuset_print_task_mems_allowed - prints task's cpuset and mems_allowed
  * @task: pointer to task_struct of some task.
@@ -2604,25 +2548,22 @@ int cpuset_mems_allowed_intersects(const struct task_struct *tsk1,
  */
 void cpuset_print_task_mems_allowed(struct task_struct *tsk)
 {
-	struct dentry *dentry;
+	/* Statically allocated to prevent using excess stack. */
+	static char cpuset_nodelist[CPUSET_NODELIST_LEN];
+	static DEFINE_SPINLOCK(cpuset_buffer_lock);
 
-	dentry = task_cs(tsk)->css.cgroup->dentry;
+	struct cgroup *cgrp = task_cs(tsk)->css.cgroup;
+
+	rcu_read_lock();
 	spin_lock(&cpuset_buffer_lock);
 
-	if (!dentry) {
-		strcpy(cpuset_name, "/");
-	} else {
-		spin_lock(&dentry->d_lock);
-		strlcpy(cpuset_name, (const char *)dentry->d_name.name,
-			CPUSET_NAME_LEN);
-		spin_unlock(&dentry->d_lock);
-	}
-
 	nodelist_scnprintf(cpuset_nodelist, CPUSET_NODELIST_LEN,
 			   tsk->mems_allowed);
 	printk(KERN_INFO "%s cpuset=%s mems_allowed=%s\n",
-	       tsk->comm, cpuset_name, cpuset_nodelist);
+	       tsk->comm, cgroup_name(cgrp), cpuset_nodelist);
+
 	spin_unlock(&cpuset_buffer_lock);
+	rcu_read_unlock();
 }
 
 /*
kernel/events/core.c
@@ -251,7 +251,22 @@ perf_cgroup_match(struct perf_event *event)
 	struct perf_event_context *ctx = event->ctx;
 	struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
 
-	return !event->cgrp || event->cgrp == cpuctx->cgrp;
+	/* @event doesn't care about cgroup */
+	if (!event->cgrp)
+		return true;
+
+	/* wants specific cgroup scope but @cpuctx isn't associated with any */
+	if (!cpuctx->cgrp)
+		return false;
+
+	/*
+	 * Cgroup scoping is recursive. An event enabled for a cgroup is
+	 * also enabled for all its descendant cgroups. If @cpuctx's
+	 * cgroup is a descendant of @event's (the test covers identity
+	 * case), it's a match.
+	 */
+	return cgroup_is_descendant(cpuctx->cgrp->css.cgroup,
+				    event->cgrp->css.cgroup);
 }
 
 static inline bool perf_tryget_cgroup(struct perf_event *event)
@@ -7517,12 +7532,5 @@ struct cgroup_subsys perf_subsys = {
 	.css_free	= perf_cgroup_css_free,
 	.exit		= perf_cgroup_exit,
 	.attach		= perf_cgroup_attach,
-
-	/*
-	 * perf_event cgroup doesn't handle nesting correctly.
-	 * ctx->nr_cgroups adjustments should be propagated through the
-	 * cgroup hierarchy. Fix it and remove the following.
-	 */
-	.broken_hierarchy = true,
 };
 #endif /* CONFIG_CGROUP_PERF */
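Since perf_cgroup_match() is now recursive, an event attached to a
cgroup also counts tasks running in that cgroup's descendants; for
example (a sketch; the group name is assumed to exist under the
perf_event hierarchy):

	# perf stat -a -e cycles -G A sleep 1

counts cycles for tasks in A and in any cgroup nested below it.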
mm/memcontrol.c
@@ -3321,43 +3321,6 @@ void mem_cgroup_destroy_cache(struct kmem_cache *cachep)
 	schedule_work(&cachep->memcg_params->destroy);
 }
 
-static char *memcg_cache_name(struct mem_cgroup *memcg, struct kmem_cache *s)
-{
-	char *name;
-	struct dentry *dentry;
-
-	rcu_read_lock();
-	dentry = rcu_dereference(memcg->css.cgroup->dentry);
-	rcu_read_unlock();
-
-	BUG_ON(dentry == NULL);
-
-	name = kasprintf(GFP_KERNEL, "%s(%d:%s)", s->name,
-			 memcg_cache_id(memcg), dentry->d_name.name);
-
-	return name;
-}
-
-static struct kmem_cache *kmem_cache_dup(struct mem_cgroup *memcg,
-					 struct kmem_cache *s)
-{
-	char *name;
-	struct kmem_cache *new;
-
-	name = memcg_cache_name(memcg, s);
-	if (!name)
-		return NULL;
-
-	new = kmem_cache_create_memcg(memcg, name, s->object_size, s->align,
-				      (s->flags & ~SLAB_PANIC), s->ctor, s);
-
-	if (new)
-		new->allocflags |= __GFP_KMEMCG;
-
-	kfree(name);
-	return new;
-}
-
 /*
  * This lock protects updaters, not readers. We want readers to be as fast as
  * they can, and they will either see NULL or a valid cache value. Our model
@@ -3367,6 +3330,44 @@ static struct kmem_cache *kmem_cache_dup(struct mem_cgroup *memcg,
  * will span more than one worker. Only one of them can create the cache.
  */
 static DEFINE_MUTEX(memcg_cache_mutex);
+
+/*
+ * Called with memcg_cache_mutex held
+ */
+static struct kmem_cache *kmem_cache_dup(struct mem_cgroup *memcg,
+					 struct kmem_cache *s)
+{
+	struct kmem_cache *new;
+	static char *tmp_name = NULL;
+
+	lockdep_assert_held(&memcg_cache_mutex);
+
+	/*
+	 * kmem_cache_create_memcg duplicates the given name and
+	 * cgroup_name for this name requires RCU context.
+	 * This static temporary buffer is used to prevent from
+	 * pointless shortliving allocation.
+	 */
+	if (!tmp_name) {
+		tmp_name = kmalloc(PATH_MAX, GFP_KERNEL);
+		if (!tmp_name)
+			return NULL;
+	}
+
+	rcu_read_lock();
+	snprintf(tmp_name, PATH_MAX, "%s(%d:%s)", s->name,
+		 memcg_cache_id(memcg), cgroup_name(memcg->css.cgroup));
+	rcu_read_unlock();
+
+	new = kmem_cache_create_memcg(memcg, tmp_name, s->object_size, s->align,
+				      (s->flags & ~SLAB_PANIC), s->ctor, s);
+
+	if (new)
+		new->allocflags |= __GFP_KMEMCG;
+
+	return new;
+}
+
 static struct kmem_cache *memcg_create_kmem_cache(struct mem_cgroup *memcg,
 						  struct kmem_cache *cachep)
 {
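Given the "%s(%d:%s)" format above, a dup of the kmalloc-128 cache for a
memcg with cache id 2 whose cgroup is named "foo" would be called
"kmalloc-128(2:foo)" (illustrative values, not from this diff).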
@@ -5912,6 +5913,7 @@ static struct cftype mem_cgroup_files[] = {
 	},
 	{
 		.name = "use_hierarchy",
+		.flags = CFTYPE_INSANE,
 		.write_u64 = mem_cgroup_hierarchy_write,
 		.read_u64 = mem_cgroup_hierarchy_read,
 	},
@@ -6907,6 +6909,21 @@ static void mem_cgroup_move_task(struct cgroup *cont,
 }
 #endif
 
+/*
+ * Cgroup retains root cgroups across [un]mount cycles making it necessary
+ * to verify sane_behavior flag on each mount attempt.
+ */
+static void mem_cgroup_bind(struct cgroup *root)
+{
+	/*
+	 * use_hierarchy is forced with sane_behavior. cgroup core
+	 * guarantees that @root doesn't have any children, so turning it
+	 * on for the root memcg is enough.
+	 */
+	if (cgroup_sane_behavior(root))
+		mem_cgroup_from_cont(root)->use_hierarchy = true;
+}
+
 struct cgroup_subsys mem_cgroup_subsys = {
 	.name = "memory",
 	.subsys_id = mem_cgroup_subsys_id,
@@ -6917,6 +6934,7 @@ struct cgroup_subsys mem_cgroup_subsys = {
 	.can_attach = mem_cgroup_can_attach,
 	.cancel_attach = mem_cgroup_cancel_attach,
 	.attach = mem_cgroup_move_task,
+	.bind = mem_cgroup_bind,
 	.base_cftypes = mem_cgroup_files,
 	.early_init = 0,
 	.use_id = 1,
security/device_cgroup.c
@@ -25,6 +25,12 @@
 
 static DEFINE_MUTEX(devcgroup_mutex);
 
+enum devcg_behavior {
+	DEVCG_DEFAULT_NONE,
+	DEVCG_DEFAULT_ALLOW,
+	DEVCG_DEFAULT_DENY,
+};
+
 /*
  * exception list locking rules:
  * hold devcgroup_mutex for update/read.
@@ -42,10 +48,9 @@ struct dev_exception_item {
 struct dev_cgroup {
 	struct cgroup_subsys_state css;
 	struct list_head exceptions;
-	enum {
-		DEVCG_DEFAULT_ALLOW,
-		DEVCG_DEFAULT_DENY,
-	} behavior;
+	enum devcg_behavior behavior;
+	/* temporary list for pending propagation operations */
+	struct list_head propagate_pending;
 };
 
 static inline struct dev_cgroup *css_to_devcgroup(struct cgroup_subsys_state *s)
@@ -182,35 +187,62 @@ static void dev_exception_clean(struct dev_cgroup *dev_cgroup)
 	__dev_exception_clean(dev_cgroup);
 }
 
+static inline bool is_devcg_online(const struct dev_cgroup *devcg)
+{
+	return (devcg->behavior != DEVCG_DEFAULT_NONE);
+}
+
+/**
+ * devcgroup_online - initializes devcgroup's behavior and exceptions based on
+ *		      parent's
+ * @cgroup: cgroup getting online
+ * returns 0 in case of success, error code otherwise
+ */
+static int devcgroup_online(struct cgroup *cgroup)
+{
+	struct dev_cgroup *dev_cgroup, *parent_dev_cgroup = NULL;
+	int ret = 0;
+
+	mutex_lock(&devcgroup_mutex);
+	dev_cgroup = cgroup_to_devcgroup(cgroup);
+	if (cgroup->parent)
+		parent_dev_cgroup = cgroup_to_devcgroup(cgroup->parent);
+
+	if (parent_dev_cgroup == NULL)
+		dev_cgroup->behavior = DEVCG_DEFAULT_ALLOW;
+	else {
+		ret = dev_exceptions_copy(&dev_cgroup->exceptions,
+					  &parent_dev_cgroup->exceptions);
+		if (!ret)
+			dev_cgroup->behavior = parent_dev_cgroup->behavior;
+	}
+	mutex_unlock(&devcgroup_mutex);
+
+	return ret;
+}
+
+static void devcgroup_offline(struct cgroup *cgroup)
+{
+	struct dev_cgroup *dev_cgroup = cgroup_to_devcgroup(cgroup);
+
+	mutex_lock(&devcgroup_mutex);
+	dev_cgroup->behavior = DEVCG_DEFAULT_NONE;
+	mutex_unlock(&devcgroup_mutex);
+}
+
 /*
  * called from kernel/cgroup.c with cgroup_lock() held.
  */
 static struct cgroup_subsys_state *devcgroup_css_alloc(struct cgroup *cgroup)
 {
-	struct dev_cgroup *dev_cgroup, *parent_dev_cgroup;
-	struct cgroup *parent_cgroup;
-	int ret;
+	struct dev_cgroup *dev_cgroup;
 
 	dev_cgroup = kzalloc(sizeof(*dev_cgroup), GFP_KERNEL);
 	if (!dev_cgroup)
 		return ERR_PTR(-ENOMEM);
 	INIT_LIST_HEAD(&dev_cgroup->exceptions);
-	parent_cgroup = cgroup->parent;
-
-	if (parent_cgroup == NULL)
-		dev_cgroup->behavior = DEVCG_DEFAULT_ALLOW;
-	else {
-		parent_dev_cgroup = cgroup_to_devcgroup(parent_cgroup);
-		mutex_lock(&devcgroup_mutex);
-		ret = dev_exceptions_copy(&dev_cgroup->exceptions,
-					  &parent_dev_cgroup->exceptions);
-		dev_cgroup->behavior = parent_dev_cgroup->behavior;
-		mutex_unlock(&devcgroup_mutex);
-		if (ret) {
-			kfree(dev_cgroup);
-			return ERR_PTR(ret);
-		}
-	}
+	INIT_LIST_HEAD(&dev_cgroup->propagate_pending);
+	dev_cgroup->behavior = DEVCG_DEFAULT_NONE;
 
 	return &dev_cgroup->css;
 }
@@ -304,9 +336,11 @@ static int devcgroup_seq_read(struct cgroup *cgroup, struct cftype *cft,
  * verify if a certain access is allowed.
  * @dev_cgroup: dev cgroup to be tested against
  * @refex: new exception
+ * @behavior: behavior of the exception
  */
-static int may_access(struct dev_cgroup *dev_cgroup,
-		      struct dev_exception_item *refex)
+static bool may_access(struct dev_cgroup *dev_cgroup,
+		       struct dev_exception_item *refex,
+		       enum devcg_behavior behavior)
 {
 	struct dev_exception_item *ex;
 	bool match = false;
@@ -330,18 +364,29 @@ static bool may_access(struct dev_cgroup *dev_cgroup,
 			break;
 	}
 
-	/*
-	 * In two cases we'll consider this new exception valid:
-	 * - the dev cgroup has its default policy to allow + exception list:
-	 *   the new exception should *not* match any of the exceptions
-	 *   (behavior == DEVCG_DEFAULT_ALLOW, !match)
-	 * - the dev cgroup has its default policy to deny + exception list:
-	 *   the new exception *should* match the exceptions
-	 *   (behavior == DEVCG_DEFAULT_DENY, match)
-	 */
-	if ((dev_cgroup->behavior == DEVCG_DEFAULT_DENY) == match)
-		return 1;
-	return 0;
+	if (dev_cgroup->behavior == DEVCG_DEFAULT_ALLOW) {
+		if (behavior == DEVCG_DEFAULT_ALLOW) {
+			/* the exception will deny access to certain devices */
+			return true;
+		} else {
+			/* the exception will allow access to certain devices */
+			if (match)
+				/*
+				 * a new exception allowing access shouldn't
+				 * match an parent's exception
+				 */
+				return false;
+			return true;
+		}
+	} else {
+		/* only behavior == DEVCG_DEFAULT_DENY allowed here */
+		if (match)
+			/* parent has an exception that matches the proposed */
+			return true;
+		else
+			return false;
+	}
+	return false;
 }
 
 /*
@@ -358,7 +403,7 @@ static int parent_has_perm(struct dev_cgroup *childcg,
 	if (!pcg)
 		return 1;
 	parent = cgroup_to_devcgroup(pcg);
-	return may_access(parent, ex);
+	return may_access(parent, ex, childcg->behavior);
 }
 
 /**
@@ -374,6 +419,111 @@ static inline int may_allow_all(struct dev_cgroup *parent)
 	return parent->behavior == DEVCG_DEFAULT_ALLOW;
 }
 
+/**
+ * revalidate_active_exceptions - walks through the active exception list and
+ *				  revalidates the exceptions based on parent's
+ *				  behavior and exceptions. The exceptions that
+ *				  are no longer valid will be removed.
+ *				  Called with devcgroup_mutex held.
+ * @devcg: cgroup which exceptions will be checked
+ *
+ * This is one of the three key functions for hierarchy implementation.
+ * This function is responsible for re-evaluating all the cgroup's active
+ * exceptions due to a parent's exception change.
+ * Refer to Documentation/cgroups/devices.txt for more details.
+ */
+static void revalidate_active_exceptions(struct dev_cgroup *devcg)
+{
+	struct dev_exception_item *ex;
+	struct list_head *this, *tmp;
+
+	list_for_each_safe(this, tmp, &devcg->exceptions) {
+		ex = container_of(this, struct dev_exception_item, list);
+		if (!parent_has_perm(devcg, ex))
+			dev_exception_rm(devcg, ex);
+	}
+}
+
+/**
+ * get_online_devcg - walks the cgroup tree and fills a list with the online
+ *		      groups
+ * @root: cgroup used as starting point
+ * @online: list that will be filled with online groups
+ *
+ * Must be called with devcgroup_mutex held. Grabs RCU lock.
+ * Because devcgroup_mutex is held, no devcg will become online or offline
+ * during the tree walk (see devcgroup_online, devcgroup_offline)
+ * A separated list is needed because propagate_behavior() and
+ * propagate_exception() need to allocate memory and can block.
+ */
+static void get_online_devcg(struct cgroup *root, struct list_head *online)
+{
+	struct cgroup *pos;
+	struct dev_cgroup *devcg;
+
+	lockdep_assert_held(&devcgroup_mutex);
+
+	rcu_read_lock();
+	cgroup_for_each_descendant_pre(pos, root) {
+		devcg = cgroup_to_devcgroup(pos);
+		if (is_devcg_online(devcg))
+			list_add_tail(&devcg->propagate_pending, online);
+	}
+	rcu_read_unlock();
+}
+
+/**
+ * propagate_exception - propagates a new exception to the children
+ * @devcg_root: device cgroup that added a new exception
+ * @ex: new exception to be propagated
+ *
+ * returns: 0 in case of success, != 0 in case of error
+ */
+static int propagate_exception(struct dev_cgroup *devcg_root,
+			       struct dev_exception_item *ex)
+{
+	struct cgroup *root = devcg_root->css.cgroup;
+	struct dev_cgroup *devcg, *parent, *tmp;
+	int rc = 0;
+	LIST_HEAD(pending);
+
+	get_online_devcg(root, &pending);
+
+	list_for_each_entry_safe(devcg, tmp, &pending, propagate_pending) {
+		parent = cgroup_to_devcgroup(devcg->css.cgroup->parent);
+
+		/*
+		 * in case both root's behavior and devcg is allow, a new
+		 * restriction means adding to the exception list
+		 */
+		if (devcg_root->behavior == DEVCG_DEFAULT_ALLOW &&
+		    devcg->behavior == DEVCG_DEFAULT_ALLOW) {
+			rc = dev_exception_add(devcg, ex);
+			if (rc)
+				break;
+		} else {
+			/*
+			 * in the other possible cases:
+			 * root's behavior: allow, devcg's: deny
+			 * root's behavior: deny, devcg's: deny
+			 * the exception will be removed
+			 */
+			dev_exception_rm(devcg, ex);
+		}
+		revalidate_active_exceptions(devcg);
+
+		list_del_init(&devcg->propagate_pending);
+	}
+	return rc;
+}
+
+static inline bool has_children(struct dev_cgroup *devcgroup)
+{
+	struct cgroup *cgrp = devcgroup->css.cgroup;
+
+	return !list_empty(&cgrp->children);
+}
+
 /*
  * Modify the exception list using allow/deny rules.
  * CAP_SYS_ADMIN is needed for this. It's at least separate from CAP_MKNOD
@@ -392,7 +542,7 @@ static int devcgroup_update_access(struct dev_cgroup *devcgroup,
 {
 	const char *b;
 	char temp[12];		/* 11 + 1 characters needed for a u32 */
-	int count, rc;
+	int count, rc = 0;
 	struct dev_exception_item ex;
 	struct cgroup *p = devcgroup->css.cgroup;
 	struct dev_cgroup *parent = NULL;
@@ -410,6 +560,9 @@ static int devcgroup_update_access(struct dev_cgroup *devcgroup,
 	case 'a':
 		switch (filetype) {
 		case DEVCG_ALLOW:
+			if (has_children(devcgroup))
+				return -EINVAL;
+
 			if (!may_allow_all(parent))
 				return -EPERM;
 			dev_exception_clean(devcgroup);
@@ -423,6 +576,9 @@ static int devcgroup_update_access(struct dev_cgroup *devcgroup,
 				return rc;
 			break;
 		case DEVCG_DENY:
+			if (has_children(devcgroup))
+				return -EINVAL;
+
 			dev_exception_clean(devcgroup);
 			devcgroup->behavior = DEVCG_DEFAULT_DENY;
 			break;
@@ -517,22 +673,28 @@ static int devcgroup_update_access(struct dev_cgroup *devcgroup,
 			dev_exception_rm(devcgroup, &ex);
 			return 0;
 		}
-		return dev_exception_add(devcgroup, &ex);
+		rc = dev_exception_add(devcgroup, &ex);
+		break;
 	case DEVCG_DENY:
 		/*
 		 * If the default policy is to deny by default, try to remove
 		 * an matching exception instead. And be silent about it: we
 		 * don't want to break compatibility
 		 */
-		if (devcgroup->behavior == DEVCG_DEFAULT_DENY) {
+		if (devcgroup->behavior == DEVCG_DEFAULT_DENY)
 			dev_exception_rm(devcgroup, &ex);
-			return 0;
-		}
-		return dev_exception_add(devcgroup, &ex);
+		else
+			rc = dev_exception_add(devcgroup, &ex);
+
+		if (rc)
+			break;
+		/* we only propagate new restrictions */
+		rc = propagate_exception(devcgroup, &ex);
+		break;
 	default:
-		return -EINVAL;
+		rc = -EINVAL;
 	}
-	return 0;
+	return rc;
 }
 
 static int devcgroup_access_write(struct cgroup *cgrp, struct cftype *cft,
@@ -571,17 +733,10 @@ struct cgroup_subsys devices_subsys = {
 	.can_attach = devcgroup_can_attach,
 	.css_alloc = devcgroup_css_alloc,
 	.css_free = devcgroup_css_free,
+	.css_online = devcgroup_online,
+	.css_offline = devcgroup_offline,
 	.subsys_id = devices_subsys_id,
 	.base_cftypes = dev_cgroup_files,
-
-	/*
-	 * While devices cgroup has the rudimentary hierarchy support which
-	 * checks the parent's restriction, it doesn't properly propagates
-	 * config changes in ancestors to their descendents. A child
-	 * should only be allowed to add more restrictions to the parent's
-	 * configuration. Fix it and remove the following.
-	 */
-	.broken_hierarchy = true,
 };
 
 /**
@@ -609,7 +764,7 @@ static int __devcgroup_check_permission(short type, u32 major, u32 minor,
 
 	rcu_read_lock();
 	dev_cgroup = task_devcgroup(current);
-	rc = may_access(dev_cgroup, &ex);
+	rc = may_access(dev_cgroup, &ex, dev_cgroup->behavior);
 	rcu_read_unlock();
 
 	if (!rc)