Newer policy will combine the file and policydb dfas, allowing for
better optimizations. However, to support older policy we need to
keep the ability to address the "file" dfa separately. So duplicate
the policydb as if it were the file dfa and set the appropriate start
state.
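A minimal sketch of the idea (the helper and field names here are
assumed for illustration, not necessarily the actual symbols):

    /* reuse the policydb dfa as the file dfa and point the file start
     * state at the policydb's file-class start state */
    profile->file.dfa = aa_get_dfa(profile->policy.dfa);
    profile->file.start = profile->policy.start[AA_CLASS_FILE];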
Signed-off-by: John Johansen <john.johansen@canonical.com>
The dfa is currently set up to be shared (it has the basis of
refcounting), but it can't actually be shared yet because the count
can't be increased.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Newer policy encodes more than just version in the version tag,
so add masking to make sure the comparison remains correct.
Note: this is fully compatible with older policy as it will never set
the bits being masked out.
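A minimal sketch of the masked comparison (the mask and version names
are illustrative only):

    if ((version & VERSION_MASK) != expected_version)
        return -EPROTONOSUPPORT;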
Signed-off-by: John Johansen <john.johansen@canonical.com>
When possible it's better to name a learning profile after the missing
profile in question. This allows for both more informative names and
for profile reuse.
Signed-off-by: John Johansen <john.johansen@canonical.com>
prepare_ns() will need to be called from alternate views, and namespaces
will need to be created via different interfaces. So refactor and
allow specifying the view ns.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Rename to indicate the test is only about whether path mediation is used,
not whether other types of mediation might be used.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Policy namespaces will be diverging from profile management and
expanding, so put the namespace code in its own file.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Prepare to mark sensitive kernel structures for randomization by making
sure they're using designated initializers. These were identified during
allyesconfig builds of x86, arm, and arm64, with most initializer fixes
extracted from grsecurity.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: John Johansen <john.johansen@canonical.com>
Calling kmalloc(GFP_NOIO) with order == PAGE_ALLOC_COSTLY_ORDER is not
recommended because it might fall into an infinite retry loop without
invoking the OOM killer.
Since aa_dfa_unpack() is the only caller of kvzalloc() (it reaches
kvzalloc() via unpack_table()) and aa_dfa_unpack() already does
kzalloc(GFP_KERNEL), it is safe to use GFP_KERNEL from
__aa_kvmalloc().
Since aa_simple_write_to_buffer() is the only caller of kvmalloc(),
and aa_simple_write_to_buffer() calls copy_from_user(), which is a
GFP_KERNEL context (see memdup_user_nul()), it is safe to use
GFP_KERNEL from __aa_kvmalloc().
Therefore, replace GFP_NOIO with GFP_KERNEL. Also, since we have a
vmalloc() fallback, add __GFP_NORETRY so that we don't invoke the OOM
killer from kmalloc(GFP_KERNEL) with order == PAGE_ALLOC_COSTLY_ORDER.
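A minimal sketch of the resulting allocation pattern (illustrative; the
real __aa_kvmalloc() also handles zeroing for the kvzalloc() case):

    buffer = kmalloc(size, GFP_KERNEL | __GFP_NORETRY);
    if (!buffer)
        buffer = vmalloc(size);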
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: John Johansen <john.johansen@canonical.com>
After a policy replacement, the task cred may be out of date and need
to be updated. However, change_hat uses the stale profiles from the
out-of-date cred, resulting in either a stale profile being applied
or an incorrect failure when searching for a hat profile, because the
hat has been migrated to the new parent profile.
Fixes: 01e2b670aa (failure to find hat)
Fixes: 898127c34e (stale policy being applied)
Bugzilla: https://bugzilla.suse.com/show_bug.cgi?id=1000287
Cc: stable@vger.kernel.org
Signed-off-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
CURRENT_TIME macro is not appropriate for filesystems as it
doesn't use the right granularity for filesystem timestamps.
Use current_time() instead.
CURRENT_TIME is also not y2038 safe.
This is also in preparation for the patch that transitions
vfs timestamps to use 64 bit time and hence makes them
y2038 safe. As part of this effort, current_time() will be
extended to do range checks. Hence, it is necessary for all
file system timestamps to use current_time(). Also,
current_time() will be transitioned along with vfs to be
y2038 safe.
Note that whenever a single call to current_time() is used
to change timestamps in different inodes, it is because they
share the same time granularity.
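A typical conversion looks like the following (illustrative;
current_time() truncates to the granularity of the inode's superblock):

    inode->i_mtime = inode->i_ctime = CURRENT_TIME;        /* old */
    inode->i_mtime = inode->i_ctime = current_time(inode); /* new */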
Signed-off-by: Deepa Dinamani <deepa.kernel@gmail.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Felipe Balbi <balbi@kernel.org>
Acked-by: Steven Whitehouse <swhiteho@redhat.com>
Acked-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Acked-by: David Sterba <dsterba@suse.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
The newly added Kconfig option could never work and just causes a build error
when disabled:
security/apparmor/lsm.c:675:25: error: 'CONFIG_SECURITY_APPARMOR_HASH_DEFAULT' undeclared here (not in a function)
bool aa_g_hash_policy = CONFIG_SECURITY_APPARMOR_HASH_DEFAULT;
The problem is that the macro is undefined in this case, and we need to
use the IS_ENABLED() helper to turn it into a boolean constant.
Another minor problem with the original patch is that the option is
offered in sysfs even when SECURITY_APPARMOR_HASH is not enabled, so
this patch also hides the option in that case.
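The shape of the fix, using the IS_ENABLED() helper (a sketch of the
change, not the full patch):

    bool aa_g_hash_policy = IS_ENABLED(CONFIG_SECURITY_APPARMOR_HASH_DEFAULT);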
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 6059f71f1e ("apparmor: add parameter to control whether policy hashing is used")
Signed-off-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
When proc_pid_attr_write() was changed to use memdup_user(), apparmor's
(interface violating) assumption that the setprocattr buffer was always
a single page was violated.
The size test is not strictly speaking needed as proc_pid_attr_write()
will reject anything larger, but for the sake of robustness we can keep
it in.
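A hedged sketch of that defensive check (the exact bound and error
value here are illustrative):

    if (size >= PAGE_SIZE)
        return -E2BIG;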
SMACK and SELinux look safe to me, but somebody else should probably
have a look just in case.
Based on original patch from Vegard Nossum <vegard.nossum@oracle.com>
modified for the case that apparmor provides null termination.
Fixes: bb646cdb12
Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: John Johansen <john.johansen@canonical.com>
Cc: Paul Moore <paul@paul-moore.com>
Cc: Stephen Smalley <sds@tycho.nsa.gov>
Cc: Eric Paris <eparis@parisplace.org>
Cc: Casey Schaufler <casey@schaufler-ca.com>
Cc: stable@kernel.org
Signed-off-by: John Johansen <john.johansen@canonical.com>
Reviewed-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Do not copy the uninitialized fields th.td_hilen and th.td_data.
Signed-off-by: Heinrich Schuchardt <xypron.glpk@gmx.de>
Signed-off-by: John Johansen <john.johansen@canonical.com>
The policy_lock parameter is a one-way switch that prevents policy
from being further modified. Unfortunately, some of the module parameters
can effectively modify policy by turning off enforcement.
Split policy_admin_capable into a view check and a full admin check,
and update the admin check to test the policy_lock parameter.
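A rough sketch of the split (the capability test is simplified and the
parameter variable name is assumed):

    bool policy_view_capable(void)
    {
        return capable(CAP_MAC_ADMIN);
    }

    bool policy_admin_capable(void)
    {
        return policy_view_capable() && !aa_g_lock_policy;
    }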
Signed-off-by: John Johansen <john.johansen@canonical.com>
BugLink: http://bugs.launchpad.net/bugs/1592547
unpack_dfa() may return NULL when the dfa is not present, but
profile_unpack() was not checking for that case.
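A hedged sketch of the missing check (the error handling and variable
names are illustrative):

    dfa = unpack_dfa(e);
    if (!dfa) {
        error = -EPROTO;
        goto fail;
    }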
Signed-off-by: John Johansen <john.johansen@canonical.com>
While using AppArmor, CAP_SYS_RESOURCE is insufficient to call prlimit
on another task. The only other example of AppArmor mediating access to
another, already running, task (ignoring fork+exec) is ptrace.
The AppArmor model for ptrace is that one of the following must be true:
1) The tracer is unconfined
2) The tracer is in complain mode
3) The tracer and tracee are confined by the same profile
4) The tracer is confined but has CAP_SYS_PTRACE
1), 2), and 3) are already true for setrlimit.
We can match the ptrace model just by allowing CAP_SYS_RESOURCE.
We still test the values of the rlimit since it can always be overridden
using a value that means unlimited for a particular resource.
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: John Johansen <john.johansen@canonical.com>
list_next_entry() is already defined in list.h, so replace
list_entry_next() with it.
Signed-off-by: Geliang Tang <geliangtang@163.com>
Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Signed-off-by: John Johansen <john.johansen@canonical.com>
When finding a child profile via an rcu critical section, the profile
may be put and scheduled for deletion after the child is found but
before its refcount is incremented.
Protect against this by repeating the lookup if the profile's refcount
is 0 and it is on its way to deletion.
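A minimal sketch of the retry pattern (helper names assumed; the key
point is the kref_get_unless_zero()-style conditional get under RCU):

    rcu_read_lock();
    do {
        profile = __find_child(&parent->base.profiles, name);
    } while (profile && !kref_get_unless_zero(&profile->count));
    rcu_read_unlock();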
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
The target profile name was not being correctly audited in a few
cases because the target variable was not being set and the gotos
bypassed the code that sets it at the apply: label.
Since it is always based on new_profile, just drop the target variable
and conditionally report based on new_profile.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
Currently logging of a successful profile load only logs the basename
of the profile. This can result in confusion when a child profile has
the same name as another profile in the set. Logging the hname
will ensure there is no confusion.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
Currently only the profile that is causing the failure is logged. This
makes it more confusing than necessary to determine which profiles loaded
and which didn't. So make sure to log success and failure messages for
all profiles in the set being loaded.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
Internal mounts are not mounted anywhere and as such should be treated
as disconnected paths.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>