Commit 54e70ec5eb introduced a bidirectional check that was meant to
verify mutual WRITE access between two labels. Due to a typo, the
second check was incorrect.
Signed-off-by: Lukasz Pawelczyk <l.pawelczyk@samsung.com>
People keep asking me for permissive mode, and I keep saying "no".
Permissive mode is wrong for more reasons than I can enumerate,
but the compelling one is that it's once on, never off.
Nonetheless, there is an argument to be made for running a
process with lots of permissions, logging which are required,
and then locking the process down. There wasn't a way to do
that with Smack, but this provides it.
The notion is that you start out by giving the process an
appropriate Smack label, such as "ATBirds". You create rules
with a wide range of access and the "b" mode. On Tizen it
might be:
ATBirds System rwxalb
ATBirds User rwxalb
ATBirds _ rwxalb
User ATBirds wb
System ATBirds wb
Accesses that fail will generate audit records. Accesses
that succeed because of rules marked with a "b" generate
log messages identifying the rule, the program and as much
object information as is convenient.
When the system is properly configured and the programs are brought
in line with the labeling scheme, the "b" mode can be removed from
the rules. When the system is ready for production, the facility can
be configured out.
This provides the developer the convenience of permissive
mode without creating a system that looks like it is
enforcing a policy while it is not.
Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
When a file is opened with CAP_MAC_OVERRIDE its label is not set.
Other calls may then access it after CAP_MAC_OVERRIDE has been
dropped from the process.
Signed-off-by: Marcin Niesluchowski <m.niesluchow@samsung.com>
Values of extended attributes are stored as binary blobs.
NULL-terminating them isn't required; it just wastes disk space and
confuses command-line tools like getfattr, because they have to print
that zero byte at the end.
This patch removes the terminating zero byte from the initial security
label in smack_inode_init_security and cuts it out in
smack_inode_getsecurity, which is used by the getxattr syscall. This
change seems completely safe, because smk_parse_smack() ignores
everything after the first zero byte.
Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Zero-length security labels are invalid, but the kernel should handle
them gracefully. This patch fixes a kernel panic after setting a
zero-length security label:
# attr -S -s "SMACK64" -V "" file
And after writing a zero-length string into the smackfs files syslog
and onlycap:
# python -c 'import os; os.write(1, "")' > /smack/syslog
The problem is caused by brain-damaged logic in smk_parse_smack(),
which takes a pointer to a buffer and its length, but assumes the
buffer is zero-terminated whenever the length is less than or equal to
zero. Unfortunately, callers of this function are widespread and a
proper fix requires serious refactoring.
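The guard that is needed in the parsing path is conceptually just the
following (a sketch only; the helper name and placement are
illustrative, not the actual fix):

static int smk_label_nonempty(const char *value, int size)
{
        /* reject empty or zero-length labels before any parsing */
        if (value == NULL || size <= 0 || value[0] == '\0')
                return -EINVAL;
        return 0;
}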
Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
The security operation ->inode_listsecurity is used to generate the
list of available extended attributes for the listxattr syscall.
Currently it is used only by nfs4 or when the filesystem doesn't
provide i_op->listxattr.
The list is a set of NULL-terminated names, one after the other, so
this method must include the terminating zero byte in its result.
The function must also return the length even if the string does not
fit into the output buffer or the buffer is NULL; see the similar
method in SELinux and man listxattr.
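For reference, a minimal implementation with the behaviour described
above looks roughly like this (modelled on similar LSM hooks; the
function name is a placeholder):

static int example_inode_listsecurity(struct inode *inode,
                                      char *buffer, size_t buffer_size)
{
        int len = sizeof(XATTR_NAME_SMACK);     /* includes the trailing NUL */

        if (buffer != NULL && len <= buffer_size)
                memcpy(buffer, XATTR_NAME_SMACK, len);
        /* report the required length even when nothing was copied */
        return len;
}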
Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Historically the NetLabel LSM secattr catmap functions and data
structures have had very long names which makes a mess of the NetLabel
code and anyone who uses NetLabel. This patch renames the catmap
functions and structures from "*_secattr_catmap_*" to just "*_catmap_*"
which improves things greatly.
There are no substantial code or logic changes in this patch.
Signed-off-by: Paul Moore <pmoore@redhat.com>
Tested-by: Casey Schaufler <casey@schaufler-ca.com>
The NetLabel secattr catmap functions, and the SELinux import/export
glue routines, were broken in many horrible ways and the SELinux glue
code fiddled with the NetLabel catmap structures in ways that we
probably shouldn't allow. At some point this "worked", but that was
likely due to a bit of dumb luck and sub-par testing (both inflicted
by yours truly). This patch corrects these problems by basically
gutting the code in favor of something less obtuse and restoring the
NetLabel abstractions in the SELinux catmap glue code.
Everything is working now, and if it decides to break itself in the
future this code will be much easier to debug than the code it
replaces.
One noteworthy side effect of the changes is that it is no longer
necessary to allocate a NetLabel catmap before calling one of the
NetLabel APIs to set a bit in the catmap. NetLabel will automatically
allocate the catmap nodes when needed, resulting in fewer allocations
when the lowest bit is greater than 255 and less code in the LSMs.
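As an illustration, an LSM can now build a catmap along these lines
(the signatures shown are assumptions based on the renamed NetLabel
API):

struct netlbl_lsm_catmap *catmap = NULL;
int rc;

/* no explicit catmap allocation; NetLabel creates nodes as needed */
rc = netlbl_catmap_setbit(&catmap, 1024, GFP_ATOMIC);
if (rc != 0)
        netlbl_catmap_free(catmap);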
Cc: stable@vger.kernel.org
Reported-by: Christian Evans <frodox@zoho.com>
Signed-off-by: Paul Moore <pmoore@redhat.com>
Tested-by: Casey Schaufler <casey@schaufler-ca.com>
The NetLabel category (catmap) functions have a problem in that they
assume categories will be set in an increasing manner, e.g. the next
category set will always be larger than the last. Unfortunately, this
is not a valid assumption and could result in problems when attempting
to set categories less than the startbit in the lowest catmap node.
In some cases kernel panics and other nasties can result.
This patch corrects the problem by checking for this and allocating a
new catmap node instance and placing it at the front of the list.
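Conceptually, the fix does something like the following when the
requested bit lies before the first node (the field, helper, and
constant names here are illustrative, not the exact NetLabel code):

if (offset < catmap->startbit) {
        /* allocate a new node and place it at the front of the list */
        prev = catmap_node_alloc(flags);        /* hypothetical helper */
        if (prev == NULL)
                return -ENOMEM;
        prev->startbit = round_down(offset, CATMAP_NODE_BITS); /* hypothetical constant */
        prev->next = catmap;
        catmap = prev;
}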
Cc: stable@vger.kernel.org
Reported-by: Christian Evans <frodox@zoho.com>
Signed-off-by: Paul Moore <pmoore@redhat.com>
Tested-by: Casey Schaufler <casey@schaufler-ca.com>
This reverts commit 4da6daf4d3.
Unfortunately, the commit in question caused problems with Bluetooth
devices, specifically it caused them to get caught in the newly
created BUG_ON() check. The AF_ALG problem still exists, but will be
addressed in a future patch.
Cc: stable@vger.kernel.org
Signed-off-by: Paul Moore <pmoore@redhat.com>
Commit fc7c70e "KEYS: struct key_preparsed_payload should have two
payload pointers" erroneously modified encrypted-keys. This patch
reverts the change to that file.
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: David Howells <dhowells@redhat.com>
The "security: introduce kernel_fw_from_file hook" patch defined a
new security hook to evaluate any loaded firmware that wasn't built
into the kernel.
This patch defines ima_fw_from_file(), which is called from the new
security hook, to measure and/or appraise the loaded firmware's
integrity.
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
In order to validate the contents of firmware being loaded, there must be
a hook to evaluate any loaded firmware that wasn't built into the kernel
itself. Without this, there is a risk that a root user could load malicious
firmware designed to mount an attack against kernel memory (e.g. via DMA).
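The shape of the hook is roughly the following (a sketch of the
interface described above, not a quote of the final header):

/* return 0 to allow the firmware load, or a negative errno to reject it */
int security_kernel_fw_from_file(struct file *file, char *buf, size_t size);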
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Takashi Iwai <tiwai@suse.de>
This is effectively a revert of 7b9a7ec565, plus fixing it a
different way...
We found, when trying to run an application from an application which
had dropped privs, that the kernel does security checks on undefined
capability bits. This was ESPECIALLY difficult to debug as those
undefined bits are hidden from /proc/$PID/status.
Consider a root application which drops all capabilities from ALL 4
capability sets. We assume, since the application is going to set
eff/perm/inh from an array, that it will clear not only the defined
caps less than CAP_LAST_CAP, but also the higher 28-ish bits which are
undefined future capabilities.
The BSET gets cleared differently. Instead it is cleared one bit at a
time. The problem here is that in security/commoncap.c::cap_task_prctl()
we actually check the validity of a capability being read. So any task
which attempts to 'read all things set in bset' followed by 'unset all
things set in bset' will not even attempt to unset the undefined bits
higher than CAP_LAST_CAP.
So the 'parent' will look something like:
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: ffffffc000000000
All of this 'should' be fine, given that these are undefined bits
that aren't supposed to have anything to do with permissions. But
they do...
So let's now consider a task which cleared the eff/perm/inh completely
and cleared all of the valid caps in the bset (but not the invalid
caps it couldn't read out of the kernel). We know that this is exactly
what the libcap-ng library does and what the go capabilities library
does. They both leave you in that above situation if you try to clear
all of your capabilities from all 4 sets. If that root task calls
execve()
the child task will pick up all caps not blocked by the bset. The bset
however does not block bits higher than CAP_LAST_CAP. So now the child
task has bits in eff which are not in the parent. These are
'meaningless' undefined bits, but still bits which the parent doesn't
have.
The problem is now in cred_cap_issubset() (or any operation which
does a subset test), as the child, while a subset for valid cap bits,
is not a subset for invalid cap bits! So now, during commit creds, we
mark the child as not dumpable, given that it is 'more privileged'
than its parent. It also means the parent cannot ptrace the child,
and other stupidity.
The solution here:
1) stop hiding capability bits in status
This makes debugging easier!
2) stop giving any task undefined capability bits. It's simple: if
you don't put those invalid bits in CAP_FULL_SET you won't get them in
init and you won't get them in any other task either.
This fixes the cap_issubset() tests and resulting fallout (which made
the init task in a docker container untraceable, among other things).
3) mask out undefined bits when sys_capset() is called as it might use
~0, ~0 to denote 'all capabilities' for backward/forward compatibility.
This lets 'capsh --caps="all=eip" -- -c /bin/bash' run.
4) mask out undefined bits when we read a file capability off of
disk, as again likely all bits are set in the xattr for
forward/backward compatibility.
This lets 'setcap all+pe /bin/bash; /bin/bash' run (a rough sketch of
the masking in points 3 and 4 follows below).
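A rough sketch of the masking in points 3 and 4 (cap_intersect() and
CAP_FULL_SET are existing capability helpers; the wrapper name is made
up for this illustration):

static inline kernel_cap_t drop_undefined_caps(kernel_cap_t caps)
{
        /* clear any bits above CAP_LAST_CAP */
        return cap_intersect(caps, CAP_FULL_SET);
}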
Signed-off-by: Eric Paris <eparis@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Andrew Vagin <avagin@openvz.org>
Cc: Andrew G. Morgan <morgan@kernel.org>
Cc: Serge E. Hallyn <serge.hallyn@canonical.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Steve Grubb <sgrubb@redhat.com>
Cc: Dan Walsh <dwalsh@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: James Morris <james.l.morris@oracle.com>
In cap_task_prctl(), we would allocate a credential unconditionally
and then check whether we support the requested function. If not, we
would release this credential with abort_creds(), which uses RCU. On
some architectures, such as powerpc, sys_prctl is heavily used to
get/set the floating point exception mode, so the unnecessary
allocation and release of credentials not only introduces runtime
overhead but can also cause OOM due to the RCU implementation.
This patch removes abort_creds() from cap_task_prctl() by calling
prepare_creds() only when we need to modify it.
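The resulting pattern is roughly the following: only the option that
actually modifies credentials allocates them (a simplified sketch, not
the full function):

case PR_CAPBSET_DROP:
        if (!ns_capable(current_user_ns(), CAP_SETPCAP))
                return -EPERM;
        if (!cap_valid(arg2))
                return -EINVAL;
        new = prepare_creds();          /* allocated only on this path */
        if (!new)
                return -ENOMEM;
        cap_lower(new->cap_bset, arg2);
        return commit_creds(new);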
Reported-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Reviewed-by: Paul Moore <paul@paul-moore.com>
Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Provide key preparsing for the request_key_auth key type so that we can make
preparsing mandatory. This does nothing as this type can only be set up
internally to the kernel.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Steve Dickson <steved@redhat.com>
Acked-by: Jeff Layton <jlayton@primarydata.com>
Provide key preparsing in the keyring so that we can make preparsing
mandatory. For keyrings, however, only an empty payload is permitted.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Steve Dickson <steved@redhat.com>
Acked-by: Jeff Layton <jlayton@primarydata.com>
Make use of key preparsing in the big key type so that quota size determination
can take place prior to keyring locking when a key is being added.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Steve Dickson <steved@redhat.com>
Make use of key preparsing in user-defined and logon keys so that quota size
determination can take place prior to keyring locking when a key is being
added.
Also the idmapper key types need to change to match as they use the
user-defined key type routines.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Steve Dickson <steved@redhat.com>
Acked-by: Jeff Layton <jlayton@primarydata.com>
Call the ->free_preparse() key type op even after ->preparse() returns
an error, as it is responsible for cleaning up type-specific state.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Steve Dickson <steved@redhat.com>
Acked-by: Jeff Layton <jlayton@primarydata.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Allow a key type's preparsing routine to set the expiry time for a key.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Steve Dickson <steved@redhat.com>
Acked-by: Jeff Layton <jlayton@primarydata.com>
Reviewed-by: Sage Weil <sage@redhat.com>
struct key_preparsed_payload should have two payload pointers to correspond
with those in struct key.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Steve Dickson <steved@redhat.com>
Acked-by: Jeff Layton <jlayton@primarydata.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Since seccomp transitions between threads require updates to the
no_new_privs flag to be atomic, the flag must be part of an atomic flag
set. This moves the nnp flag into a separate task field, and introduces
accessors.
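The accessors end up looking roughly like this (the flag and helper
names follow the upstream naming and are shown as an illustration; the
macro plumbing that generates them is omitted):

static inline bool task_no_new_privs(struct task_struct *p)
{
        return test_bit(PFA_NO_NEW_PRIVS, &p->atomic_flags);
}

static inline void task_set_no_new_privs(struct task_struct *p)
{
        set_bit(PFA_NO_NEW_PRIVS, &p->atomic_flags);
}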
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Provide a generic instantiation function for key types that use the preparse
hook. This makes it easier to prereserve key quota before keyrings get locked
to retain the new key.
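For illustration, a key type that provides ->preparse() can then
simply point ->instantiate at the generic helper (the type and the
example_* functions are placeholders):

static struct key_type key_type_example = {
        .name           = "example",
        .preparse       = example_preparse,
        .free_preparse  = example_free_preparse,
        .instantiate    = generic_key_instantiate,
        .destroy        = example_destroy,
};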
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Steve Dickson <steved@redhat.com>
Acked-by: Jeff Layton <jlayton@primarydata.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Special kernel keys, such as those used to hold DNS results for AFS, CIFS and
NFS and those used to hold idmapper results for NFS, used to be
'invalidateable' with key_revoke(). However, since the default permissions for
keys were reduced:
Commit: 96b5c8fea6
KEYS: Reduce initial permissions on keys
it has become impossible to do this.
Add a key flag (KEY_FLAG_ROOT_CAN_INVAL) that will permit a key to be
invalidated by root. This should not be used for system keyrings, as
the garbage collector will try to remove any invalidated key. For
system keyrings, KEY_FLAG_ROOT_CAN_CLEAR can be used instead.
After this, from userspace, keyctl_invalidate() and "keyctl invalidate" can be
used by any possessor of CAP_SYS_ADMIN (typically root) to invalidate DNS and
idmapper keys. Invalidated keys are immediately garbage collected and will be
immediately rerequested if needed again.
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Steve Dickson <steved@redhat.com>
Require all keys added to the IMA keyring be signed by an
existing trusted key on the system trusted keyring.
Changelog v6:
- remove ifdef CONFIG_IMA_TRUSTED_KEYRING in C code - Dmitry
- update Kconfig dependency and help
- select KEYS_DEBUG_PROC_KEYS - Dmitry
Changelog v5:
- Move integrity_init_keyring() to init_ima() - Dmitry
- reset keyring[id] on failure - Dmitry
Changelog v1:
- don't link IMA trusted keyring to user keyring
Changelog:
- define stub integrity_init_keyring() function (reported-by Fengguang Wu)
- differentiate between regular and trusted keyring names.
- replace printk with pr_info (D. Kasatkin)
- only make the IMA keyring a trusted keyring (reported-by D. Kasatkin)
- define stub integrity_init_keyring() definition based on
CONFIG_INTEGRITY_SIGNATURE, not CONFIG_INTEGRITY_ASYMMETRIC_KEYS.
(reported-by Jim Davis)
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Acked-by: David Howells <dhowells@redhat.com>
Dot prefixed keyring names are supposed to be reserved for the
kernel, but add_key() calls key_get_type_from_user(), which
incorrectly verifies the 'type' field, not the 'description' field.
This patch verifies the 'description' field isn't dot prefixed,
when creating a new keyring, and removes the dot prefix test in
key_get_type_from_user().
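The added check is roughly the following (a sketch of the intent, not
the literal diff):

/* userspace may not create dot-prefixed (kernel-reserved) keyrings */
if (description[0] == '.' && strncmp(type, "keyring", 7) == 0)
        return -EPERM;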
Changelog v6:
- whitespace and other cleanup
Changelog v5:
- Only prevent userspace from creating a dot prefixed keyring, not
regular keys - Dmitry
Reported-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Acked-by: David Howells <dhowells@redhat.com>
The asynchronous hash API allows initiating a hash calculation and
then performing other tasks, while waiting for the hash calculation
to complete.
This patch introduces usage of double buffering for simultaneous
hashing and reading of the next chunk of data from storage.
Changes in v3:
- better comments
Signed-off-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Use of multiple-page collect buffers reduces:
1) the number of block IO requests
2) the number of asynchronous hash update requests
The second is important for HW-accelerated hashing, because a
significant amount of time is spent preparing the hash update
operation, which includes configuring the acceleration HW, the DMA
engine, etc. Thus, HW accelerators are more efficient when working
on large chunks of data.
This patch introduces usage of multi-page collect buffers. Buffer size
can be specified using 'ahash_bufsize' module parameter. Default buffer
size is 4096 bytes.
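Declaring such a parameter looks roughly like this (the variable name
and permissions are assumptions made for the sketch):

static unsigned long ima_ahash_bufsize = 4096;
module_param_named(ahash_bufsize, ima_ahash_bufsize, ulong, 0644);
MODULE_PARM_DESC(ahash_bufsize, "Maximum ahash buffer size");

Since IMA is built into the kernel, the parameter would typically be
set on the kernel command line, e.g. ima.ahash_bufsize=16384.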
Changes in v3:
- kernel parameter replaced with module parameter
Signed-off-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Async hash API allows the use of HW acceleration for hash calculation.
It may give significant performance gain and/or reduce power consumption,
which might be very beneficial for battery powered devices.
This patch introduces hash calculation using ahash API. ahash performance
depends on the data size and the particular HW. Depending on the specific
system, shash performance may be better.
This patch defines the 'ahash_minsize' module parameter, which is
used to define the minimal file size to use with ahash. If this
minimum file size is not set, or the file is smaller than defined by
the parameter, shash will be used.
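The selection logic boils down to something like this (the hash helper
names are hypothetical placeholders for the ahash and shash code
paths):

if (ima_ahash_minsize && i_size_read(file_inode(file)) >= ima_ahash_minsize)
        rc = ima_calc_file_hash_ahash(file, hash);      /* hypothetical */
else
        rc = ima_calc_file_hash_shash(file, hash);      /* hypothetical */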
Changes in v3:
- kernel parameter replaced with module parameter
- pr_crit replaced with pr_crit_ratelimited
- more comment changes - Mimi
Changes in v2:
- ima_ahash_size was renamed to ima_ahash
- ahash pre-allocation moved out of __init code to be able to use
ahash crypto modules; ahash is allocated once on first use
- hash calculation falls back to shash if ahash allocation/calculation fails
- complex initialization separated from variable declaration
- improved comments
Signed-off-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Replace spaces in op keyword labels in log output since userspace audit tools
can't parse orphaned keywords.
Reported-by: Steve Grubb <sgrubb@redhat.com>
Signed-off-by: Richard Guy Briggs <rgb@redhat.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
process_measurement() always calls ima_template_desc_current(),
including when an IMA policy has not been defined.
This patch delays template descriptor lookup until action is
determined.
Signed-off-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Before 2.6.39, inode->i_readcount was maintained by IMA. It was not
atomic and was protected by a spinlock. In 2.6.39, i_readcount was
converted to an atomic and its maintenance was moved to the VFS layer.
For some unclear reason the spinlock was replaced by i_mutex.
After analyzing the code, we came to conclusion that i_mutex locking is
unnecessary, especially when an IMA policy has not been defined.
This patch removes i_mutex locking from ima_rdwr_violation_check().
Signed-off-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
The sock_graft() hook has special handling for AF_INET, AF_INET6, and
AF_UNIX sockets, as those address families have special hooks which
label the sock before it is attached to its associated socket.
Unfortunately, the sock_graft() hook was missing a default approach
to labeling sockets which meant that any other address family which
made use of connections or the accept() syscall would find the
returned socket to be in an "unlabeled" state. This was recently
demonstrated by the kcrypto/AF_ALG subsystem and the newly released
cryptsetup package (cryptsetup v1.6.5 and later).
This patch preserves the special handling in selinux_sock_graft(),
but adds a default behavior - setting the sock's label equal to the
associated socket - which resolves the problem with AF_ALG and
presumably any other address family which makes use of accept().
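Conceptually, the new default case of the address-family switch is
just the following (sksec and isec stand for the sock's and the parent
socket inode's security blobs):

default:
        /* by default the new sock inherits the label of its socket */
        sksec->sid = isec->sid;
        break;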
Cc: stable@vger.kernel.org
Signed-off-by: Paul Moore <pmoore@redhat.com>
Tested-by: Milan Broz <gmazyland@gmail.com>
When flushing the AVC, such as during a policy load, the various
network caches are also flushed, with each making a call to
synchronize_net() which has shown to be expensive in some cases.
This patch consolidates the network cache flushes into a single AVC
callback which only calls synchronize_net() once for each AVC cache
flush.
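The consolidated callback looks roughly like this (the flush helpers
are the existing per-cache flush routines; the callback name is
illustrative):

static int sel_netcache_avc_callback(u32 event)
{
        if (event == AVC_CALLBACK_RESET) {
                sel_netif_flush();
                sel_netnode_flush();
                sel_netport_flush();
                synchronize_net();
        }
        return 0;
}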
Reported-by: Jaejyn Shin <flagon22bass@gmail.com>
Signed-off-by: Paul Moore <pmoore@redhat.com>
With the introduction of fair queued rwlock, recursive read_lock()
may hang the offending process if there is a write_lock() somewhere
in between.
With recursive read_lock checking enabled, the following error was
reported:
=============================================
[ INFO: possible recursive locking detected ]
3.16.0-rc1 #2 Tainted: G E
---------------------------------------------
load_policy/708 is trying to acquire lock:
(policy_rwlock){.+.+..}, at: [<ffffffff8125b32a>]
security_genfs_sid+0x3a/0x170
but task is already holding lock:
(policy_rwlock){.+.+..}, at: [<ffffffff8125b48c>]
security_fs_use+0x2c/0x110
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(policy_rwlock);
lock(policy_rwlock);
This patch fixes the recursive read_lock() of policy_rwlock by adding
a helper function, __security_genfs_sid(), which requires the caller
to take the lock before calling it. security_fs_use() was then
modified to call the new helper function.
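The split follows the usual locked/unlocked helper pattern, roughly
(a sketch; argument lists are abbreviated):

int security_genfs_sid(const char *fstype, char *path,
                       u16 sclass, u32 *sid)
{
        int rc;

        read_lock(&policy_rwlock);
        /* __security_genfs_sid() does the lookup and expects the lock held */
        rc = __security_genfs_sid(fstype, path, sclass, sid);
        read_unlock(&policy_rwlock);
        return rc;
}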
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Paul Moore <pmoore@redhat.com>
cond_read_node() should free the given node on the error path, as it
is not yet linked to p->cond_list. This is done via
cond_node_destroy(), but it is not called when next_entry() fails
before the expr loop.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Paul Moore <pmoore@redhat.com>
node->cur_state and len can be read in a single call to next_entry(),
and setting len before reading is a dead write, so it can be
eliminated.
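The combined read is roughly the following (a sketch of the pattern,
not the full function):

__le32 buf[2];
u32 len;
int rc;

rc = next_entry(buf, fp, sizeof(u32) * 2);
if (rc)
        return rc;
node->cur_state = le32_to_cpu(buf[0]);
len = le32_to_cpu(buf[1]);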
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
(Minor tweak to the length parameter in the call to next_entry())
Signed-off-by: Paul Moore <pmoore@redhat.com>
To increase compiler portability there is <linux/compiler.h>, which
provides convenience macros for various gcc constructs, e.g. __packed
for __attribute__((packed)).
This patch is part of a larger task I've taken on to clean up the
gcc-specific attributes and use the macros instead.
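For example (an illustrative struct, not one from the tree):

struct example_hdr {
        u8      type;
        u32     value;
} __packed;             /* instead of __attribute__((packed)) */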
Signed-off-by: Gideon Israel Dsouza <gidisrael@gmail.com>
Signed-off-by: Paul Moore <pmoore@redhat.com>
There is some code duplication for reading a string value during
policydb_read(). Add a str_read() helper to fix it.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Paul Moore <pmoore@redhat.com>
ARRAY_SIZE is more concise to use when the size of an array is divided
by the size of its type or the size of its first element.
The Coccinelle semantic patch that makes this change is as follows:
// <smpl>
@@
type T;
T[] E;
@@
- (sizeof(E)/sizeof(E[...]))
+ ARRAY_SIZE(E)
// </smpl>
Signed-off-by: Himangi Saraogi <himangi774@gmail.com>
Signed-off-by: Paul Moore <pmoore@redhat.com>