Pull audit fixes from Paul Moore:
"Two small patches to fix some bugs with the audit-by-executable
functionality we introduced back in v4.3 (both patches are marked
for the stable folks)"
* 'stable-4.8' of git://git.infradead.org/users/pcmoore/audit:
audit: fix exe_file access in audit_exe_compare
mm: introduce get_task_exe_file
Merge tag 'xfs-iomap-for-linus-4.8-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs
Pull xfs and iomap fixes from Dave Chinner:
"Most of these changes are small regression fixes that address problems
introduced in the 4.8-rc1 window. The two fixes that aren't (IO
completion fix and superblock inprogress check) are fixes for problems
introduced some time ago and need to be pushed back to stable kernels.
Changes in this update:
- iomap FIEMAP_EXTENT_MERGED usage fix
- additional mount-time feature restrictions
- rmap btree query fixes
- freeze/unmount io completion workqueue fix
- memory corruption fix for deferred operations handling"
* tag 'xfs-iomap-for-linus-4.8-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs:
xfs: track log done items directly in the deferred pending work item
iomap: don't set FIEMAP_EXTENT_MERGED for extent based filesystems
xfs: prevent dropping ioend completions during buftarg wait
xfs: fix superblock inprogress check
xfs: simple btree query range should look right if LE lookup fails
xfs: fix some key handling problems in _btree_simple_query_range
xfs: don't log the entire end of the AGF
xfs: disallow mounting of realtime + rmap filesystems
xfs: don't perform lookups on zero-height btrees
If can_overcommit() in btrfs_calc_reclaim_metadata_size() returns true,
btrfs_async_reclaim_metadata_space() will not reclaim metadata space; it just
returns directly and also forgets to wake up the processes that are waiting for
their tickets, so those processes will wait endlessly.
Fstests case generic/172 with mount option "-o compress=lzo" revealed
this bug on my test machine. Here, if we have tickets to handle, we must
handle them first.
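Roughly, the idea is the following (a sketch only, not the exact patch;
space_info and to_reclaim are the worker's locals):

	/*
	 * Sketch: even when nothing needs reclaiming, queued tickets must
	 * still be handled; only bail out when the ticket list is empty too.
	 */
	spin_lock(&space_info->lock);
	if (!to_reclaim && list_empty(&space_info->tickets)) {
		space_info->flush = 0;
		spin_unlock(&space_info->lock);
		return;
	}
	spin_unlock(&space_info->lock);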
Signed-off-by: Wang Xiaoguang <wangxg.fnst@cn.fujitsu.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The qgroup function may overwrite the saved error 'err' with 0
in case quota is not enabled, and this ends up in an
endless loop in balance because we keep going back to balance
the same block group.
It really should use 'ret' instead.
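As an illustration of the pattern (the helper name is hypothetical, not the
real call site):

	int err = 0, ret;

	ret = qgroup_fixup();	/* hypothetical stand-in for the qgroup call */
	if (ret)
		err = ret;	/* never let a later 0 clobber a saved error */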
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Reviewed-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Suppose you have the following tree in snap1 on a file system mounted with -o
inode_cache so that inode numbers are recycled
└── [ 258] a
└── [ 257] b
and then you remove b, rename a to c, and then re-create b in c so you have the
following tree
└── [ 258] c
└── [ 257] b
If you then try to do an incremental send, you will hit
ASSERT(pending_move == 0);
in process_all_refs(). This is because we assume that any recycling of inodes
will not have a pending change in our path, which isn't the case. That holds
for the DELETE side, since we want to remove the old file using the old
path, but on the create side we could have a pending move and need to do the
normal pending rename dance. So remove this ASSERT() and add a comment about
why we ignore pending_move. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Be defensive about what the underlying fs provides us in the returned xattr
list buffer. If it's not properly null terminated, bail out with a warning
instead of BUG.
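A sketch of the check (illustrative names; the real code walks the returned
name list entry by entry):

	/* A well-formed xattr list ends with '\0'; warn and bail out instead
	 * of BUG()ing when the underlying fs hands back garbage. */
	if (res > 0 && list[res - 1] != '\0') {
		WARN(1, "overlayfs: xattr list from lower fs is not null terminated\n");
		return -EIO;
	}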
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Cc: <stable@vger.kernel.org>
Now that overlayfs has xattr handlers for iop->{set,remove}xattr, use
those same handlers for iop->getxattr as well.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Setting a POSIX acl may also modify the file mode, so we need to copy that up to
the overlay inode.
Reported-by: Eryu Guan <eguan@redhat.com>
Fixes: d837a49bd5 ("ovl: fix POSIX ACL setting")
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Commit d837a49bd5 ("ovl: fix POSIX ACL setting") switches
iop->setxattr from ovl_setxattr to generic_setxattr, so switch from
ovl_removexattr to generic_removexattr as well. As far as permission
checking goes, the same rules should apply in either case.
While doing that, rename ovl_setxattr to ovl_xattr_set to indicate that
this is not an iop->setxattr implementation and remove the unused inode
argument.
Move ovl_other_xattr_set above ovl_own_xattr_set so that they match the
order of handlers in ovl_xattr_handlers.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Fixes: d837a49bd5 ("ovl: fix POSIX ACL setting")
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Use an ordinary #ifdef to conditionally include the POSIX ACL handlers
in ovl_xattr_handlers, like the other filesystems do. Flag the code
that is now only used conditionally with __maybe_unused.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Trivial fix to spelling mistake in pr_err message.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Some operations (setxattr/chmod) can make the cached acl stale. We either
need to clear overlay's acl cache for the affected inode or prevent acl
caching on the overlay altogether. Preventing caching has the following
advantages:
- no double caching, less memory used
- overlay cache doesn't go stale when the fs clears its own cache
Possible disadvantage is performance loss. If that becomes a problem
get_acl() can be optimized for overlayfs.
This patch disables caching by presetting i_*acl to a value that
- has bit 0 set, so is_uncached_acl() will return true
- is not equal to ACL_NOT_CACHED, so get_acl() will not overwrite it
The constant -3 was chosen for this purpose.
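A sketch of what that looks like (simplified):

	/* Bit 0 set => is_uncached_acl() returns true; not equal to
	 * ACL_NOT_CACHED (-1) => get_acl() will never cache over it. */
	inode->i_acl = (void *)-3;
	inode->i_default_acl = (void *)-3;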
Fixes: 39a25b2b37 ("ovl: define ->get_acl() for overlay inodes")
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Instead of calling ->get_acl() directly, use get_acl() to get the cached
value.
We will have the acl cached on the underlying inode anyway, because we do
permission checking on both the overlay and the underlying fs.
So, since we already have double caching, this improves performance without
any cost.
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
When mounting, overlayfs needs a clean "work" directory under the
supplied workdir.
Previously the mount code removed this directory if it already existed and
created a new one. If the removal failed (e.g. directory was not empty)
then it fell back to a read-only mount not using the workdir.
While this has never been reported, it is possible to get a non-empty
"work" dir from a previous mount of overlayfs in case of crash in the
middle of an operation using the work directory.
In this case the leftover state should be discarded and the overlay
filesystem will be consistent, guaranteed by the atomicity of the moves
between the workdir and the upper layer.
This patch implements cleaning out any files left in workdir. It is
implemented using real recursion for simplicity, but the depth is limited
to 2, because the worst case is that of a directory containing whiteouts
under "work".
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Cc: <stable@vger.kernel.org>
Clear out posix acl xattrs on workdir and also reset the mode after
creation so that an inherited sgid bit is cleared.
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Cc: <stable@vger.kernel.org>
Setting MS_POSIXACL in sb->s_flags has the side effect of passing mode to
create functions without masking against umask.
Another problem when creating over a whiteout is that the default posix acl
is not inherited from the parent dir (because the real parent dir at the
time of creation is the work directory).
Fix these problems by:
a) If upper fs does not have MS_POSIXACL, then mask mode with umask.
b) If creating over a whiteout, call posix_acl_create() to get the
inherited acls. After creation (but before moving to the final
destination) set these acls on the created file. posix_acl_create() also
updates the file creation mode as appropriate.
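A sketch of the two steps (simplified; the variable names are approximate):

	/* (a) Upper fs has no MS_POSIXACL: apply the umask ourselves. */
	if (!IS_POSIXACL(d_inode(upperdir)))
		stat->mode &= ~current_umask();

	/* (b) Creating over a whiteout: get the inherited ACLs from the real
	 * parent directory and apply them to the new file before the final
	 * rename; this also fixes up the creation mode. */
	err = posix_acl_create(d_inode(parent), &stat->mode, &default_acl, &acl);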
Fixes: 39a25b2b37 ("ovl: define ->get_acl() for overlay inodes")
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
copy from user fixes.
* 'msm-fixes-4.8' of git://people.freedesktop.org/~robclark/linux:
drm/msm: protect against faults from copy_from_user() in submit ioctl
drm/msm: fix use of copy_from_user() while holding spinlock
Prior to the change the function would blindly dereference mm, exe_file
and exe_file->f_inode, each of which could have been NULL or freed.
Use get_task_exe_file to safely obtain a stable exe_file.
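A sketch of the safer pattern (simplified):

	struct file *exe_file;

	exe_file = get_task_exe_file(tsk);
	if (!exe_file)
		return 0;
	/* ... compare the audit mark against file_inode(exe_file) ... */
	fput(exe_file);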
Signed-off-by: Mateusz Guzik <mguzik@redhat.com>
Acked-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Acked-by: Richard Guy Briggs <rgb@redhat.com>
Cc: <stable@vger.kernel.org> # 4.3.x
Signed-off-by: Paul Moore <paul@paul-moore.com>
For more convenient access if one has a pointer to the task.
As a minor nit, take advantage of the fact that only the task lock + rcu are
needed to safely grab ->exe_file. This saves the mm refcount dance.
Use the helper in proc_exe_link.
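A sketch of the helper along the lines described above:

	struct file *get_task_exe_file(struct task_struct *task)
	{
		struct file *exe_file = NULL;
		struct mm_struct *mm;

		task_lock(task);
		mm = task->mm;
		/* get_mm_exe_file() takes its reference under RCU, so no
		 * mm refcount dance is needed here. */
		if (mm && !(task->flags & PF_KTHREAD))
			exe_file = get_mm_exe_file(mm);
		task_unlock(task);

		return exe_file;
	}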
Signed-off-by: Mateusz Guzik <mguzik@redhat.com>
Acked-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Acked-by: Richard Guy Briggs <rgb@redhat.com>
Cc: <stable@vger.kernel.org> # 4.3.x
Signed-off-by: Paul Moore <paul@paul-moore.com>
Fixes for 4.8:
- 2 CI S4 fixes
- error handling fix
* 'drm-fixes-4.8' of git://people.freedesktop.org/~agd5f/linux:
drm/amdgpu: record error code when ring test failed
drm/amd/amdgpu: compute ring test fail during S4 on CI
drm/amd/amdgpu: sdma resume fail during S4 on CI
Otherwise we may miss errors.
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Unhalt the Instruction Fetch Unit after all rings are initialized.
Signed-off-by: JimQu <Jim.Qu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
SDMA can fail in the thaw() and restore() processes; do a software reset
on each SDMA engine that is busy.
Signed-off-by: JimQu <Jim.Qu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Pull crypto fixes from Herbert Xu:
"This fixes the following issues:
- Kconfig problem that prevented mxc-rnga from being enabled
- bogus key sizes in qat aes-xts
- buggy aes-xts code in vmx"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: vmx - fix null dereference in p8_aes_xts_crypt
crypto: qat - fix aes-xts key sizes
hwrng: mxc-rnga - Fix Kconfig dependency
We used to delay switching to the new credentials until after we had
mapped the executable (and possible elf interpreter). That was kind of
odd to begin with, since the new executable will actually then _run_
with the new creds, but whatever.
The bigger problem was that we also want to make sure that we turn off
prof events and tracing before we start mapping the new executable
state. So while this is a cleanup, it's also a fix for a possible
information leak.
Reported-by: Robert Święcki <robert@swiecki.net>
Tested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David Howells <dhowells@redhat.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Andy Lutomirski <luto@amacapital.net>
Acked-by: Eric W. Biederman <ebiederm@xmission.com>
Cc: Willy Tarreau <w@1wt.eu>
Cc: Kees Cook <keescook@chromium.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Added device ids for ACCES I/O Products quad and octal serial cards
that make use of existing Pericom PI7C9X7954 and PI7C9X7958
configurations.
Signed-off-by: Jimi Damon <jdamon@accesio.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Since the commit c1a67b48f6 ("serial: 8250_pci: replace switch-case by
formula for Intel MID"), the 8250 driver crashes in the byt_set_termios()
function with a divide error. This is caused by the fact that a baud rate of 0
(B0) is not handled properly. Fix it by falling back to B9600 in this case.
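A sketch of the fallback (simplified from byt_set_termios()):

	/* B0 encodes a baud rate of 0, which would later be used as a
	 * divisor; fall back to 9600 before doing any arithmetic with it. */
	baud = tty_termios_baud_rate(termios);
	if (baud == 0)
		baud = 9600;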
Reported-by: "Mendez Salinas, Fernando" <fernando.mendez.salinas@intel.com>
Fixes: c1a67b48f6 ("serial: 8250_pci: replace switch-case by formula for Intel MID")
Cc: stable@vger.kernel.org
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Serial console is broken in v4.8-rcX. Mika and I independently bisected down to
commit 4ef03d3287 ("tty/serial/8250: use mctrl_gpio helpers").
Since neither the author nor anyone else has proposed a solution, we had
better revert it for now.
This reverts commit 4ef03d3287.
Link: https://lkml.kernel.org/r/20160809130229.GN1729@lahna.fi.intel.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Tested-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Tested-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Attributes declared with __ATTR_PREALLOC use sysfs_kf_read(), which returns
zero bytes for a non-zero offset. This breaks the checkarray script of the
mdadm tool on Debian, where /bin/sh is 'dash', because its builtin 'read'
reads only one byte at a time. The script gets 'i' instead of 'idle' when it
reads the current action from /sys/block/$dev/md/sync_action and as a result
does nothing.
This patch adds a trivial implementation of partial read: generate the whole
string and move the required part to the head of the buffer.
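A sketch of the partial read (names approximate, following sysfs_kf_read()):

	len = ops->show(kobj, of->kn->priv, buf);
	if (len <= 0)
		return len;
	if (pos) {
		if (pos >= len)
			return 0;
		/* shift the requested window to the head of the buffer */
		len -= pos;
		memmove(buf, buf + pos, len);
	}
	return min_t(ssize_t, count, len);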
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Fixes: 4ef67a8c95 ("sysfs/kernfs: make read requests on pre-alloc files use the buffer.")
Link: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=787950
Cc: Stable <stable@vger.kernel.org> # v3.19+
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Commit 5590f3196b ("drivers/core/of: Add symlink to device-tree from
devices with an OF node") added a symlink called "of_node" to sysfs;
however, the documentation describes it as "of_path".
Fix the documentation to match what the code actually does.
Signed-off-by: Martin Fuzzey <mfuzzey@parkeon.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
kernfs_notify_workfn() sends out file modified events for the
scheduled kernfs_nodes. Because the modifications aren't from
userland, it doesn't have the matching file struct at hand and can't
use fsnotify_modify(). Instead, it looked up the inode and then used
d_find_any_alias() to find the dentry and used fsnotify_parent() and
fsnotify() directly to generate notifications.
The assumption was that the relevant dentries would have been pinned
if there are listeners, which isn't true as inotify doesn't pin
dentries at all and watching the parent doesn't pin the child dentries
even for dnotify. This led to, for example, inotify watchers not
getting notifications if the system is under memory pressure and the
matching dentries got reclaimed. It can also be triggered through
/proc/sys/vm/drop_caches or a remount attempt which involves shrinking
dcache.
fsnotify_parent() only uses the dentry to access the parent inode,
which kernfs can do easily. Update kernfs_notify_workfn() so that it
uses fsnotify() directly for both the parent and target inodes without
going through d_find_any_alias(). While at it, supply the target file
name to fsnotify() from kernfs_node->name.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Evgeny Vereshchagin <evvers@ya.ru>
Fixes: d911d98748 ("kernfs: make kernfs_notify() trigger inotify events too")
Cc: John McCutchan <john@johnmccutchan.com>
Cc: Robert Love <rlove@rlove.org>
Cc: Eric Paris <eparis@parisplace.org>
Cc: stable@vger.kernel.org # v3.16+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Falcon Ridge 4C has been supported by the driver from the beginning,
Falcon Ridge 2C support was just added. Don't irritate users with a
warning declaring the opposite.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Andreas Noever <andreas.noever@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
From: Xavier Gnata <xavier.gnata@gmail.com>
Add support to INTEL_FALCON_RIDGE_2C controller and corresponding quirk
to support suspend/resume.
Tested against 4.7 master on a MacBook Air 11" 2015.
Signed-off-by: Andreas Noever <andreas.noever@gmail.com>
Reviewed-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The quirk 'quirk_apple_wait_for_thunderbolt' did not fire on Falcon
Ridge 4C controllers with subdevice/subvendor set to zero. This led
to lost pci devices on system resume.
Older thunderbolt controllers (pre Falcon Ridge) used the same device id
for bridges and for the controller. On Apple hardware the subvendor- &
subdevice-ids were set for the controller, but not for bridges. So that
is what was used to differentiate between the two. Starting with Falcon
Ridge bridges and controllers received different device ids.
Additionally on some MacBookPro models (but not all) the
subvendor/subdevice was zeroed.
Starting with a42fb351c ("thunderbolt: Allow loading of module on recent
Apple MacBooks with thunderbolt 2 controller") the thunderbolt driver
binds to all Falcon Ridge 4C controllers (regardless of
subvendor/subdevice). The corresponding quirk was not updated.
This commit changes the quirk to check the device class instead of the
subvendor/subdevice ids. This works for all generations of Thunderbolt
controllers.
Signed-off-by: Andreas Noever <andreas.noever@gmail.com>
Reviewed-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
lkdtm_rodata_do_nothing() is an empty function which is generated in
order to test the non-executability of rodata.
Currently if function tracing is enabled then an mcount callsite will be
generated for lkdtm_rodata_do_nothing(), and it will appear in the list
of available functions for function tracing (available_filter_functions).
Given its purpose purely as a test function, it seems preferable for
lkdtm_rodata_do_nothing() to be marked notrace, so it doesn't appear as
traceable.
This also avoids triggering a linker bug on powerpc:
https://sourceware.org/bugzilla/show_bug.cgi?id=20428
When the linker sees code that needs to generate a call stub, e.g. a
branch to mcount(), it assumes the section is executable and
dereferences a NULL pointer leading to a linker segfault. Marking
lkdtm_rodata_do_nothing() notrace avoids triggering the bug because the
function contains no other function calls.
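The resulting function is essentially just the attribute (sketch):

	/* notrace keeps the mcount/fentry callsite out of this deliberately
	 * empty rodata test function. */
	void notrace lkdtm_rodata_do_nothing(void)
	{
	}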
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Even if PR3 support is available on the bridge, it will not be used if
the PCI layer considers it unavailable (i.e. on all laptops from 2013
and 2014). Ensure that this condition is checked to allow a fallback to
the Optimus DSM for device poweroff.
Initially I wanted to call pci_d3cold_enable before checking bridge_d3
(in case the user changed d3cold_allowed), but that is such an unlikely
case and likely fragile anyway. The current patch is suggested by Mika
in http://www.spinics.net/lists/linux-pci/msg52599.html
Cc: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This commit appends a few _rcuidle suffixes to fix the following
RCU-used-from-idle bug:
> ===============================
> [ INFO: suspicious RCU usage. ]
> 4.6.0-rc5-next-20160426+ #1116 Not tainted
> -------------------------------
> include/trace/events/rpm.h:95 suspicious rcu_dereference_check() usage!
>
> other info that might help us debug this:
>
>
> RCU used illegally from idle CPU!
> rcu_scheduler_active = 1, debug_locks = 0
> RCU used illegally from extended quiescent state!
> 1 lock held by swapper/0/0:
> #0: (&(&dev->power.lock)->rlock){-.-...}, at: [<c052cc2c>] __rpm_callback+0x58/0x60
>
> stack backtrace:
> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.6.0-rc5-next-20160426+ #1116
> Hardware name: Generic OMAP36xx (Flattened Device Tree)
> [<c0110290>] (unwind_backtrace) from [<c010c3a8>] (show_stack+0x10/0x14)
> [<c010c3a8>] (show_stack) from [<c047fd68>] (dump_stack+0xb0/0xe4)
> [<c047fd68>] (dump_stack) from [<c052d5d0>] (rpm_suspend+0x580/0x768)
> [<c052d5d0>] (rpm_suspend) from [<c052ec58>] (__pm_runtime_suspend+0x64/0x84)
> [<c052ec58>] (__pm_runtime_suspend) from [<c04bf25c>] (omap2_gpio_prepare_for_idle+0x5c/0x70)
> [<c04bf25c>] (omap2_gpio_prepare_for_idle) from [<c0125568>] (omap_sram_idle+0x140/0x244)
> [<c0125568>] (omap_sram_idle) from [<c01269dc>] (omap3_enter_idle_bm+0xfc/0x1ec)
> [<c01269dc>] (omap3_enter_idle_bm) from [<c0601bdc>] (cpuidle_enter_state+0x80/0x3d4)
> [<c0601bdc>] (cpuidle_enter_state) from [<c0183b08>] (cpu_startup_entry+0x198/0x3a0)
> [<c0183b08>] (cpu_startup_entry) from [<c0b00c0c>] (start_kernel+0x354/0x3c8)
> [<c0b00c0c>] (start_kernel) from [<8000807c>] (0x8000807c)
In the immortal words of Steven Rostedt, "*Whack* *Whack* *Whack*!!!"
Reported-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
WhACKED-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The workqueue "dm_bufio_wq" queues a single work item &dm_bufio_work so
it doesn't require execution ordering. Hence, alloc_workqueue() has
been used to replace the deprecated create_singlethread_workqueue().
The WQ_MEM_RECLAIM flag has been set since DM requires forward progress
under memory pressure.
Since there are a fixed number of work items, an explicit concurrency
limit is unnecessary here.
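A sketch of the conversion (queue name approximate):

	/* Reclaim-capable, no ordering requirement; max_active of 0 means
	 * the default limit. */
	dm_bufio_wq = alloc_workqueue("dm_bufio_cache", WQ_MEM_RECLAIM, 0);
	if (!dm_bufio_wq)
		return -ENOMEM;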
Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
If crypt_alloc_tfms() had to allocate multiple tfms and it failed before
the last allocation, then it would call crypt_free_tfms() and could free
pointers from uninitialized memory -- due to the crypt_free_tfms() check
for non-zero cc->tfms[i]. Fix by allocating zeroed memory.
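A sketch of the fix (field names as described):

	/* Zeroed allocation: on a partial-failure unwind, crypt_free_tfms()
	 * only ever sees valid tfm pointers or NULL. */
	cc->tfms = kcalloc(cc->tfms_count, sizeof(*cc->tfms), GFP_KERNEL);
	if (!cc->tfms)
		return -ENOMEM;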
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
When dm-crypt processes writes, it allocates a new bio in
crypt_alloc_buffer(). The bio is allocated from a bio set and it can
have at most BIO_MAX_PAGES vector entries; however, the incoming bio can be
larger (e.g. if it was allocated by bcache). If the incoming bio is
larger, bio_alloc_bioset() fails and an error is returned.
To avoid the error, we test for a too large bio in the function
crypt_map() and use dm_accept_partial_bio() to split the bio.
dm_accept_partial_bio() trims the current bio to the desired size and
asks DM core to send another bio with the rest of the data.
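A sketch of the check in crypt_map() (the shift converts bytes to 512-byte
sectors):

	if (unlikely(bio->bi_iter.bi_size > (BIO_MAX_PAGES << PAGE_SHIFT)) &&
	    bio_data_dir(bio) == WRITE)
		dm_accept_partial_bio(bio,
				      ((BIO_MAX_PAGES << PAGE_SHIFT) >> 9));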
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # v3.16+
The kthread_run() function returns either a valid task_struct or an
ERR_PTR() value; checking for NULL is invalid. This change fixes a potential
oops, e.g. in an OOM situation.
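A sketch of the corrected check (illustrative names from the dm-log-writes
context):

	lc->log_kthread = kthread_run(log_writes_kthread, lc, "log-write");
	if (IS_ERR(lc->log_kthread)) {
		/* kthread_run() returns an ERR_PTR() on failure, never NULL */
		ret = PTR_ERR(lc->log_kthread);
		goto bad;
	}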
Signed-off-by: Vladimir Zapolskiy <vz@mleia.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
This fixes a ptrace vs fatal pending signals bug as manifested in
seccomp now that seccomp was reordered to happen after ptrace. The
short version is that seccomp should not attempt to call do_exit()
while fatal signals are pending under a tracer. The existing code was
trying to be as defensively paranoid as possible, but it now ends up
confusing ptrace. Instead, the syscall can just be skipped (which solves
the original concern that the do_exit() was addressing) and normal signal
handling, tracer notification, and process death can happen.
Paraphrasing from the original bug report:
If a tracee task is in a PTRACE_EVENT_SECCOMP trap, or has been resumed
after such a trap but not yet been scheduled, and another task in the
thread-group calls exit_group(), then the tracee task exits without the
ptracer receiving a PTRACE_EVENT_EXIT notification. Test case here:
https://gist.github.com/khuey/3c43ac247c72cef8c956ca73281c9be7
The bug happens because when __seccomp_filter() detects
fatal_signal_pending(), it calls do_exit() without dequeuing the fatal
signal. When do_exit() sends the PTRACE_EVENT_EXIT notification and
that task is descheduled, __schedule() notices that there is a fatal
signal pending and changes its state from TASK_TRACED to TASK_RUNNING.
That prevents the ptracer's waitpid() from returning the ptrace event.
A more detailed analysis is here:
https://github.com/mozilla/rr/issues/1762#issuecomment-237396255.
Reported-by: Robert O'Callahan <robert@ocallahan.org>
Reported-by: Kyle Huey <khuey@kylehuey.com>
Tested-by: Kyle Huey <khuey@kylehuey.com>
Fixes: 93e35efb8d ("x86/ptrace: run seccomp after ptrace")
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: James Morris <james.l.morris@oracle.com>
bio_alloc() can allocate a bio with at most BIO_MAX_PAGES (256) vector
entries. However, the incoming bio may have more vector entries if it
was allocated by other means. For example, bcache submits bios with
more than BIO_MAX_PAGES entries. This results in bio_alloc() failure.
To avoid the failure, change the code so that it allocates bio with at
most BIO_MAX_PAGES entries. If the incoming bio has more entries,
bio_add_page() will fail and a new bio will be allocated - the code that
handles bio_add_page() failure already exists in the dm-log-writes
target.
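A sketch of the capped allocation (names approximate):

	/* Never ask for more than BIO_MAX_PAGES vectors; if bio_add_page()
	 * later runs out of room, the existing fallback allocates another
	 * bio for the remaining pages. */
	bio = bio_alloc(GFP_KERNEL, min(block->vec_cnt, BIO_MAX_PAGES));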
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # v4.1+
Move log_one_block()'s atomic_inc(&lc->io_blocks) before bio_alloc() to
fix a bug where the target hangs if bio_alloc() fails. The error path
does put_io_block(lc), so atomic_inc(&lc->io_blocks) must occur before
invoking the error path to avoid underflow of lc->io_blocks.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org