Cong Wang says:
====================
net_sched: close the race between call_rcu() and cleanup_net()
This patchset tries to fix the race between call_rcu() and
cleanup_net() again. Without holding the netns refcnt,
tc_action_net_exit() in the netns workqueue could be called before
the filter destroy work in the tc filter workqueue runs. This
patchset moves the netns refcnt from tc actions to tcf_exts, without
breaking per-netns tc actions.
Patch 1 reverts the previous fix, patch 2 introduces two new
APIs to help address the bug, and the remaining patches switch
to the new APIs. Please see each patch for details.
I was not able to reproduce this bug at first, but after adding
some delay in the filter destroy work I managed to trigger the
crash. After this patchset, the crash is no longer reproducible,
and the debugging printk's show the expected order too.
====================
Fixes: ddf97ccdd7 ("net_sched: add network namespace support for tc actions")
Reported-by: Lucas Bates <lucasb@mojatatu.com>
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hold the netns refcnt before call_rcu() and release it after
tcf_exts_destroy() is done.
Note: on the ->destroy() path we have to respect the return value
of tcf_exts_get_net(); on the other paths it should always return
true, so we don't need to care.
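In other words, each filter's delete path ends up looking roughly
like this (a sketch; the function and struct names are illustrative,
not taken from any one filter):

  static void some_filter_delete(struct some_filter *f)
  {
          if (tcf_exts_get_net(&f->exts))
                  /* netns pinned; free after the RCU grace period */
                  call_rcu(&f->rcu, some_filter_destroy_rcu);
          else
                  /* netns is being destroyed; cleanup_net() already
                   * waits for a grace period, destroy synchronously */
                  __some_filter_destroy(f);
  }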
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of holding the netns refcnt in tc actions, we can minimize
the holding time by saving it in struct tcf_exts instead. This
means we can just hold the netns refcnt right before call_rcu() and
release it after tcf_exts_destroy() is done.
However, because we call tcf_proto_destroy() on the netns cleanup
path too, we obviously cannot hold a netns whose refcnt is already
zero; in this case we have to do the cleanup synchronously. This is
fine for RCU too, as the caller cleanup_net() already waits for a
grace period.
For the other cases, the refcnt is non-zero and we can safely grab
it as normal and release it after we are done.
This patch provides two new APIs for each filter to use:
tcf_exts_get_net() and tcf_exts_put_net(). All filters can now
use the following pattern:
  void __destroy_filter() {
          tcf_exts_destroy();
          tcf_exts_put_net();  // <== release netns refcnt
          kfree();
  }

  void some_work() {
          rtnl_lock();
          __destroy_filter();
          rtnl_unlock();
  }

  void some_rcu_callback() {
          tcf_queue_work(some_work);
  }

  if (tcf_exts_get_net())  // <== hold netns refcnt
          call_rcu(some_rcu_callback);
  else
          __destroy_filter();
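For reference, a minimal sketch of how the two helpers could be
implemented, assuming struct tcf_exts carries a net pointer;
maybe_get_net() only takes a reference if the netns refcnt is still
non-zero:

  static inline bool tcf_exts_get_net(struct tcf_exts *exts)
  {
  #ifdef CONFIG_NET_CLS_ACT
          exts->net = maybe_get_net(exts->net);
          return exts->net != NULL;
  #else
          return true;  /* without actions there is nothing to pin */
  #endif
  }

  static inline void tcf_exts_put_net(struct tcf_exts *exts)
  {
  #ifdef CONFIG_NET_CLS_ACT
          if (exts->net)
                  put_net(exts->net);
  #endif
  }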
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This reverts commit ceffcc5e25.
If we hold that refcnt, the netns can never be destroyed until
all actions are destroyed by the user; this breaks our netns
design, which expects all actions to be destroyed when we destroy
the whole netns.
Cc: Lucas Bates <lucasb@mojatatu.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This reverts commit baedf68a06.
There is an updated version of this fix which covers
the problem more thoroughly.
Signed-off-by: David S. Miller <davem@davemloft.net>
Before we can globally change the function prototype of all timer callbacks,
we have to change those set up by DEFINE_TIMER(). Prepare for this by
casting the callbacks until the prototype changes globally.
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Kees Cook <keescook@chromium.org>
In preparation for unconditionally passing the struct timer_list pointer to
all timer callbacks, switch to using the new timer_setup() and from_timer()
to pass the timer pointer explicitly.
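The conversion pattern looks roughly like this (struct and function
names are illustrative; timer_setup() and from_timer() are the real
helpers):

  /* Before: the callback receives an opaque unsigned long. */
  static void foo_expire(unsigned long data)
  {
          struct foo *f = (struct foo *)data;
          /* ... */
  }
  setup_timer(&f->timer, foo_expire, (unsigned long)f);

  /* After: the callback receives the timer pointer and recovers
   * the containing structure with from_timer(). */
  static void foo_expire(struct timer_list *t)
  {
          struct foo *f = from_timer(f, t, timer);
          /* ... */
  }
  timer_setup(&f->timer, foo_expire, 0);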
Cc: Wensong Zhang <wensong@linux-vs.org>
Cc: Simon Horman <horms@verge.net.au>
Cc: Julian Anastasov <ja@ssi.bg>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Cc: Florian Westphal <fw@strlen.de>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: netdev@vger.kernel.org
Cc: lvs-devel@vger.kernel.org
Cc: netfilter-devel@vger.kernel.org
Cc: coreteam@netfilter.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Julian Anastasov <ja@ssi.bg>
Acked-by: Simon Horman <horms@verge.net.au>
In preparation for unconditionally passing the struct timer_list pointer to
all timer callbacks, switch to using the new timer_setup() and from_timer()
to pass the timer pointer explicitly.
Cc: qla2xxx-upstream@qlogic.com
Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Tested-by: Bart Van Assche <Bart.VanAssche@wdc.com>
'cached_raw_freq' is used to get the next frequency quickly, but it
should always be in sync with sg_policy->next_freq. There is a case
where it is not, and in such cases it should be reset to avoid
switching to incorrect frequencies.
Consider this case for example:
- policy->cur is 1.2 GHz (Max)
- A new request comes in for 780 MHz and we store that in
  cached_raw_freq.
- Based on 780 MHz, we calculate the effective frequency as 800 MHz.
- We then see the CPU wasn't idle recently and choose to keep the
  next freq as 1.2 GHz.
- Now cached_raw_freq is 780 MHz and sg_policy->next_freq is
  1.2 GHz.
- If the utilization doesn't change in the next request, the next
  target frequency will still be 780 MHz and it will match
  cached_raw_freq. But we will choose 1.2 GHz instead of 800 MHz
  here.
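A sketch of the kind of reset involved, assuming this runs in the
schedutil update path where next_f holds the freshly computed
frequency:

  if (busy && next_f < sg_policy->next_freq) {
          next_f = sg_policy->next_freq;
          /* restore cached_raw_freq <-> next_freq consistency */
          sg_policy->cached_raw_freq = 0;
  }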
Fixes: b7eaf1aab9 ("cpufreq: schedutil: Avoid reducing frequency of busy CPUs prematurely")
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: 4.12+ <stable@vger.kernel.org> # 4.12+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Himanshu Jha <himanshujha199640@gmail.com>
Acked-by: Luis R. Rodriguez <mcgrof@kernel.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Problem: This flag does not get cleared currently in the suspend or
resume path in the following cases:
* In case some driver's suspend routine returns an error
* Successful s2idle case
* etc.
Why is this a problem: The next suspend attempt could fail even
though the user did not enable the flag by writing to
/sys/power/wakeup_count. This is one use case in which the issue can
be seen (a similar use case with a driver suspend failure can also
be constructed):
1. Read /sys/power/wakeup_count
2. echo count > /sys/power/wakeup_count
3. echo freeze > /sys/power/state
4. Let the system suspend, and wake up the system using some wake
   source that calls pm_wakeup_event(), e.g. the power button.
5. Note that the combined wakeup count will be incremented due
   to the pm_wakeup_event() in the resume path.
6. After resuming, the events_check_enabled flag is still set.
At this point if the user attempts to freeze again (without writing
to /sys/power/wakeup_count), the suspend will fail even though there
has been no wake event since the past resume.
Address that by clearing the flag just before a resume is completed,
so that it is always cleared for the corner cases mentioned above.
Signed-off-by: Rajat Jain <rajatja@google.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
On platforms with a large number of Pstates, the transition table,
which is an NxN matrix, can overflow beyond the PAGE_SIZE boundary.
This can be seen on POWER9, which has 100+ Pstates.
As a result, each time the trans_table is read for any of the CPUs,
we will get the following error:
---------------------------------------------------
fill_read_buffer: show+0x0/0xa0 returned bad count
---------------------------------------------------
This patch ensures that in case of an overflow, we print a warning
once in dmesg and return a "file too large" (-EFBIG) error for this
and all subsequent accesses of trans_table.
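A sketch of the guard, assuming len accumulates the bytes written by
the show() routine:

  if (len >= PAGE_SIZE) {
          pr_warn_once("cpufreq transition table exceeds PAGE_SIZE. Disabling\n");
          return -EFBIG;
  }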
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Make these structures const, as they are only passed to the functions
bL_cpufreq_{register/unregister}, which take their arguments as const.
Signed-off-by: Bhumika Goyal <bhumirks@gmail.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Make the arguments of the functions bL_cpufreq_{register/unregister}
const, as the ops pointer does not modify the fields of the
cpufreq_arm_bL_ops structure it points to. The pointer arm_bL_ops is
also initialized from ops, but it does not modify the fields either.
So, make the function argument and the structure pointer const.
Add const to the function prototypes too.
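The resulting prototypes would look like this (a sketch):

  int bL_cpufreq_register(const struct cpufreq_arm_bL_ops *ops);
  void bL_cpufreq_unregister(const struct cpufreq_arm_bL_ops *ops);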
Signed-off-by: Bhumika Goyal <bhumirks@gmail.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Clean up cpuidle_enable_device() to avoid doing an assignment
in an expression evaluated as an argument of if (), which also
makes the code in question more readable.
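The shape of the cleanup (a sketch; the function called in the
condition is illustrative):

  /* Before: assignment buried in the condition. */
  if ((ret = cpuidle_add_device_sysfs(dev)))
          return ret;

  /* After: assignment and test separated. */
  ret = cpuidle_add_device_sysfs(dev);
  if (ret)
          return ret;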
Signed-off-by: Gaurav Jindal <gauravjindal1104@gmail.com>
[ rjw: Subject & changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Do not fetch the per-CPU drv if cpuidle_curr_governor is NULL,
to avoid useless per-CPU processing.
Signed-off-by: Gaurav Jindal <gauravjindal1104@gmail.com>
[ rjw: Subject & changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
acpi_remove_pm_notifier() ends up calling flush_workqueue() while
holding acpi_pm_notifier_lock, and that same lock is taken by
the work via acpi_pm_notify_handler(). This can deadlock.
To fix the problem let's split the single lock into two: one to
protect the dev->wakeup between the work vs. add/remove, and
another one to handle notifier installation vs. removal.
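A sketch of the split; the second lock name is an assumption for
illustration:

  /* Taken by acpi_pm_notify_handler() and when touching dev->wakeup. */
  static DEFINE_MUTEX(acpi_pm_notifier_lock);
  /* Taken by acpi_add_pm_notifier()/acpi_remove_pm_notifier(); safe to
   * hold across flush_workqueue() since the work never takes it. */
  static DEFINE_MUTEX(acpi_pm_notifier_install_lock);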
After commit a1d14934ea "workqueue/lockdep: 'Fix' flush_work()
annotation" I was able to kill the machine (Intel Braswell)
very easily with 'powertop --auto-tune', runtime suspending i915,
and trying to wake it up via the USB keyboard. The cases when
it didn't die are presumably explained by lockdep getting disabled
by something else (cpu hotplug locking issues usually).
Fortunately I still got a lockdep report over netconsole
(trickling in very slowly), even though the machine was
otherwise practically dead:
[ 112.179806] ======================================================
[ 114.670858] WARNING: possible circular locking dependency detected
[ 117.155663] 4.13.0-rc6-bsw-bisect-00169-ga1d14934ea4b #119 Not tainted
[ 119.658101] ------------------------------------------------------
[ 121.310242] xhci_hcd 0000:00:14.0: xHCI host not responding to stop endpoint command.
[ 121.313294] xhci_hcd 0000:00:14.0: xHCI host controller not responding, assume dead
[ 121.313346] xhci_hcd 0000:00:14.0: HC died; cleaning up
[ 121.313485] usb 1-6: USB disconnect, device number 3
[ 121.313501] usb 1-6.2: USB disconnect, device number 4
[ 134.747383] kworker/0:2/47 is trying to acquire lock:
[ 137.220790] (acpi_pm_notifier_lock){+.+.}, at: [<ffffffff813cafdf>] acpi_pm_notify_handler+0x2f/0x80
[ 139.721524]
[ 139.721524] but task is already holding lock:
[ 144.672922] ((&dpc->work)){+.+.}, at: [<ffffffff8109ce90>] process_one_work+0x160/0x720
[ 147.184450]
[ 147.184450] which lock already depends on the new lock.
[ 147.184450]
[ 154.604711]
[ 154.604711] the existing dependency chain (in reverse order) is:
[ 159.447888]
[ 159.447888] -> #2 ((&dpc->work)){+.+.}:
[ 164.183486] __lock_acquire+0x1255/0x13f0
[ 166.504313] lock_acquire+0xb5/0x210
[ 168.778973] process_one_work+0x1b9/0x720
[ 171.030316] worker_thread+0x4c/0x440
[ 173.257184] kthread+0x154/0x190
[ 175.456143] ret_from_fork+0x27/0x40
[ 177.624348]
[ 177.624348] -> #1 ("kacpi_notify"){+.+.}:
[ 181.850351] __lock_acquire+0x1255/0x13f0
[ 183.941695] lock_acquire+0xb5/0x210
[ 186.046115] flush_workqueue+0xdd/0x510
[ 190.408153] acpi_os_wait_events_complete+0x31/0x40
[ 192.625303] acpi_remove_notify_handler+0x133/0x188
[ 194.820829] acpi_remove_pm_notifier+0x56/0x90
[ 196.989068] acpi_dev_pm_detach+0x5f/0xa0
[ 199.145866] dev_pm_domain_detach+0x27/0x30
[ 201.285614] i2c_device_probe+0x100/0x210
[ 203.411118] driver_probe_device+0x23e/0x310
[ 205.522425] __driver_attach+0xa3/0xb0
[ 207.634268] bus_for_each_dev+0x69/0xa0
[ 209.714797] driver_attach+0x1e/0x20
[ 211.778258] bus_add_driver+0x1bc/0x230
[ 213.837162] driver_register+0x60/0xe0
[ 215.868162] i2c_register_driver+0x42/0x70
[ 217.869551] 0xffffffffa0172017
[ 219.863009] do_one_initcall+0x45/0x170
[ 221.843863] do_init_module+0x5f/0x204
[ 223.817915] load_module+0x225b/0x29b0
[ 225.757234] SyS_finit_module+0xc6/0xd0
[ 227.661851] do_syscall_64+0x5c/0x120
[ 229.536819] return_from_SYSCALL_64+0x0/0x7a
[ 231.392444]
[ 231.392444] -> #0 (acpi_pm_notifier_lock){+.+.}:
[ 235.124914] check_prev_add+0x44e/0x8a0
[ 237.024795] __lock_acquire+0x1255/0x13f0
[ 238.937351] lock_acquire+0xb5/0x210
[ 240.840799] __mutex_lock+0x75/0x940
[ 242.709517] mutex_lock_nested+0x1c/0x20
[ 244.551478] acpi_pm_notify_handler+0x2f/0x80
[ 246.382052] acpi_ev_notify_dispatch+0x44/0x5c
[ 248.194412] acpi_os_execute_deferred+0x14/0x30
[ 250.003925] process_one_work+0x1ec/0x720
[ 251.803191] worker_thread+0x4c/0x440
[ 253.605307] kthread+0x154/0x190
[ 255.387498] ret_from_fork+0x27/0x40
[ 257.153175]
[ 257.153175] other info that might help us debug this:
[ 257.153175]
[ 262.324392] Chain exists of:
[ 262.324392] acpi_pm_notifier_lock --> "kacpi_notify" --> (&dpc->work)
[ 262.324392]
[ 267.391997] Possible unsafe locking scenario:
[ 267.391997]
[ 270.758262]        CPU0                    CPU1
[ 272.431713]        ----                    ----
[ 274.060756]   lock((&dpc->work));
[ 275.646532]                                lock("kacpi_notify");
[ 277.260772]                                lock((&dpc->work));
[ 278.839146]   lock(acpi_pm_notifier_lock);
[ 280.391902]
[ 280.391902] *** DEADLOCK ***
[ 280.391902]
[ 284.986385] 2 locks held by kworker/0:2/47:
[ 286.524895] #0: ("kacpi_notify"){+.+.}, at: [<ffffffff8109ce90>] process_one_work+0x160/0x720
[ 288.112927] #1: ((&dpc->work)){+.+.}, at: [<ffffffff8109ce90>] process_one_work+0x160/0x720
[ 289.727725]
Fixes: c072530f39 ("ACPI / PM: Revork the handling of ACPI device wakeup notifications")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: 3.17+ <stable@vger.kernel.org> # 3.17+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Commit 7744ccdbc1 ("x86/mm: Add Secure Memory Encryption (SME)
support") as a side-effect made PAGE_KERNEL all of a sudden unavailable
to modules which can't make use of EXPORT_SYMBOL_GPL() symbols.
This is because once SME is enabled, sme_me_mask (which is introduced as
EXPORT_SYMBOL_GPL) makes its way to PAGE_KERNEL through _PAGE_ENC,
causing imminent build failure for all the modules which make use of all
the EXPORT_SYMBOL()-exported API (such as vmap(), __vmalloc(),
remap_pfn_range(), ...).
Exporting (as EXPORT_SYMBOL()) interfaces (and having done so for ages)
that take a pgprot_t argument, while making it impossible to -- all of a
sudden -- pass PAGE_KERNEL to them, feels rather inconsistent.
Restore the original behavior and make it possible to pass PAGE_KERNEL
to all its EXPORT_SYMBOL() consumers.
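Concretely, that amounts to relaxing the export of the mask that
_PAGE_ENC pulls in (a sketch of the restore):

  /* was: EXPORT_SYMBOL_GPL(sme_me_mask); */
  EXPORT_SYMBOL(sme_me_mask);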
[ This is all so not wonderful. We shouldn't need that "sme_me_mask"
access at all in all those places that really don't care about that
level of detail, and just want _PAGE_KERNEL or whatever.
We have some similar issues with _PAGE_CACHE_WP and _PAGE_NOCACHE,
both of which hide a "cachemode2protval()" call, and which also end
up using another EXPORT_SYMBOL(), but at least that only triggers
for the much more rare cases.
Maybe we could move these dynamic page table bits to be generated much
deeper down in the VM layer, instead of hiding them in the macros that
everybody uses.
So this all would merit some cleanup. But not today. - Linus ]
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Despised-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
At a couple of places smatch emits warnings like this:
arch/s390/mm/vmem.c:409 vmem_map_init() warn:
right shifting more than type allows
In fact, shifting a signed type right can be undefined. Avoid this by
adding an unsigned long cast. The shifted values are always positive.
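The pattern of the fix, as a generic sketch (not the exact s390
lines):

  /* before: val has a signed type, so smatch warns about the shift */
  idx = val >> shift;

  /* after: the shift is done on an unsigned type; val is always
   * positive here, so the cast does not change the result */
  idx = (unsigned long)val >> shift;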
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
The current way of adding new instructions to the opcode tables is
painful and error prone. Therefore add, similar to binutils, a text
file which contains all opcodes and the corresponding mnemonics and
instruction formats.
A small gen_opcode_table tool then generates a header file with the
required enums and opcode table initializers at the prepare step of
the kernel build.
This way only a simple text file has to be maintained, which can be
rather easily extended.
Unlike before where there were plenty of opcode tables and a large
switch statement to find the correct opcode table, there is now only
one opcode table left which contains all instructions. A second opcode
offset table now contains offsets within the opcode table to find
instructions which have the same opcode prefix. In order to save space
all 1-byte opcode instructions are grouped together at the end of the
opcode table. This is also quite similar to how it was before.
In addition, move and change code and definitions within the
disassembler. As a side effect, this reduces the size required for the
code and opcode tables by ~1.5k.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
insn_to_mnemonic() was introduced ages ago for KVM debugging, but
has been unused for a while now. Therefore remove it.
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
do_gettimeofday() is deprecated because it's not y2038-safe on
32-bit architectures. Since it is basically a wrapper around
ktime_get_real_ts64(), we can just call that function directly
instead.
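The conversion pattern (a sketch):

  struct timeval tv;
  struct timespec64 ts;

  do_gettimeofday(&tv);       /* before: 32-bit time_t on 32-bit archs */
  ktime_get_real_ts64(&ts);   /* after: 64-bit seconds everywhere */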
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
[sth@linux.vnet.ibm.com: fix build]
Signed-off-by: Stefan Haberland <sth@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Not only is it annoying to have one single flag for all pointers, as if
that was a global choice and all kernel pointers are the same, but %pK
can't get the 'access' vs 'open' time check right anyway.
So make the /proc/kallsyms pointer value code use logic specific to that
particular file. We do continue to honor kptr_restrict, but the default
(which is unrestricted) is changed to instead take expected users into
account, and restrict access by default.
Right now the only actual expected user is kernel profiling, which has a
separate sysctl flag for kernel profile access. There may be others.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
A non-zero value is converted to 1 when assigned to a bool variable, so the
conditional operator in is_ima_appraise_enabled is redundant.
The value of a comparison operator is either 1 or 0 so the conditional
operator in ima_inode_setxattr is redundant as well.
Confirmed that the patch is correct by comparing the object file from
before and after the patch. They are identical.
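The kind of simplification involved (a sketch):

  /* before: the ?: is redundant, the & result becomes bool anyway */
  return ima_appraise & IMA_APPRAISE_ENFORCE ? 1 : 0;

  /* after */
  return ima_appraise & IMA_APPRAISE_ENFORCE;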
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Bool initializations should use true and false. Bool tests don't need
comparisons.
Signed-off-by: Thomas Meyer <thomas@m3y3r.de>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
When the user requests the MODULE_CHECK policy and the kernel is
compiled with CONFIG_MODULE_SIG_FORCE not set, no modules can be
loaded, except those loaded at initramfs time. One option the user
would have is to set a kernel cmdline param (module.sig_enforce) to
true, but the IMA module check code doesn't rely on this value; it
checks just CONFIG_MODULE_SIG_FORCE.
This patch solves this problem by checking the exported value of the
module.sig_enforce cmdline param instead of CONFIG_MODULE_SIG_FORCE;
the exported value holds the effective result (CONFIG || param).
Signed-off-by: Bruno E. O. Meneguele <brdeoliv@redhat.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
A static variable sig_enforce is used as a status var to indicate
the real value of CONFIG_MODULE_SIG_FORCE: once that config option
is set, the var will hold true, but if the config is not set, the
status var will hold whatever value is present in the
module.sig_enforce kernel cmdline param: true when =1 and false
when =0 or not present.
Since this cmdline param takes precedence over the CONFIG value when
the latter is not set, other places in the kernel could misbehave,
since they would have only the CONFIG_MODULE_SIG_FORCE value to rely
on. Exporting this status var allows the kernel to rely on the
effective value of module signature enforcement, be it from the
CONFIG value or the cmdline param.
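A minimal sketch of the exported helper (the exact name and export
type are as assumed here):

  bool is_module_sig_enforced(void)
  {
          return sig_enforce;
  }
  EXPORT_SYMBOL(is_module_sig_enforced);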
Signed-off-by: Bruno E. O. Meneguele <brdeoliv@redhat.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
The hash_setup function always sets the hash_setup_done flag, even
when the hash algorithm is invalid. This prevents the default hash
algorithm defined as CONFIG_IMA_DEFAULT_HASH from being used.
This patch sets the hash_setup_done flag only for valid hash
algorithms.
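A simplified sketch of the fixed flow (the real function also
special-cases the "ima" template, omitted here):

  static int __init hash_setup(char *str)
  {
          int i;

          if (hash_setup_done)
                  return 1;

          i = match_string(hash_algo_name, HASH_ALGO__LAST, str);
          if (i < 0)
                  return 1;  /* invalid: keep CONFIG_IMA_DEFAULT_HASH */

          ima_hash_algo = i;
          hash_setup_done = 1;  /* only latch the flag on success */
          return 1;
  }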
Fixes: e7a2ad7eb6 ("ima: enable support for larger default filedata hash algorithms")
Signed-off-by: Boshi Wang <wangboshi@huawei.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
A system can validate EVM digital signatures without requiring an HMAC
key, but every EVM validation will generate a kernel error. Change this
so we only generate an error once.
Signed-off-by: Matthew Garrett <mjg59@google.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
EVM will only perform validation once a key has been loaded. This key
may either be a symmetric trusted key (for HMAC validation and creation)
or the public half of an asymmetric key (for digital signature
validation). The /sys/kernel/security/evm interface allows userland to
signal that a symmetric key has been loaded, but does not allow userland
to signal that an asymmetric public key has been loaded.
This patch extends the interface to permit userspace to pass a bitmask
of loaded key types. It also allows userspace to block the loading
of a symmetric key, in order to prevent a compromised system from
loading an additional key type later.
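As a sketch of the bitmask idea: EVM_INIT_HMAC and EVM_INIT_X509 are
the existing initialization bits, while the "done loading" bit value
is an assumption here:

  #define EVM_INIT_HMAC  0x0001      /* symmetric trusted key loaded */
  #define EVM_INIT_X509  0x0002      /* asymmetric public key loaded */
  #define EVM_SETUP      0x80000000  /* assumed: done, block further
                                      * (symmetric) key setup */

  /* Userspace announcing that only an asymmetric key is loaded:
   *   echo 2 > /sys/kernel/security/evm
   */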
Signed-off-by: Matthew Garrett <mjg59@google.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Apparmor will be gaining support for security.apparmor labels, and it
would be helpful to include these in EVM validation now so appropriate
signatures can be generated even before full support is merged.
Signed-off-by: Matthew Garrett <mjg59@google.com>
Acked-by: John Johansen <John.johansen@canonical.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
The file hash is calculated and written out as an xattr after
calling fasync(). In order for the file data and metadata to be
written out to disk at the same time, this patch calculates the
file hash and stores it as an xattr before calling fasync().
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
The CONFIG_IMA_LOAD_X509 and CONFIG_EVM_LOAD_X509 options permit
loading x509 signed certificates onto the trusted keyrings without
verifying the x509 certificate file's signature.
This patch replaces the call to the integrity_read_file() specific
function with the common kernel_read_file_from_path() function.
To avoid verifying the file signature, this patch defines
READING_X509_CERTIFICATE.
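The call then looks roughly like this (a sketch):

  void *data = NULL;
  loff_t size = 0;
  int rc;

  /* path points at the configured certificate path */
  rc = kernel_read_file_from_path(path, &data, &size, 0,
                                  READING_X509_CERTIFICATE);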
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
All files matching a "measure" rule must be included in the IMA
measurement list, even when the file hash cannot be calculated.
Similarly, all files matching an "audit" rule must be audited, even when
the file hash cannot be calculated.
The file data hash field contained in the IMA measurement list template
data will contain 0's instead of the actual file hash digest.
Note:
In general, adding, deleting, or in any way changing which files are
included in the IMA measurement list is not a good idea, as it might
result in not being able to unseal trusted keys sealed to a specific
TPM PCR value. This patch not only adds file measurements that were
not previously measured, but specifies that the file hash value for
these files will be 0's.
As the IMA measurement list ordering is not consistent from one boot
to the next, it is unlikely that anyone is sealing keys based on the
IMA measurement list. Remote attestation servers should be able to
process these new measurement records, but might complain about
these unknown records.
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Reviewed-by: Dmitry Kasatkin <dmitry.kasatkin@huawei.com>
The securityfs policy file is removed unless additional rules can be
appended to the IMA policy (CONFIG_IMA_WRITE_POLICY), regardless of
whether the policy is configured so that it can be displayed.
This patch changes this behavior, removing the securityfs policy file
only if CONFIG_IMA_READ_POLICY is also not enabled.
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
The mount i_version flag is not enabled in the new sb_flags. This patch
adds the missing SB_I_VERSION flag.
Fixes: e462ec5 ("VFS: Differentiate mount flags (MS_*) from internal superblock flags")
Cc: David Howells <dhowells@redhat.com>
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Commit b70543a0b2b6 ("x86/idt: Move regular trap init to tables") moves
regular trap init for each trap vector into a table-based
initialization. It introduced the initialization for vector
X86_TRAP_BP, which was not present in the code it replaced. This
breaks uprobe functionality on x86_32; the probed program segfaults
instead of handling the probe properly.
The reason for this is that TRAP_BP is set up as system interrupt gate
(DPL3) in the early IDT and then replaced by a regular interrupt gate
(DPL0) in idt_setup_traps(). The DPL0 restriction causes the int3 trap
to fail with a #GP resulting in a SIGSEGV of the probed program.
On 64-bit this does not cause a problem, because the IDT entry is
replaced with a system interrupt gate (DPL3) with an interrupt stack
afterwards.
Remove X86_TRAP_BP from the def_idts table which is used in
idt_setup_traps(). Remove a redundant entry for X86_TRAP_NMI in def_idts
while at it. Tested on both x86_64 and x86_32.
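Sketched against the def_idts table (the surviving entries shown are
illustrative):

  static const __initconst struct idt_data def_idts[] = {
          INTG(X86_TRAP_DE,  divide_error),
          /* X86_TRAP_NMI dropped: redundant, NMI is set up separately. */
          /* X86_TRAP_BP dropped: int3 keeps the DPL3 system gate that
           * the early IDT installed, so uprobes work again on x86_32. */
          INTG(X86_TRAP_BR,  bounds),
          INTG(X86_TRAP_UD,  invalid_op),
          /* remaining entries unchanged */
  };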
[ tglx: Amended changelog with a description of the root cause ]
Fixes: b70543a0b2b6 ("x86/idt: Move regular trap init to tables")
Reported-and-tested-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: a.p.zijlstra@chello.nl
Cc: ast@fb.com
Cc: oleg@redhat.com
Cc: luto@kernel.org
Cc: kernel-team@fb.com
Link: https://lkml.kernel.org/r/20171108192845.552709-1-yhs@fb.com