License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to be applied to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or if it had no licensing in
it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 patched files from the initial patch version
earlier in the week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
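As an illustration (the file names here are hypothetical, not from the
patch set), the script places the tag at the top of each file using the
comment style that file type expects:
    // SPDX-License-Identifier: GPL-2.0          (C source, e.g. foo.c)
    /* SPDX-License-Identifier: GPL-2.0 */       (headers, e.g. foo.h)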
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_TLBFLUSH_H
#define _ASM_X86_TLBFLUSH_H

#include <linux/mm.h>
#include <linux/sched.h>

#include <asm/processor.h>
#include <asm/cpufeature.h>
#include <asm/special_insns.h>
#include <asm/smp.h>
#include <asm/invpcid.h>
#include <asm/pti.h>
#include <asm/processor-flags.h>
/*
 * The x86 feature is called PCID (Process Context IDentifier). It is similar
 * to what is traditionally called ASID on the RISC processors.
 *
 * We don't use the traditional ASID implementation, where each process/mm gets
 * its own ASID and flush/restart when we run out of ASID space.
 *
 * Instead we have a small per-cpu array of ASIDs and cache the last few mm's
 * that came by on this CPU, allowing cheaper switch_mm between processes on
 * this CPU.
 *
 * We end up with different spaces for different things. To avoid confusion we
 * use different names for each of them:
 *
 * ASID  - [0, TLB_NR_DYN_ASIDS-1]
 *         the canonical identifier for an mm
 *
 * kPCID - [1, TLB_NR_DYN_ASIDS]
 *         the value we write into the PCID part of CR3; corresponds to the
 *         ASID+1, because PCID 0 is special.
 *
 * uPCID - [2048 + 1, 2048 + TLB_NR_DYN_ASIDS]
 *         for KPTI each mm has two address spaces and thus needs two
 *         PCID values, but we can still do with a single ASID denomination
 *         for each mm. Corresponds to kPCID + 2048.
 */

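/*
 * Editor's illustration (not part of the original header), assuming
 * X86_CR3_PTI_PCID_USER_BIT is bit 11, consistent with the 2048 offset
 * above: ASID 0 maps to kPCID 1 and uPCID 2049 (1 | 1 << 11), and
 * ASID 5 maps to kPCID 6 and uPCID 2054.
 */
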
/* There are 12 bits of space for ASIDS in CR3 */
#define CR3_HW_ASID_BITS		12

/*
 * When enabled, PAGE_TABLE_ISOLATION consumes a single bit for
 * user/kernel switches
 */
#ifdef CONFIG_PAGE_TABLE_ISOLATION
# define PTI_CONSUMED_PCID_BITS	1
#else
# define PTI_CONSUMED_PCID_BITS	0
#endif

#define CR3_AVAIL_PCID_BITS (X86_CR3_PCID_BITS - PTI_CONSUMED_PCID_BITS)

/*
 * ASIDs are zero-based: 0->MAX_AVAIL_ASID are valid.  -1 below to account
 * for them being zero-based.  Another -1 is because PCID 0 is reserved for
 * use by non-PCID-aware users.
 */
#define MAX_ASID_AVAILABLE ((1 << CR3_AVAIL_PCID_BITS) - 2)

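/*
 * Worked example (editor's illustration, assuming X86_CR3_PCID_BITS == 12 as
 * noted above): with PAGE_TABLE_ISOLATION enabled, CR3_AVAIL_PCID_BITS is
 * 12 - 1 = 11 and MAX_ASID_AVAILABLE is (1 << 11) - 2 = 2046; with it
 * disabled, MAX_ASID_AVAILABLE is (1 << 12) - 2 = 4094.
 */
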
/*
 * 6 because 6 should be plenty and struct tlb_state will fit in two cache
 * lines.
 */
#define TLB_NR_DYN_ASIDS	6

/*
 * Given @asid, compute kPCID
 */
static inline u16 kern_pcid(u16 asid)
{
        VM_WARN_ON_ONCE(asid > MAX_ASID_AVAILABLE);

#ifdef CONFIG_PAGE_TABLE_ISOLATION
        /*
         * Make sure that the dynamic ASID space does not conflict with the
         * bit we are using to switch between user and kernel ASIDs.
         */
        BUILD_BUG_ON(TLB_NR_DYN_ASIDS >= (1 << X86_CR3_PTI_PCID_USER_BIT));

        /*
         * The ASID being passed in here should have respected the
         * MAX_ASID_AVAILABLE and thus never have the switch bit set.
         */
        VM_WARN_ON_ONCE(asid & (1 << X86_CR3_PTI_PCID_USER_BIT));
#endif
        /*
         * The dynamically-assigned ASIDs that get passed in are small
         * (<TLB_NR_DYN_ASIDS).  They never have the high switch bit set,
         * so do not bother to clear it.
         *
         * If PCID is on, ASID-aware code paths put the ASID+1 into the
         * PCID bits.  This serves two purposes.  It prevents a nasty
         * situation in which PCID-unaware code saves CR3, loads some other
         * value (with PCID == 0), and then restores CR3, thus corrupting
         * the TLB for ASID 0 if the saved ASID was nonzero.  It also means
         * that any bugs involving loading a PCID-enabled CR3 with
         * CR4.PCIDE off will trigger deterministically.
         */
        return asid + 1;
}

/*
 * Given @asid, compute uPCID
 */
static inline u16 user_pcid(u16 asid)
{
        u16 ret = kern_pcid(asid);
#ifdef CONFIG_PAGE_TABLE_ISOLATION
        ret |= 1 << X86_CR3_PTI_PCID_USER_BIT;
#endif
        return ret;
}

struct pgd_t;
static inline unsigned long build_cr3(pgd_t *pgd, u16 asid)
{
        if (static_cpu_has(X86_FEATURE_PCID)) {
                return __sme_pa(pgd) | kern_pcid(asid);
        } else {
                VM_WARN_ON_ONCE(asid != 0);
                return __sme_pa(pgd);
        }
}

static inline unsigned long build_cr3_noflush(pgd_t *pgd, u16 asid)
{
        VM_WARN_ON_ONCE(asid > MAX_ASID_AVAILABLE);
        /*
         * Use boot_cpu_has() instead of this_cpu_has() as this function
         * might be called during early boot. This should work even after
         * boot because all CPUs have the same capabilities:
         */
        VM_WARN_ON_ONCE(!boot_cpu_has(X86_FEATURE_PCID));
        return __sme_pa(pgd) | kern_pcid(asid) | CR3_NOFLUSH;
}

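/*
 * Editor's illustration (hedged sketch, not part of the original header):
 * context-switch code such as switch_mm_irqs_off() in arch/x86/mm/tlb.c is
 * expected to combine these helpers roughly like:
 *
 *	write_cr3(build_cr3(next->pgd, new_asid));		full flush for the new ASID
 *	write_cr3(build_cr3_noflush(next->pgd, new_asid));	reuse the ASID's cached TLB entries
 */
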
#ifdef CONFIG_PARAVIRT
#include <asm/paravirt.h>
#else
#define __flush_tlb() __native_flush_tlb()
#define __flush_tlb_global() __native_flush_tlb_global()
#define __flush_tlb_one_user(addr) __native_flush_tlb_one_user(addr)
#endif

struct tlb_context {
        u64 ctx_id;
        u64 tlb_gen;
};

struct tlb_state {
        /*
         * cpu_tlbstate.loaded_mm should match CR3 whenever interrupts
         * are on.  This means that it may not match current->active_mm,
         * which will contain the previous user mm when we're in lazy TLB
         * mode even if we've already switched back to swapper_pg_dir.
         *
         * During switch_mm_irqs_off(), loaded_mm will be set to
         * LOADED_MM_SWITCHING during the brief interrupts-off window
         * when CR3 and loaded_mm would otherwise be inconsistent. This
         * is for nmi_uaccess_okay()'s benefit.
         */
        struct mm_struct *loaded_mm;

#define LOADED_MM_SWITCHING ((struct mm_struct *)1)

        u16 loaded_mm_asid;
        u16 next_asid;
        /* last user mm's ctx id */
        u64 last_ctx_id;

        /*
         * We can be in one of several states:
         *
         *  - Actively using an mm.  Our CPU's bit will be set in
         *    mm_cpumask(loaded_mm) and is_lazy == false;
         *
         *  - Not using a real mm.  loaded_mm == &init_mm.  Our CPU's bit
         *    will not be set in mm_cpumask(&init_mm) and is_lazy == false.
         *
         *  - Lazily using a real mm.  loaded_mm != &init_mm, our bit
         *    is set in mm_cpumask(loaded_mm), but is_lazy == true.
         *    We're heuristically guessing that the CR3 load we
         *    skipped more than makes up for the overhead added by
         *    lazy mode.
         */
        bool is_lazy;

        /*
         * If set we changed the page tables in such a way that we
         * needed an invalidation of all contexts (aka. PCIDs / ASIDs).
         * This tells us to go invalidate all the non-loaded ctxs[]
         * on the next context switch.
         *
         * The current ctx was kept up-to-date as it ran and does not
         * need to be invalidated.
         */
        bool invalidate_other;

        /*
         * Mask that contains TLB_NR_DYN_ASIDS+1 bits to indicate
         * the corresponding user PCID needs a flush next time we
         * switch to it; see SWITCH_TO_USER_CR3.
         */
        unsigned short user_pcid_flush_mask;

        /*
         * Access to this CR4 shadow and to H/W CR4 is protected by
         * disabling interrupts when modifying either one.
         */
        unsigned long cr4;

        /*
         * This is a list of all contexts that might exist in the TLB.
         * There is one per ASID that we use, and the ASID (what the
         * CPU calls PCID) is the index into ctxs[].
         *
         * For each context, ctx_id indicates which mm the TLB's user
         * entries came from.  As an invariant, the TLB will never
         * contain entries that are out-of-date as when that mm reached
         * the tlb_gen in the list.
         *
         * To be clear, this means that it's legal for the TLB code to
         * flush the TLB without updating tlb_gen.  This can happen
         * (for now, at least) due to paravirt remote flushes.
         *
         * NB: context 0 is a bit special, since it's also used by
         * various bits of init code.  This is fine -- code that
         * isn't aware of PCID will end up harmlessly flushing
         * context 0.
         */
        struct tlb_context ctxs[TLB_NR_DYN_ASIDS];
};
DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate);

/*
 * Blindly accessing user memory from NMI context can be dangerous
 * if we're in the middle of switching the current user task or
 * switching the loaded mm.  It can also be dangerous if we
 * interrupted some kernel code that was temporarily using a
 * different mm.
 */
static inline bool nmi_uaccess_okay(void)
{
        struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
        struct mm_struct *current_mm = current->mm;

        VM_WARN_ON_ONCE(!loaded_mm);

        /*
         * The condition we want to check is
         * current_mm->pgd == __va(read_cr3_pa()).  This may be slow, though,
         * if we're running in a VM with shadow paging, and nmi_uaccess_okay()
         * is supposed to be reasonably fast.
         *
         * Instead, we check the almost equivalent but somewhat conservative
         * condition below, and we rely on the fact that switch_mm_irqs_off()
         * sets loaded_mm to LOADED_MM_SWITCHING before writing to CR3.
         */
        if (loaded_mm != current_mm)
                return false;

        VM_WARN_ON_ONCE(current_mm->pgd != __va(read_cr3_pa()));

        return true;
}

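/*
 * Editor's illustration (hypothetical caller, not part of the original
 * header): NMI-context code that wants to touch user memory is expected to
 * check this first and bail out when it returns false, e.g.:
 *
 *	if (!nmi_uaccess_okay())
 *		return -EFAULT;		(CR3 may not match current->mm)
 *	ret = __copy_from_user_inatomic(dst, src, len);
 */
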
/* Initialize cr4 shadow for this CPU. */
static inline void cr4_init_shadow(void)
{
        this_cpu_write(cpu_tlbstate.cr4, __read_cr4());
}

static inline void __cr4_set(unsigned long cr4)
{
        lockdep_assert_irqs_disabled();
        this_cpu_write(cpu_tlbstate.cr4, cr4);
        __write_cr4(cr4);
}

/* Set in this cpu's CR4. */
static inline void cr4_set_bits(unsigned long mask)
{
        unsigned long cr4, flags;

        local_irq_save(flags);
        cr4 = this_cpu_read(cpu_tlbstate.cr4);
        if ((cr4 | mask) != cr4)
                __cr4_set(cr4 | mask);
        local_irq_restore(flags);
}

/* Clear in this cpu's CR4. */
static inline void cr4_clear_bits(unsigned long mask)
{
        unsigned long cr4, flags;

        local_irq_save(flags);
        cr4 = this_cpu_read(cpu_tlbstate.cr4);
        if ((cr4 & ~mask) != cr4)
                __cr4_set(cr4 & ~mask);
        local_irq_restore(flags);
}

static inline void cr4_toggle_bits_irqsoff(unsigned long mask)
{
        unsigned long cr4;

        cr4 = this_cpu_read(cpu_tlbstate.cr4);
        __cr4_set(cr4 ^ mask);
}

/* Read the CR4 shadow. */
static inline unsigned long cr4_read_shadow(void)
{
        return this_cpu_read(cpu_tlbstate.cr4);
}

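/*
 * Editor's illustration (hedged example, not part of the original header):
 * feature-enable paths use the shadow-aware helpers rather than writing CR4
 * directly, e.g. PCID setup does roughly:
 *
 *	cr4_set_bits(X86_CR4_PCIDE);
 */
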
/*
 * Mark all other ASIDs as invalid, preserving the current one.
 */
static inline void invalidate_other_asid(void)
{
        this_cpu_write(cpu_tlbstate.invalidate_other, true);
}

/*
 * Save some of the cr4 feature set we're using (e.g. Pentium 4MB
 * enable and PPro Global page enable), so that any CPUs that boot
 * up after us can get the correct flags.  This should only be used
 * during boot on the boot cpu.
 */
extern unsigned long mmu_cr4_features;
extern u32 *trampoline_cr4_features;

static inline void cr4_set_bits_and_update_boot(unsigned long mask)
{
        mmu_cr4_features |= mask;
        if (trampoline_cr4_features)
                *trampoline_cr4_features = mmu_cr4_features;
        cr4_set_bits(mask);
}

extern void initialize_tlbstate_and_flush(void);

/*
 * Given an ASID, flush the corresponding user ASID.  We can delay this
 * until the next time we switch to it.
 *
 * See SWITCH_TO_USER_CR3.
 */
static inline void invalidate_user_asid(u16 asid)
{
        /* There is no user ASID if address space separation is off */
        if (!IS_ENABLED(CONFIG_PAGE_TABLE_ISOLATION))
                return;

        /*
         * We only have a single ASID if PCID is off and the CR3
         * write will have flushed it.
         */
        if (!cpu_feature_enabled(X86_FEATURE_PCID))
                return;

        if (!static_cpu_has(X86_FEATURE_PTI))
                return;

        __set_bit(kern_pcid(asid),
                  (unsigned long *)this_cpu_ptr(&cpu_tlbstate.user_pcid_flush_mask));
}

/*
 * flush the entire current user mapping
 */
static inline void __native_flush_tlb(void)
{
        /*
         * Preemption or interrupts must be disabled to protect the access
         * to the per CPU variable and to prevent being preempted between
         * read_cr3() and write_cr3().
         */
        WARN_ON_ONCE(preemptible());

        invalidate_user_asid(this_cpu_read(cpu_tlbstate.loaded_mm_asid));

        /* If current->mm == NULL then the read_cr3() "borrows" an mm */
        native_write_cr3(__native_read_cr3());
}

/*
 * flush everything
 */
static inline void __native_flush_tlb_global(void)
{
        unsigned long cr4, flags;

        if (static_cpu_has(X86_FEATURE_INVPCID)) {
                /*
                 * Using INVPCID is considerably faster than a pair of writes
                 * to CR4 sandwiched inside an IRQ flag save/restore.
                 *
                 * Note, this works with CR4.PCIDE=0 or 1.
                 */
                invpcid_flush_all();
                return;
        }

        /*
         * Read-modify-write to CR4 - protect it from preemption and
         * from interrupts. (Use the raw variant because this code can
         * be called from deep inside debugging code.)
         */
        raw_local_irq_save(flags);

        cr4 = this_cpu_read(cpu_tlbstate.cr4);
        /* toggle PGE */
        native_write_cr4(cr4 ^ X86_CR4_PGE);
        /* write old PGE again and flush TLBs */
        native_write_cr4(cr4);

        raw_local_irq_restore(flags);
}

/*
 * flush one page in the user mapping
 */
static inline void __native_flush_tlb_one_user(unsigned long addr)
{
        u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);

        asm volatile("invlpg (%0)" ::"r" (addr) : "memory");

        if (!static_cpu_has(X86_FEATURE_PTI))
                return;

        /*
         * Some platforms #GP if we call invpcid(type=1/2) before CR4.PCIDE=1.
         * Just use invalidate_user_asid() in case we are called early.
         */
        if (!this_cpu_has(X86_FEATURE_INVPCID_SINGLE))
                invalidate_user_asid(loaded_mm_asid);
        else
                invpcid_flush_one(user_pcid(loaded_mm_asid), addr);
}

/*
 * flush everything
 */
static inline void __flush_tlb_all(void)
{
        if (boot_cpu_has(X86_FEATURE_PGE)) {
                __flush_tlb_global();
        } else {
                /*
                 * !PGE -> !PCID (setup_pcid()), thus every flush is total.
                 */
                __flush_tlb();
        }
}

/*
 * flush one page in the kernel mapping
 */
static inline void __flush_tlb_one_kernel(unsigned long addr)
{
        count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ONE);

        /*
         * If PTI is off, then __flush_tlb_one_user() is just INVLPG or its
         * paravirt equivalent.  Even with PCID, this is sufficient: we only
         * use PCID if we also use global PTEs for the kernel mapping, and
         * INVLPG flushes global translations across all address spaces.
         *
         * If PTI is on, then the kernel is mapped with non-global PTEs, and
         * __flush_tlb_one_user() will flush the given address for the current
         * kernel address space and for its usermode counterpart, but it does
         * not flush it for other address spaces.
         */
        __flush_tlb_one_user(addr);

        if (!static_cpu_has(X86_FEATURE_PTI))
                return;

        /*
         * See above.  We need to propagate the flush to all other address
         * spaces.  In principle, we only need to propagate it to kernelmode
         * address spaces, but the extra bookkeeping we would need is not
         * worth it.
         */
        invalidate_other_asid();
}

#define TLB_FLUSH_ALL	-1UL

/*
 * TLB flushing:
 *
 *  - flush_tlb_all() flushes all processes TLBs
 *  - flush_tlb_mm(mm) flushes the specified mm context TLB's
 *  - flush_tlb_page(vma, vmaddr) flushes one page
 *  - flush_tlb_range(vma, start, end) flushes a range of pages
 *  - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
 *  - flush_tlb_others(cpumask, info) flushes TLBs on other cpus
 *
 * ..but the i386 has somewhat limited tlb flushing capabilities,
 * and page-granular flushes are available only on i486 and up.
 */
struct flush_tlb_info {
        /*
         * We support several kinds of flushes.
         *
         * - Fully flush a single mm.  .mm will be set, .end will be
         *   TLB_FLUSH_ALL, and .new_tlb_gen will be the tlb_gen to
         *   which the IPI sender is trying to catch us up.
         *
         * - Partially flush a single mm.  .mm will be set, .start and
         *   .end will indicate the range, and .new_tlb_gen will be set
         *   such that the changes between generation .new_tlb_gen-1 and
         *   .new_tlb_gen are entirely contained in the indicated range.
         *
         * - Fully flush all mms whose tlb_gens have been updated.  .mm
         *   will be NULL, .end will be TLB_FLUSH_ALL, and .new_tlb_gen
         *   will be zero.
         */
        struct mm_struct	*mm;
        unsigned long		start;
        unsigned long		end;
        u64			new_tlb_gen;
        unsigned int		stride_shift;
        bool			freed_tables;
};

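/*
 * Editor's illustration (hypothetical values, not part of the original
 * header): a partial flush of one mm over a single 4k page might be
 * described as:
 *
 *	struct flush_tlb_info info = {
 *		.mm		= mm,
 *		.start		= addr,
 *		.end		= addr + PAGE_SIZE,
 *		.new_tlb_gen	= inc_mm_tlb_gen(mm),
 *		.stride_shift	= PAGE_SHIFT,
 *		.freed_tables	= false,
 *	};
 */
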
#define local_flush_tlb() __flush_tlb()

#define flush_tlb_mm(mm)						\
		flush_tlb_mm_range(mm, 0UL, TLB_FLUSH_ALL, 0UL, true)

#define flush_tlb_range(vma, start, end)				\
	flush_tlb_mm_range((vma)->vm_mm, start, end,			\
			   ((vma)->vm_flags & VM_HUGETLB)		\
				? huge_page_shift(hstate_vma(vma))	\
				: PAGE_SHIFT, false)

extern void flush_tlb_all(void);
extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
				unsigned long end, unsigned int stride_shift,
				bool freed_tables);
extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);

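/*
 * Editor's illustration (hypothetical call, not part of the original header):
 * flushing one 2M huge page can pass its natural stride so only a single
 * page-sized invalidation is issued for the range, e.g.:
 *
 *	flush_tlb_mm_range(mm, addr, addr + PMD_SIZE, PMD_SHIFT, false);
 */
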
static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
{
        flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
}

void native_flush_tlb_others(const struct cpumask *cpumask,
			     const struct flush_tlb_info *info);

static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
{
        /*
         * Bump the generation count.  This also serves as a full barrier
         * that synchronizes with switch_mm(): callers are required to order
         * their read of mm_cpumask after their writes to the paging
         * structures.
         */
        return atomic64_inc_return(&mm->context.tlb_gen);
}

static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
					struct mm_struct *mm)
{
        inc_mm_tlb_gen(mm);
        cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
}

extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);

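/*
 * Editor's illustration (hedged sketch, not part of the original header): the
 * batching API is meant to be driven by the generic reclaim/unmap path,
 * roughly:
 *
 *	arch_tlbbatch_add_mm(&batch, mm);	once per mm whose PTEs were cleared
 *	...
 *	arch_tlbbatch_flush(&batch);		one deferred flush for the whole batch
 */
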
#ifndef CONFIG_PARAVIRT
#define flush_tlb_others(mask, info)	\
	native_flush_tlb_others(mask, info)

#define paravirt_tlb_remove_table(tlb, page) \
	tlb_remove_page(tlb, (void *)(page))
#endif

#endif /* _ASM_X86_TLBFLUSH_H */