License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to a license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
  lines of source.
- The file already had some variant of a license header in it (even if <5
  lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
  considered to have no license information in it, and the top-level
  COPYING file license applied.

  For non-*/uapi/* files that summary was:

   SPDX license identifier                            # files
   ---------------------------------------------------|-------
   GPL-2.0                                              11139

  and resulted in the first patch in this series.

  If the file was a */uapi/* path one, it was "GPL-2.0 WITH
  Linux-syscall-note", otherwise it was "GPL-2.0". The results were:

   SPDX license identifier                            # files
   ---------------------------------------------------|-------
   GPL-2.0 WITH Linux-syscall-note                        930

  and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
  of the */uapi/* ones, it was denoted with the Linux-syscall-note if
  any GPL family license was found in the file or had no licensing in
  it (per prior point). Results summary:

   SPDX license identifier                            # files
   ---------------------------------------------------|------
   GPL-2.0 WITH Linux-syscall-note                        270
   GPL-2.0+ WITH Linux-syscall-note                       169
   ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)     21
   ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)     17
   LGPL-2.1+ WITH Linux-syscall-note                       15
   GPL-1.0+ WITH Linux-syscall-note                        14
   ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)     5
   LGPL-2.0+ WITH Linux-syscall-note                        4
   LGPL-2.1 WITH Linux-syscall-note                         3
   ((GPL-2.0 WITH Linux-syscall-note) OR MIT)               3
   ((GPL-2.0 WITH Linux-syscall-note) AND MIT)              1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors; they have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 patched files from the initial patch version
earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types; an example of the resulting tags follows this log).
Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
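For illustration (editor's note, not part of the original log): the tag added
by this series is a single comment line at the very top of each file, in the
comment style the file type expects, e.g.

	// SPDX-License-Identifier: GPL-2.0				(C source)
	/* SPDX-License-Identifier: GPL-2.0 */				(C header)
	/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */	(uapi header)

which is exactly the form that opens the file below.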
// SPDX-License-Identifier: GPL-2.0
/*
 * Virtual Memory Map support
 *
 * (C) 2007 sgi. Christoph Lameter.
 *
 * Virtual memory maps allow VM primitives pfn_to_page, page_to_pfn,
 * virt_to_page, page_address() to be implemented as a base offset
 * calculation without memory access.
 *
 * However, virtual mappings need a page table and TLBs. Many Linux
 * architectures already map their physical space using 1-1 mappings
 * via TLBs. For those arches the virtual memory map is essentially
 * for free if we use the same page size as the 1-1 mappings. In that
 * case the overhead consists of a few additional pages that are
 * allocated to create a view of memory for vmemmap.
 *
 * The architecture is expected to provide a vmemmap_populate() function
 * to instantiate the mapping.
 */
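/*
 * Editor's note (illustrative, not part of the original file): with
 * CONFIG_SPARSEMEM_VMEMMAP the generic memory model reduces the
 * pfn <-> page conversions to the base-offset arithmetic described
 * above, roughly:
 *
 *	#define __pfn_to_page(pfn)	(vmemmap + (pfn))
 *	#define __page_to_pfn(page)	(unsigned long)((page) - vmemmap)
 *
 * where vmemmap is the architecture-defined base of the virtual map
 * (see include/asm-generic/memory_model.h for the in-tree definitions).
 */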
#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/memblock.h>
#include <linux/memremap.h>
#include <linux/highmem.h>
include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability. As this conversion
needs to touch a large number of source files, the following script is
used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following (a brief illustration of a typical resulting
edit appears after this log):
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there. ie. if only gfp is used,
gfp.h, if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surrounding. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition while adding it to implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
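For illustration (editor's sketch, not part of the original log): a typical
change produced by this conversion simply adds the now-required header to the
core-kernel include block, preserving the surrounding order, e.g.

	 #include <linux/kernel.h>
	+#include <linux/slab.h>
	 #include <linux/string.h>

with gfp.h added the same way where only gfp facilities are used.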
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/vmalloc.h>
#include <linux/sched.h>
#include <linux/pgtable.h>
#include <linux/bootmem_info.h>

#include <asm/dma.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>

#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
/**
 * struct vmemmap_remap_walk - walk vmemmap page table
 *
 * @remap_pte:		called for each lowest-level entry (PTE).
 * @nr_walked:		the number of walked pte.
 * @reuse_page:		the page which is reused for the tail vmemmap pages.
 * @reuse_addr:		the virtual address of the @reuse_page page.
 * @vmemmap_pages:	the list head of the vmemmap pages that can be freed
 *			or is mapped from.
 */
struct vmemmap_remap_walk {
	void (*remap_pte)(pte_t *pte, unsigned long addr,
			  struct vmemmap_remap_walk *walk);
	unsigned long nr_walked;
	struct page *reuse_page;
	unsigned long reuse_addr;
	struct list_head *vmemmap_pages;
};

static int __split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
{
	pmd_t __pmd;
	int i;
	unsigned long addr = start;
	struct page *page = pmd_page(*pmd);
	pte_t *pgtable = pte_alloc_one_kernel(&init_mm);

	if (!pgtable)
		return -ENOMEM;

	pmd_populate_kernel(&init_mm, &__pmd, pgtable);

	for (i = 0; i < PMD_SIZE / PAGE_SIZE; i++, addr += PAGE_SIZE) {
		pte_t entry, *pte;
		pgprot_t pgprot = PAGE_KERNEL;

		entry = mk_pte(page + i, pgprot);
		pte = pte_offset_kernel(&__pmd, addr);
		set_pte_at(&init_mm, addr, pte, entry);
	}

	spin_lock(&init_mm.page_table_lock);
	if (likely(pmd_leaf(*pmd))) {
		/* Make pte visible before pmd. See comment in pmd_install(). */
		smp_wmb();
		pmd_populate_kernel(&init_mm, pmd, pgtable);
		flush_tlb_kernel_range(start, start + PMD_SIZE);
	} else {
		pte_free_kernel(&init_mm, pgtable);
	}
	spin_unlock(&init_mm.page_table_lock);

	return 0;
}

static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
{
	int leaf;

	spin_lock(&init_mm.page_table_lock);
	leaf = pmd_leaf(*pmd);
	spin_unlock(&init_mm.page_table_lock);

	if (!leaf)
		return 0;

	return __split_vmemmap_huge_pmd(pmd, start);
}

static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
			      unsigned long end,
			      struct vmemmap_remap_walk *walk)
{
	pte_t *pte = pte_offset_kernel(pmd, addr);

	/*
	 * The reuse_page is found 'first' in table walk before we start
	 * remapping (which is calling @walk->remap_pte).
	 */
	if (!walk->reuse_page) {
		walk->reuse_page = pte_page(*pte);
		/*
		 * Because the reuse address is part of the range that we are
		 * walking, skip the reuse address range.
		 */
		addr += PAGE_SIZE;
		pte++;
		walk->nr_walked++;
	}

	for (; addr != end; addr += PAGE_SIZE, pte++) {
		walk->remap_pte(pte, addr, walk);
		walk->nr_walked++;
	}
}

static int vmemmap_pmd_range(pud_t *pud, unsigned long addr,
			     unsigned long end,
			     struct vmemmap_remap_walk *walk)
{
	pmd_t *pmd;
	unsigned long next;

	pmd = pmd_offset(pud, addr);
	do {
		int ret;

		ret = split_vmemmap_huge_pmd(pmd, addr & PMD_MASK);
		if (ret)
			return ret;

		next = pmd_addr_end(addr, end);
		vmemmap_pte_range(pmd, addr, next, walk);
	} while (pmd++, addr = next, addr != end);

	return 0;
}

static int vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
			     unsigned long end,
			     struct vmemmap_remap_walk *walk)
{
	pud_t *pud;
	unsigned long next;

	pud = pud_offset(p4d, addr);
	do {
		int ret;

		next = pud_addr_end(addr, end);
		ret = vmemmap_pmd_range(pud, addr, next, walk);
		if (ret)
			return ret;
	} while (pud++, addr = next, addr != end);

	return 0;
}

static int vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
			     unsigned long end,
			     struct vmemmap_remap_walk *walk)
{
	p4d_t *p4d;
	unsigned long next;

	p4d = p4d_offset(pgd, addr);
	do {
		int ret;

		next = p4d_addr_end(addr, end);
		ret = vmemmap_pud_range(p4d, addr, next, walk);
		if (ret)
			return ret;
	} while (p4d++, addr = next, addr != end);

	return 0;
}

static int vmemmap_remap_range(unsigned long start, unsigned long end,
			       struct vmemmap_remap_walk *walk)
{
	unsigned long addr = start;
	unsigned long next;
	pgd_t *pgd;

	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));

	pgd = pgd_offset_k(addr);
	do {
		int ret;

		next = pgd_addr_end(addr, end);
		ret = vmemmap_p4d_range(pgd, addr, next, walk);
		if (ret)
			return ret;
	} while (pgd++, addr = next, addr != end);

	/*
	 * We only change the mapping of the vmemmap virtual address range
	 * [@start + PAGE_SIZE, end), so we only need to flush the TLB which
	 * belongs to the range.
	 */
	flush_tlb_kernel_range(start + PAGE_SIZE, end);

	return 0;
}

/*
 * Free a vmemmap page. A vmemmap page can be allocated from the memblock
 * allocator or buddy allocator. If the PG_reserved flag is set, it means
 * that it was allocated from the memblock allocator; just free it via
 * free_bootmem_page(). Otherwise, use __free_page().
 */
static inline void free_vmemmap_page(struct page *page)
{
	if (PageReserved(page))
		free_bootmem_page(page);
	else
		__free_page(page);
}

/* Free a list of the vmemmap pages */
static void free_vmemmap_page_list(struct list_head *list)
{
	struct page *page, *next;

	list_for_each_entry_safe(page, next, list, lru) {
		list_del(&page->lru);
		free_vmemmap_page(page);
	}
}

static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
			      struct vmemmap_remap_walk *walk)
{
	/*
	 * Remap the tail pages as read-only to catch illegal write operation
	 * to the tail pages.
	 */
	pgprot_t pgprot = PAGE_KERNEL_RO;
	pte_t entry = mk_pte(walk->reuse_page, pgprot);
	struct page *page = pte_page(*pte);

	list_add_tail(&page->lru, walk->vmemmap_pages);
	set_pte_at(&init_mm, addr, pte, entry);
}
mm: hugetlb: free the 2nd vmemmap page associated with each HugeTLB page
Patch series "Free the 2nd vmemmap page associated with each HugeTLB
page", v7.
This series can minimize the overhead of struct page for 2MB HugeTLB
pages significantly. It further reduces the overhead of struct page by
12.5% for a 2MB HugeTLB compared to the previous approach, which means
2GB per 1TB HugeTLB. It is a nice gain. Comments and reviews are
welcome. Thanks.
For the main implementation and details, refer to the commit log of patch
1. In this series, I have changed the following four helpers; the
following table shows the impact of the overhead of those helpers.
+------------------+-----------------------+
| APIs             | head page | tail page |
+------------------+-----------+-----------+
| PageHead()       |     Y     |     N     |
+------------------+-----------+-----------+
| PageTail()       |     Y     |     N     |
+------------------+-----------+-----------+
| PageCompound()   |     N     |     N     |
+------------------+-----------+-----------+
| compound_head()  |     Y     |     N     |
+------------------+-----------+-----------+
Y: Overhead is increased.
N: Overhead is _NOT_ increased.
It shows that the overhead of those helpers on a tail page doesn't change
between "hugetlb_free_vmemmap=on" and "hugetlb_free_vmemmap=off". But the
overhead on a head page will be increased when "hugetlb_free_vmemmap=on"
(except PageCompound()). So I believe that Matthew Wilcox's folio series
will help with this.
The users of PageHead() and PageTail() are much fewer than compound_head()
and most users of PageTail() are VM_BUG_ON(), so I have done some tests
about the overhead of compound_head() on head pages.
I have tested the overhead of calling compound_head() on a head page,
which is 2.11ns (Measure the call time of 10 million times
compound_head(), and then average).
For a head page whose address is not aligned with PAGE_SIZE or a
non-compound page, the overhead of compound_head() is 2.54ns which is
increased by 20%. For a head page whose address is aligned with
PAGE_SIZE, the overhead of compound_head() is 2.97ns which is increased by
40%. Most pages are the former. I do not think the overhead is
significant since the overhead of compound_head() itself is low.
This patch (of 5):
This patch minimizes the overhead of struct page for 2MB HugeTLB pages
significantly. It further reduces the overhead of struct page by 12.5%
for a 2MB HugeTLB compared to the previous approach, which means 2GB per
1TB HugeTLB (2MB type).
After the feature of "Free some vmemmap pages of HugeTLB page" is
enabled, the mapping of the vmemmap addresses associated with a 2MB
HugeTLB page becomes the figure below.
    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+---> PG_head
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
 |           |                     +-----------+                   | | | | |
 |           |                     |     3     | ------------------+ | | | |
 |           |                     +-----------+                     | | | |
 |           |                     |     4     | --------------------+ | | |
 |    2MB    |                     +-----------+                       | | |
 |           |                     |     5     | ----------------------+ | |
 |           |                     +-----------+                         | |
 |           |                     |     6     | ------------------------+ |
 |           |                     +-----------+                           |
 |           |                     |     7     | --------------------------+
 |           |                     +-----------+
 |           |
 |           |
 |           |
 +-----------+
As we can see, the 2nd vmemmap page frame (indexed by 1) is reused and
remapped. However, the 2nd vmemmap page frame can also be freed to
the buddy allocator, so we can change the mapping from the figure
above to the figure below.
    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+---> PG_head
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | ---------------^ ^ ^ ^ ^ ^ ^
 |           |                     +-----------+                  | | | | | |
 |           |                     |     2     | -----------------+ | | | | |
 |           |                     +-----------+                    | | | | |
 |           |                     |     3     | -------------------+ | | | |
 |           |                     +-----------+                      | | | |
 |           |                     |     4     | ---------------------+ | | |
 |    2MB    |                     +-----------+                        | | |
 |           |                     |     5     | -----------------------+ | |
 |           |                     +-----------+                          | |
 |           |                     |     6     | -------------------------+ |
 |           |                     +-----------+                            |
 |           |                     |     7     | ---------------------------+
 |           |                     +-----------+
 |           |
 |           |
 |           |
 +-----------+
After we do this, all tail vmemmap pages (1-7) are mapped to the head
vmemmap page frame (0). In other words, there is more than one page
struct with PG_head associated with each HugeTLB page. We __know__ that
there is only one real head page struct; the tail page structs with PG_head are
fake head page structs. We need an approach to distinguish between those
two different types of page structs so that compound_head(), PageHead()
and PageTail() can work properly if the parameter is the tail page struct
but with PG_head.
The following code snippet describes how to distinguish between real and
fake head page structs.
	if (test_bit(PG_head, &page->flags)) {
		unsigned long head = READ_ONCE(page[1].compound_head);

		if (head & 1) {
			if (head == (unsigned long)page + 1)
				==> head page struct
			else
				==> tail page struct
		} else
			==> head page struct
	}
We can safely access the fields of @page[1] when @page has PG_head, because
@page is then part of a compound page composed of at least two contiguous
pages. (A compilable form of this check is sketched in an editor's note
after this log.)
[songmuchun@bytedance.com: restore lost comment changes]
Link: https://lkml.kernel.org/r/20211101031651.75851-1-songmuchun@bytedance.com
Link: https://lkml.kernel.org/r/20211101031651.75851-2-songmuchun@bytedance.com
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Barry Song <song.bao.hua@hisilicon.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@suse.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Chen Huang <chenhuang5@huawei.com>
Cc: Bodeddula Balasubramaniam <bodeddub@amazon.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
Cc: Fam Zheng <fam.zheng@bytedance.com>
Cc: Qi Zheng <zhengqi.arch@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
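
/*
 * Editor's sketch (not part of the original file): a self-contained form
 * of the real-vs-fake head page test described in the commit log above.
 * The helper name is hypothetical; the in-tree implementation lives in
 * the page flag helpers rather than in this file.
 */
static __always_inline const struct page *sketch_real_head_page(const struct page *page)
{
	if (test_bit(PG_head, &page->flags)) {
		unsigned long head = READ_ONCE(page[1].compound_head);

		/*
		 * Bit 0 set means page[1] is a tail page and the remaining
		 * bits point at its head; that head is the only real head
		 * page struct, whether @page is the real head or a fake one.
		 */
		if (head & 1)
			return (const struct page *)(head - 1);
	}
	return page;
}
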
/*
 * How many struct page structs need to be reset. When we reuse the head
 * struct page, the special metadata (e.g. page->flags or page->mapping)
 * cannot be copied to the tail struct page structs. The invalid value will
 * be checked in free_tail_pages_check(). In order to avoid the message of
 * "corrupted mapping in tail page", we need to reset at least 3 struct page
 * structs (one head struct page and two tail struct pages).
 */
#define NR_RESET_STRUCT_PAGE		3

static inline void reset_struct_pages(struct page *start)
{
	int i;
	struct page *from = start + NR_RESET_STRUCT_PAGE;

	for (i = 0; i < NR_RESET_STRUCT_PAGE; i++)
		memcpy(start + i, from, sizeof(*from));
}

static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
				struct vmemmap_remap_walk *walk)
{
	pgprot_t pgprot = PAGE_KERNEL;
	struct page *page;
	void *to;

	BUG_ON(pte_page(*pte) != walk->reuse_page);

	page = list_first_entry(walk->vmemmap_pages, struct page, lru);
	list_del(&page->lru);
	to = page_to_virt(page);
	copy_page(to, (void *)walk->reuse_addr);
	reset_struct_pages(to);

	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
}

/**
 * vmemmap_remap_free - remap the vmemmap virtual address range [@start, @end)
 *			to the page which @reuse is mapped to, then free the
 *			vmemmap pages which the range is mapped to.
 * @start:	start address of the vmemmap virtual address range that we want
 *		to remap.
 * @end:	end address of the vmemmap virtual address range that we want to
 *		remap.
 * @reuse:	reuse address.
 *
 * Return: %0 on success, negative error code otherwise.
 */
int vmemmap_remap_free(unsigned long start, unsigned long end,
		       unsigned long reuse)
{
	int ret;
	LIST_HEAD(vmemmap_pages);
	struct vmemmap_remap_walk walk = {
		.remap_pte	= vmemmap_remap_pte,
		.reuse_addr	= reuse,
		.vmemmap_pages	= &vmemmap_pages,
	};

	/*
	 * In order to make the remapping routine most efficient for the huge
	 * pages, the routine of vmemmap page table walking has the following
	 * rules (see more details in vmemmap_pte_range()):
	 *
	 * - The range [@start, @end) and the range [@reuse, @reuse + PAGE_SIZE)
	 *   should be contiguous.
	 * - The @reuse address is part of the range [@reuse, @end) that we are
	 *   walking which is passed to vmemmap_remap_range().
	 * - The @reuse address is the first in the complete range.
	 *
	 * So we need to make sure that @start and @reuse meet the above rules.
	 */
	BUG_ON(start - reuse != PAGE_SIZE);

	mmap_read_lock(&init_mm);
	ret = vmemmap_remap_range(reuse, end, &walk);
	if (ret && walk.nr_walked) {
		end = reuse + walk.nr_walked * PAGE_SIZE;
		/*
		 * vmemmap_pages contains pages from the previous
		 * vmemmap_remap_range call which failed.  These
		 * are pages which were removed from the vmemmap.
		 * They will be restored in the following call.
		 */
		walk = (struct vmemmap_remap_walk) {
			.remap_pte	= vmemmap_restore_pte,
			.reuse_addr	= reuse,
			.vmemmap_pages	= &vmemmap_pages,
		};

		vmemmap_remap_range(reuse, end, &walk);
	}
	mmap_read_unlock(&init_mm);

	free_vmemmap_page_list(&vmemmap_pages);

	return ret;
}
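
/*
 * Editor's sketch (not part of the original file): what a caller looks like
 * for a 2MB HugeTLB page, which is described by PMD_SIZE/PAGE_SIZE struct
 * pages. The first vmemmap page is kept and reused; the rest of the range is
 * remapped to it and freed. The helper name and bounds are illustrative, not
 * the in-tree HugeTLB code, and the sketch assumes the huge page's struct
 * pages start on a vmemmap page boundary. The restore side would call
 * vmemmap_remap_alloc() with the same bounds.
 */
static int sketch_free_hugetlb_tail_vmemmap(struct page *head)
{
	unsigned long vmemmap_addr  = (unsigned long)head;
	unsigned long vmemmap_end   = vmemmap_addr +
				      (PMD_SIZE / PAGE_SIZE) * sizeof(struct page);
	unsigned long vmemmap_reuse = vmemmap_addr;

	/* vmemmap_remap_free() requires start == reuse + PAGE_SIZE. */
	return vmemmap_remap_free(vmemmap_addr + PAGE_SIZE, vmemmap_end,
				  vmemmap_reuse);
}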

static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
				   gfp_t gfp_mask, struct list_head *list)
{
	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
	int nid = page_to_nid((struct page *)start);
	struct page *page, *next;

	while (nr_pages--) {
		page = alloc_pages_node(nid, gfp_mask, 0);
		if (!page)
			goto out;
		list_add_tail(&page->lru, list);
	}

	return 0;
out:
	list_for_each_entry_safe(page, next, list, lru)
		__free_pages(page, 0);
	return -ENOMEM;
}

/**
 * vmemmap_remap_alloc - remap the vmemmap virtual address range [@start, end)
 *			 to the pages which are from the @vmemmap_pages
 *			 respectively.
 * @start:	start address of the vmemmap virtual address range that we want
 *		to remap.
 * @end:	end address of the vmemmap virtual address range that we want to
 *		remap.
 * @reuse:	reuse address.
 * @gfp_mask:	GFP flag for allocating vmemmap pages.
 *
 * Return: %0 on success, negative error code otherwise.
 */
int vmemmap_remap_alloc(unsigned long start, unsigned long end,
			unsigned long reuse, gfp_t gfp_mask)
{
	LIST_HEAD(vmemmap_pages);
	struct vmemmap_remap_walk walk = {
		.remap_pte	= vmemmap_restore_pte,
		.reuse_addr	= reuse,
		.vmemmap_pages	= &vmemmap_pages,
	};

	/* See the comment in the vmemmap_remap_free(). */
	BUG_ON(start - reuse != PAGE_SIZE);

	if (alloc_vmemmap_page_list(start, end, gfp_mask, &vmemmap_pages))
		return -ENOMEM;

	mmap_read_lock(&init_mm);
	vmemmap_remap_range(reuse, end, &walk);
	mmap_read_unlock(&init_mm);

	return 0;
}
#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */

/*
 * Allocate a block of memory to be used to back the virtual memory map
 * or to back the page tables that are used to create the mapping.
 * Uses the main allocators if they are available, else bootmem.
 */
static void * __ref __earlyonly_bootmem_alloc(int node,
				unsigned long size,
				unsigned long align,
				unsigned long goal)
{
	return memblock_alloc_try_nid_raw(size, align, goal,
					  MEMBLOCK_ALLOC_ACCESSIBLE, node);
}

void * __meminit vmemmap_alloc_block(unsigned long size, int node)
{
	/* If the main allocator is up use that, fallback to bootmem. */
	if (slab_is_available()) {
		gfp_t gfp_mask = GFP_KERNEL|__GFP_RETRY_MAYFAIL|__GFP_NOWARN;
		int order = get_order(size);
		static bool warned;
		struct page *page;

		page = alloc_pages_node(node, gfp_mask, order);
		if (page)
			return page_address(page);

		if (!warned) {
			warn_alloc(gfp_mask & ~__GFP_NOWARN, NULL,
				   "vmemmap alloc failure: order:%u", order);
			warned = true;
		}
		return NULL;
	} else
		return __earlyonly_bootmem_alloc(node, size, size,
				__pa(MAX_DMA_ADDRESS));
}

static void * __meminit altmap_alloc_block_buf(unsigned long size,
					       struct vmem_altmap *altmap);

/* need to make sure size is all the same during early stage */
void * __meminit vmemmap_alloc_block_buf(unsigned long size, int node,
					 struct vmem_altmap *altmap)
{
	void *ptr;

	if (altmap)
		return altmap_alloc_block_buf(size, altmap);

	ptr = sparse_buffer_alloc(size);
	if (!ptr)
		ptr = vmemmap_alloc_block(size, node);
	return ptr;
}

static unsigned long __meminit vmem_altmap_next_pfn(struct vmem_altmap *altmap)
{
	return altmap->base_pfn + altmap->reserve + altmap->alloc
		+ altmap->align;
}

static unsigned long __meminit vmem_altmap_nr_free(struct vmem_altmap *altmap)
{
	unsigned long allocated = altmap->alloc + altmap->align;

	if (altmap->free > allocated)
		return altmap->free - allocated;
	return 0;
}

static void * __meminit altmap_alloc_block_buf(unsigned long size,
					       struct vmem_altmap *altmap)
{
	unsigned long pfn, nr_pfns, nr_align;

	if (size & ~PAGE_MASK) {
		pr_warn_once("%s: allocations must be multiple of PAGE_SIZE (%ld)\n",
			     __func__, size);
		return NULL;
	}

	pfn = vmem_altmap_next_pfn(altmap);
	nr_pfns = size >> PAGE_SHIFT;
	nr_align = 1UL << find_first_bit(&nr_pfns, BITS_PER_LONG);
	nr_align = ALIGN(pfn, nr_align) - pfn;
	if (nr_pfns + nr_align > vmem_altmap_nr_free(altmap))
		return NULL;

	altmap->alloc += nr_pfns;
	altmap->align += nr_align;
	pfn += nr_align;

	pr_debug("%s: pfn: %#lx alloc: %ld align: %ld nr: %#lx\n",
		 __func__, pfn, altmap->alloc, altmap->align, nr_pfns);
	return __va(__pfn_to_phys(pfn));
}

void __meminit vmemmap_verify(pte_t *pte, int node,
				unsigned long start, unsigned long end)
{
	unsigned long pfn = pte_pfn(*pte);
	int actual_node = early_pfn_to_nid(pfn);

	if (node_distance(actual_node, node) > LOCAL_DISTANCE)
		pr_warn("[%lx-%lx] potential offnode page_structs\n",
			start, end - 1);
}

pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
				       struct vmem_altmap *altmap)
{
	pte_t *pte = pte_offset_kernel(pmd, addr);
	if (pte_none(*pte)) {
		pte_t entry;
		void *p;

		p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap);
		if (!p)
			return NULL;
		entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL);
		set_pte_at(&init_mm, addr, pte, entry);
	}
	return pte;
}

static void * __meminit vmemmap_alloc_block_zero(unsigned long size, int node)
{
	void *p = vmemmap_alloc_block(size, node);

	if (!p)
		return NULL;
	memset(p, 0, size);

	return p;
}

pmd_t * __meminit vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node)
{
	pmd_t *pmd = pmd_offset(pud, addr);
	if (pmd_none(*pmd)) {
		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
		if (!p)
			return NULL;
		pmd_populate_kernel(&init_mm, pmd, p);
	}
	return pmd;
}

pud_t * __meminit vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node)
{
	pud_t *pud = pud_offset(p4d, addr);
	if (pud_none(*pud)) {
		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
		if (!p)
			return NULL;
		pud_populate(&init_mm, pud, p);
	}
	return pud;
}

p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node)
{
	p4d_t *p4d = p4d_offset(pgd, addr);
	if (p4d_none(*p4d)) {
		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
		if (!p)
			return NULL;
		p4d_populate(&init_mm, p4d, p);
	}
	return p4d;
}

pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
{
	pgd_t *pgd = pgd_offset_k(addr);
	if (pgd_none(*pgd)) {
		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
		if (!p)
			return NULL;
		pgd_populate(&init_mm, pgd, p);
	}
	return pgd;
}

int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
					 int node, struct vmem_altmap *altmap)
{
	unsigned long addr = start;
	pgd_t *pgd;
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;
	pte_t *pte;

	for (; addr < end; addr += PAGE_SIZE) {
		pgd = vmemmap_pgd_populate(addr, node);
		if (!pgd)
			return -ENOMEM;
		p4d = vmemmap_p4d_populate(pgd, addr, node);
		if (!p4d)
			return -ENOMEM;
		pud = vmemmap_pud_populate(p4d, addr, node);
		if (!pud)
			return -ENOMEM;
		pmd = vmemmap_pmd_populate(pud, addr, node);
		if (!pmd)
			return -ENOMEM;
		pte = vmemmap_pte_populate(pmd, addr, node, altmap);
		if (!pte)
			return -ENOMEM;
		vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);
	}

	return 0;
}

struct page * __meminit __populate_section_memmap(unsigned long pfn,
		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
{
	unsigned long start = (unsigned long) pfn_to_page(pfn);
	unsigned long end = start + nr_pages * sizeof(struct page);

	if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
		!IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
		return NULL;

	if (vmemmap_populate(start, end, nid, altmap))
		return NULL;

	return pfn_to_page(pfn);
}