/*
 *
 *  Copyright (C) 1995  Linus Torvalds
 *
 *  Support of BIGMEM added by Gerhard Wichert, Siemens AG, July 1999
 */

#include <linux/module.h>
#include <linux/signal.h>
#include <linux/sched.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/ptrace.h>
#include <linux/mman.h>
#include <linux/mm.h>
#include <linux/hugetlb.h>
#include <linux/swap.h>
#include <linux/smp.h>
#include <linux/init.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>
#include <linux/pci.h>
#include <linux/pfn.h>
#include <linux/poison.h>
#include <linux/bootmem.h>
#include <linux/slab.h>
#include <linux/proc_fs.h>
#include <linux/memory_hotplug.h>
#include <linux/initrd.h>
#include <linux/cpumask.h>

#include <asm/asm.h>
#include <asm/bios_ebda.h>
#include <asm/processor.h>
#include <asm/system.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
#include <asm/dma.h>
#include <asm/fixmap.h>
#include <asm/e820.h>
#include <asm/apic.h>
#include <asm/bugs.h>
#include <asm/tlb.h>
#include <asm/tlbflush.h>
#include <asm/pgalloc.h>
#include <asm/sections.h>
#include <asm/paravirt.h>
#include <asm/setup.h>
#include <asm/cacheflush.h>

unsigned long max_low_pfn_mapped;
unsigned long max_pfn_mapped;

DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);
unsigned long highstart_pfn, highend_pfn;

static noinline int do_test_wp_bit(void);

static unsigned long __initdata table_start;
static unsigned long __meminitdata table_end;
static unsigned long __meminitdata table_top;

static int __initdata after_init_bootmem;

static __init void *alloc_low_page(void)
{
	unsigned long pfn = table_end++;
	void *adr;

	if (pfn >= table_top)
		panic("alloc_low_page: ran out of memory");

	adr = __va(pfn * PAGE_SIZE);
	memset(adr, 0, PAGE_SIZE);
	return adr;
}

/*
 * Creates a middle page table and puts a pointer to it in the
 * given global directory entry. This only returns the gd entry
 * in non-PAE compilation mode, since the middle layer is folded.
 */
static pmd_t * __init one_md_table_init(pgd_t *pgd)
{
	pud_t *pud;
	pmd_t *pmd_table;

#ifdef CONFIG_X86_PAE
	if (!(pgd_val(*pgd) & _PAGE_PRESENT)) {
		if (after_init_bootmem)
			pmd_table = (pmd_t *)alloc_bootmem_low_pages(PAGE_SIZE);
		else
			pmd_table = (pmd_t *)alloc_low_page();
		paravirt_alloc_pmd(&init_mm, __pa(pmd_table) >> PAGE_SHIFT);
		set_pgd(pgd, __pgd(__pa(pmd_table) | _PAGE_PRESENT));
		pud = pud_offset(pgd, 0);
		BUG_ON(pmd_table != pmd_offset(pud, 0));

		return pmd_table;
	}
#endif
	pud = pud_offset(pgd, 0);
	pmd_table = pmd_offset(pud, 0);

	return pmd_table;
}

/*
 * Create a page table and place a pointer to it in a middle page
 * directory entry:
 */
static pte_t * __init one_page_table_init(pmd_t *pmd)
{
	if (!(pmd_val(*pmd) & _PAGE_PRESENT)) {
		pte_t *page_table = NULL;

		if (after_init_bootmem) {
#ifdef CONFIG_DEBUG_PAGEALLOC
			page_table = (pte_t *) alloc_bootmem_pages(PAGE_SIZE);
#endif
			if (!page_table)
				page_table =
				(pte_t *)alloc_bootmem_low_pages(PAGE_SIZE);
		} else
			page_table = (pte_t *)alloc_low_page();

		paravirt_alloc_pte(&init_mm, __pa(page_table) >> PAGE_SHIFT);
		set_pmd(pmd, __pmd(__pa(page_table) | _PAGE_TABLE));
		BUG_ON(page_table != pte_offset_kernel(pmd, 0));
	}

	return pte_offset_kernel(pmd, 0);
}
static pte_t *__init page_table_kmap_check(pte_t *pte, pmd_t *pmd,
					   unsigned long vaddr, pte_t *lastpte)
{
#ifdef CONFIG_HIGHMEM
	/*
	 * Something (early fixmap) may already have put a pte
	 * page here, which causes the page table allocation
	 * to become nonlinear. Attempt to fix it, and if it
	 * is still nonlinear then we have to bug.
	 */
	int pmd_idx_kmap_begin = fix_to_virt(FIX_KMAP_END) >> PMD_SHIFT;
	int pmd_idx_kmap_end = fix_to_virt(FIX_KMAP_BEGIN) >> PMD_SHIFT;

	if (pmd_idx_kmap_begin != pmd_idx_kmap_end
	    && (vaddr >> PMD_SHIFT) >= pmd_idx_kmap_begin
	    && (vaddr >> PMD_SHIFT) <= pmd_idx_kmap_end
	    && ((__pa(pte) >> PAGE_SHIFT) < table_start
		|| (__pa(pte) >> PAGE_SHIFT) >= table_end)) {
		pte_t *newpte;
		int i;

		BUG_ON(after_init_bootmem);
		newpte = alloc_low_page();
		for (i = 0; i < PTRS_PER_PTE; i++)
			set_pte(newpte + i, pte[i]);

		paravirt_alloc_pte(&init_mm, __pa(newpte) >> PAGE_SHIFT);
		set_pmd(pmd, __pmd(__pa(newpte)|_PAGE_TABLE));
		BUG_ON(newpte != pte_offset_kernel(pmd, 0));
		__flush_tlb_all();

		paravirt_release_pte(__pa(pte) >> PAGE_SHIFT);
		pte = newpte;
	}
	BUG_ON(vaddr < fix_to_virt(FIX_KMAP_BEGIN - 1)
	       && vaddr > fix_to_virt(FIX_KMAP_END)
	       && lastpte && lastpte + PTRS_PER_PTE != pte);
#endif
	return pte;
}

/*
 * This function initializes a certain range of kernel virtual memory
 * with new bootmem page tables, everywhere page tables are missing in
 * the given range.
 *
 * NOTE: The pagetables are allocated contiguous on the physical space
 * so we can cache the place of the first one and move around without
 * checking the pgd every time.
 */
static void __init
page_table_range_init(unsigned long start, unsigned long end, pgd_t *pgd_base)
{
	int pgd_idx, pmd_idx;
	unsigned long vaddr;
	pgd_t *pgd;
	pmd_t *pmd;
	pte_t *pte = NULL;

	vaddr = start;
	pgd_idx = pgd_index(vaddr);
	pmd_idx = pmd_index(vaddr);
	pgd = pgd_base + pgd_idx;

	for ( ; (pgd_idx < PTRS_PER_PGD) && (vaddr != end); pgd++, pgd_idx++) {
		pmd = one_md_table_init(pgd);
		pmd = pmd + pmd_index(vaddr);
		for (; (pmd_idx < PTRS_PER_PMD) && (vaddr != end);
		     pmd++, pmd_idx++) {
			pte = page_table_kmap_check(one_page_table_init(pmd),
						    pmd, vaddr, pte);

			vaddr += PMD_SIZE;
		}
		pmd_idx = 0;
	}
}

static inline int is_kernel_text(unsigned long addr)
{
	if (addr >= PAGE_OFFSET && addr <= (unsigned long)__init_end)
		return 1;
	return 0;
}

/*
 * This maps the physical memory to kernel virtual address space, a total
 * of max_low_pfn pages, by creating page tables starting from address
 * PAGE_OFFSET:
 */
static void __init kernel_physical_mapping_init(pgd_t *pgd_base,
						unsigned long start_pfn,
						unsigned long end_pfn,
						int use_pse)
{
	int pgd_idx, pmd_idx, pte_ofs;
	unsigned long pfn;
	pgd_t *pgd;
	pmd_t *pmd;
	pte_t *pte;
	unsigned pages_2m, pages_4k;
	int mapping_iter;

	/*
	 * First iteration will setup identity mapping using large/small pages
	 * based on use_pse, with other attributes same as set by
	 * the early code in head_32.S
	 *
	 * Second iteration will setup the appropriate attributes (NX, GLOBAL..)
	 * as desired for the kernel identity mapping.
	 *
	 * This two pass mechanism conforms to the TLB app note which says:
	 *
	 *     "Software should not write to a paging-structure entry in a way
	 *      that would change, for any linear address, both the page size
	 *      and either the page frame or attributes."
	 */
	mapping_iter = 1;

	if (!cpu_has_pse)
		use_pse = 0;

repeat:
	pages_2m = pages_4k = 0;
	pfn = start_pfn;
	pgd_idx = pgd_index((pfn<<PAGE_SHIFT) + PAGE_OFFSET);
	pgd = pgd_base + pgd_idx;
	for (; pgd_idx < PTRS_PER_PGD; pgd++, pgd_idx++) {
		pmd = one_md_table_init(pgd);

		if (pfn >= end_pfn)
			continue;
#ifdef CONFIG_X86_PAE
		pmd_idx = pmd_index((pfn<<PAGE_SHIFT) + PAGE_OFFSET);
		pmd += pmd_idx;
#else
		pmd_idx = 0;
#endif
		for (; pmd_idx < PTRS_PER_PMD && pfn < end_pfn;
		     pmd++, pmd_idx++) {
			unsigned int addr = pfn * PAGE_SIZE + PAGE_OFFSET;

			/*
			 * Map with big pages if possible, otherwise
			 * create normal page tables:
			 */
			if (use_pse) {
				unsigned int addr2;
				pgprot_t prot = PAGE_KERNEL_LARGE;
				/*
				 * first pass will use the same initial
				 * identity mapping attribute + _PAGE_PSE.
				 */
				pgprot_t init_prot =
					__pgprot(PTE_IDENT_ATTR |
						 _PAGE_PSE);

				addr2 = (pfn + PTRS_PER_PTE-1) * PAGE_SIZE +
					PAGE_OFFSET + PAGE_SIZE-1;

				if (is_kernel_text(addr) ||
				    is_kernel_text(addr2))
					prot = PAGE_KERNEL_LARGE_EXEC;

				pages_2m++;
				if (mapping_iter == 1)
					set_pmd(pmd, pfn_pmd(pfn, init_prot));
				else
					set_pmd(pmd, pfn_pmd(pfn, prot));
In order to make this easier, kernel's initial pagetable construction
has been changed to only allocate and initialize a pagetable page if
there's no page already present in the pagetable. This allows the Xen
paravirt backend to make a copy of the hypervisor-provided pagetable,
allowing the kernel to establish any more mappings it needs while
keeping the existing ones.
A slightly subtle point which is worth highlighting here is that Xen
requires all kernel mappings to share the same pte_t pages between all
pagetables, so that updating a kernel page's mapping in one pagetable
is reflected in all other pagetables. This makes it possible to
allocate a page and attach it to a pagetable without having to
explicitly enumerate that page's mapping in all pagetables.
And:
+From: "Eric W. Biederman" <ebiederm@xmission.com>
If we don't set the leaf page table entries it is quite possible that
will inherit and incorrect page table entry from the initial boot
page table setup in head.S. So we need to redo the effort here,
so we pick up PSE, PGE and the like.
Hypervisors like Xen require that their page tables be read-only,
which is slightly incompatible with our low identity mappings, however
I discussed this with Jeremy he has modified the Xen early set_pte
function to avoid problems in this area.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: William Irwin <bill.irwin@oracle.com>
Cc: Ingo Molnar <mingo@elte.hu>
2007-05-02 17:27:13 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
pfn += PTRS_PER_PTE;
|
2008-01-30 12:34:10 +00:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
pte = one_page_table_init(pmd);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2008-06-29 07:39:06 +00:00
|
|
|
pte_ofs = pte_index((pfn<<PAGE_SHIFT) + PAGE_OFFSET);
|
|
|
|
pte += pte_ofs;
|
|
|
|
for (; pte_ofs < PTRS_PER_PTE && pfn < end_pfn;
|
2008-01-30 12:34:10 +00:00
|
|
|
pte++, pfn++, pte_ofs++, addr += PAGE_SIZE) {
|
|
|
|
pgprot_t prot = PAGE_KERNEL;
|
2008-09-23 21:00:38 +00:00
|
|
|
/*
|
|
|
|
* first pass will use the same initial
|
|
|
|
* identity mapping attribute.
|
|
|
|
*/
|
|
|
|
pgprot_t init_prot = __pgprot(PTE_IDENT_ATTR);
|
2008-01-30 12:31:09 +00:00
|
|
|
|
2008-01-30 12:34:10 +00:00
|
|
|
if (is_kernel_text(addr))
|
|
|
|
prot = PAGE_KERNEL_EXEC;
|
2008-01-30 12:31:09 +00:00
|
|
|
|
2008-05-02 09:46:49 +00:00
|
|
|
pages_4k++;
|
2008-09-23 21:00:38 +00:00
|
|
|
if (mapping_iter == 1)
|
|
|
|
set_pte(pte, pfn_pte(pfn, init_prot));
|
|
|
|
else
|
|
|
|
set_pte(pte, pfn_pte(pfn, prot));
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2008-09-23 21:00:38 +00:00
|
|
|
if (mapping_iter == 1) {
|
|
|
|
/*
|
|
|
|
* update direct mapping page count only in the first
|
|
|
|
* iteration.
|
|
|
|
*/
|
|
|
|
update_page_count(PG_LEVEL_2M, pages_2m);
|
|
|
|
update_page_count(PG_LEVEL_4K, pages_4k);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* local global flush tlb, which will flush the previous
|
|
|
|
* mappings present in both small and large page TLB's.
|
|
|
|
*/
|
|
|
|
__flush_tlb_all();
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Second iteration will set the actual desired PTE attributes.
|
|
|
|
*/
|
|
|
|
mapping_iter = 2;
|
|
|
|
goto repeat;
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
/*
 * devmem_is_allowed() checks to see if /dev/mem access to a certain address
 * is valid. The argument is a physical page number.
 *
 * On x86, access has to be given to the first megabyte of ram because that area
 * contains bios code and data regions used by X and dosemu and similar apps.
 * Access has to be given to non-kernel-ram areas as well, these contain the PCI
 * mmio resources as well as potential bios/acpi data regions.
 */
int devmem_is_allowed(unsigned long pagenr)
{
	if (pagenr <= 256)
		return 1;
	if (iomem_is_exclusive(pagenr << PAGE_SHIFT))
		return 0;
	if (!page_is_ram(pagenr))
		return 1;
	return 0;
}

pte_t *kmap_pte;
pgprot_t kmap_prot;

static inline pte_t *kmap_get_fixmap_pte(unsigned long vaddr)
{
	return pte_offset_kernel(pmd_offset(pud_offset(pgd_offset_k(vaddr),
			vaddr), vaddr), vaddr);
}

static void __init kmap_init(void)
{
	unsigned long kmap_vstart;

	/*
	 * Cache the first kmap pte:
	 */
	kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
	kmap_pte = kmap_get_fixmap_pte(kmap_vstart);

	kmap_prot = PAGE_KERNEL;
}

#ifdef CONFIG_HIGHMEM
static void __init permanent_kmaps_init(pgd_t *pgd_base)
{
	unsigned long vaddr;
	pgd_t *pgd;
	pud_t *pud;
	pmd_t *pmd;
	pte_t *pte;

	vaddr = PKMAP_BASE;
	page_table_range_init(vaddr, vaddr + PAGE_SIZE*LAST_PKMAP, pgd_base);

	pgd = swapper_pg_dir + pgd_index(vaddr);
	pud = pud_offset(pgd, vaddr);
	pmd = pmd_offset(pud, vaddr);
	pte = pte_offset_kernel(pmd, vaddr);
	pkmap_page_table = pte;
}

static void __init add_one_highpage_init(struct page *page, int pfn)
{
	ClearPageReserved(page);
	init_page_count(page);
	__free_page(page);
	totalhigh_pages++;
}

struct add_highpages_data {
	unsigned long start_pfn;
	unsigned long end_pfn;
};

static int __init add_highpages_work_fn(unsigned long start_pfn,
					unsigned long end_pfn, void *datax)
{
	int node_pfn;
	struct page *page;
	unsigned long final_start_pfn, final_end_pfn;
	struct add_highpages_data *data;

	data = (struct add_highpages_data *)datax;

	final_start_pfn = max(start_pfn, data->start_pfn);
	final_end_pfn = min(end_pfn, data->end_pfn);
	if (final_start_pfn >= final_end_pfn)
		return 0;

	for (node_pfn = final_start_pfn; node_pfn < final_end_pfn;
	     node_pfn++) {
		if (!pfn_valid(node_pfn))
			continue;
		page = pfn_to_page(node_pfn);
		add_one_highpage_init(page, node_pfn);
	}

	return 0;
}

void __init add_highpages_with_active_regions(int nid, unsigned long start_pfn,
					      unsigned long end_pfn)
{
	struct add_highpages_data data;

	data.start_pfn = start_pfn;
	data.end_pfn = end_pfn;

	work_with_active_regions(nid, add_highpages_work_fn, &data);
}

#ifndef CONFIG_NUMA
static void __init set_highmem_pages_init(void)
{
	add_highpages_with_active_regions(0, highstart_pfn, highend_pfn);

	totalram_pages += totalhigh_pages;
}
#endif /* !CONFIG_NUMA */

#else
static inline void permanent_kmaps_init(pgd_t *pgd_base)
{
}
static inline void set_highmem_pages_init(void)
{
}
#endif /* CONFIG_HIGHMEM */

void __init native_pagetable_setup_start(pgd_t *base)
{
	unsigned long pfn, va;
	pgd_t *pgd;
	pud_t *pud;
	pmd_t *pmd;
	pte_t *pte;

	/*
	 * Remove any mappings which extend past the end of physical
	 * memory from the boot time page table:
	 */
	for (pfn = max_low_pfn + 1; pfn < 1<<(32-PAGE_SHIFT); pfn++) {
		va = PAGE_OFFSET + (pfn<<PAGE_SHIFT);
		pgd = base + pgd_index(va);
		if (!pgd_present(*pgd))
			break;

		pud = pud_offset(pgd, va);
		pmd = pmd_offset(pud, va);
		if (!pmd_present(*pmd))
			break;

		pte = pte_offset_kernel(pmd, va);
		if (!pte_present(*pte))
			break;

		pte_clear(NULL, va, pte);
	}
	paravirt_alloc_pmd(&init_mm, __pa(base) >> PAGE_SHIFT);
}

void __init native_pagetable_setup_done(pgd_t *base)
{
}

/*
 * Build a proper pagetable for the kernel mappings. Up until this
 * point, we've been running on some set of pagetables constructed by
 * the boot process.
 *
 * If we're booting on native hardware, this will be a pagetable
 * constructed in arch/x86/kernel/head_32.S. The root of the
 * pagetable will be swapper_pg_dir.
 *
 * If we're booting paravirtualized under a hypervisor, then there are
 * more options: we may already be running PAE, and the pagetable may
 * or may not be based in swapper_pg_dir. In any case,
 * paravirt_pagetable_setup_start() will set up swapper_pg_dir
 * appropriately for the rest of the initialization to work.
 *
 * In general, pagetable_init() assumes that the pagetable may already
 * be partially populated, and so it avoids stomping on any existing
 * mappings.
 */
static void __init early_ioremap_page_table_range_init(pgd_t *pgd_base)
{
	unsigned long vaddr, end;

	/*
	 * Fixed mappings, only the page table structure has to be
	 * created - mappings will be set by set_fixmap():
	 */
	vaddr = __fix_to_virt(__end_of_fixed_addresses - 1) & PMD_MASK;
	end = (FIXADDR_TOP + PMD_SIZE - 1) & PMD_MASK;
	page_table_range_init(vaddr, end, pgd_base);
	early_ioremap_reset();
}

static void __init pagetable_init(void)
{
	pgd_t *pgd_base = swapper_pg_dir;

	permanent_kmaps_init(pgd_base);
}

#ifdef CONFIG_ACPI_SLEEP
|
2005-04-16 22:20:36 +00:00
|
|
|
/*
|
2008-02-01 14:28:16 +00:00
|
|
|
* ACPI suspend needs this for resume, because things like the intel-agp
|
2005-04-16 22:20:36 +00:00
|
|
|
* driver might have split up a kernel 4MB mapping.
|
|
|
|
*/
|
2008-02-01 14:28:16 +00:00
|
|
|
char swsusp_pg_dir[PAGE_SIZE]
|
2008-01-30 12:34:10 +00:00
|
|
|
__attribute__ ((aligned(PAGE_SIZE)));
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
static inline void save_pg_dir(void)
|
|
|
|
{
|
|
|
|
memcpy(swsusp_pg_dir, swapper_pg_dir, PAGE_SIZE);
|
|
|
|
}
|
2008-02-01 14:28:16 +00:00
|
|
|
#else /* !CONFIG_ACPI_SLEEP */
|
2005-04-16 22:20:36 +00:00
|
|
|
static inline void save_pg_dir(void)
|
|
|
|
{
|
|
|
|
}
|
2008-02-01 14:28:16 +00:00
|
|
|
#endif /* !CONFIG_ACPI_SLEEP */
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2008-01-30 12:34:10 +00:00
|
|
|
void zap_low_mappings(void)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Zap initial low-memory mappings.
|
|
|
|
*
|
|
|
|
* Note that "pgd_clear()" doesn't do it for
|
|
|
|
* us, because pgd_clear() is a no-op on i386.
|
|
|
|
*/
|
2008-03-17 23:37:13 +00:00
|
|
|
for (i = 0; i < KERNEL_PGD_BOUNDARY; i++) {
|
2005-04-16 22:20:36 +00:00
|
|
|
#ifdef CONFIG_X86_PAE
|
|
|
|
set_pgd(swapper_pg_dir+i, __pgd(1 + __pa(empty_zero_page)));
|
|
|
|
#else
|
|
|
|
set_pgd(swapper_pg_dir+i, __pgd(0));
|
|
|
|
#endif
|
2008-01-30 12:34:10 +00:00
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
flush_tlb_all();
|
|
|
|
}
|
|
|
|
|
2008-01-30 12:34:10 +00:00
|
|
|
int nx_enabled;
|
2007-07-21 15:10:26 +00:00
|
|
|
|
2008-09-07 22:21:13 +00:00
|
|
|
pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL | _PAGE_IOMAP);
|
2008-01-30 12:32:57 +00:00
|
|
|
EXPORT_SYMBOL_GPL(__supported_pte_mask);
|
|
|
|
|
2007-07-21 15:10:26 +00:00
|
|
|
#ifdef CONFIG_X86_PAE
|
|
|
|
|
2008-01-30 12:34:10 +00:00
|
|
|
static int disable_nx __initdata;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* noexec = on|off
|
|
|
|
*
|
|
|
|
* Control non executable mappings.
|
|
|
|
*
|
|
|
|
* on Enable
|
|
|
|
* off Disable
|
|
|
|
*/
|
2006-09-26 08:52:32 +00:00
|
|
|
static int __init noexec_setup(char *str)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2006-09-26 08:52:32 +00:00
|
|
|
if (!str || !strcmp(str, "on")) {
|
|
|
|
if (cpu_has_nx) {
|
|
|
|
__supported_pte_mask |= _PAGE_NX;
|
|
|
|
disable_nx = 0;
|
|
|
|
}
|
2008-01-30 12:34:10 +00:00
|
|
|
} else {
|
|
|
|
if (!strcmp(str, "off")) {
|
|
|
|
disable_nx = 1;
|
|
|
|
__supported_pte_mask &= ~_PAGE_NX;
|
|
|
|
} else {
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
}
|
2006-09-26 08:52:32 +00:00
|
|
|
|
|
|
|
return 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
2006-09-26 08:52:32 +00:00
|
|
|
early_param("noexec", noexec_setup);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
static void __init set_nx(void)
|
|
|
|
{
|
|
|
|
unsigned int v[4], l, h;
|
|
|
|
|
|
|
|
if (cpu_has_pae && (cpuid_eax(0x80000000) > 0x80000001)) {
|
|
|
|
cpuid(0x80000001, &v[0], &v[1], &v[2], &v[3]);
|
2008-01-30 12:34:10 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
if ((v[3] & (1 << 20)) && !disable_nx) {
|
|
|
|
rdmsr(MSR_EFER, l, h);
|
|
|
|
l |= EFER_NX;
|
|
|
|
wrmsr(MSR_EFER, l, h);
|
|
|
|
nx_enabled = 1;
|
|
|
|
__supported_pte_mask |= _PAGE_NX;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2008-06-23 19:00:45 +00:00
|
|
|
/* user-defined highmem size */
|
|
|
|
static unsigned int highmem_pages = -1;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* highmem=size forces highmem to be exactly 'size' bytes.
|
|
|
|
* This works even on boxes that have no highmem otherwise.
|
|
|
|
* This also works to reduce highmem size on bigger boxes.
|
|
|
|
*/
|
|
|
|
static int __init parse_highmem(char *arg)
|
|
|
|
{
|
|
|
|
if (!arg)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
highmem_pages = memparse(arg, &arg) >> PAGE_SHIFT;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
early_param("highmem", parse_highmem);
|
|
|
|
|
2009-02-12 12:31:41 +00:00
|
|
|
#define MSG_HIGHMEM_TOO_BIG \
|
|
|
|
"highmem size (%luMB) is bigger than pages available (%luMB)!\n"
|
|
|
|
|
|
|
|
#define MSG_LOWMEM_TOO_SMALL \
|
|
|
|
"highmem size (%luMB) results in <64MB lowmem, ignoring it!\n"
|
2008-06-23 19:00:45 +00:00
|
|
|
/*
|
2009-02-12 12:31:41 +00:00
|
|
|
* All of RAM fits into lowmem - but if user wants highmem
|
|
|
|
* artificially via the highmem=x boot parameter then create
|
|
|
|
* it:
|
2008-06-23 19:00:45 +00:00
|
|
|
*/
|
2009-02-12 12:31:41 +00:00
|
|
|
void __init lowmem_pfn_init(void)
|
2008-06-23 19:00:45 +00:00
|
|
|
{
|
2008-06-23 10:06:14 +00:00
|
|
|
/* max_low_pfn is 0, we already have early_res support */
|
2008-06-23 19:00:45 +00:00
|
|
|
max_low_pfn = max_pfn;
|
2009-02-12 14:16:03 +00:00
|
|
|
|
2009-02-12 12:31:41 +00:00
|
|
|
if (highmem_pages == -1)
|
|
|
|
highmem_pages = 0;
|
|
|
|
#ifdef CONFIG_HIGHMEM
|
|
|
|
if (highmem_pages >= max_pfn) {
|
|
|
|
printk(KERN_ERR MSG_HIGHMEM_TOO_BIG,
|
|
|
|
pages_to_mb(highmem_pages), pages_to_mb(max_pfn));
|
|
|
|
highmem_pages = 0;
|
|
|
|
}
|
|
|
|
if (highmem_pages) {
|
|
|
|
if (max_low_pfn - highmem_pages < 64*1024*1024/PAGE_SIZE) {
|
|
|
|
printk(KERN_ERR MSG_LOWMEM_TOO_SMALL,
|
2008-06-23 19:00:45 +00:00
|
|
|
pages_to_mb(highmem_pages));
|
|
|
|
highmem_pages = 0;
|
|
|
|
}
|
2009-02-12 12:31:41 +00:00
|
|
|
max_low_pfn -= highmem_pages;
|
|
|
|
}
|
|
|
|
#else
|
|
|
|
if (highmem_pages)
|
|
|
|
printk(KERN_ERR "ignoring highmem size on non-highmem kernel!\n");
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
|
|
|
#define MSG_HIGHMEM_TOO_SMALL \
|
|
|
|
"only %luMB highmem pages available, ignoring highmem size of %luMB!\n"
|
|
|
|
|
|
|
|
#define MSG_HIGHMEM_TRIMMED \
|
|
|
|
"Warning: only 4GB will be used. Use a HIGHMEM64G enabled kernel!\n"
|
|
|
|
/*
|
|
|
|
* We have more RAM than fits into lowmem - we try to put it into
|
|
|
|
* highmem, also taking the highmem=x boot parameter into account:
|
|
|
|
*/
|
|
|
|
void __init highmem_pfn_init(void)
|
|
|
|
{
|
2009-02-12 14:16:03 +00:00
|
|
|
max_low_pfn = MAXMEM_PFN;
|
|
|
|
|
2009-02-12 12:31:41 +00:00
|
|
|
if (highmem_pages == -1)
|
|
|
|
highmem_pages = max_pfn - MAXMEM_PFN;
|
|
|
|
|
|
|
|
if (highmem_pages + MAXMEM_PFN < max_pfn)
|
|
|
|
max_pfn = MAXMEM_PFN + highmem_pages;
|
|
|
|
|
|
|
|
if (highmem_pages + MAXMEM_PFN > max_pfn) {
|
|
|
|
printk(KERN_WARNING MSG_HIGHMEM_TOO_SMALL,
|
|
|
|
pages_to_mb(max_pfn - MAXMEM_PFN),
|
|
|
|
pages_to_mb(highmem_pages));
|
|
|
|
highmem_pages = 0;
|
|
|
|
}
|
2008-06-23 19:00:45 +00:00
|
|
|
#ifndef CONFIG_HIGHMEM
|
2009-02-12 12:31:41 +00:00
|
|
|
/* Maximum memory usable is what is directly addressable */
|
|
|
|
printk(KERN_WARNING "Warning only %ldMB will be used.\n", MAXMEM>>20);
|
|
|
|
if (max_pfn > MAX_NONPAE_PFN)
|
|
|
|
printk(KERN_WARNING "Use a HIGHMEM64G enabled kernel.\n");
|
|
|
|
else
|
|
|
|
printk(KERN_WARNING "Use a HIGHMEM enabled kernel.\n");
|
|
|
|
max_pfn = MAXMEM_PFN;
|
2008-06-23 19:00:45 +00:00
|
|
|
#else /* !CONFIG_HIGHMEM */
|
|
|
|
#ifndef CONFIG_HIGHMEM64G
|
2009-02-12 12:31:41 +00:00
|
|
|
if (max_pfn > MAX_NONPAE_PFN) {
|
|
|
|
max_pfn = MAX_NONPAE_PFN;
|
|
|
|
printk(KERN_WARNING MSG_HIGHMEM_TRIMMED);
|
|
|
|
}
|
2008-06-23 19:00:45 +00:00
|
|
|
#endif /* !CONFIG_HIGHMEM64G */
|
|
|
|
#endif /* !CONFIG_HIGHMEM */
|
2009-02-12 12:31:41 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Determine low and high memory ranges:
|
|
|
|
*/
|
|
|
|
void __init find_low_pfn_range(void)
|
|
|
|
{
|
|
|
|
/* it could update max_pfn */
|
|
|
|
|
2009-02-12 14:16:03 +00:00
|
|
|
if (max_pfn <= MAXMEM_PFN)
|
2009-02-12 12:31:41 +00:00
|
|
|
lowmem_pfn_init();
|
2009-02-12 14:16:03 +00:00
|
|
|
else
|
|
|
|
highmem_pfn_init();
|
2008-06-23 19:00:45 +00:00
|
|
|
}
|
|
|
|
|
2008-06-22 09:45:39 +00:00
|
|
|
#ifndef CONFIG_NEED_MULTIPLE_NODES
|
2008-06-23 10:05:30 +00:00
|
|
|
void __init initmem_init(unsigned long start_pfn,
|
2008-06-22 09:45:39 +00:00
|
|
|
unsigned long end_pfn)
|
|
|
|
{
|
|
|
|
#ifdef CONFIG_HIGHMEM
|
|
|
|
highstart_pfn = highend_pfn = max_pfn;
|
|
|
|
if (max_pfn > max_low_pfn)
|
|
|
|
highstart_pfn = max_low_pfn;
|
|
|
|
memory_present(0, 0, highend_pfn);
|
2008-07-02 07:31:02 +00:00
|
|
|
e820_register_active_regions(0, 0, highend_pfn);
|
2008-06-22 09:45:39 +00:00
|
|
|
printk(KERN_NOTICE "%ldMB HIGHMEM available.\n",
|
|
|
|
pages_to_mb(highend_pfn - highstart_pfn));
|
|
|
|
num_physpages = highend_pfn;
|
|
|
|
high_memory = (void *) __va(highstart_pfn * PAGE_SIZE - 1) + 1;
|
|
|
|
#else
|
|
|
|
memory_present(0, 0, max_low_pfn);
|
2008-07-02 07:31:02 +00:00
|
|
|
e820_register_active_regions(0, 0, max_low_pfn);
|
2008-06-22 09:45:39 +00:00
|
|
|
num_physpages = max_low_pfn;
|
|
|
|
high_memory = (void *) __va(max_low_pfn * PAGE_SIZE - 1) + 1;
|
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_FLATMEM
|
|
|
|
max_mapnr = num_physpages;
|
|
|
|
#endif
|
|
|
|
printk(KERN_NOTICE "%ldMB LOWMEM available.\n",
|
|
|
|
pages_to_mb(max_low_pfn));
|
|
|
|
|
|
|
|
setup_bootmem_allocator();
|
|
|
|
}
|
2008-07-02 07:31:02 +00:00
|
|
|
#endif /* !CONFIG_NEED_MULTIPLE_NODES */
|
2008-06-22 09:45:39 +00:00
|
|
|
|
2008-07-02 07:31:02 +00:00
|
|
|
static void __init zone_sizes_init(void)
|
2008-06-22 09:45:39 +00:00
|
|
|
{
|
|
|
|
unsigned long max_zone_pfns[MAX_NR_ZONES];
|
|
|
|
memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
|
|
|
|
max_zone_pfns[ZONE_DMA] =
|
|
|
|
virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
|
|
|
|
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
|
|
|
|
#ifdef CONFIG_HIGHMEM
|
|
|
|
max_zone_pfns[ZONE_HIGHMEM] = highend_pfn;
|
|
|
|
#endif
|
|
|
|
|
|
|
|
free_area_init_nodes(max_zone_pfns);
|
|
|
|
}
|
|
|
|
|
|
|
|
void __init setup_bootmem_allocator(void)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
unsigned long bootmap_size, bootmap;
|
|
|
|
/*
|
|
|
|
* Initialize the boot-time allocator (with low memory only):
|
|
|
|
*/
|
|
|
|
bootmap_size = bootmem_bootmap_pages(max_low_pfn)<<PAGE_SHIFT;
|
|
|
|
bootmap = find_e820_area(min_low_pfn<<PAGE_SHIFT,
|
|
|
|
max_pfn_mapped<<PAGE_SHIFT, bootmap_size,
|
|
|
|
PAGE_SIZE);
|
|
|
|
if (bootmap == -1L)
|
|
|
|
panic("Cannot find bootmem map of size %ld\n", bootmap_size);
|
|
|
|
reserve_early(bootmap, bootmap + bootmap_size, "BOOTMAP");
|
2008-06-22 09:46:58 +00:00
|
|
|
|
2008-06-23 10:06:14 +00:00
|
|
|
/* don't touch min_low_pfn */
|
|
|
|
bootmap_size = init_bootmem_node(NODE_DATA(0), bootmap >> PAGE_SHIFT,
|
|
|
|
min_low_pfn, max_low_pfn);
|
2008-06-22 09:45:39 +00:00
|
|
|
printk(KERN_INFO " mapped low ram: 0 - %08lx\n",
|
|
|
|
max_pfn_mapped<<PAGE_SHIFT);
|
|
|
|
printk(KERN_INFO " low ram: %08lx - %08lx\n",
|
|
|
|
min_low_pfn<<PAGE_SHIFT, max_low_pfn<<PAGE_SHIFT);
|
|
|
|
printk(KERN_INFO " bootmap %08lx - %08lx\n",
|
|
|
|
bootmap, bootmap + bootmap_size);
|
|
|
|
for_each_online_node(i)
|
|
|
|
free_bootmem_with_active_regions(i, max_low_pfn);
|
|
|
|
early_res_to_bootmem(0, max_low_pfn<<PAGE_SHIFT);
|
|
|
|
|
2008-06-24 19:18:14 +00:00
|
|
|
after_init_bootmem = 1;
|
2008-06-22 09:45:39 +00:00
|
|
|
}
|
|
|
|
|
2008-09-23 21:00:39 +00:00
|
|
|
static void __init find_early_table_space(unsigned long end, int use_pse)
|
2008-06-24 19:18:14 +00:00
|
|
|
{
|
2008-06-28 10:30:39 +00:00
|
|
|
unsigned long puds, pmds, ptes, tables, start;
|
2008-06-24 19:18:14 +00:00
|
|
|
|
|
|
|
puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
|
2009-03-03 10:55:05 +00:00
|
|
|
tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
|
2008-06-24 19:18:14 +00:00
|
|
|
|
|
|
|
pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
|
2009-03-03 10:55:05 +00:00
|
|
|
tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
|
2008-06-24 19:18:14 +00:00
|
|
|
|
2008-09-23 21:00:39 +00:00
|
|
|
if (use_pse) {
|
2008-06-28 10:30:39 +00:00
|
|
|
unsigned long extra;
|
2008-06-29 07:39:06 +00:00
|
|
|
|
|
|
|
extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT);
|
|
|
|
extra += PMD_SIZE;
|
2008-06-28 10:30:39 +00:00
|
|
|
ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
|
|
|
|
} else
|
|
|
|
ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
|
|
|
|
|
2009-03-03 10:55:05 +00:00
|
|
|
tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE);
|
2008-06-24 21:32:48 +00:00
|
|
|
|
2008-06-29 07:39:06 +00:00
|
|
|
/* for fixmap */
|
2009-03-03 10:55:05 +00:00
|
|
|
tables += roundup(__end_of_fixed_addresses * sizeof(pte_t), PAGE_SIZE);
|
2008-06-29 07:39:06 +00:00
|
|
|
|
2008-06-24 19:18:14 +00:00
|
|
|
/*
|
|
|
|
* RED-PEN putting page tables only on node 0 could
|
|
|
|
* cause a hotspot and fill up ZONE_DMA. The page tables
|
|
|
|
* need roughly 0.5KB per GB.
|
|
|
|
*/
|
|
|
|
start = 0x7000;
|
|
|
|
table_start = find_e820_area(start, max_pfn_mapped<<PAGE_SHIFT,
|
|
|
|
tables, PAGE_SIZE);
|
|
|
|
if (table_start == -1UL)
|
|
|
|
panic("Cannot find space for the kernel page tables");
|
|
|
|
|
|
|
|
table_start >>= PAGE_SHIFT;
|
|
|
|
table_end = table_start;
|
|
|
|
table_top = table_start + (tables>>PAGE_SHIFT);
|
|
|
|
|
|
|
|
printk(KERN_DEBUG "kernel direct mapping tables up to %lx @ %lx-%lx\n",
|
|
|
|
end, table_start << PAGE_SHIFT,
|
|
|
|
(table_start << PAGE_SHIFT) + tables);
|
|
|
|
}
|
|
|
|
|
|
|
|
unsigned long __init_refok init_memory_mapping(unsigned long start,
|
|
|
|
unsigned long end)
|
|
|
|
{
|
|
|
|
pgd_t *pgd_base = swapper_pg_dir;
|
2008-06-29 07:39:06 +00:00
|
|
|
unsigned long start_pfn, end_pfn;
|
|
|
|
unsigned long big_page_start;
|
2008-09-23 21:00:39 +00:00
|
|
|
#ifdef CONFIG_DEBUG_PAGEALLOC
|
|
|
|
/*
|
|
|
|
* For CONFIG_DEBUG_PAGEALLOC, identity mapping will use small pages.
|
|
|
|
* This will simplify cpa(), which otherwise needs to support splitting
|
|
|
|
* large pages into small in interrupt context, etc.
|
|
|
|
*/
|
|
|
|
int use_pse = 0;
|
|
|
|
#else
|
|
|
|
int use_pse = cpu_has_pse;
|
|
|
|
#endif
|
2008-06-24 19:18:14 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Find space for the kernel direct mapping tables.
|
|
|
|
*/
|
|
|
|
if (!after_init_bootmem)
|
2008-09-23 21:00:39 +00:00
|
|
|
find_early_table_space(end, use_pse);
|
2008-06-24 19:18:14 +00:00
|
|
|
|
|
|
|
#ifdef CONFIG_X86_PAE
|
|
|
|
set_nx();
|
|
|
|
if (nx_enabled)
|
|
|
|
printk(KERN_INFO "NX (Execute Disable) protection: active\n");
|
|
|
|
#endif
|
|
|
|
|
|
|
|
/* Enable PSE if available */
|
|
|
|
if (cpu_has_pse)
|
|
|
|
set_in_cr4(X86_CR4_PSE);
|
|
|
|
|
|
|
|
/* Enable PGE if available */
|
|
|
|
if (cpu_has_pge) {
|
|
|
|
set_in_cr4(X86_CR4_PGE);
|
2008-07-01 23:46:36 +00:00
|
|
|
__supported_pte_mask |= _PAGE_GLOBAL;
|
2008-06-24 19:18:14 +00:00
|
|
|
}
|
|
|
|
|
2008-06-29 07:39:06 +00:00
|
|
|
/*
|
|
|
|
* Don't use a large page for the first 2/4MB of memory
|
|
|
|
* because there are often fixed size MTRRs in there
|
|
|
|
* and overlapping MTRRs into large pages can cause
|
|
|
|
* slowdowns.
|
|
|
|
*/
|
|
|
|
big_page_start = PMD_SIZE;
|
|
|
|
|
|
|
|
if (start < big_page_start) {
|
|
|
|
start_pfn = start >> PAGE_SHIFT;
|
|
|
|
end_pfn = min(big_page_start>>PAGE_SHIFT, end>>PAGE_SHIFT);
|
|
|
|
} else {
|
|
|
|
/* head is not big-page aligned? */
|
|
|
|
start_pfn = start >> PAGE_SHIFT;
|
|
|
|
end_pfn = ((start + (PMD_SIZE - 1))>>PMD_SHIFT)
|
|
|
|
<< (PMD_SHIFT - PAGE_SHIFT);
|
|
|
|
}
|
|
|
|
if (start_pfn < end_pfn)
|
|
|
|
kernel_physical_mapping_init(pgd_base, start_pfn, end_pfn, 0);
|
|
|
|
|
|
|
|
/* big page range */
|
|
|
|
start_pfn = ((start + (PMD_SIZE - 1))>>PMD_SHIFT)
|
|
|
|
<< (PMD_SHIFT - PAGE_SHIFT);
|
|
|
|
if (start_pfn < (big_page_start >> PAGE_SHIFT))
|
|
|
|
start_pfn = big_page_start >> PAGE_SHIFT;
|
|
|
|
end_pfn = (end>>PMD_SHIFT) << (PMD_SHIFT - PAGE_SHIFT);
|
|
|
|
if (start_pfn < end_pfn)
|
|
|
|
kernel_physical_mapping_init(pgd_base, start_pfn, end_pfn,
|
2008-09-23 21:00:39 +00:00
|
|
|
use_pse);
|
2008-06-29 07:39:06 +00:00
|
|
|
|
|
|
|
/* tail is not big-page aligned? */
|
|
|
|
start_pfn = end_pfn;
|
|
|
|
if (start_pfn > (big_page_start>>PAGE_SHIFT)) {
|
|
|
|
end_pfn = end >> PAGE_SHIFT;
|
|
|
|
if (start_pfn < end_pfn)
|
|
|
|
kernel_physical_mapping_init(pgd_base, start_pfn,
|
|
|
|
end_pfn, 0);
|
|
|
|
}
|
2008-06-24 19:18:14 +00:00
|
|
|
|
2008-06-26 04:51:28 +00:00
|
|
|
early_ioremap_page_table_range_init(pgd_base);
|
|
|
|
|
2008-06-24 19:18:14 +00:00
|
|
|
load_cr3(swapper_pg_dir);
|
|
|
|
|
|
|
|
__flush_tlb_all();
|
|
|
|
|
|
|
|
if (!after_init_bootmem)
|
|
|
|
reserve_early(table_start << PAGE_SHIFT,
|
|
|
|
table_end << PAGE_SHIFT, "PGTABLE");
|
|
|
|
|
2008-07-15 07:03:44 +00:00
|
|
|
if (!after_init_bootmem)
|
|
|
|
early_memtest(start, end);
|
|
|
|
|
2008-06-24 19:18:14 +00:00
|
|
|
return end >> PAGE_SHIFT;
|
|
|
|
}
|
|
|
|
|
2008-06-26 04:51:28 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
/*
|
|
|
|
* paging_init() sets up the page tables - note that the first 8MB are
|
|
|
|
* already mapped by head.S.
|
|
|
|
*
|
|
|
|
* This routines also unmaps the page at virtual kernel address 0, so
|
|
|
|
* that we can trap those pesky NULL-reference errors in the kernel.
|
|
|
|
*/
|
|
|
|
void __init paging_init(void)
|
|
|
|
{
|
|
|
|
pagetable_init();
|
|
|
|
|
|
|
|
__flush_tlb_all();
|
|
|
|
|
|
|
|
kmap_init();
|
2008-06-24 02:51:10 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* NOTE: at this point the bootmem allocator is fully available.
|
|
|
|
*/
|
|
|
|
sparse_init();
|
|
|
|
zone_sizes_init();
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Test if the WP bit works in supervisor mode. It isn't supported on 386's
|
2008-04-20 20:47:55 +00:00
|
|
|
* and also on some strange 486's. All 586+'s are OK. This used to involve
|
|
|
|
* black magic jumps to work around some nasty CPU bugs, but fortunately the
|
|
|
|
* switch to using exceptions got rid of all that.
|
2005-04-16 22:20:36 +00:00
|
|
|
*/
|
|
|
|
static void __init test_wp_bit(void)
|
|
|
|
{
|
2008-01-30 12:34:10 +00:00
|
|
|
printk(KERN_INFO
|
|
|
|
"Checking if this processor honours the WP bit even in supervisor mode...");
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
/* Any page-aligned address will do, the test is non-destructive */
|
|
|
|
__set_fixmap(FIX_WP_TEST, __pa(&swapper_pg_dir), PAGE_READONLY);
|
|
|
|
boot_cpu_data.wp_works_ok = do_test_wp_bit();
|
|
|
|
clear_fixmap(FIX_WP_TEST);
|
|
|
|
|
|
|
|
if (!boot_cpu_data.wp_works_ok) {
|
2008-01-30 12:34:10 +00:00
|
|
|
printk(KERN_CONT "No.\n");
|
2005-04-16 22:20:36 +00:00
|
|
|
#ifdef CONFIG_X86_WP_WORKS_OK
|
2008-01-30 12:34:10 +00:00
|
|
|
panic(
|
|
|
|
"This kernel doesn't support CPU's with broken WP. Recompile it for a 386!");
|
2005-04-16 22:20:36 +00:00
|
|
|
#endif
|
|
|
|
} else {
|
2008-01-30 12:34:10 +00:00
|
|
|
printk(KERN_CONT "Ok.\n");
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2008-01-30 12:34:10 +00:00
|
|
|
static struct kcore_list kcore_mem, kcore_vmalloc;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
void __init mem_init(void)
|
|
|
|
{
|
|
|
|
int codesize, reservedpages, datasize, initsize;
|
2008-06-16 23:11:08 +00:00
|
|
|
int tmp;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2008-12-16 20:17:36 +00:00
|
|
|
pci_iommu_alloc();
|
|
|
|
|
2005-06-23 07:07:57 +00:00
|
|
|
#ifdef CONFIG_FLATMEM
|
2006-10-03 21:34:58 +00:00
|
|
|
BUG_ON(!mem_map);
|
2005-04-16 22:20:36 +00:00
|
|
|
#endif
|
|
|
|
/* this will put all low memory onto the freelists */
|
|
|
|
totalram_pages += free_all_bootmem();
|
|
|
|
|
|
|
|
reservedpages = 0;
|
|
|
|
for (tmp = 0; tmp < max_low_pfn; tmp++)
|
|
|
|
/*
|
2008-01-30 12:34:10 +00:00
|
|
|
* Only count reserved RAM pages:
|
2005-04-16 22:20:36 +00:00
|
|
|
*/
|
|
|
|
if (page_is_ram(tmp) && PageReserved(pfn_to_page(tmp)))
|
|
|
|
reservedpages++;
|
|
|
|
|
2008-06-16 23:11:08 +00:00
|
|
|
set_highmem_pages_init();
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
codesize = (unsigned long) &_etext - (unsigned long) &_text;
|
|
|
|
datasize = (unsigned long) &_edata - (unsigned long) &_etext;
|
|
|
|
initsize = (unsigned long) &__init_end - (unsigned long) &__init_begin;
|
|
|
|
|
2008-01-30 12:34:10 +00:00
|
|
|
kclist_add(&kcore_mem, __va(0), max_low_pfn << PAGE_SHIFT);
|
|
|
|
kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
|
2005-04-16 22:20:36 +00:00
|
|
|
VMALLOC_END-VMALLOC_START);
|
|
|
|
|
2008-01-30 12:34:10 +00:00
|
|
|
printk(KERN_INFO "Memory: %luk/%luk available (%dk kernel code, "
|
|
|
|
"%dk reserved, %dk data, %dk init, %ldk highmem)\n",
|
2005-04-16 22:20:36 +00:00
|
|
|
(unsigned long) nr_free_pages() << (PAGE_SHIFT-10),
|
|
|
|
num_physpages << (PAGE_SHIFT-10),
|
|
|
|
codesize >> 10,
|
|
|
|
reservedpages << (PAGE_SHIFT-10),
|
|
|
|
datasize >> 10,
|
|
|
|
initsize >> 10,
|
|
|
|
(unsigned long) (totalhigh_pages << (PAGE_SHIFT-10))
|
|
|
|
);
|
|
|
|
|
2008-01-30 12:34:10 +00:00
|
|
|
printk(KERN_INFO "virtual kernel memory layout:\n"
|
2008-01-30 12:34:10 +00:00
|
|
|
" fixmap : 0x%08lx - 0x%08lx (%4ld kB)\n"
|
2006-09-26 06:32:25 +00:00
|
|
|
#ifdef CONFIG_HIGHMEM
|
2008-01-30 12:34:10 +00:00
|
|
|
" pkmap : 0x%08lx - 0x%08lx (%4ld kB)\n"
|
2006-09-26 06:32:25 +00:00
|
|
|
#endif
|
2008-01-30 12:34:10 +00:00
|
|
|
" vmalloc : 0x%08lx - 0x%08lx (%4ld MB)\n"
|
|
|
|
" lowmem : 0x%08lx - 0x%08lx (%4ld MB)\n"
|
|
|
|
" .init : 0x%08lx - 0x%08lx (%4ld kB)\n"
|
|
|
|
" .data : 0x%08lx - 0x%08lx (%4ld kB)\n"
|
|
|
|
" .text : 0x%08lx - 0x%08lx (%4ld kB)\n",
|
|
|
|
FIXADDR_START, FIXADDR_TOP,
|
|
|
|
(FIXADDR_TOP - FIXADDR_START) >> 10,
|
2006-09-26 06:32:25 +00:00
|
|
|
|
|
|
|
#ifdef CONFIG_HIGHMEM
|
2008-01-30 12:34:10 +00:00
|
|
|
PKMAP_BASE, PKMAP_BASE+LAST_PKMAP*PAGE_SIZE,
|
|
|
|
(LAST_PKMAP*PAGE_SIZE) >> 10,
|
2006-09-26 06:32:25 +00:00
|
|
|
#endif
|
|
|
|
|
2008-01-30 12:34:10 +00:00
|
|
|
VMALLOC_START, VMALLOC_END,
|
|
|
|
(VMALLOC_END - VMALLOC_START) >> 20,
|
2006-09-26 06:32:25 +00:00
|
|
|
|
2008-01-30 12:34:10 +00:00
|
|
|
(unsigned long)__va(0), (unsigned long)high_memory,
|
|
|
|
((unsigned long)high_memory - (unsigned long)__va(0)) >> 20,
|
2006-09-26 06:32:25 +00:00
|
|
|
|
2008-01-30 12:34:10 +00:00
|
|
|
(unsigned long)&__init_begin, (unsigned long)&__init_end,
|
|
|
|
((unsigned long)&__init_end -
|
|
|
|
(unsigned long)&__init_begin) >> 10,
|
2006-09-26 06:32:25 +00:00
|
|
|
|
2008-01-30 12:34:10 +00:00
|
|
|
(unsigned long)&_etext, (unsigned long)&_edata,
|
|
|
|
((unsigned long)&_edata - (unsigned long)&_etext) >> 10,
|
2006-09-26 06:32:25 +00:00
|
|
|
|
2008-01-30 12:34:10 +00:00
|
|
|
(unsigned long)&_text, (unsigned long)&_etext,
|
|
|
|
((unsigned long)&_etext - (unsigned long)&_text) >> 10);
|
2006-09-26 06:32:25 +00:00
|
|
|
|
2008-12-16 11:45:56 +00:00
|
|
|
/*
|
|
|
|
* Check boundaries twice: Some fundamental inconsistencies can
|
|
|
|
* be detected at build time already.
|
|
|
|
*/
|
|
|
|
#define __FIXADDR_TOP (-PAGE_SIZE)
|
|
|
|
#ifdef CONFIG_HIGHMEM
|
|
|
|
BUILD_BUG_ON(PKMAP_BASE + LAST_PKMAP*PAGE_SIZE > FIXADDR_START);
|
|
|
|
BUILD_BUG_ON(VMALLOC_END > PKMAP_BASE);
|
|
|
|
#endif
|
|
|
|
#define high_memory (-128UL << 20)
|
|
|
|
BUILD_BUG_ON(VMALLOC_START >= VMALLOC_END);
|
|
|
|
#undef high_memory
|
|
|
|
#undef __FIXADDR_TOP
|
|
|
|
|
2006-09-26 06:32:25 +00:00
|
|
|
#ifdef CONFIG_HIGHMEM
|
2008-01-30 12:34:10 +00:00
|
|
|
BUG_ON(PKMAP_BASE + LAST_PKMAP*PAGE_SIZE > FIXADDR_START);
|
|
|
|
BUG_ON(VMALLOC_END > PKMAP_BASE);
|
2006-09-26 06:32:25 +00:00
|
|
|
#endif
|
2008-12-16 11:45:56 +00:00
|
|
|
BUG_ON(VMALLOC_START >= VMALLOC_END);
|
2008-01-30 12:34:10 +00:00
|
|
|
BUG_ON((unsigned long)high_memory > VMALLOC_START);
|
2006-09-26 06:32:25 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
if (boot_cpu_data.wp_works_ok < 0)
|
|
|
|
test_wp_bit();
|
|
|
|
|
x86: fix app crashes after SMP resume
After resume on a 2cpu laptop, kernel builds collapse with a sed hang,
sh or make segfault (often on 20295564), real-time signal to cc1 etc.
Several hurdles to jump, but a manually-assisted bisect led to -rc1's
d2bcbad5f3ad38a1c09861bca7e252dde7bb8259 x86: do not zap_low_mappings
in __smp_prepare_cpus. Though the low mappings were removed at bootup,
they were left behind (with Global flags helping to keep them in TLB)
after resume or cpu online, causing the crashes seen.
Reinstate zap_low_mappings (with local __flush_tlb_all) for each cpu_up
on x86_32. This used to be serialized by smp_commenced_mask: that's now
gone, but a low_mappings flag will do. No need for native_smp_cpus_done
to repeat the zap: let mem_init zap BSP's low mappings just like on UP.
(In passing, fix error code from native_cpu_up: do_boot_cpu returns a
variety of diagnostic values, Dprintk what it says but convert to -EIO.
And save_pg_dir separately before zap_low_mappings: doesn't matter now,
but zapping twice in succession wiped out resume's swsusp_pg_dir.)
That worked well on the duo and one quad, but wouldn't boot 3rd or 4th
cpu on P4 Xeon, oopsing just after unlock_ipi_call_lock. The TLB flush
IPI now being sent reveals a long-standing bug: the booting cpu has its
APIC readied in smp_callin at the top of start_secondary, but isn't put
into the cpu_online_map until just before that unlock_ipi_call_lock.
So native_smp_call_function_mask to online cpus would send_IPI_allbutself,
including the cpu just coming up, though it has been excluded from the
count to wait for: by the time it handles the IPI, the call data on
native_smp_call_function_mask's stack may well have been overwritten.
So fall back to send_IPI_mask while cpu_online_map does not match
cpu_callout_map: perhaps there's a better APICological fix to be
made at the start_secondary end, but I wouldn't know that.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-05-13 13:26:57 +00:00
|
|
|
save_pg_dir();
|
2005-04-16 22:20:36 +00:00
|
|
|
zap_low_mappings();
|
|
|
|
}
|
|
|
|
|
2006-05-20 22:00:03 +00:00
|
|
|
#ifdef CONFIG_MEMORY_HOTPLUG
|
2006-06-27 09:53:30 +00:00
|
|
|
int arch_add_memory(int nid, u64 start, u64 size)
|
2005-10-30 01:16:57 +00:00
|
|
|
{
|
2006-12-22 09:11:13 +00:00
|
|
|
struct pglist_data *pgdata = NODE_DATA(nid);
|
2006-09-26 06:31:09 +00:00
|
|
|
struct zone *zone = pgdata->node_zones + ZONE_HIGHMEM;
|
2005-10-30 01:16:57 +00:00
|
|
|
unsigned long start_pfn = start >> PAGE_SHIFT;
|
|
|
|
unsigned long nr_pages = size >> PAGE_SHIFT;
|
|
|
|
|
2009-01-06 22:39:14 +00:00
|
|
|
return __add_pages(nid, zone, start_pfn, nr_pages);
|
2005-10-30 01:16:57 +00:00
|
|
|
}
|
2006-04-07 17:49:15 +00:00
|
|
|
#endif
|
2005-10-30 01:16:57 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
/*
|
|
|
|
* This function cannot be __init, since exceptions don't work in that
|
|
|
|
* section. Put this after the callers, so that it cannot be inlined.
|
|
|
|
*/
|
2008-01-30 12:34:10 +00:00
|
|
|
static noinline int do_test_wp_bit(void)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
char tmp_reg;
|
|
|
|
int flag;
|
|
|
|
|
|
|
|
__asm__ __volatile__(
|
2008-01-30 12:34:10 +00:00
|
|
|
" movb %0, %1 \n"
|
|
|
|
"1: movb %1, %0 \n"
|
|
|
|
" xorl %2, %2 \n"
|
2005-04-16 22:20:36 +00:00
|
|
|
"2: \n"
|
2008-02-04 15:47:58 +00:00
|
|
|
_ASM_EXTABLE(1b,2b)
|
2005-04-16 22:20:36 +00:00
|
|
|
:"=m" (*(char *)fix_to_virt(FIX_WP_TEST)),
|
|
|
|
"=q" (tmp_reg),
|
|
|
|
"=r" (flag)
|
|
|
|
:"2" (1)
|
|
|
|
:"memory");
|
2008-01-30 12:34:10 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
return flag;
|
|
|
|
}
|
|
|
|
|
2006-01-06 08:12:02 +00:00
|
|
|
#ifdef CONFIG_DEBUG_RODATA
|
2008-01-30 12:34:08 +00:00
|
|
|
const int rodata_test_data = 0xC3;
|
|
|
|
EXPORT_SYMBOL_GPL(rodata_test_data);
|
2006-01-06 08:12:02 +00:00
|
|
|
|
|
|
|
void mark_rodata_ro(void)
|
|
|
|
{
|
2007-05-02 17:27:10 +00:00
|
|
|
unsigned long start = PFN_ALIGN(_text);
|
|
|
|
unsigned long size = PFN_ALIGN(_etext) - start;
|
2006-01-06 08:12:02 +00:00
|
|
|
|
2008-05-12 19:20:56 +00:00
|
|
|
#ifndef CONFIG_DYNAMIC_FTRACE
|
|
|
|
/* Dynamic tracing modifies the kernel text section */
|
2008-02-02 20:42:20 +00:00
|
|
|
set_pages_ro(virt_to_page(start), size >> PAGE_SHIFT);
|
|
|
|
printk(KERN_INFO "Write protecting the kernel text: %luk\n",
|
|
|
|
size >> 10);
|
2008-01-30 12:33:42 +00:00
|
|
|
|
|
|
|
#ifdef CONFIG_CPA_DEBUG
|
2008-02-02 20:42:20 +00:00
|
|
|
printk(KERN_INFO "Testing CPA: Reverting %lx-%lx\n",
|
|
|
|
start, start+size);
|
|
|
|
set_pages_rw(virt_to_page(start), size>>PAGE_SHIFT);
|
2008-01-30 12:33:42 +00:00
|
|
|
|
2008-02-02 20:42:20 +00:00
|
|
|
printk(KERN_INFO "Testing CPA: write protecting again\n");
|
|
|
|
set_pages_ro(virt_to_page(start), size>>PAGE_SHIFT);
|
2007-07-26 19:07:21 +00:00
|
|
|
#endif
|
2008-05-12 19:20:56 +00:00
|
|
|
#endif /* CONFIG_DYNAMIC_FTRACE */
|
|
|
|
|
2007-05-02 17:27:10 +00:00
|
|
|
start += size;
|
|
|
|
size = (unsigned long)__end_rodata - start;
|
2008-01-30 12:34:06 +00:00
|
|
|
set_pages_ro(virt_to_page(start), size >> PAGE_SHIFT);
|
2008-01-30 12:34:10 +00:00
|
|
|
printk(KERN_INFO "Write protecting the kernel read-only data: %luk\n",
|
|
|
|
size >> 10);
|
2008-01-30 12:34:08 +00:00
|
|
|
rodata_test();
|
2006-01-06 08:12:02 +00:00
|
|
|
|
2008-01-30 12:33:42 +00:00
|
|
|
#ifdef CONFIG_CPA_DEBUG
	printk(KERN_INFO "Testing CPA: undo %lx-%lx\n", start, start + size);
	set_pages_rw(virt_to_page(start), size >> PAGE_SHIFT);

	printk(KERN_INFO "Testing CPA: write protecting again\n");
	set_pages_ro(virt_to_page(start), size >> PAGE_SHIFT);
#endif
}
#endif

void free_init_pages(char *what, unsigned long begin, unsigned long end)
{
#ifdef CONFIG_DEBUG_PAGEALLOC
	/*
	 * If debugging page accesses then do not free this memory but
	 * mark them not present - any buggy init-section access will
	 * create a kernel page fault:
	 */
	printk(KERN_INFO "debug: unmapping init memory %08lx..%08lx\n",
		begin, PAGE_ALIGN(end));
	set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
#else
	unsigned long addr;

	/*
	 * We just marked the kernel text read only above, now that
	 * we are going to free part of that, we need to make that
	 * writeable first.
	 */
	set_memory_rw(begin, (end - begin) >> PAGE_SHIFT);

	for (addr = begin; addr < end; addr += PAGE_SIZE) {
		ClearPageReserved(virt_to_page(addr));
		init_page_count(virt_to_page(addr));
		/* Poison the page so stale init-section uses are noticeable. */
		memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE);
		free_page(addr);
		totalram_pages++;
	}
|
[PATCH] x86: tighten kernel image page access rights
On x86-64, kernel memory freed after init can be entirely unmapped instead
of just getting 'poisoned' by overwriting with a debug pattern.
On i386 and x86-64 (under CONFIG_DEBUG_RODATA), kernel text and bug table
can also be write-protected.
Compared to the first version, this one prevents re-creating deleted
mappings in the kernel image range on x86-64, if those got removed
previously. This, together with the original changes, prevents temporarily
having inconsistent mappings when cacheability attributes are being
changed on such pages (e.g. from AGP code). While on i386 such duplicate
mappings don't exist, the same change is done there, too, both for
consistency and because checking pte_present() before using various other
pte_XXX functions is a requirement anyway. At once, i386 code gets
adjusted to use pte_huge() instead of open coding this.
AK: split out cpa() changes
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andi Kleen <ak@suse.de>
2007-05-02 17:27:10 +00:00
|
|
|
	printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10);
#endif
}

void free_initmem(void)
{
	free_init_pages("unused kernel memory",
			(unsigned long)(&__init_begin),
			(unsigned long)(&__init_end));
}

#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
	free_init_pages("initrd memory", start, end);
}
#endif

int __init reserve_bootmem_generic(unsigned long phys, unsigned long len,
				   int flags)
{
	return reserve_bootmem(phys, len, flags);
}