/*
 * ld script to make ARM Linux kernel
 * taken from the i386 version by Russell King
 * Written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>
 */

#include <asm-generic/vmlinux.lds.h>
#include <asm/cache.h>
#include <asm/kernel-pgtable.h>
#include <asm/thread_info.h>
#include <asm/memory.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include "image.h"
/* .exit.text needed in case of alternative patching */
#define ARM_EXIT_KEEP(x)	x
#define ARM_EXIT_DISCARD(x)
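
/*
 * With the definitions above, ARM_EXIT_KEEP() preserves its argument and
 * ARM_EXIT_DISCARD() drops it, so EXIT_TEXT/EXIT_DATA end up in the
 * .exit.* output sections below rather than in /DISCARD/: the alternatives
 * patching code may still reference them at boot.
 */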

OUTPUT_ARCH(aarch64)
ENTRY(_text)
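/*
 * arm64 can load a naturally aligned 64-bit quantity atomically, so the
 * generic 'jiffies' symbol can simply alias the full 64-bit jiffies_64
 * counter rather than just its low word.
 */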
jiffies = jiffies_64;

#define HYPERVISOR_TEXT					\
	/*						\
	 * Align to 4 KB so that			\
	 * a) the HYP vector table is at its minimum	\
	 *    alignment of 2048 bytes			\
	 * b) the HYP init code will not cross a page	\
	 *    boundary if its size does not exceed	\
	 *    4 KB (see related ASSERT() below)		\
	 */						\
	. = ALIGN(SZ_4K);				\
	VMLINUX_SYMBOL(__hyp_idmap_text_start) = .;	\
	*(.hyp.idmap.text)				\
	VMLINUX_SYMBOL(__hyp_idmap_text_end) = .;	\
	VMLINUX_SYMBOL(__hyp_text_start) = .;		\
	*(.hyp.text)					\
	VMLINUX_SYMBOL(__hyp_text_end) = .;
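
/*
 * The .hyp.idmap.text chunk above is executed with the EL2 MMU off, so
 * KVM's init code maps it 1:1 (hence the idmap symbols); it must also fit
 * within a single page, which the ASSERT() at the end of this file
 * enforces together with the SZ_4K alignment.
 */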

#define IDMAP_TEXT					\
	. = ALIGN(SZ_4K);				\
	VMLINUX_SYMBOL(__idmap_text_start) = .;		\
	*(.idmap.text)					\
	VMLINUX_SYMBOL(__idmap_text_end) = .;
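
/*
 * .idmap.text likewise gathers kernel text that runs with the MMU off
 * (e.g. the early MMU-enable path), so it too must sit within a single
 * 4 KB page; see the matching ASSERT() below.
 */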

/*
 * The size of the PE/COFF section that covers the kernel image, which
 * runs from stext to _edata, must be a round multiple of the PE/COFF
 * FileAlignment, which we set to its minimum value of 0x200. 'stext'
 * itself is 4 KB aligned, so padding out _edata to a 0x200 aligned
 * boundary should be sufficient.
 */
PECOFF_FILE_ALIGNMENT = 0x200;

#ifdef CONFIG_EFI
#define PECOFF_EDATA_PADDING	\
	.pecoff_edata_padding : { BYTE(0); . = ALIGN(PECOFF_FILE_ALIGNMENT); }
#else
#define PECOFF_EDATA_PADDING
#endif
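
/*
 * The BYTE(0) above gives .pecoff_edata_padding real contents; an output
 * section holding only an ALIGN() could be treated as empty, and the
 * padding would then never make it into the file.
 */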

#if defined(CONFIG_DEBUG_ALIGN_RODATA)
#define ALIGN_DEBUG_RO			. = ALIGN(1<<SECTION_SHIFT);
#define ALIGN_DEBUG_RO_MIN(min)		ALIGN_DEBUG_RO
#elif defined(CONFIG_DEBUG_RODATA)
#define ALIGN_DEBUG_RO			. = ALIGN(1<<PAGE_SHIFT);
#define ALIGN_DEBUG_RO_MIN(min)		ALIGN_DEBUG_RO
#else
#define ALIGN_DEBUG_RO
#define ALIGN_DEBUG_RO_MIN(min)		. = ALIGN(min);
#endif
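
/*
 * DEBUG_ALIGN_RODATA pads the segment boundaries out to 1 << SECTION_SHIFT
 * so that text and rodata can be mapped with block (section) mappings
 * while still getting distinct permissions; plain DEBUG_RODATA only needs
 * page granularity.
 */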

SECTIONS
{
	/*
	 * XXX: The linker does not define how output sections are
	 * assigned to input sections when there are multiple statements
	 * matching the same input section name. There is no documented
	 * order of matching.
	 */
	/DISCARD/ : {
		ARM_EXIT_DISCARD(EXIT_TEXT)
		ARM_EXIT_DISCARD(EXIT_DATA)
		EXIT_CALL
		*(.discard)
		*(.discard.*)
	}

	. = PAGE_OFFSET + TEXT_OFFSET;
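	/*
	 * TEXT_OFFSET is the image's load offset from the base of RAM,
	 * as advertised in the Image header (traditionally 0x80000).
	 */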

	.head.text : {
		_text = .;
		HEAD_TEXT
	}
	ALIGN_DEBUG_RO
	.text : {			/* Real text segment		*/
		_stext = .;		/* Text and read-only data	*/
			__exception_text_start = .;
			*(.exception.text)
			__exception_text_end = .;
			IRQENTRY_TEXT
			TEXT_TEXT
			SCHED_TEXT
			LOCK_TEXT
			HYPERVISOR_TEXT
			IDMAP_TEXT
			*(.fixup)
			*(.gnu.warning)
		. = ALIGN(16);
		*(.got)			/* Global offset table		*/
	}

	RO_DATA(PAGE_SIZE)
	EXCEPTION_TABLE(8)
	NOTES
	ALIGN_DEBUG_RO
	_etext = .;			/* End of text and rodata section */

	ALIGN_DEBUG_RO_MIN(PAGE_SIZE)
	__init_begin = .;

	INIT_TEXT_SECTION(8)
	.exit.text : {
		ARM_EXIT_KEEP(EXIT_TEXT)
	}

	.init.data : {
		INIT_DATA
		INIT_SETUP(16)
		INIT_CALLS
		CON_INITCALL
		SECURITY_INITCALL
		INIT_RAM_FS
	}
	.exit.data : {
		ARM_EXIT_KEEP(EXIT_DATA)
	}

	PERCPU_SECTION(L1_CACHE_BYTES)

	. = ALIGN(4);
	.altinstructions : {
		__alt_instructions = .;
		*(.altinstructions)
		__alt_instructions_end = .;
	}
	.altinstr_replacement : {
		*(.altinstr_replacement)
	}
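	/*
	 * .altinstructions holds the alt_instr descriptors that
	 * apply_alternatives() walks at boot; the replacement code it
	 * copies over the original instructions lives in
	 * .altinstr_replacement.
	 */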

	. = ALIGN(PAGE_SIZE);
	__init_end = .;

	_data = .;
	_sdata = .;
	RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
	PECOFF_EDATA_PADDING
	_edata = .;

	BSS_SECTION(0, 0, 0)

	. = ALIGN(PAGE_SIZE);
	idmap_pg_dir = .;
	. += IDMAP_DIR_SIZE;
	swapper_pg_dir = .;
	. += SWAPPER_DIR_SIZE;
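	/*
	 * idmap_pg_dir and swapper_pg_dir reserve room for the initial
	 * identity-map and kernel page tables, which head.S populates with
	 * the MMU off; IDMAP_DIR_SIZE and SWAPPER_DIR_SIZE come from
	 * <asm/kernel-pgtable.h> and vary with the page size and the
	 * number of translation levels.
	 */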

	_end = .;
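	/*
	 * Everything between _edata and _end (BSS plus the page tables
	 * above) is absent from the Image file but still claimed at boot;
	 * image.h exports _end - _text as the effective image size so
	 * bootloaders can keep the DTB and initrd clear of this region.
	 */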

	STABS_DEBUG

	HEAD_SYMBOLS
}
/*
 * The HYP init code and ID map text can't be longer than a page each,
 * and should not cross a page boundary.
 */
ASSERT(__hyp_idmap_text_end - (__hyp_idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
	"HYP init code too big or misaligned")
ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
	"ID map text too big or misaligned")

/*
 * If padding is applied before .head.text, virt<->phys conversions will fail.
 */
ASSERT(_text == (PAGE_OFFSET + TEXT_OFFSET), "HEAD is misaligned")