License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results of the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or had no licensing in
it (per prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
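The decision rules in the bullets above can be condensed into a small helper. This is an illustrative reconstruction, not the actual tooling used; the function and its names are hypothetical:

```python
def concluded_license(path, scan_a, scan_b):
    """Illustrative reconstruction of the heuristics described above.

    scan_a / scan_b are the SPDX ids detected by the two independent
    scanners, or None when a scanner found no license traces.
    """
    is_uapi = "/uapi/" in path

    # Neither scanner found license traces: the top-level COPYING
    # license (GPL-2.0) applies, with the syscall note for uapi files.
    if scan_a is None and scan_b is None:
        if is_uapi:
            return "GPL-2.0 WITH Linux-syscall-note"
        return "GPL-2.0"

    # Both scanners agree: that becomes the concluded license.
    if scan_a == scan_b:
        lic = scan_a
        if is_uapi and lic.startswith(("GPL", "LGPL")):
            lic += " WITH Linux-syscall-note"
        return lic

    # Disagreement: flag the file for manual inspection.
    return None
```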
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 14:07:57 +00:00
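The tagging step described above (prepend the identifier in the comment style the file type expects) can be sketched as follows. This is a hypothetical sketch, not the actual script Thomas and Greg used; it assumes the kernel convention of `//` comments for .c files and block comments for headers and assembler-processed files:

```python
def spdx_line(filename, license_id="GPL-2.0"):
    """Return the SPDX tag in the comment style the file type expects."""
    if filename.endswith(".c"):
        return f"// SPDX-License-Identifier: {license_id}"
    # Headers, assembly, and linker scripts keep the block-comment form.
    return f"/* SPDX-License-Identifier: {license_id} */"


def tag_file_text(text, filename, license_id="GPL-2.0"):
    """Prepend the SPDX identifier as the first line of the file."""
    return spdx_line(filename, license_id) + "\n" + text
```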
/* SPDX-License-Identifier: GPL-2.0 */
/* ld script to make ARM Linux kernel
 * taken from the i386 version by Russell King
 * Written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>
 */

#ifdef CONFIG_XIP_KERNEL
#include "vmlinux-xip.lds.S"
#else

#include <asm-generic/vmlinux.lds.h>
#include <asm/cache.h>
#include <asm/thread_info.h>
#include <asm/memory.h>
#include <asm/mpu.h>
#include <asm/page.h>
#include <asm/pgtable.h>

#include "vmlinux.lds.h"

OUTPUT_ARCH(arm)
ENTRY(stext)

#ifndef __ARMEB__
jiffies = jiffies_64;
#else
jiffies = jiffies_64 + 4;
#endif

SECTIONS
{
/*
ARM: fix vmlinux.lds.S discarding sections
We are seeing linker errors caused by sections being discarded, despite
the linker script trying to keep them. The result is (eg):
`.exit.text' referenced in section `.alt.smp.init' of drivers/built-in.o: defined in discarded section `.exit.text' of drivers/built-in.o
`.exit.text' referenced in section `.alt.smp.init' of net/built-in.o: defined in discarded section `.exit.text' of net/built-in.o
This is the relevant part of the linker script (reformatted to make it
clearer):
| SECTIONS
| {
| /*
| * unwind exit sections must be discarded before the rest of the
| * unwind sections get included.
| */
| /DISCARD/ : {
| *(.ARM.exidx.exit.text)
| *(.ARM.extab.exit.text)
| }
| ...
| .exit.text : {
| *(.exit.text)
| *(.memexit.text)
| }
| ...
| /DISCARD/ : {
| *(.exit.text)
| *(.memexit.text)
| *(.exit.data)
| *(.memexit.data)
| *(.memexit.rodata)
| *(.exitcall.exit)
| *(.discard)
| *(.discard.*)
| }
| }
Now, this is what the linker manual says about discarded output sections:
| The special output section name `/DISCARD/' may be used to discard
| input sections. Any input sections which are assigned to an output
| section named `/DISCARD/' are not included in the output file.
No questions, no exceptions. It doesn't say "unless they are listed
before the /DISCARD/ section." Now, this is what asm-generic/vmlinux.lds.h
says:
| /*
| * Default discarded sections.
| *
| * Some archs want to discard exit text/data at runtime rather than
| * link time due to cross-section references such as alt instructions,
| * bug table, eh_frame, etc. DISCARDS must be the last of output
| * section definitions so that such archs put those in earlier section
| * definitions.
| */
And guess what - the list _always_ includes .exit.text etc.
Now, what's actually happening is that the linker is reading the script,
and it finds the first /DISCARD/ output section at the beginning of the
script. It continues reading the script, and finds the 'DISCARD' macro
at the end, which having been postprocessed results in another
/DISCARD/ output section. As the linker already contains the earlier
/DISCARD/ output section, it adds it to that existing section, so it
effectively is placed at the start. This can be seen by using the -M
option to ld:
| Linker script and memory map
|
| 0xc037c080 jiffies = jiffies_64
|
| /DISCARD/
| *(.ARM.exidx.exit.text)
| *(.ARM.extab.exit.text)
| *(.exit.text)
| *(.memexit.text)
| *(.exit.data)
| *(.memexit.data)
| *(.memexit.rodata)
| *(.exitcall.exit)
| *(.discard)
| *(.discard.*)
|
| 0xc0008000 . = 0xc0008000
|
| .head.text 0xc0008000 0x1d0
| 0xc0008000 _text = .
| *(.head.text)
| .head.text 0xc0008000 0x1d0 arch/arm/kernel/head.o
| 0xc0008000 stext
|
| .text 0xc0008200 0x2d78d0
| 0xc0008200 _stext = .
| 0xc0008200 __exception_text_start = .
| *(.exception.text)
| .exception.text
| ...
As you can see, all the discarded sections are grouped together - and
as a result of it being the first output section, they all appear before
any other section.
The result is that not only is the unwind information discarded (as
intended), but also the .exit.text, despite us wanting to have the
.exit.text preserved.
We can't move the unwind information elsewhere, because it'll then be
included even when we do actually discard the .exit.text (and similar)
sections.
So, work around this by avoiding the generic DISCARDS macro, and instead
conditionalize the sections to be discarded ourselves. This avoids the
ambiguity in how the linker assigns input sections to output sections,
making our script less dependent on undocumented linker behaviour.
Reported-by: Rob Herring <robherring2@gmail.com>
Tested-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-09-20 22:35:15 +00:00
 * XXX: The linker does not define how output sections are
 * assigned to input sections when there are multiple statements
 * matching the same input section name. There is no documented
 * order of matching.
 *
 * unwind exit sections must be discarded before the rest of the
 * unwind sections get included.
 */
/DISCARD/ : {
ARM_DISCARD
#ifndef CONFIG_SMP_ON_UP
*(.alt.smp.init)
#endif
}

. = PAGE_OFFSET + TEXT_OFFSET;
.head.text : {
_text = .;
HEAD_TEXT
}

#ifdef CONFIG_STRICT_KERNEL_RWX
. = ALIGN(1<<SECTION_SHIFT);
#endif

#ifdef CONFIG_ARM_MPU
. = ALIGN(PMSAv8_MINALIGN);
#endif

.text : { /* Real text segment */
_stext = .; /* Text and read-only data */
ARM_TEXT
}

#ifdef CONFIG_DEBUG_ALIGN_RODATA
. = ALIGN(1<<SECTION_SHIFT);
#endif
_etext = .; /* End of text section */

RO_DATA(PAGE_SIZE)

. = ALIGN(4);
__ex_table : AT(ADDR(__ex_table) - LOAD_OFFSET) {
__start___ex_table = .;
ARM_MMU_KEEP(*(__ex_table))
__stop___ex_table = .;
}

#ifdef CONFIG_ARM_UNWIND
ARM_UNWIND_SECTIONS
#endif

NOTES

#ifdef CONFIG_STRICT_KERNEL_RWX
. = ALIGN(1<<SECTION_SHIFT);
#else
. = ALIGN(PAGE_SIZE);
#endif
__init_begin = .;

ARM_VECTORS

INIT_TEXT_SECTION(8)

.exit.text : {
ARM_EXIT_KEEP(EXIT_TEXT)
}

.init.proc.info : {
ARM_CPU_DISCARD(PROC_INFO)
}

.init.arch.info : {
__arch_info_begin = .;
*(.arch.info.init)
__arch_info_end = .;
}

.init.tagtable : {
__tagtable_begin = .;
*(.taglist.init)
__tagtable_end = .;
}

#ifdef CONFIG_SMP_ON_UP
.init.smpalt : {
__smpalt_begin = .;
*(.alt.smp.init)
__smpalt_end = .;
}
#endif

.init.pv_table : {
ARM: P2V: introduce phys_to_virt/virt_to_phys runtime patching
This idea came from Nicolas, Eric Miao produced an initial version,
which was then rewritten into this.
Patch the physical to virtual translations at runtime. As we modify
the code, this makes it incompatible with XIP kernels, but allows us
to achieve this with minimal loss of performance.
As many translations are of the form:
physical = virtual + (PHYS_OFFSET - PAGE_OFFSET)
virtual = physical - (PHYS_OFFSET - PAGE_OFFSET)
we generate an 'add' instruction for __virt_to_phys(), and a 'sub'
instruction for __phys_to_virt(). We calculate at run time (PHYS_OFFSET
- PAGE_OFFSET) by comparing the address prior to MMU initialization with
where it should be once the MMU has been initialized, and place this
constant into the above add/sub instructions.
Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real
PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for
the C-mode PHYS_OFFSET variable definition to use.
At present, we are unable to support Realview with Sparsemem enabled
as this uses a complex mapping function, and MSM as this requires a
constant which will not fit in our math instruction.
Add a module version magic string for this feature to prevent
incompatible modules being loaded.
Tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-01-04 19:09:43 +00:00
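The translation arithmetic above can be sketched with assumed example addresses (the real constants come from comparing where a marker symbol actually ran before the MMU was enabled against where the linker placed it; the addresses below are hypothetical, typical ARM values):

```python
PAGE_OFFSET = 0xC0000000     # build-time constant (assumed typical value)
linked_addr = 0xC0008000     # where the linker placed a marker symbol
running_addr = 0x80008000    # where that code ran pre-MMU (assumed)

# The constant patched into the generated add/sub instructions:
offset = running_addr - linked_addr        # equals PHYS_OFFSET - PAGE_OFFSET
PHYS_OFFSET = PAGE_OFFSET + offset         # recovered at run time


def virt_to_phys(va):
    return va + offset    # the patched 'add'


def phys_to_virt(pa):
    return pa - offset    # the patched 'sub'
```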
__pv_table_begin = .;
*(.pv_table)
__pv_table_end = .;
}

INIT_DATA_SECTION(16)

.exit.data : {
ARM_EXIT_KEEP(EXIT_DATA)
}

ARM: 7428/1: Prevent KALLSYM size mismatch on ARM.
ARM builds seem to be plagued by an occasional build error:
Inconsistent kallsyms data
This is a bug - please report about it
Try "make KALLSYMS_EXTRA_PASS=1" as a workaround
The problem has to do with alignment of some sections by the linker.
The kallsyms data is built in two passes by first linking the kernel
without it, and then linking the kernel again with the symbols
included. Normally, this just shifts the symbols, without changing
their order, and the compression used by the kallsyms gives the same
result.
On non-SMP, the per-CPU data is empty. Depending on where the
alignment ends up, it can come out as either:
+-------------------+
| last text segment |
+-------------------+
/* padding */
+-------------------+ <- L1_CACHE_BYTES alignment
| per cpu (empty) |
+-------------------+
__per_cpu_end:
/* padding */
__data_loc:
+-------------------+ <- THREAD_SIZE alignment
| data |
+-------------------+
or
+-------------------+
| last text segment |
+-------------------+
/* padding */
+-------------------+ <- L1_CACHE_BYTES alignment
| per cpu (empty) |
+-------------------+
__per_cpu_end:
/* no padding */
__data_loc:
+-------------------+ <- THREAD_SIZE alignment
| data |
+-------------------+
if the alignment satisfies both. Because symbols that have the same
address are sorted by 'nm -n', the second case will be in a different
order than the first case. This changes the compression, changing the
size of the kallsyms data, causing the build failure.
The KALLSYMS_EXTRA_PASS=1 workaround usually works, but it is still
possible to have the alignment change between the second and third
pass. It's probably even possible for it to never reach a fixed point.
The problem only occurs on non-SMP, when the per-cpu data is empty,
and when the data segment has alignment (and immediately follows the
text segments). Fix this by only including the per_cpu section on
SMP, when it is not empty.
Signed-off-by: David Brown <davidb@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-06-20 21:52:24 +00:00
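The reordering described above can be seen with a toy model: `nm -n` sorts by address, so symbols that collapse onto the same address (when the padding vanishes) keep only whatever relative order they arrived in, which may differ between linking passes. The addresses below are assumed, not taken from a real build:

```python
# Assumed toy layouts; only the relative placement matters.
# With padding, __per_cpu_end and __data_loc get distinct addresses.
with_padding = {"__per_cpu_end": 0x100, "__data_loc": 0x140}
# Without padding, both land on the same address and a sort by
# address alone no longer fixes their relative order.
without_padding = {"__data_loc": 0x100, "__per_cpu_end": 0x100}


def nm_n(symbols):
    """Model of 'nm -n': a stable sort by address, so same-address
    symbols keep the order they arrived in."""
    return [name for name, addr in sorted(symbols.items(), key=lambda s: s[1])]
```

A different symbol order feeds a different name stream into the kallsyms compressor, hence the size mismatch.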
#ifdef CONFIG_SMP
PERCPU_SECTION(L1_CACHE_BYTES)
#endif
#ifdef CONFIG_HAVE_TCM
ARM_TCM
#endif

#ifdef CONFIG_STRICT_KERNEL_RWX
. = ALIGN(1<<SECTION_SHIFT);
#else
. = ALIGN(THREAD_SIZE);
#endif
__init_end = .;

_sdata = .;
RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
_edata = .;

BSS_SECTION(0, 0, 0)
#ifdef CONFIG_ARM_MPU
. = ALIGN(PMSAv8_MINALIGN);
#endif
_end = .;

STABS_DEBUG
}

#ifdef CONFIG_STRICT_KERNEL_RWX
/*
 * Without CONFIG_DEBUG_ALIGN_RODATA, __start_rodata_section_aligned will
 * be the first section-aligned location after __start_rodata. Otherwise,
 * it will be equal to __start_rodata.
 */
__start_rodata_section_aligned = ALIGN(__start_rodata, 1 << SECTION_SHIFT);
#endif

/*
 * These must never be empty.
 * If you have to comment these two assert statements out, your
 * binutils is too old (for other reasons as well).
 */
ASSERT((__proc_info_end - __proc_info_begin), "missing CPU support")
ASSERT((__arch_info_end - __arch_info_begin), "no machine record defined")
ARM, arm64: kvm: get rid of the bounce page
The HYP init bounce page is a runtime construct that ensures that the
HYP init code does not cross a page boundary. However, this is something
we can do perfectly well at build time, by aligning the code appropriately.
For arm64, we just align to 4 KB, and enforce that the code size is less
than 4 KB, regardless of the chosen page size.
For ARM, the whole code is less than 256 bytes, so we tweak the linker
script to align at a power-of-2 upper bound of the code size.
Note that this also fixes a benign off-by-one error in the original bounce
page code, where a bounce page would be allocated unnecessarily if the code
was exactly 1 page in size.
On ARM, it also fixes an issue with very large kernels reported by Arnd
Bergmann, where stub sections with linker emitted veneers could erroneously
trigger the size/alignment ASSERT() in the linker script.
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 16:42:26 +00:00
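The power-of-2 upper bound mentioned above can be sketched as follows: if the code is aligned to the smallest power of two covering its size, and that bound is at most a page, the code cannot cross a page boundary. Illustrative only; the example sizes are assumed:

```python
def pow2_upper_bound(size):
    """Smallest power of two >= size (the alignment the script enforces)."""
    n = 1
    while n < size:
        n <<= 1
    return n


def crosses_page(start, size, page_size=4096):
    """True if [start, start + size) straddles a page boundary."""
    return (start // page_size) != ((start + size - 1) // page_size)


# Any start that is a multiple of the alignment keeps the code
# inside one page, as long as the bound fits in a page.
align = pow2_upper_bound(240)   # e.g. ~256 bytes of HYP init code on ARM
start = 3 * align
```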
/*
 * The HYP init code can't be more than a page long,
 * and should not cross a page boundary.
 * The above comment applies as well.
 */
ASSERT(__hyp_idmap_text_end - (__hyp_idmap_text_start & PAGE_MASK) <= PAGE_SIZE,
"HYP init code too big or misaligned")

#endif /* CONFIG_XIP_KERNEL */