On arm32, the machine model specified in the device tree is printed
during boot-up, courtesy of of_flat_dt_match_machine().
On arm64, of_flat_dt_match_machine() is not called, and the machine
model information is not available from the kernel log.
Print the machine model to make it easier to derive the machine model
from an arbitrary kernel boot log.
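For illustration, the message could be emitted along these lines (a minimal
sketch; the exact message text and call site are illustrative, assuming
of_flat_dt_get_machine_name() is usable at this point in the arm64 setup
code):

  const char *machine_name = of_flat_dt_get_machine_name();

  /* log the DT machine model so it shows up in every boot log */
  if (machine_name)
          pr_info("Machine model: %s\n", machine_name);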
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add missing L2 cache events: read/write accesses and misses, as well as
the DTLB refills.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The arm64 module PLT code allocates all PLT entries in a single core
section, since the overhead of having a separate init PLT section is
not justified by the small number of PLT entries usually required for
init code.
However, the core and init module regions are allocated independently,
and there is a corner case where the core region may be allocated from
the VMALLOC region if the dedicated module region is exhausted, but the
init region, being much smaller, can still be allocated from the module
region. This leads to relocation failures if the distance between those
regions exceeds 128 MB. (In fact, this corner case is highly unlikely to
occur on arm64, but the issue has been observed on ARM, whose module
region is much smaller).
So split the core and init PLT regions, and name the latter ".init.plt"
so it gets allocated along with (and sufficiently close to) the .init
sections that it serves. Also, given that init PLT entries may need to
be emitted for branches that target the core module, modify the logic
that disregards defined symbols to only disregard symbols that are
defined in the same section as the relocated branch instruction.
Since there may now be two PLT entries associated with each entry in
the symbol table, we can no longer hijack the symbol::st_size fields
to record the addresses of PLT entries as we emit them for zero-addend
relocations. So instead, perform an explicit comparison to check for
duplicate entries.
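As a rough sketch, assuming the existing struct plt_entry layout
(mov0/mov1/mov2/br), the duplicate check could look something like:

  static bool plt_entries_equal(const struct plt_entry *a,
                                const struct plt_entry *b)
  {
          /* two PLT entries are duplicates iff all four instructions match */
          return a->mov0 == b->mov0 && a->mov1 == b->mov1 &&
                 a->mov2 == b->mov2 && a->br == b->br;
  }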
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit f1b36dcb5c ("arm64: pmuv3: handle !PMUv3 when probing") is
a little too restrictive, and prevents the use of backwards-compatible
PMUv3 extensions, which have a PMUver value other than 1.
For instance, ARMv8.1 PMU extensions (as implemented by ThunderX2) are
reported with PMUver value 4.
Per the usual ID register principles, at least 0x1-0x7 imply a
PMUv3-compatible PMU. It's not currently clear whether 0x8-0xe imply the
same.
For the time being, treat the field as signed, with 0x1-0x7 taken to mean
that PMUv3 is implemented. This may be relaxed by future patches.
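Concretely, the probe code can extract the field as a signed value, roughly
as follows (a sketch only; dfr0 is assumed to hold the value read from
ID_AA64DFR0_EL1):

  int pmuver = cpuid_feature_extract_signed_field(dfr0,
                                  ID_AA64DFR0_PMUVER_SHIFT);

  /* 0x1-0x7 means a PMUv3-compatible PMU; 0x0, 0x8-0xf are rejected */
  if (pmuver < 1)
          return;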
Reported-by: Jayachandran C <jnair@caviumnetworks.com>
Tested-by: Jayachandran C <jnair@caviumnetworks.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We now trap accesses to CNTVCT_EL0 when the counter is broken
enough to require the kernel to mediate the access. But it
turns out that some existing userspace (such as OpenMPI) does
probe for the counter frequency, leading to an UNDEF exception
as CNTVCT_EL0 and CNTFRQ_EL0 share the same control bit.
The fix is to handle the exception the same way we do for CNTVCT_EL0.
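A sketch of such a handler (the exact ESR field macros used to extract the
destination register are assumptions here):

  static void cntfrq_read_handler(unsigned int esr, struct pt_regs *regs)
  {
          int rt = (esr & ESR_ELx_SYS64_ISS_RT_MASK) >>
                          ESR_ELx_SYS64_ISS_RT_SHIFT;

          /* return the real frequency and skip the trapped MRS */
          regs->regs[rt] = read_sysreg(cntfrq_el0);
          regs->pc += 4;
  }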
Fixes: a86bd139f2 ("arm64: arch_timer: Enable CNTVCT_EL0 trap if workaround is enabled")
Reported-by: Hanjun Guo <guohanjun@huawei.com>
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Since bbb56c2722 ("arm64: Add detection code for broken .inst support
in binutils"), running any make target that doesn't involve the cross
compiler results in a spurious warning:
$ make ARCH=arm64 menuconfig
arch/arm64/Makefile:43: Detected assembler with broken .inst; disassembly will be unreliable
while
$ make ARCH=arm64 CROSS_COMPILE=aarch64-arm-linux- menuconfig
is silent (assuming your compiler is not affected). That's because
the code that tests for the workaround is always run, irrespective
of whether the current configuration is available or not.
An easy fix is to make the detection conditional on CONFIG_ARM64
being defined, which is only the case when actually building
something.
Fixes: bbb56c2722 ("arm64: Add detection code for broken .inst support in binutils")
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Now that we have a framework to handle the ACPI bits, make the PMUv3
code use this. The framework is a little different to what was
originally envisaged, and we can drop some unused support code in the
process of moving over to it.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
[will: make armv8_pmu_driver_init static]
Signed-off-by: Will Deacon <will.deacon@arm.com>
When probing via ACPI, we won't know up-front whether a CPU has a PMUv3
compatible PMU. Thus we need to consult ID registers during probe time.
This patch updates our PMUv3 probing code to test for the presence of
PMUv3 functionality before touching any PMUv3-specific registers, and
before updating the struct arm_pmu with PMUv3 data.
When a PMUv3-compatible PMU is not present, probing will return -ENODEV.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch adds framework code to handle parsing PMU data out of the
MADT, sanity checking this, and managing the association of CPUs (and
their interrupts) with appropriate logical PMUs.
For the time being, we expect that only one PMU driver (PMUv3) will make
use of this, and we simply pass in a single probe function.
This is based on an earlier patch from Jeremy Linton.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently the ACPI parking protocol code needs to parse each CPU's MADT
GICC table to extract the mailbox address and so on. Each time we parse
a GICC table, we call back to the parking protocol code to parse it.
This has been fine so far, but we're about to have more code that needs
to extract data from the GICC tables, and adding a callback for each
user is going to get unwieldy.
Instead, this patch ensures that we stash a copy of each CPU's GICC
table at boot time, such that anything needing to parse it can later
request it. This will allow for other parsers of GICC, and for
simplification to the ACPI parking protocol code. Note that we must
store a copy, rather than a pointer, since the core ACPI code
temporarily maps/unmaps tables while iterating over them.
Since we parse the MADT before we know how many CPUs we have (and hence
before we set up the percpu areas), we must use an NR_CPUS-sized array.
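Roughly, and with names that are only illustrative, the stash could look
like:

  static struct acpi_madt_generic_interrupt cpu_madt_gicc[NR_CPUS];

  struct acpi_madt_generic_interrupt *acpi_cpu_get_madt_gicc(int cpu)
  {
          return &cpu_madt_gicc[cpu];
  }

  /* called while iterating over the MADT at boot */
  static void __init stash_gicc(int cpu,
                                struct acpi_madt_generic_interrupt *gicc)
  {
          /* copy, not a pointer: the table is unmapped after iteration */
          cpu_madt_gicc[cpu] = *gicc;
  }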
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Now that we've split the pdev and DT probing logic from the runtime
management, let's move the former into its own file. We gain a few lines
due to the copyright header and includes, but this should keep the logic
clearly separated, and paves the way for adding ACPI support in a
similar fashion.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
[will: rename nr_irqs to avoid conflict with global variable]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently we request (and potentially free) all IRQs for a given PMU in
cpu_pmu_init(). This works for platform/DT probing today, but it doesn't
fit ACPI well as we don't have all our affinity data up-front.
In preparation for ACPI support, fold the IRQ request/free into
arm_pmu_device_probe(), which will remain specific to platform/DT
probing.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently we have functions to request/free all IRQs for a given PMU.
While this works today, this won't work for ACPI, where we don't know
the full set of IRQs up front, and need to request them separately.
To enable supporting ACPI, this patch splits out the cpu-local
request/free into new functions, allowing us to request/free individual
IRQs.
As this makes it possible/necessary to request a PPI once per cpu, an
additional check is added to detect mismatched PPIs. This shouldn't
matter for the DT / platform case, as we check this when parsing.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
For historical reasons, portions of the arm_pmu code use a cpu_pmu_
prefix rather than an armpmu_ prefix. While a minor annoyance, this
hasn't been a problem thus far.
However, to enable ACPI support, we'll need to expose a few things in
header files, and we should aim to keep those consistently namespaced.
In preparation for exporting our IRQ request/free functions, rename
these to have an armpmu_ prefix. For consistency, the 'cpu_pmu'
parameter is also renamed to 'armpmu'.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In armpmu_dispatch_irq() we look at arm_pmu::plat_device to acquire
platdata, so that we can defer to platform-specific IRQ handling,
required on some 32-bit parts. With the advent of ACPI we won't always
have a platform_device, and so we must avoid trying to dereference
fields from it.
This patch fixes up armpmu_dispatch_irq() to avoid doing so, introducing
a new armpmu_get_platdata() helper.
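The helper is essentially a NULL-safe wrapper around dev_get_platdata(); a
sketch:

  static inline struct arm_pmu_platdata *armpmu_get_platdata(struct arm_pmu *armpmu)
  {
          struct platform_device *pdev = armpmu->plat_device;

          /* with ACPI probing there may be no platform_device at all */
          return pdev ? dev_get_platdata(&pdev->dev) : NULL;
  }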
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ARM PMU framework code always uses armpmu_dispatch_irq as its common
IRQ handler. Passing this down from cpu_pmu_init() is somewhat
pointless, and gets in the way of refactoring.
This patch makes cpu_pmu_request_irqs() always use armpmu_dispatch_irq
as the handler when requesting IRQs, and removes the handler parameter
from its prototype.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently arm_pmu_device_probe contains probing logic specific to the
platform_device infrastructure, and some logic required to safely
register the PMU with various systems.
This patch factors out the logic relating to the registration of the
PMU. This makes arm_pmu_device_probe a little easier to read, and will
make it easier to reuse the logic for an ACPI-specific probing
mechanism.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Given we always want to initialise common fields on an allocated PMU,
this patch folds this common initialisation into armpmu_alloc(). This
will make it simpler to reuse this code for an ACPI-specific probe path.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We expect an ARM PMU's init function to have a particular prototype,
which we open-code in a few places. This is less than ideal, considering
that we cast a void value to this type in one location, and a mismatch
could easily be missed.
Add a typedef so that we can ensure this is consistent.
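For example (sketch):

  typedef int (*armpmu_init_fn)(struct arm_pmu *);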
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We currently disable the PMU temporarily in armpmu_add(). We may have
required this historically, but the perf core always disables an event's
PMU when calling event::pmu::add(), so this is not necessary.
We don't do anything similar in armpmu_del(), or elsewhere, so this is
unnecessary and inconsistent, and only serves to confuse the reader.
Remove the pointless disable, simplifying armpmu_add() in the process.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'arch-timer-errata-prereq' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into for-next/core
Pre-requisites for the arch timer errata workarounds:
- Allow checking of a CPU-local erratum
- Add CNTVCT_EL0 trap handler
- Define Cortex-A73 MIDR
- Allow an erratum to be matched for all revisions of a core
- Add capability to advertise Cortex-A73 erratum 858921
* tag 'arch-timer-errata-prereq' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms:
arm64: cpu_errata: Add capability to advertise Cortex-A73 erratum 858921
  arm64: cpu_errata: Allow an erratum to be matched for all revisions of a core
arm64: Define Cortex-A73 MIDR
arm64: Add CNTVCT_EL0 trap handler
arm64: Allow checking of a CPU-local erratum
In order to work around Cortex-A73 erratum 858921 in a subsequent
patch, add the required capability that advertises the erratum.
As the configuration option it depends on is not present yet,
this has no immediate effect.
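A sketch of the corresponding cpu_errata entry (the Kconfig guard and
capability constant are illustrative here):

  #ifdef CONFIG_ARM64_ERRATUM_858921
          {
                  /* Cortex-A73 all versions */
                  .desc = "ARM erratum 858921",
                  .capability = ARM64_WORKAROUND_858921,
                  MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
          },
  #endif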
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Some minor errata may not be fixed in further revisions of a core,
leading to a situation where the workaround needs to be updated each
time an updated core is released.
Introduce a MIDR_ALL_VERSIONS match helper that will work for all
versions of that MIDR, once and for all.
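Assuming the existing MIDR-range matching fields used by the cpu_errata
table (def_scope/matches/midr_*), the helper could be defined along these
lines:

  #define MIDR_ALL_VERSIONS(model)                                      \
          .def_scope = SCOPE_LOCAL_CPU,                                 \
          .matches = is_affected_midr_range,                            \
          .midr_model = model,                                          \
          .midr_range_min = 0,                                          \
          .midr_range_max = (MIDR_VARIANT_MASK | MIDR_REVISION_MASK)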
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
As we're about to introduce a new workaround that is specific to
Cortex-A73, let's define the corresponding MIDR.
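For reference, the definition amounts to something like (part number per the
Cortex-A73 TRM):

  #define ARM_CPU_PART_CORTEX_A73         0xD09
  #define MIDR_CORTEX_A73 \
          MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A73)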
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Since people seem to make a point of breaking the userspace-visible
counter, we have no choice but to trap the access. Add the required
handler.
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
this_cpu_has_cap() only checks the feature array, and not the errata
one. In order to be able to check for a CPU-local erratum, allow it
to inspect the latter as well.
This is consistent with cpus_have_cap()'s behaviour, which includes
errata already.
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
If a page is marked read-only, we should print out that fact instead
of printing out that there was a page fault. Right now we get a cryptic
error message that something went wrong with an unhandled fault, but we
don't evaluate the ESR to figure out that it was a read/write permission
fault.
Instead of seeing:
Unable to handle kernel paging request at virtual address ffff000008e460d8
pgd = ffff800003504000
[ffff000008e460d8] *pgd=0000000083473003, *pud=0000000083503003, *pmd=0000000000000000
Internal error: Oops: 9600004f [#1] PREEMPT SMP
we'll see:
Unable to handle kernel write to read-only memory at virtual address ffff000008e760d8
pgd = ffff80003d3de000
[ffff000008e760d8] *pgd=0000000083472003, *pud=0000000083435003, *pmd=0000000000000000
Internal error: Oops: 9600004f [#1] PREEMPT SMP
We also add a userspace address check into is_permission_fault()
so that the function doesn't return true for ttbr0 PAN faults
when it shouldn't.
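A sketch of how the message selection in the kernel fault path could look
(the exact helper and field names are assumptions here):

  const char *msg;

  if (is_permission_fault(esr, regs, addr)) {
          if (esr & ESR_ELx_WNR)
                  msg = "write to read-only memory";
          else
                  msg = "read from unreadable memory";
  } else if (addr < PAGE_SIZE) {
          msg = "NULL pointer dereference";
  } else {
          msg = "paging request";
  }

  pr_alert("Unable to handle kernel %s at virtual address %016lx\n",
           msg, addr);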
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Acked-by: Laura Abbott <labbott@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Stephen Boyd <stephen.boyd@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In cases where a device tree is not provided (i.e. an ACPI-based system),
an empty fdt is generated by efistub. #address-cells and #size-cells are
not set in the empty fdt, so they default to 1 (4 bytes wide). This can be
an issue on 64-bit systems, where values representing addresses etc. may be
8 bytes wide: the default does not align with the general requirements for
an empty DTB, and is fragile when passed to other agents, as extra care is
required to read the entire width of a value.
This issue is observed on Qualcomm Technologies QDF24XX platforms when
kexec-tools inserts 64-bit addresses into the "linux,elfcorehdr" and
"linux,usable-memory-range" properties of the fdt. When the values are
later consumed, they are truncated to 32-bit.
Setting #address-cells and #size-cells to 2 at creation of the empty fdt
resolves the observed issue, and makes the fdt less fragile.
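In libfdt terms, the fix amounts to something like the following (a sketch;
node offset 0 is the root of the freshly created empty tree):

  /* after fdt_create_empty_tree(), before any other properties are set */
  status = fdt_setprop_cell(fdt, 0, "#address-cells", 2);
  if (status == 0)
          status = fdt_setprop_cell(fdt, 0, "#size-cells", 2);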
Signed-off-by: Sameer Goel <sgoel@codeaurora.org>
Signed-off-by: Jeffrey Hugo <jhugo@codeaurora.org>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add documentation for DT properties:
linux,usable-memory-range
linux,elfcorehdr
used by arm64 kdump. Those are, respectively, the usable memory range
allocated to the crash dump kernel and the elfcorehdr's location within it.
Signed-off-by: James Morse <james.morse@arm.com>
[takahiro.akashi@linaro.org: update the text due to recent changes ]
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: devicetree@vger.kernel.org
Cc: Rob Herring <robh+dt@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add arch specific descriptions about kdump usage on arm64 to kdump.txt.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Kdump is enabled by default as kexec is.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Arch-specific functions are added to allow for implementing a crash dump
file interface, /proc/vmcore, which can be viewed as an ELF file.
A user space tool, like kexec-tools, is responsible for allocating
a separate region for the core's ELF header within crash dump kernel
memory and filling it in when executing kexec_load().
Then, its location will be advertised to the crash dump kernel via a new
device-tree property, "linux,elfcorehdr", and the crash dump kernel
preserves the region for later use with reserve_elfcorehdr() at boot time.
On the crash dump kernel, /proc/vmcore will access the primary kernel's
memory with copy_oldmem_page(), which feeds the data page-by-page by
ioremap'ing it since it does not reside in the linear mapping on the crash
dump kernel.
Meanwhile, elfcorehdr_read() is simple as the region is always mapped.
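A sketch of the reader (whether ioremap_cache() or memremap() is used for
the temporary mapping is an implementation detail here):

  ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
                           unsigned long offset, int userbuf)
  {
          void *vaddr;

          if (!csize)
                  return 0;

          /* the old kernel's page is not in our linear map; map it now */
          vaddr = memremap(__pfn_to_phys(pfn), PAGE_SIZE, MEMREMAP_WB);
          if (!vaddr)
                  return -ENOMEM;

          if (userbuf) {
                  if (copy_to_user((char __user *)buf, vaddr + offset, csize)) {
                          memunmap(vaddr);
                          return -EFAULT;
                  }
          } else {
                  memcpy(buf, vaddr + offset, csize);
          }

          memunmap(vaddr);
          return csize;
  }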
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In addition to the common VMCOREINFO entries defined in
crash_save_vmcoreinfo_init(), the crash utility needs to know
- kimage_voffset
- PHYS_OFFSET
to examine the contents of a dump file (/proc/vmcore) correctly
due to the introduction of KASLR (CONFIG_RANDOMIZE_BASE) in v4.6.
- VA_BITS
is also required for the makedumpfile command.
arch_crash_save_vmcoreinfo() appends them to the dump file.
More VMCOREINFO entries may be added later.
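For reference, the arch hook could be sketched as:

  void arch_crash_save_vmcoreinfo(void)
  {
          VMCOREINFO_NUMBER(VA_BITS);
          /* kimage_voffset and PHYS_OFFSET are needed to translate addresses */
          vmcoreinfo_append_str("NUMBER(kimage_voffset)=0x%llx\n",
                                kimage_voffset);
          vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n",
                                PHYS_OFFSET);
  }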
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The primary kernel calls machine_crash_shutdown() to shut down non-boot
CPUs and save registers' status in per-cpu ELF notes before starting the
crash dump kernel. See kernel_kexec().
Even if not all secondary CPUs have shut down, we do kdump anyway.
As we don't have to make non-boot (crashed) CPUs offline (to preserve
the correct status of CPUs at crash dump) before shutting down, this patch
also adds a variant of smp_send_stop().
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Since arch_kexec_protect_crashkres() removes a mapping for the crash dump
kernel image, the loaded data won't be preserved around hibernation.
In this patch, helper functions, crash_prepare_suspend()/
crash_post_resume(), are additionally called before/after hibernation so
that the relevant memory segments will be mapped again and preserved just
as the others are.
In addition, to minimize the size of the hibernation image, crash_is_nosave()
is added to pfn_is_nosave() in order to recognize only the pages that hold
the loaded crash dump kernel image as saveable. Hibernation excludes any pages
that are marked as Reserved and yet "nosave."
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
arch_kexec_protect_crashkres() and arch_kexec_unprotect_crashkres()
are meant to be called by kexec_load() in order to protect the memory
allocated for the crash dump kernel once the image is loaded.
The protection is implemented by unmapping the relevant segments in crash
dump kernel memory, rather than making it read-only as other archs do,
to prevent coherency issues due to potential cache aliasing (with
mismatched attributes).
Page-level mappings are consistently used here so that we can change
the attributes of segments at page granularity, as well as shrink the
region, also at page granularity, through /sys/kernel/kexec_crash_size,
putting the freed memory back to the buddy system.
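Assuming a page-attribute helper along the lines of set_memory_valid()
(added earlier in this series), the protect side could be sketched as:

  void arch_kexec_protect_crashkres(void)
  {
          int i;

          /* unmap every loaded segment of the crash kernel image */
          for (i = 0; i < kexec_crash_image->nr_segments; i++)
                  set_memory_valid(
                          __phys_to_virt(kexec_crash_image->segment[i].mem),
                          kexec_crash_image->segment[i].memsz >> PAGE_SHIFT,
                          0);
  }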
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This function validates and invalidates PTE entries, and will be utilized
in kdump to protect the loaded crash dump kernel image.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
"crashkernel=" kernel parameter specifies the size (and optionally
the start address) of the system ram to be used by crash dump kernel.
reserve_crashkernel() will allocate and reserve that memory at boot time
of primary kernel.
The memory range will be exposed to userspace as a resource named
"Crash kernel" in /proc/iomem.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The crash dump kernel uses only a limited range of available memory as
System RAM. On arm64 kdump, this memory range is advertised to the crash
dump kernel via a device-tree property under /chosen,
linux,usable-memory-range = <BASE SIZE>
The crash dump kernel reads this property at boot time and calls
memblock_cap_memory_range() to limit usable memory to what is listed either
in the UEFI memory map or the "memory" nodes of a device tree blob.
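A sketch of the early FDT scan for this property (helper names from the
flattened device tree code; the callback shape is as used with
of_scan_flat_dt()):

  static int __init early_init_dt_scan_usablemem(unsigned long node,
                          const char *uname, int depth, void *data)
  {
          struct memblock_region *usablemem = data;
          const __be32 *reg;
          int len;

          if (depth != 1 || strcmp(uname, "chosen") != 0)
                  return 0;

          reg = of_get_flat_dt_prop(node, "linux,usable-memory-range", &len);
          if (!reg ||
              len < (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32))
                  return 1;

          usablemem->base = dt_mem_next_cell(dt_root_addr_cells, &reg);
          usablemem->size = dt_mem_next_cell(dt_root_size_cells, &reg);

          return 1;
  }

The memblock setup code would then call memblock_cap_memory_range() with the
base and size recovered above.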
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Geoff Levand <geoff@infradead.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add memblock_cap_memory_range() which will remove all the memblock regions
except the memory range specified in the arguments. In addition, rework is
done on memblock_mem_limit_remove_map() to re-implement it using
memblock_cap_memory_range().
This function, like memblock_mem_limit_remove_map(), will not remove
memblocks with the MEMBLOCK_NOMAP attribute as they may be mapped and
accessed later as "device memory."
See the commit a571d4eb55 ("mm/memblock.c: add new infrastructure to
address the mem limit issue").
This function is used, in a succeeding patch in the arm64 kdump support
series, to limit the range of usable memory, or System RAM, on the crash
dump kernel.
(Please note that the "mem=" parameter is of little use for this purpose.)
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Dennis Chen <dennis.chen@arm.com>
Cc: linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This function, in combination with memblock_mark_nomap(), will be used
in a later kdump patch for arm64 when it temporarily isolates some range
of memory from the other memory blocks in order to create a specific
kernel mapping at boot time.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'acpi-arm64-for-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/linux into for-next/core
ACPI ARM64 specific changes for v4.12.
Patches contain:
- IORT kernel interface misc clean-ups
- IORT id mapping interface refactoring in preparation for platform
MSI (IORT named components -> GIC ITS mappings) devid mapping code
- IORT id mapping implementation for named component nodes to ITS nodes,
in order to provide the kernel with a firmware interface to map
platform devices' devids to GIC ITS components
* tag 'acpi-arm64-for-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/linux:
ACPI: platform: setup MSI domain for ACPI based platform device
ACPI: platform-msi: retrieve devid from IORT
ACPI/IORT: Introduce iort_node_map_platform_id() to retrieve dev id
ACPI/IORT: Rename iort_node_map_rid() to make it generic
ACPI/IORT: Rework iort_match_node_callback() return value handling
ACPI/IORT: Add missing comment for iort_dev_find_its_id()
ACPI/IORT: Fix the indentation in iort_scan_node()
To prevent unintended modifications to the kernel text (malicious or
otherwise) while running the EFI stub, describe the kernel image as
two separate sections: a .text section with read-execute permissions,
covering .text, .rodata and .init.text, and a .data section with
read-write permissions, covering .init.data, .data and .bss.
This relies on the firmware to actually take the section permission
flags into account, but this is something that is currently being
implemented in EDK2, which means we will likely start seeing it in
the wild between one and two years from now.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Replace open coded constants with symbolic ones throughout the
Image and the EFI headers. No binary level changes are intended.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The kernel's EFI PE/COFF header contains a dummy .reloc section, and
an explanatory comment that claims that this is required for the EFI
application loader to accept the Image as a relocatable image (i.e.,
one that can be loaded at any offset and fixed up in place).
This was inherited from the x86 implementation, which has elaborate host
tooling to mangle the PE/COFF header post-link time, and which populates
the .reloc section with a single dummy base relocation. On ARM, no such
tooling exists, and the .reloc section remains empty, and is never even
exposed via the BaseRelocationTable directory entry, which is where the
PE/COFF loader looks for it.
The PE/COFF spec is unclear about relocatable images that do not require
any fixups, but the EDK2 implementation, which is the de facto reference
for PE/COFF in the UEFI space, clearly does not care, and explicitly
mentions (in a comment) that relocatable images with no base relocations
are perfectly fine, as long as they don't have the RELOCS_STRIPPED
attribute set (which is not the case for our PE/COFF image).
So simply remove the .reloc section altogether.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bring the PE/COFF header in line with the PE/COFF spec, by setting
NumberOfSymbols to 0, and removing the section alignment flags.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
After having split off the PE header, clean up the bits that remain:
use .long consistently, merge two adjacent #ifdef CONFIG_EFI blocks,
fix the offset of the PE header pointer and remove the redundant .align
that follows it.
Also, since we will be eliminating all open coded constants from the
EFI header in subsequent patches, let's replace the open coded "ARM\x64"
magic number with its .ascii equivalent.
No changes to the resulting binary image are intended.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In preparation for yet another round of modifications to the PE/COFF
header, macroize it and move the definition into a separate source
file.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>