Commit Graph

63 Commits

Author SHA1 Message Date
Shaokun Zhang
fe7296e192 arm64: perf: Extend event config for ARMv8.1
Perf already supports the ARMv8.1 feature of a 16-bit evtCount field [see
c210ae8 ("arm64: perf: Extend event mask for ARMv8.1")], so the event
config should be extended to 16 bits as well. Otherwise, using
-e event_name with an event_code greater than 0x3ff makes pmu_config_term
return -EINVAL, because pmu_format_max_value depends on the advertised
event config width.

This patch extends the event config to 16 bits.

Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-05-30 12:15:14 +01:00
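
For illustration, the widened sysfs format entry described above could look
roughly like this, using the kernel's generic PMU_FORMAT_ATTR helper (the
exact attribute wiring is assumed, not quoted from the patch):

  #include <linux/perf_event.h>

  /* Advertise a 16-bit event field so the perf tool encodes event codes
   * above 0x3ff into attr.config instead of rejecting them up front. */
  PMU_FORMAT_ATTR(event, "config:0-15");   /* previously "config:0-9" */
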
Ganapatrao Kulkarni
78a19cfdf3 arm64: perf: Ignore exclude_hv when kernel is running in HYP
Commit d98ecdaca2 ("arm64: perf: Count EL2 events if the kernel is
running in HYP") returns -EINVAL when the perf_event_open system call is
made with exclude_hv != exclude_kernel. This change breaks applications
on VHE-enabled ARMv8.1 platforms. The issue was observed with the HHVM
application, which calls perf_event_open with exclude_hv = 1 and
exclude_kernel = 0.

There is no separate hypervisor privilege level when VHE is enabled; the
host kernel itself runs at EL2. So when VHE is enabled, we should ignore
exclude_hv from the application. This behaviour is consistent with PowerPC,
where exclude_hv is ignored when no hypervisor is present, and with x86,
where this flag is likewise ignored.

Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>
[will: added comment to justify the behaviour of exclude_hv]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-05-15 18:30:37 +01:00
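
A rough sketch of the resulting event-filter logic under VHE; the constant
and helper names follow the kernel's usual conventions but should be read
as assumptions rather than the literal diff:

  static int armv8pmu_set_event_filter(struct hw_perf_event *event,
                                       struct perf_event_attr *attr)
  {
          unsigned long config_base = 0;

          /* With VHE the host kernel runs at EL2, so EL2 counting is
           * governed by exclude_kernel and exclude_hv is simply ignored. */
          if (is_kernel_in_hyp_mode()) {
                  if (!attr->exclude_kernel)
                          config_base |= ARMV8_PMU_INCLUDE_EL2;
          } else {
                  if (attr->exclude_kernel)
                          config_base |= ARMV8_PMU_EXCLUDE_EL1;
                  if (!attr->exclude_hv)
                          config_base |= ARMV8_PMU_INCLUDE_EL2;
          }
          if (attr->exclude_user)
                  config_base |= ARMV8_PMU_EXCLUDE_EL0;

          event->config_base = config_base;
          return 0;
  }
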
Florian Fainelli
f5337346cd arm64: pmu: Wire-up Cortex A53 L2 cache events and DTLB refills
Add missing L2 cache events: read/write accesses and misses, as well as
the DTLB refills.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-28 15:23:36 +01:00
Mark Rutland
faa9a08397 arm64: pmuv3: handle pmuv3+
Commit f1b36dcb5c ("arm64: pmuv3: handle !PMUv3 when probing") is
a little too restrictive, and prevents the use of backwards-compatible
PMUv3 extensions, which have a PMUver value other than 1.

For instance, ARMv8.1 PMU extensions (as implemented by ThunderX2) are
reported with PMUver value 4.

Per the usual ID register principles, at least 0x1-0x7 imply a
PMUv3-compatible PMU. It's not currently clear whether 0x8-0xe imply the
same.

For the time being, treat the value as signed, with 0x1-0x7 meaning that
PMUv3 is implemented. This may be relaxed by future patches.

Reported-by: Jayachandran C <jnair@caviumnetworks.com>
Tested-by: Jayachandran C <jnair@caviumnetworks.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-25 15:12:59 +01:00
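
A hedged sketch of the probe-time check described above; the helper and
constant names follow the arm64 cpufeature conventions and are assumptions,
not a quote of the patch:

  #include <asm/cpufeature.h>
  #include <asm/sysreg.h>

  static bool pmuv3_implemented(void)
  {
          u64 dfr0 = read_sysreg(id_aa64dfr0_el1);
          /* Extract PMUVer as a signed 4-bit field: 0x8-0xf (including
           * the IMP DEF value 0xf) become negative, while 0x1-0x7 all
           * count as PMUv3-compatible. */
          int pmuver = cpuid_feature_extract_signed_field(dfr0,
                                          ID_AA64DFR0_PMUVER_SHIFT);

          return pmuver >= 1;
  }
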
Mark Rutland
f00fa5f416 arm64: pmuv3: use arm_pmu ACPI framework
Now that we have a framework to handle the ACPI bits, make the PMUv3
code use this. The framework is a little different to what was
originally envisaged, and we can drop some unused support code in the
process of moving over to it.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
[will: make armv8_pmu_driver_init static]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-04-11 16:29:54 +01:00
Mark Rutland
f1b36dcb5c arm64: pmuv3: handle !PMUv3 when probing
When probing via ACPI, we won't know up-front whether a CPU has a PMUv3
compatible PMU. Thus we need to consult ID registers during probe time.

This patch updates our PMUv3 probing code to test for the presence of
PMUv3 functionality before touching any PMUv3-specific registers, and
before updating the struct arm_pmu with PMUv3 data.

When a PMUv3-compatible PMU is not present, probing will return -ENODEV.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-04-11 16:29:54 +01:00
Wei Huang
b112c84a6f KVM: arm64: Fix the issues when guest PMCCFILTR is configured
KVM calls kvm_pmu_set_counter_event_type() when PMCCFILTR is configured.
But this function can't deal with PMCCFILTR correctly, because the evtCount
bits of PMCCFILTR, which are reserved as 0, conflict with the SW_INCR event
type of the other PMXEVTYPER<n> registers. To fix it, when eventsel == 0 the
function shouldn't return immediately; instead it needs to check further
whether select_idx is ARMV8_PMU_CYCLE_IDX.

Another issue is that KVM shouldn't blindly copy the eventsel bits of
PMCCFILTR to attr.config. Instead it ought to convert the request to the
"cpu cycle" event type (i.e. 0x11).

To support this patch and to prevent duplicated definitions, a limited
set of ARMv8 perf event types was relocated from perf_event.c to
asm/perf_event.h.

Cc: stable@vger.kernel.org # 4.6+
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Wei Huang <wei@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-11-18 09:06:58 +00:00
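
A hypothetical helper illustrating the two fixes described above (the real
change is inside kvm_pmu_set_counter_event_type(); the constants are the
usual ARMv8 PMU defines):

  /* eventsel == 0 only means SW_INCR for regular counters; for the cycle
   * counter (PMCCFILTR) the event field is RES0, so keep going and program
   * the architected "CPU cycles" event instead of copying eventsel. */
  static u64 pmccfiltr_event_type(u64 eventsel, u64 select_idx)
  {
          if (select_idx == ARMV8_PMU_CYCLE_IDX)
                  return ARMV8_PMUV3_PERFCTR_CPU_CYCLES;  /* 0x11 */

          return eventsel;
  }
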
Jeremy Linton
85023b2e13 arm64: pmu: Hoist pmu platform device name
Move the PMU name into a common header file so it may
be referenced by other users.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-16 17:11:34 +01:00
Jeremy Linton
236b9b91cd arm64: pmu: Probe default hw/cache counters
ARMv8 machines can identify the micro/arch defined counters
that are available on a machine. Add all these counters to the
default armv8 perf map. At run-time disable the counters which
are not available on the given PMU.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-16 17:11:33 +01:00
Mark Salter
dbee3a74ef arm64: pmu: add fallback probe table
In preparation for ACPI support, add a pmu_probe_info table to
the arm_pmu_device_probe() call. This table is used when
probing in the absence of a devicetree node for the PMU.

Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-16 17:11:33 +01:00
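
A sketch of such a fallback table, assuming the pmu_probe_info layout and
PMU_PROBE() helper from the shared arm_pmu code; the catch-all entry and
function names are illustrative:

  #include <linux/perf/arm_pmu.h>
  #include <linux/platform_device.h>

  /* With no devicetree node to match, fall back to MIDR-based probing;
   * a cpuid/mask of 0/0 matches any CPU and uses the generic PMUv3 init. */
  static const struct pmu_probe_info armv8_pmu_probe_table[] = {
          PMU_PROBE(0, 0, armv8_pmuv3_init),
          { /* sentinel */ }
  };

  static int armv8_pmu_device_probe(struct platform_device *pdev)
  {
          return arm_pmu_device_probe(pdev, armv8_pmu_of_device_ids,
                                      armv8_pmu_probe_table);
  }
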
Mark Rutland
569de9026c arm64: perf: move to common attr_group fields
By using a common attr_groups array, the common arm_pmu code can set up
common files (e.g. cpumask) for us in subsequent patches.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 14:51:51 +01:00
Kefeng Wang
826d05623f arm64: perf: Use the builtin_platform_driver
Use the builtin_platform_driver() to simplify code.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-08-22 10:00:48 +01:00
Will Deacon
4ba2578fa7 arm64: perf: don't expose CHAIN event in sysfs
The CHAIN event allows two 32-bit counters to be treated as a single
64-bit counter, under certain allocation restrictions on the PMU.

Whilst userspace could theoretically create CHAIN events using the raw
event syntax, we don't really want to advertise this in sysfs, since
it's useless in isolation. This patch removes the event from our /sys
entries.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-04-25 15:05:24 +01:00
Ashok Kumar
201a72b282 arm64/perf: Add Broadcom Vulcan PMU support
Broadcom Vulcan uses ARMv8 PMUv3 and supports most of
the ARMv8 recommended implementation defined events.

Added Vulcan event mappings for the perf map and perf_cache map.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-04-25 14:11:30 +01:00
Ashok Kumar
4b1a9e6934 arm64/perf: Filter common events based on PMCEIDn_EL0
The complete set of common architectural and micro-architectural
event numbers is filtered based on PMCEIDn_EL0 and exposed to /sys
using the is_visible function pointer in the events attribute_group.
To filter the events in the is_visible function, a PMCEID-based bitmap
is stored in the arm_pmu structure, and the id field from
perf_pmu_events_attr is checked against that bitmap.

The function which derives the event bitmap from PMCEIDn_EL0 is
executed on the CPUs on which the PMU is being initialised, to
support heterogeneous PMUs.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-04-25 14:11:10 +01:00
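
A condensed sketch of what such an is_visible hook looks like; member names
such as pmceid_bitmap are taken from the description above and should be
treated as assumptions:

  static umode_t
  armv8pmu_event_attr_is_visible(struct kobject *kobj,
                                 struct attribute *attr, int unused)
  {
          struct device *dev = kobj_to_dev(kobj);
          struct pmu *pmu = dev_get_drvdata(dev);
          struct arm_pmu *cpu_pmu = container_of(pmu, struct arm_pmu, pmu);
          struct perf_pmu_events_attr *pmu_attr;

          pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr.attr);

          /* Only expose events that PMCEIDn_EL0 says are implemented. */
          if (test_bit(pmu_attr->id, cpu_pmu->pmceid_bitmap))
                  return attr->mode;

          return 0;
  }
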
Ashok Kumar
bf2d4782e7 arm64/perf: Access pmu register using <read/write>_sys_reg
Change PMU register accesses to make use of <read/write>_sys_reg
from sysreg.h instead of accessing the registers directly.

Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-04-25 14:11:06 +01:00
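
For example, a raw MSR turns into a sysreg.h accessor along these lines
(an illustrative pair; the write_sysreg(val, reg) form shown is the generic
convention and is assumed here, not copied from the diff):

  /* Before: open-coded inline assembly to select a counter. */
  asm volatile("msr pmselr_el0, %0" : : "r" (counter));

  /* After: the generic accessor from <asm/sysreg.h>. */
  write_sysreg(counter, pmselr_el0);
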
Ashok Kumar
0893f74545 arm64/perf: Define complete ARMv8 recommended implementation defined events
Defined all the ARMv8 recommended implementation defined events
from J3 - "ARM recommendations for IMPLEMENTATION DEFINED event numbers"
in ARM DDI 0487A.g.

Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-04-25 14:11:06 +01:00
Ashok Kumar
03598fdbc9 arm64/perf: Changed events naming as per the ARM ARM
Changed all the common event name definitions as per the document
ARM DDI 0487A.g.

SoC-specific event names follow the general naming style in
the file and don't reflect any document.
Changed ARMV8_A53_PERFCTR_PREFETCH_LINEFILL to
ARMV8_A53_PERFCTR_PREF_LINEFILL to match the other SoC-specific
event names, which use the _PREF_ style.

Corrected the typo l21 to l2i.

Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-04-25 14:11:06 +01:00
Shannon Zhao
b8cfadfcef arm64: perf: Move PMU register related defines to asm/perf_event.h
To use the ARMv8 PMU related register defines from the KVM code, we move
the relevant definitions to asm/perf_event.h header file and rename them
with prefix ARMV8_PMU_. This allows us to get rid of kvm_perf_event.h.

Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-03-29 16:04:57 +01:00
Linus Torvalds
2c856e14da arm[64] perf updates for 4.6:
- Initial support for ARMv8.1 CPU PMUs
 
 - Support for the CPU PMU in Cavium ThunderX
 
 - CPU PMU support for systems running 32-bit Linux in secure mode
 
 - Support for the system PMU in ARM CCI-550 (Cache Coherent Interconnect)

Merge tag 'arm64-perf' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm[64] perf updates from Will Deacon:
 "I have another mixed bag of ARM-related perf patches here.

  It's about 25% CPU and 75% interconnect, but with drivers/bus/
  languishing without an obvious maintainer or tree, Olof and I agreed
  to keep all of these PMU patches together.  I suspect a whole load of
  code from drivers/bus/arm-* can be moved under drivers/perf/, so
  that's on the radar for the future.

  Summary:

   - Initial support for ARMv8.1 CPU PMUs

   - Support for the CPU PMU in Cavium ThunderX

   - CPU PMU support for systems running 32-bit Linux in secure mode

   - Support for the system PMU in ARM CCI-550 (Cache Coherent Interconnect)"

* tag 'arm64-perf' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (26 commits)
  drivers/perf: arm_pmu: avoid NULL dereference when not using devicetree
  arm64: perf: Extend ARMV8_EVTYPE_MASK to include PMCR.LC
  arm-cci: remove unused variable
  arm-cci: don't return value from void function
  arm-cci: make private functions static
  arm-cci: CoreLink CCI-550 PMU driver
  arm-cci500: Rearrange PMU driver for code sharing with CCI-550 PMU
  arm-cci: CCI-500: Work around PMU counter writes
  arm-cci: Provide hook for writing to PMU counters
  arm-cci: Add helper to enable PMU without synchornising counters
  arm-cci: Add routines to save/restore all counters
  arm-cci: Get the status of a counter
  arm-cci: write_counter: Remove redundant check
  arm-cci: Delay PMU counter writes to pmu::pmu_enable
  arm-cci: Refactor CCI PMU enable/disable methods
  arm-cci: Group writes to counter
  arm-cci: fix handling cpumask_any_but return value
  arm-cci: simplify sysfs attr handling
  drivers/perf: arm_pmu: implement CPU_PM notifier
  arm64: dts: Add Cavium ThunderX specific PMU
  ...
2016-03-21 13:14:16 -07:00
Will Deacon
fe638401a0 arm64: perf: Extend ARMV8_EVTYPE_MASK to include PMCR.LC
Commit 7175f0591e ("arm64: perf: Enable PMCR long cycle counter bit")
added initial support for a 64-bit cycle counter enabled using PMCR.LC.

Unfortunately, that patch doesn't extend ARMV8_EVTYPE_MASK, so any
attempts to set the enable bit are ignored by armv8pmu_pmcr_write.

This patch extends the mask to include the new bit.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-02-29 23:23:59 +00:00
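
The failure mode follows from the shape of the PMCR write helper, which
masks its argument before the MSR; a sketch, with macro names and values
assumed for illustration only:

  #define ARMV8_PMCR_LC    (1 << 6)   /* long (64-bit) cycle counter enable */
  #define ARMV8_PMCR_MASK  0x7f       /* writable bits, now including LC */

  static inline void armv8pmu_pmcr_write(u32 val)
  {
          /* Bits outside the mask are dropped before the write, so the
           * mask has to cover bit 6 or enabling PMCR.LC is a no-op. */
          val &= ARMV8_PMCR_MASK;
          isb();
          asm volatile("msr pmcr_el0, %0" : : "r" (val));
  }
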
Marc Zyngier
d98ecdaca2 arm64: perf: Count EL2 events if the kernel is running in HYP
When the kernel is running in HYP (with VHE), it is necessary to
include EL2 events if the user requests counting kernel or
hypervisor events.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-02-29 18:34:18 +00:00
Jan Glauber
c210ae80e4 arm64: perf: Extend event mask for ARMv8.1
ARMv8.1 increases the PMU event number space to 16 bits, so increase
the EVTYPE mask accordingly.

Signed-off-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-02-18 17:23:41 +00:00
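
Concretely, the event field of the event-type registers grows from
PMXEVTYPER[9:0] to [15:0]; a sketch of the widened mask (macro name and old
value assumed):

  /* ARMv8.0 evtCount was 10 bits wide (mask 0x3ff); ARMv8.1 widens the
   * field to 16 bits, so the event-type mask grows to match. */
  #define ARMV8_EVTYPE_EVENT      0xffff  /* was 0x3ff */
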
Jan Glauber
7175f0591e arm64: perf: Enable PMCR long cycle counter bit
With the long cycle counter bit (LC) disabled, the cycle counter does not
work on the ThunderX SoC (ThunderX only implements AArch64).
Also, according to the documentation, LC == 0 is deprecated.

To keep the code simple, the patch does not introduce 64-bit wide counter
functions. Instead, writing the cycle counter always sets the upper
32 bits, so overflow interrupts are generated as before.

Original patch from Andrew Pinksi <Andrew.Pinksi@caviumnetworks.com>

Signed-off-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-02-18 17:23:41 +00:00
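
A sketch of the cycle-counter write path this describes (the helper name is
hypothetical; the upper-half trick is the point):

  /* Keep the existing 32-bit overflow handling: pre-load the upper half
   * with all-ones so the overflow interrupt still fires when the low
   * 32 bits wrap, even though the counter itself is now 64 bits wide. */
  static void armv8pmu_write_cycle_counter(u32 value)
  {
          u64 value64 = 0xffffffff00000000ULL | value;

          asm volatile("msr pmccntr_el0, %0" : : "r" (value64));
  }
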
Jan Glauber
d0aa2bffcf arm64/perf: Add Cavium ThunderX PMU support
Support PMU events on Cavium's ThunderX SoC. ThunderX supports
some additional counters compared to the default ARMv8 PMUv3:

- branch instructions counter
- stall frontend & backend counters
- L1 dcache load & store counters
- L1 icache counters
- iTLB & dTLB counters
- L1 dcache & icache prefetch counters

Signed-off-by: Jan Glauber <jglauber@cavium.com>
[will: capitalisation]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-02-18 17:23:39 +00:00
Jan Glauber
5f140ccef3 arm64: perf: Rename Cortex A57 events
The implemented Cortex A57 events are, strictly speaking, not
A57-specific. They are ARM recommended implementation defined events
and can be found on other ARMv8 SoCs like Cavium ThunderX too.

Therefore rename these events to allow using them in other
implementations too.

Signed-off-by: Jan Glauber <jglauber@cavium.com>
[will: capitalisation and ordering]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-02-18 17:22:42 +00:00
Will Deacon
5d7ee87708 arm64: perf: add support for Cortex-A72
Cortex-A72 has a PMUv3 implementation that is compatible with the PMU
implemented by Cortex-A57.

This patch hooks up the new compatible string so that the Cortex-A57
event mappings are used.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-22 14:45:35 +00:00
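
The hook-up amounts to one extra of_device_id entry pointing at the existing
Cortex-A57 event mappings; a sketch, with table and init-function names
assumed:

  static const struct of_device_id armv8_pmu_of_device_ids[] = {
          {.compatible = "arm,armv8-pmuv3",     .data = armv8_pmuv3_init},
          {.compatible = "arm,cortex-a57-pmu",  .data = armv8_a57_pmu_init},
          /* A72's PMU is A57-compatible, so its init reuses the A57 maps. */
          {.compatible = "arm,cortex-a72-pmu",  .data = armv8_a72_pmu_init},
          {},
  };
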
Will Deacon
57d7412395 arm64: perf: add format entry to describe event -> config mapping
It's all very well providing an events directory to userspace that
details our events in terms of "event=0xNN", but if we don't define how
to encode the "event" field in the perf attr.config, then it's a waste
of time.

This patch adds a single format entry to describe that the event field
occupies the bottom 10 bits of our config field on ARMv8 (PMUv3).

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-22 14:45:07 +00:00
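
A sketch of the format entry and its sysfs group (the wiring is assumed; the
substance is the single "config:0-9" string):

  PMU_FORMAT_ATTR(event, "config:0-9");   /* event number in config[9:0] */

  static struct attribute *armv8_pmuv3_format_attrs[] = {
          &format_attr_event.attr,
          NULL,
  };

  static struct attribute_group armv8_pmuv3_format_attr_group = {
          .name = "format",
          .attrs = armv8_pmuv3_format_attrs,
  };

With this in place, an invocation such as perf stat -e armv8_pmuv3/event=0x11/
knows to place the 0x11 into bits 0-9 of attr.config.
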
Lorenzo Pieralisi
60792ad349 arm64: kernel: enforce pmuserenr_el0 initialization and restore
The pmuserenr_el0 register value is architecturally UNKNOWN on reset.
Current kernel code resets that register value iff the core pmu device is
correctly probed in the kernel. On platforms with missing DT pmu nodes (or
disabled perf events in the kernel), the pmu is not probed, therefore the
pmuserenr_el0 register is not reset in the kernel, which means that its
value retains the architecturally UNKNOWN reset value (the system
may run with e.g. pmuserenr_el0 == 0x1, which means that PMU counter access
is available at EL0, which must be disallowed).

This patch adds code that resets pmuserenr_el0 on cold boot and restores
it on core resume from shutdown, so that the pmuserenr_el0 setup is
always enforced in the kernel.

Cc: <stable@vger.kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-21 14:43:04 +00:00
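
The enforcement itself boils down to zeroing the register on the cold-boot
and resume paths; a C-level illustration (the actual change lives in the
low-level CPU setup code, so the helper here is hypothetical):

  static inline void reset_pmuserenr_el0(void)
  {
          /* UNKNOWN out of reset: force it to 0 so EL0 has no access to
           * the PMU unless the perf driver later grants it explicitly. */
          write_sysreg(0, pmuserenr_el0);
  }
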
Drew Richardson
9e9caa6a49 arm64: perf: Add event descriptions
Add additional information about the ARM architected hardware events
to make counters self describing. This makes the hardware PMUs easier
to use as perf list contains possible events instead of users having
to refer to documentation like the ARM TRMs.

Signed-off-by: Drew Richardson <drew.richardson@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-11-16 17:09:02 +00:00
Drew Richardson
90381cba64 arm64: perf: Convert event enums to #defines
The enums are not necessary and this allows the event values to be
used to construct static strings at compile time.

Signed-off-by: Drew Richardson <drew.richardson@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-11-16 17:09:02 +00:00
Mark Rutland
62a4dda9d6 arm64: perf: add Cortex-A57 support
The Cortex-A57 PMU supports a few events outside of the required PMUv3
set that are rather useful.

This patch adds the event map data for said events.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 14:25:24 +01:00
Mark Rutland
ac82d12772 arm64: perf: add Cortex-A53 support
The Cortex-A53 PMU supports a few events outside of the required PMUv3
set that are rather useful.

This patch adds the event map data for said events.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 14:25:07 +01:00
Mark Rutland
6475b2d846 arm64: perf: move to shared arm_pmu framework
Now that the arm_pmu framework has been factored out to drivers/perf we
can make use of it for arm64, gaining support for heterogeneous PMUs
and unifying the two codebases before they diverge further.

The as yet unused PMU name for PMUv3 is changed to armv8_pmuv3, matching
the style previously applied to the 32-bit PMUs.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 14:24:48 +01:00
Mark Rutland
ae2fb7ece9 arm64: perf: condense event number maps
Most of the cache events an architecture might support do not map well
to those provided by the ARM architecture, and as such most entries in
the event number maps are *_UNSUPPORTED. Unfortunately, as 0 is a valid
physical event identifier, the *_UNSUPPORTED macros expand to a non-zero
value, and thus each unsupported event must be explicitly initialised as
such rather than relying on implicit zero-initialisation. This leads to
large diffs when adding support for a new CPU, and
makes it difficult to spot the important information.

This patch follows arch/arm/ in making use of PERF_*_ALL_UNSUPPORTED
macros to initialise all entries to *_UNSUPPORTED before overriding this
for the specific events we actually support, resulting in a significant
source code reduction.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:40 +01:00
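
The resulting pattern looks roughly like this (the specific events listed
are placeholders for whichever CPU is being added):

  /* Default every slot to unsupported, then override only the events the
   * CPU actually implements. */
  static const unsigned armv8_pmuv3_perf_map[PERF_COUNT_HW_MAX] = {
          PERF_MAP_ALL_UNSUPPORTED,
          [PERF_COUNT_HW_CPU_CYCLES]   = ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES,
          [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED,
  };
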
Mark Rutland
52da443ec4 arm64: perf: factor out callchain code
We currently bundle the callchain handling code with the PMU code,
despite the fact the two are distinct, and the former can be useful even
in the absence of the latter.

Follow the example of arch/arm and factor the callchain handling into
its own file dependent on CONFIG_PERF_EVENTS rather than
CONFIG_HW_PERF_EVENTS.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:39 +01:00
Sudeep Holla
d09ce834df arm64: perf: replace arch_find_n_match_cpu_physical_id with of_cpu_device_node_get
arch_find_n_match_cpu_physical_id parses the device tree to get the
device node for a given logical cpu index. However, since ARM PMUs get
probed after the CPU device nodes are stashed while registering the
cpus, we can use of_cpu_device_node_get to avoid another DT parse.

This patch replaces arch_find_n_match_cpu_physical_id with
of_cpu_device_node_get to reuse the stashed value directly instead.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:38 +01:00
Suzuki K. Poulose
2d23ed04de arm64: perf: Remove unnecessary printk
The arm64 PMU code prints an error message in event_init() when
no hardware PMU is available. This is pretty annoying, as
it keeps printing the message for every single attempt, unnecessarily
flooding the kernel logs. The return code is sufficient for
the user to figure out the reason.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 11:08:38 +01:00
Shannon Zhao
b265da5a45 arm64: perf: fix unassigned cpu_pmu->plat_device when probing PMU PPIs
Commit d795ef9aa8 ("arm64: perf: don't warn about missing
interrupt-affinity property for PPIs") added a check for PPIs so that
we avoid parsing the interrupt-affinity property for these naturally
affine interrupts.

Unfortunately, this check can trigger an early (successful) return and
we will not assign the value of cpu_pmu->plat_device. This patch fixes
the issue.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-06-30 18:13:05 +01:00
Stephen Boyd
18a11b5e79 arm64: perf: Don't use of_node after putting it
It's possible, albeit unlikely, that using the of_node here will
reference freed memory. Call of_node_put() after printing the
name to be safe.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-06-30 18:13:03 +01:00
Anders Roxell
96045ed486 arm64: Mark PMU interrupt IRQF_NO_THREAD
Mark the PMU interrupts as non-threadable, as is the case with
arch/arm: d9c3365 ARM: 7813/1: Mark pmu interupt IRQF_NO_THREAD

Acked-by: Will Deacon <will.deacon@arm.com>
Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-05-19 15:27:42 +01:00
Will Deacon
4801ba338a arm64: perf: fix memory leak when probing PMU PPIs
Commit d795ef9aa8 ("arm64: perf: don't warn about missing
interrupt-affinity property for PPIs") added a check for PPIs so that
we avoid parsing the interrupt-affinity property for these naturally
affine interrupts.

Unfortunately, this check can trigger an early (successful) return and
we will leak the irqs array. This patch fixes the issue by reordering
the code so that the check is performed before any independent
allocation.

Reported-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-05-12 16:50:21 +01:00
Suzuki K. Poulose
8291fd04d8 arm64: perf: Fix the pmu node name in warning message
With commit d5efd9cc9c ("arm64: pmu: add support for interrupt-affinity
property"), we print a warning when we find a PMU SPI with a missing
missing interrupt-affinity property in a pmu node. Unfortunately, we
pass the wrong (NULL) device node to of_node_full_name, resulting in
unhelpful messages such as:

 hw perfevents: Failed to parse <no-node>/interrupt-affinity[0]

This patch fixes the name to that of the pmu node.

Fixes: d5efd9cc9c (arm64: pmu: add support for interrupt-affinity property)
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-04-30 12:11:30 +01:00
Will Deacon
d795ef9aa8 arm64: perf: don't warn about missing interrupt-affinity property for PPIs
PPIs are affine by nature, so the interrupt-affinity property is not
used and therefore we shouldn't print a warning in its absence.

Reported-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Reviewed-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-04-30 12:11:23 +01:00
Will Deacon
d5efd9cc9c arm64: pmu: add support for interrupt-affinity property
Historically, the PMU devicetree bindings have expected SPIs to be
listed in order of *logical* CPU number. This is problematic for
bootloaders, especially when the boot CPU (logical ID 0) isn't listed
first in the devicetree.

This patch adds a new optional property, interrupt-affinity, to the
PMU node which allows the interrupt affinity to be described using
a list of phandles to CPU nodes, with each entry in the list
corresponding to the SPI at the same index in the interrupts property.

Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-24 15:09:47 +00:00
Suzuki K. Poulose
8fff105e13 arm64: perf: reject groups spanning multiple HW PMUs
The perf core implicitly rejects events spanning multiple HW PMUs, as in
these cases the event->ctx will differ. However this validation is
performed after pmu::event_init() is called in perf_init_event(), and
thus pmu::event_init() may be called with a group leader from a
different HW PMU.

The ARM64 PMU driver does not take this fact into account, and when
validating groups assumes that it can call to_arm_pmu(event->pmu) for
any HW event. When the event in question is from another HW PMU this is
wrong, and results in dereferencing garbage.

This patch updates the ARM64 PMU driver to first test for and reject
events from other PMUs, moving the to_arm_pmu and related logic after
this test. Fixes a crash triggered by perf_fuzzer on Linux-4.0-rc2, with
a CCI PMU present:

Bad mode in Synchronous Abort handler detected, code 0x86000006 -- IABT (current EL)
CPU: 0 PID: 1371 Comm: perf_fuzzer Not tainted 3.19.0+ #249
Hardware name: V2F-1XV7 Cortex-A53x2 SMM (DT)
task: ffffffc07c73a280 ti: ffffffc07b0a0000 task.ti: ffffffc07b0a0000
PC is at 0x0
LR is at validate_event+0x90/0xa8
pc : [<0000000000000000>] lr : [<ffffffc000090228>] pstate: 00000145
sp : ffffffc07b0a3ba0

[<          (null)>]           (null)
[<ffffffc0000907d8>] armpmu_event_init+0x174/0x3cc
[<ffffffc00015d870>] perf_try_init_event+0x34/0x70
[<ffffffc000164094>] perf_init_event+0xe0/0x10c
[<ffffffc000164348>] perf_event_alloc+0x288/0x358
[<ffffffc000164c5c>] SyS_perf_event_open+0x464/0x98c
Code: bad PC value

Also cleans up the code to use the arm_pmu only when we know
that we are dealing with an arm pmu event.

Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Peter Ziljstra (Intel) <peterz@infradead.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:45:51 +00:00
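
The heart of the fix is an ordering change in validate_event(): reject
software events and events owned by other PMUs before ever calling
to_arm_pmu(). A sketch, with the details around the real function assumed:

  static int validate_event(struct pmu *pmu, struct pmu_hw_events *hw_events,
                            struct perf_event *event)
  {
          struct arm_pmu *armpmu;

          if (is_software_event(event))
                  return 1;

          /* Reject groups spanning multiple HW PMUs (e.g. CPU + CCI):
           * to_arm_pmu() must never be applied to a foreign event. */
          if (event->pmu != pmu)
                  return 0;

          if (event->state < PERF_EVENT_STATE_OFF)
                  return 1;

          armpmu = to_arm_pmu(event->pmu);
          return armpmu->get_event_idx(hw_events, event) >= 0;
  }
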
Daniel Thompson
cbbf2e6ed7 arm64: perf: Prevent wraparound during overflow
If the overflow threshold for a counter is set above or near the
0xffffffff boundary then the kernel may lose track of the overflow
causing only events that occur *after* the overflow to be recorded.
Specifically the problem occurs when the value of the performance counter
overtakes its original programmed value due to wrap around.

Typical solutions to this problem are either to avoid programming in
values likely to be overtaken or to treat the overflow bit as the 33rd
bit of the counter.

It's somewhat fiddly to refactor the code to correctly handle the 33rd bit
during irqsave sections (context switches, for example), so instead we take
the simpler approach of avoiding values likely to be overtaken.

We set the limit to half of max_period because this matches the limit
imposed in __hw_perf_event_init(). This causes a doubling of the interrupt
rate for large threshold values; however, even with a very fast counter
ticking at 4GHz, the interrupt rate would still only be ~1Hz.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-12-04 10:26:54 +00:00
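
The guard amounts to a clamp when programming the event period; a sketch as
a stand-alone helper (the real change sits in the period-setting path):

  static u64 armpmu_clamp_period(u64 left, u64 max_period)
  {
          /* Never program more than half the counter range, so the counter
           * cannot overtake its start value before the IRQ is serviced. */
          if (left > (max_period >> 1))
                  left = max_period >> 1;

          return left;
  }
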
Uwe Kleine-König
c8fdd497a4 ARM64: make of_device_ids const
of_device_ids (i.e. compatible strings and the respective data) are not
supposed to change at runtime. All functions working with of_device_ids
provided by <linux/of.h> work with const of_device_ids. So mark the
only non-const struct in arch/arm64 as const, too.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-10-03 14:49:28 +01:00
Mark Salter
ff268ff7f3 arm64: fix !CONFIG_COMPAT build failures
Recent arm64 builds using CONFIG_ARM64_64K_PAGES are failing with:

  arch/arm64/kernel/perf_regs.c: In function ‘perf_reg_abi’:
  arch/arm64/kernel/perf_regs.c:41:2: error: implicit declaration of function ‘is_compat_thread’

  arch/arm64/kernel/perf_event.c:1398:2: error: unknown type name ‘compat_uptr_t’

This is due to some recent arm64 perf commits with compat support:

  commit 23c7d70d55:
    ARM64: perf: add support for frame pointer unwinding in compat mode

  commit 2ee0d7fd36:
    ARM64: perf: add support for perf registers API

Those patches make the arm64 kernel unbuildable if CONFIG_COMPAT is not
defined and CONFIG_ARM64_64K_PAGES depends on !CONFIG_COMPAT. This patch
allows the arm64 kernel to build with and without CONFIG_COMPAT.

Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-06 23:25:04 +01:00
Jean Pihet
23c7d70d55 ARM64: perf: add support for frame pointer unwinding in compat mode
When profiling a 32-bit application, user space callchain unwinding
using the frame pointer is performed in compat mode. The code is taken
over from the AArch32 code and adapted to work on AArch64.

Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-03-13 11:22:38 +00:00