Performance Counters for Linux
------------------------------

Performance counters are special hardware registers available on most modern
CPUs. These registers count the number of certain types of hw events, such
as instructions executed, cache misses suffered, or branches mis-predicted -
without slowing down the kernel or applications. These registers can also
trigger interrupts when a threshold number of events has passed - and can
thus be used to profile the code that runs on that CPU.

The Linux Performance Counter subsystem provides an abstraction of these
hardware capabilities. It provides per task and per CPU counters, counter
groups, and it provides event capabilities on top of those. It
provides "virtual" 64-bit counters, regardless of the width of the
underlying hardware counters.

Performance counters are accessed via special file descriptors.
There's one file descriptor per virtual counter used.

The special file descriptor is opened via the sys_perf_event_open()
system call:

   int sys_perf_event_open(struct perf_event_attr *hw_event_uptr,
                           pid_t pid, int cpu, int group_fd,
                           unsigned long flags);

The syscall returns the new fd. The fd can be used via the normal
VFS system calls: read() can be used to read the counter, fcntl()
can be used to set the blocking mode, etc.

Multiple counters can be kept open at a time, and the counters
can be poll()ed.

When creating a new counter fd, 'perf_event_attr' is:

struct perf_event_attr {
        /*
         * The MSB of the config word signifies if the rest contains cpu
         * specific (raw) counter configuration data, if unset, the next
         * 7 bits are an event type and the rest of the bits are the event
         * identifier.
         */
        __u64                   config;

        __u64                   irq_period;
        __u32                   record_type;
        __u32                   read_format;

        __u64                   disabled       :  1, /* off by default        */
                                inherit        :  1, /* children inherit it   */
                                pinned         :  1, /* must always be on PMU */
                                exclusive      :  1, /* only group on PMU     */
                                exclude_user   :  1, /* don't count user      */
                                exclude_kernel :  1, /* ditto kernel          */
                                exclude_hv     :  1, /* ditto hypervisor      */
                                exclude_idle   :  1, /* don't count when idle */
                                mmap           :  1, /* include mmap data     */
                                munmap         :  1, /* include munmap data   */
                                comm           :  1, /* include comm data     */

                                __reserved_1   : 52;

        __u32                   extra_config_len;
        __u32                   wakeup_events;  /* wakeup every n events */

        __u64                   __reserved_2;
        __u64                   __reserved_3;
};

The 'config' field specifies what the counter should count. It
is divided into 3 bit-fields:

raw_type: 1 bit   (most significant bit)        0x8000_0000_0000_0000
type:     7 bits  (next most significant)       0x7f00_0000_0000_0000
event_id: 56 bits (least significant)           0x00ff_ffff_ffff_ffff

If 'raw_type' is 1, then the counter will count a hardware event
specified by the remaining 63 bits of event_config. The encoding is
machine-specific.

If 'raw_type' is 0, then the 'type' field says what kind of counter
this is, with the following encoding:

enum perf_type_id {
        PERF_TYPE_HARDWARE              = 0,
        PERF_TYPE_SOFTWARE              = 1,
        PERF_TYPE_TRACEPOINT            = 2,
};

A counter of PERF_TYPE_HARDWARE will count the hardware event
specified by 'event_id':

/*
 * Generalized performance counter event types, used by the hw_event.event_id
 * parameter of the sys_perf_event_open() syscall:
 */
enum perf_hw_id {
        /*
         * Common hardware events, generalized by the kernel:
         */
        PERF_COUNT_HW_CPU_CYCLES                = 0,
        PERF_COUNT_HW_INSTRUCTIONS              = 1,
        PERF_COUNT_HW_CACHE_REFERENCES          = 2,
        PERF_COUNT_HW_CACHE_MISSES              = 3,
        PERF_COUNT_HW_BRANCH_INSTRUCTIONS       = 4,
        PERF_COUNT_HW_BRANCH_MISSES             = 5,
        PERF_COUNT_HW_BUS_CYCLES                = 6,
};

These are standardized types of events that work relatively uniformly
on all CPUs that implement Performance Counters support under Linux,
although there may be variations (e.g., different CPUs might count
cache references and misses at different levels of the cache hierarchy).
If a CPU is not able to count the selected event, then the system call
will return -EINVAL.

More hw_event_types are supported as well, but they are CPU-specific
and accessed as raw events. For example, to count "External bus
cycles while bus lock signal asserted" events on Intel Core CPUs, pass
in a 0x4064 event_id value and set hw_event.raw_type to 1.

A counter of type PERF_TYPE_SOFTWARE will count one of the available
software events, selected by 'event_id':

/*
 * Special "software" counters provided by the kernel, even if the hardware
 * does not support performance counters. These counters measure various
 * physical and sw events of the kernel (and allow the profiling of them as
 * well):
 */
enum perf_sw_ids {
        PERF_COUNT_SW_CPU_CLOCK         = 0,
        PERF_COUNT_SW_TASK_CLOCK        = 1,
        PERF_COUNT_SW_PAGE_FAULTS       = 2,
        PERF_COUNT_SW_CONTEXT_SWITCHES  = 3,
        PERF_COUNT_SW_CPU_MIGRATIONS    = 4,
        PERF_COUNT_SW_PAGE_FAULTS_MIN   = 5,
        PERF_COUNT_SW_PAGE_FAULTS_MAJ   = 6,
        PERF_COUNT_SW_ALIGNMENT_FAULTS  = 7,
        PERF_COUNT_SW_EMULATION_FAULTS  = 8,
};

Counters of the type PERF_TYPE_TRACEPOINT are available when the ftrace event
tracer is available, and event_id values can be obtained from
/debug/tracing/events/*/*/id


Counters come in two flavours: counting counters and sampling
counters. A "counting" counter is one that is used for counting the
number of events that occur, and is characterised by having
irq_period = 0.


A read() on a counter returns the current value of the counter and possible
additional values as specified by 'read_format'; each value is a u64 (8 bytes)
in size.

/*
 * Bits that can be set in hw_event.read_format to request that
 * reads on the counter should return the indicated quantities,
 * in increasing order of bit value, after the counter value.
 */
enum perf_event_read_format {
        PERF_FORMAT_TOTAL_TIME_ENABLED  = 1,
        PERF_FORMAT_TOTAL_TIME_RUNNING  = 2,
};

Using these additional values one can establish the overcommit ratio for a
particular counter, allowing one to take the round-robin scheduling effect
into account.

A "sampling" counter is one that is set up to generate an interrupt
every N events, where N is given by 'irq_period'. A sampling counter
has irq_period > 0. The record_type controls what data is recorded on each
interrupt:

/*
 * Bits that can be set in hw_event.record_type to request information
 * in the overflow packets.
 */
enum perf_event_record_format {
        PERF_RECORD_IP          = 1U << 0,
        PERF_RECORD_TID         = 1U << 1,
        PERF_RECORD_TIME        = 1U << 2,
        PERF_RECORD_ADDR        = 1U << 3,
        PERF_RECORD_GROUP       = 1U << 4,
        PERF_RECORD_CALLCHAIN   = 1U << 5,
};

Such (and other) events will be recorded in a ring-buffer, which is
available to user-space using mmap() (see below).

The 'disabled' bit specifies whether the counter starts out disabled
or enabled. If it is initially disabled, it can be enabled by ioctl
or prctl (see below).

The 'inherit' bit, if set, specifies that this counter should count
events on descendant tasks as well as the task specified. This only
applies to new descendants, not to any existing descendants at the
time the counter is created (nor to any new descendants of existing
descendants).

The 'pinned' bit, if set, specifies that the counter should always be
on the CPU if at all possible. It only applies to hardware counters
and only to group leaders. If a pinned counter cannot be put onto the
CPU (e.g. because there are not enough hardware counters or because of
a conflict with some other event), then the counter goes into an
'error' state, where reads return end-of-file (i.e. read() returns 0)
until the counter is subsequently enabled or disabled.

The 'exclusive' bit, if set, specifies that when this counter's group
is on the CPU, it should be the only group using the CPU's counters.
In future, this will allow sophisticated monitoring programs to supply
extra configuration information via 'extra_config_len' to exploit
advanced features of the CPU's Performance Monitor Unit (PMU) that are
not otherwise accessible and that might disrupt other hardware
counters.

The 'exclude_user', 'exclude_kernel' and 'exclude_hv' bits provide a
way to request that counting of events be restricted to times when the
CPU is in user, kernel and/or hypervisor mode.

Furthermore the 'exclude_host' and 'exclude_guest' bits provide a way
to request counting of events restricted to guest and host contexts when
using Linux as the hypervisor.

The 'mmap' and 'munmap' bits allow recording of PROT_EXEC mmap/munmap
operations; these can be used to relate userspace IP addresses to actual
code, even after the mapping (or even the whole process) is gone.
These events are recorded in the ring-buffer (see below).

The 'comm' bit allows tracking of process comm data on process creation.
This too is recorded in the ring-buffer (see below).

The 'pid' parameter to the sys_perf_event_open() system call allows the
counter to be specific to a task:

 pid == 0: if the pid parameter is zero, the counter is attached to the
 current task.

 pid > 0: the counter is attached to a specific task (if the current task
 has sufficient privilege to do so)

 pid < 0: all tasks are counted (per cpu counters)

The 'cpu' parameter allows a counter to be made specific to a CPU:

 cpu >= 0: the counter is restricted to a specific CPU
 cpu == -1: the counter counts on all CPUs

(Note: the combination of 'pid == -1' and 'cpu == -1' is not valid.)

A 'pid > 0' and 'cpu == -1' counter is a per task counter that counts
events of that task and 'follows' that task to whatever CPU the task
gets scheduled to. Per task counters can be created by any user, for
their own tasks.

A 'pid == -1' and 'cpu == x' counter is a per CPU counter that counts
all events on CPU-x. Per CPU counters need CAP_PERFMON or CAP_SYS_ADMIN
privilege.

The 'flags' parameter is currently unused and must be zero.

The 'group_fd' parameter allows counter "groups" to be set up. A
counter group has one counter which is the group "leader". The leader
is created first, with group_fd = -1 in the sys_perf_event_open call
that creates it. The rest of the group members are created
subsequently, with group_fd giving the fd of the group leader.
(A single counter on its own is created with group_fd = -1 and is
considered to be a group with only 1 member.)

A counter group is scheduled onto the CPU as a unit, that is, it will
only be put onto the CPU if all of the counters in the group can be
put onto the CPU. This means that the values of the member counters
can be meaningfully compared, added, divided (to get ratios), etc.,
with each other, since they have counted events for the same set of
executed instructions.

As stated, asynchronous events, like counter overflow or PROT_EXEC mmap
tracking, are logged into a ring-buffer. This ring-buffer is created and
accessed through mmap().

The mmap size should be 1+2^n pages, where the first page is a meta-data page
(struct perf_event_mmap_page) that contains various bits of information such
as where the ring-buffer head is.

/*
 * Structure of the page that can be mapped via mmap
 */
struct perf_event_mmap_page {
        __u32   version;                /* version number of this structure */
        __u32   compat_version;         /* lowest version this is compat with */

        /*
         * Bits needed to read the hw counters in user-space.
         *
         *   u32 seq;
         *   s64 count;
         *
         *   do {
         *     seq = pc->lock;
         *
         *     barrier()
         *     if (pc->index) {
         *       count = pmc_read(pc->index - 1);
         *       count += pc->offset;
         *     } else
         *       goto regular_read;
         *
         *     barrier();
         *   } while (pc->lock != seq);
         *
         * NOTE: for obvious reason this only works on self-monitoring
         *       processes.
         */
        __u32   lock;                   /* seqlock for synchronization */
        __u32   index;                  /* hardware counter identifier */
        __s64   offset;                 /* add to hardware counter value */

        /*
         * Control data for the mmap() data buffer.
         *
         * User-space reading this value should issue an rmb(), on SMP capable
         * platforms, after reading this value -- see perf_event_wakeup().
         */
        __u32   data_head;              /* head in the data section */
};

NOTE: the hw-counter userspace bits are arch specific and are currently only
implemented on powerpc.

The following 2^n pages are the ring-buffer which contains events of the form:

#define PERF_RECORD_MISC_KERNEL         (1 << 0)
#define PERF_RECORD_MISC_USER           (1 << 1)
#define PERF_RECORD_MISC_OVERFLOW       (1 << 2)

struct perf_event_header {
        __u32   type;
        __u16   misc;
        __u16   size;
};

enum perf_event_type {

        /*
         * The MMAP events record the PROT_EXEC mappings so that we can
         * correlate userspace IPs to code. They have the following structure:
         *
         * struct {
         *      struct perf_event_header        header;
         *
         *      u32                             pid, tid;
         *      u64                             addr;
         *      u64                             len;
         *      u64                             pgoff;
         *      char                            filename[];
         * };
         */
        PERF_RECORD_MMAP                = 1,
        PERF_RECORD_MUNMAP              = 2,

        /*
         * struct {
         *      struct perf_event_header        header;
         *
         *      u32                             pid, tid;
         *      char                            comm[];
         * };
         */
        PERF_RECORD_COMM                = 3,

        /*
         * When header.misc & PERF_RECORD_MISC_OVERFLOW the event_type field
         * will be PERF_RECORD_*
         *
         * struct {
         *      struct perf_event_header        header;
         *
         *      { u64                   ip;       } && PERF_RECORD_IP
         *      { u32                   pid, tid; } && PERF_RECORD_TID
         *      { u64                   time;     } && PERF_RECORD_TIME
         *      { u64                   addr;     } && PERF_RECORD_ADDR
         *
         *      { u64                   nr;
         *        { u64 event, val; }   cnt[nr];  } && PERF_RECORD_GROUP
         *
         *      { u16                   nr,
         *                              hv,
         *                              kernel,
         *                              user;
         *        u64                   ips[nr];  } && PERF_RECORD_CALLCHAIN
         * };
         */
};

NOTE: PERF_RECORD_CALLCHAIN is arch specific and currently only implemented
on x86.

Notification of new events is possible through poll()/select()/epoll() and
fcntl() managing signals.

Normally a notification is generated for every page filled; however, one can
additionally set perf_event_attr.wakeup_events to generate one every
so many counter overflow events.

Future work will include a splice() interface to the ring-buffer.

Counters can be enabled and disabled in two ways: via ioctl and via
prctl. When a counter is disabled, it doesn't count or generate
events but does continue to exist and maintain its count value.

An individual counter can be enabled with

        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

or disabled with

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

For a counter group, pass PERF_IOC_FLAG_GROUP as the third argument.
Enabling or disabling the leader of a group enables or disables the
whole group; that is, while the group leader is disabled, none of the
counters in the group will count. Enabling or disabling a member of a
group other than the leader only affects that counter - disabling a
non-leader stops that counter from counting but doesn't affect any
other counter.

Additionally, non-inherited overflow counters can use

        ioctl(fd, PERF_EVENT_IOC_REFRESH, nr);

to enable a counter for 'nr' events, after which it gets disabled again.

A process can enable or disable all the counter groups that are
attached to it, using prctl:

        prctl(PR_TASK_PERF_EVENTS_ENABLE);

        prctl(PR_TASK_PERF_EVENTS_DISABLE);

This applies to all counters on the current process, whether created
by this process or by another, and doesn't affect any counters that
this process has created on other processes. It only enables or
disables the group leaders, not any other members in the groups.

Arch requirements
-----------------

If your architecture does not have hardware performance metrics, you can
still use the generic software counters based on hrtimers for sampling.

So to start with, in order to add HAVE_PERF_EVENTS to your Kconfig, you
will need at least this:

- asm/perf_event.h - a basic stub will suffice at first
- support for atomic64 types (and associated helper functions)

If your architecture does have hardware capabilities, you can override the
weak stub hw_perf_event_init() to register hardware counters.

Architectures that have d-cache aliasing issues, such as Sparc and ARM,
should select PERF_USE_VMALLOC in order to avoid these for perf mmap().