Threads share map_groups; all map events are merged into it. Thus we
can send mmaps only for the thread group leader. Otherwise it takes
ages to attach to and record anything from processes with many vmas
and threads.
The thread group leader could already be dead, but it seems perf cannot
handle that case anyway.
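As a rough illustration of the idea only (not the actual tools/perf
synthesis code; the synthesize_*() helpers are hypothetical), the
per-thread loop only needs to walk /proc/<tid>/maps for the thread
whose tid equals its tgid:

#include <stdio.h>

/* Hypothetical perf-side helpers, assumed to exist elsewhere. */
void synthesize_comm_event(int tid);
void synthesize_mmap_events(int tid);

/* Read the Tgid: field from /proc/<tid>/status; returns -1 on error. */
static int thread_group_leader_of(int tid)
{
	char path[64], line[256];
	int tgid = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/status", tid);
	f = fopen(path, "r");
	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "Tgid: %d", &tgid) == 1)
			break;
	fclose(f);
	return tgid;
}

/* Synthesize mmap events only once per process, for the group leader. */
static void synthesize_thread(int tid)
{
	synthesize_comm_event(tid);
	if (thread_group_leader_of(tid) == tid)
		synthesize_mmap_events(tid);	/* walks /proc/<tid>/maps */
}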
Testing dummy:
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <pthread.h>
#include <unistd.h>
void *thread(void *arg)
{
	pause();
	return NULL;
}

int main(int argc, char **argv)
{
	int threads = 10000;
	int vmas = 50000;
	pthread_t th;

	for (int i = 0; i < threads; i++)
		pthread_create(&th, NULL, thread, NULL);

	for (int i = 0; i < vmas; i++)
		mmap(NULL, 4096, (i & 1) ? PROT_READ : PROT_WRITE,
		     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);

	sleep(60);
	return 0;
}
Comment by Jiri Olsa:
We actually synthesize the group leader (if we found one) for the
thread even if it's not present in the thread_map, so the process maps
are always in the data.
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/153363294102.396323.6277944760215058174.stgit@buzz
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
We just check that the evsel is the one we associated with the
bpf-output event for the "__augmented_syscalls__" eBPF map, to show
that the formatting is done properly:
# perf trace -e perf/tools/perf/examples/bpf/augmented_syscalls.c,openat cat /etc/passwd > /dev/null
0.000 ( ): __augmented_syscalls__:dfd: CWD, filename: 0x43e06da8, flags: CLOEXEC
0.006 ( ): syscalls:sys_enter_openat:dfd: CWD, filename: 0x43e06da8, flags: CLOEXEC
0.007 ( 0.004 ms): cat/11486 openat(dfd: CWD, filename: 0x43e06da8, flags: CLOEXEC ) = 3
0.029 ( ): __augmented_syscalls__:dfd: CWD, filename: 0x4400ece0, flags: CLOEXEC
0.030 ( ): syscalls:sys_enter_openat:dfd: CWD, filename: 0x4400ece0, flags: CLOEXEC
0.031 ( 0.004 ms): cat/11486 openat(dfd: CWD, filename: 0x4400ece0, flags: CLOEXEC ) = 3
0.249 ( ): __augmented_syscalls__:dfd: CWD, filename: 0xc3700d6
0.250 ( ): syscalls:sys_enter_openat:dfd: CWD, filename: 0xc3700d6
0.252 ( 0.003 ms): cat/11486 openat(dfd: CWD, filename: 0xc3700d6 ) = 3
#
Now we just need to get the full blown enter/exit handlers to check if
the evsel being processed is the augmented_syscalls one, so that they
can pick the pointer contents from the end of the payload.
We also need some way to state what the layout is for syscalls with
multiple pointer args.
Also handy would be to have a BTF file with the struct definitions used
in syscalls, compact, generated at kernel build time and available for
use in eBPF programs.
Till we get there we can go on doing some manual coupling of the most
relevant syscalls with some hand-built beautifiers.
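As a rough sketch of picking the pointer contents from the end of the
payload (hypothetical handler, assuming the augmented sample is the
fixed-size tracepoint args followed by the copied string, as in the
example further down):

#include <stdio.h>
#include <stddef.h>

/* Once the size of the fixed tracepoint args is known, whatever
 * follows in the raw sample is the copied pointer contents. */
static void print_augmented_filename(const void *raw, size_t size,
				     size_t fixed_args_size)
{
	const char *filename = (const char *)raw + fixed_args_size;

	if (size <= fixed_args_size)
		return;	/* nothing was appended to this sample */

	printf("filename: %.*s\n", (int)(size - fixed_args_size), filename);
}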
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-r6ba5izrml82nwfmwcp7jpkm@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add an example BPF script that writes syscalls:sys_enter_openat raw
tracepoint payloads augmented with the first 64 bytes of the "filename"
syscall pointer arg.
Then catch it and print it just like things written to the
"__bpf_stdout__" map associated with a PERF_COUNT_SW_BPF_OUTPUT software
event, by letting the default tracepoint handler in 'perf trace',
trace__event_handler(), use bpf_output__fprintf(trace, sample), just
like it does with all other PERF_COUNT_SW_BPF_OUTPUT events, i.e. just
dump the payload, so that we can check that what is being printed
includes at least the first 64 bytes of the "filename" arg:
The augmented_syscalls.c eBPF script:
# cat tools/perf/examples/bpf/augmented_syscalls.c
// SPDX-License-Identifier: GPL-2.0
#include <stdio.h>
struct bpf_map SEC("maps") __augmented_syscalls__ = {
	.type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
	.key_size = sizeof(int),
	.value_size = sizeof(u32),
	.max_entries = __NR_CPUS__,
};

struct syscall_enter_openat_args {
	unsigned long long common_tp_fields;
	long syscall_nr;
	long dfd;
	char *filename_ptr;
	long flags;
	long mode;
};

struct augmented_enter_openat_args {
	struct syscall_enter_openat_args args;
	char filename[64];
};

int syscall_enter(openat)(struct syscall_enter_openat_args *args)
{
	struct augmented_enter_openat_args augmented_args;

	probe_read(&augmented_args.args, sizeof(augmented_args.args), args);
	probe_read_str(&augmented_args.filename, sizeof(augmented_args.filename), args->filename_ptr);
	perf_event_output(args, &__augmented_syscalls__, BPF_F_CURRENT_CPU,
			  &augmented_args, sizeof(augmented_args));
	return 1;
}

license(GPL);
#
So it will just prepare a raw_syscalls:sys_enter payload for the
"openat" syscall.
This will eventually be done for all syscalls with pointer args,
globally or just when the user asks, via some spec, for which args of
which syscalls it wants "expanded" this way. We'll probably start with
just the syscalls that have char * pointers with familiar names, the
ones we already handle with the probe:vfs_getname kprobe when it is in
place, hooking the kernel getname_flags() function used to copy the
paths from userspace.
Running it we get:
# perf trace -e perf/tools/perf/examples/bpf/augmented_syscalls.c,openat cat /etc/passwd > /dev/null
0.000 ( ): __augmented_syscalls__:X?.C......................`\..................../etc/ld.so.cache..#......,....ao.k...............k......1.".........
0.006 ( ): syscalls:sys_enter_openat:dfd: CWD, filename: 0x5c600da8, flags: CLOEXEC
0.008 ( 0.005 ms): cat/31292 openat(dfd: CWD, filename: 0x5c600da8, flags: CLOEXEC ) = 3
0.036 ( ): __augmented_syscalls__:X?.C.......................\..................../lib64/libc.so.6......... .\....#........?.......=.C..../.".........
0.037 ( ): syscalls:sys_enter_openat:dfd: CWD, filename: 0x5c808ce0, flags: CLOEXEC
0.039 ( 0.007 ms): cat/31292 openat(dfd: CWD, filename: 0x5c808ce0, flags: CLOEXEC ) = 3
0.323 ( ): __augmented_syscalls__:X?.C.....................P....................../etc/passwd......>.C....@................>.C.....,....ao.>.C........
0.325 ( ): syscalls:sys_enter_openat:dfd: CWD, filename: 0xe8be50d6
0.327 ( 0.004 ms): cat/31292 openat(dfd: CWD, filename: 0xe8be50d6 ) = 3
#
We need to go on optimizing this to avoid sending trash or zeroes in
the pointer content payload, using the return value of
bpf_probe_read_str(), but to keep things simple at this stage and make
incremental progress, let's leave it at that for now.
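A possible shape of that optimization, reusing the definitions from the
example above; whether the verifier accepts the variable output size
exactly as written here is something to check:

int syscall_enter(openat)(struct syscall_enter_openat_args *args)
{
	struct augmented_enter_openat_args augmented_args;
	int len;

	probe_read(&augmented_args.args, sizeof(augmented_args.args), args);
	len = probe_read_str(&augmented_args.filename,
			     sizeof(augmented_args.filename),
			     args->filename_ptr);
	/* Emit only the bytes actually copied instead of the full buffer. */
	if (len > 0 && len <= sizeof(augmented_args.filename))
		perf_event_output(args, &__augmented_syscalls__, BPF_F_CURRENT_CPU,
				  &augmented_args,
				  sizeof(augmented_args.args) + len);
	return 1;
}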
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-g360n1zbj6bkbk6q0qo11c28@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
That, together with the "__bpf_stdout__" map that is already handled by
'perf trace' to print that event's contents as strings, provides a
debugging facility. To show it in use, print a simple string every time
the syscalls:sys_enter_openat() syscall tracepoint is hit:
# cat tools/perf/examples/bpf/hello.c
#include <stdio.h>
int syscall_enter(openat)(void *args)
{
	puts("Hello, world\n");
	return 0;
}

license(GPL);
#
# perf trace -e openat,tools/perf/examples/bpf/hello.c cat /etc/passwd > /dev/null
0.016 ( ): __bpf_stdout__:Hello, world
0.018 ( 0.010 ms): cat/9079 openat(dfd: CWD, filename: /etc/ld.so.cache, flags: CLOEXEC) = 3
0.057 ( ): __bpf_stdout__:Hello, world
0.059 ( 0.011 ms): cat/9079 openat(dfd: CWD, filename: /lib64/libc.so.6, flags: CLOEXEC) = 3
0.417 ( ): __bpf_stdout__:Hello, world
0.419 ( 0.009 ms): cat/9079 openat(dfd: CWD, filename: /etc/passwd) = 3
#
This is part of an ongoing experiment in making the eBPF scripts
consumed by perf as concise as possible, using familiar concepts such
as stdio.h functions that end up just wrapping the existing BPF
functions, hiding as much boilerplate as possible with nothing more
than conventions and C preprocessor tricks.
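To illustrate the wrapping idea only (this is a sketch, not the actual
tools/perf/examples/bpf/stdio.h), puts() can boil down to a
perf_event_output() into a __bpf_stdout__ map, relying on the handler's
'args' parameter being in scope:

struct bpf_map SEC("maps") __bpf_stdout__ = {
	.type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
	.key_size = sizeof(int),
	.value_size = sizeof(u32),
	.max_entries = __NR_CPUS__,
};

#define puts(from)						\
({								\
	char __s[] = from;					\
	perf_event_output(args, &__bpf_stdout__,		\
			  BPF_F_CURRENT_CPU, __s, sizeof(__s));	\
})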
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-4tiaqlx5crf0fwpe7a6j84x7@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Set the annotation percent type from the following choices:
global-period, local-period, global-hits, local-hits
With the following report option setup, the percent type will be passed
to the annotation browser:
$ perf report --percent-type period-local
The local/global keywords select whether the percentage is computed in
the scope of the function (local) or of the whole data (global). The
period/hits keywords select the base the percentage is computed on: the
samples' period or the number of samples (hits).
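For illustration, a minimal sketch of the four combinations, with
made-up struct and function names (not perf's internals); 'line' is one
annotated source/asm line, 'func' the function being annotated, 'data'
the whole data file:

#include <stdio.h>

struct annot_stats { unsigned long long period, hits; };

static double percent(unsigned long long part, unsigned long long total)
{
	return total ? 100.0 * part / total : 0.0;
}

static void show_percent_types(struct annot_stats line,
			       struct annot_stats func,
			       struct annot_stats data)
{
	printf("local-period:  %6.2f%%\n", percent(line.period, func.period));
	printf("global-period: %6.2f%%\n", percent(line.period, data.period));
	printf("local-hits:    %6.2f%%\n", percent(line.hits, func.hits));
	printf("global-hits:   %6.2f%%\n", percent(line.hits, data.hits));
}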
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20180804130521.11408-21-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add a --percent-type option to set the annotation percent type from the
following choices:
global-period, local-period, global-hits, local-hits
Examples:
$ perf annotate --percent-type period-local --stdio | head -1
Percent | Source code ... es, percent: local period)
$ perf annotate --percent-type hits-local --stdio | head -1
Percent | Source code ... es, percent: local hits)
$ perf annotate --percent-type hits-global --stdio | head -1
Percent | Source code ... es, percent: global hits)
$ perf annotate --percent-type period-global --stdio | head -1
Percent | Source code ... es, percent: global period)
The local/global keywords select whether the percentage is computed in
the scope of the function (local) or of the whole data (global).
The period/hits keywords select the base the percentage is computed on:
the samples' period or the number of samples (hits).
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20180804130521.11408-20-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
In following patches we will allow switching the percent type even for
the stdio annotation outputs. Add the percent type value to the
annotation output's title.
$ perf annotate --stdio
Percent | Sou ... instructions:u } (2805 samples, percent: local period)
--------------------------- ... ------------------------------------------------------
...
$ perf annotate --stdio2
Samples: 2K of events 'anon ... count (approx.): 156525487, [percent: local period]
safe_write.c() /usr/bin/yes
Percent
...
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20180804130521.11408-19-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
We have a more current function to get the title for annotation:
hists__scnprintf_title. They both produce the same output as far as the
annotation's header line goes.
They differ in how nr_samples is counted; hists__scnprintf_title
provides a more accurate number based on the setting of the
symbol_conf.filter_relative variable.
It also displays any uid/thread/dso/socket filters/zooms if any are
set, which annotation__scnprintf_samples_period does not.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20180804130521.11408-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Allow hooking into the syscalls:sys_enter_NAME tracepoints; an example
is provided that hooks into the 'openat' syscall.
Use it together with the probe:vfs_getname probe on getname_flags to
get the filename arg as it is copied from userspace:
# perf probe -l
probe:vfs_getname (on getname_flags:73@acme/git/linux/fs/namei.c with pathname)
# perf trace -e probe:*getname,tools/perf/examples/bpf/sys_enter_openat.c cat /etc/passwd > /dev/null
0.000 probe:vfs_getname:(ffffffffbd2a8983) pathname="/etc/ld.so.preload"
0.022 syscalls:sys_enter_openat:dfd: CWD, filename: 0xafbe8da8, flags: CLOEXEC
0.027 probe:vfs_getname:(ffffffffbd2a8983) pathname="/etc/ld.so.cache"
0.054 syscalls:sys_enter_openat:dfd: CWD, filename: 0xafdf0ce0, flags: CLOEXEC
0.057 probe:vfs_getname:(ffffffffbd2a8983) pathname="/lib64/libc.so.6"
0.316 probe:vfs_getname:(ffffffffbd2a8983) pathname="/usr/lib/locale/locale-archive"
0.375 syscalls:sys_enter_openat:dfd: CWD, filename: 0xe2b2b0b4
0.379 probe:vfs_getname:(ffffffffbd2a8983) pathname="/etc/passwd"
#
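For reference, a minimal sketch of what such a hook can look like,
following the conventions of the hello.c example above; the actual
tools/perf/examples/bpf/sys_enter_openat.c in the tree may differ:

#include <stdio.h>

/* Return a non-zero value so the kernel records the tracepoint sample,
 * which is what makes the syscalls:sys_enter_openat lines show up in
 * the 'perf trace' output above. */
int syscall_enter(openat)(void *args)
{
	return 1;
}

license(GPL);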
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-2po9jcqv1qgj0koxlg8kkg30@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add support for s390 auxiliary traces.
Use 'perf record -e rbd000 -- ls' to create the perf.data file.
Use 'perf report' to display the auxiliary trace data.
Output before:
[root@s35lp76 perf]# ./perf report --stdio
0x128 [0x10]: failed to process type: 70
Error:
failed to process sample
[root@s35lp76 perf]#
Output after:
[root@s35lp76 perf]# ./perf report --stdio
18.21% 18.21% ls [kernel.kallsyms] [k] ftrace_likely_update
9.52% 9.52% ls [kernel.kallsyms] [k] lock_acquire
9.38% 9.38% ls [kernel.kallsyms] [k] lock_release
3.45% 3.45% ls [kernel.kallsyms] [k] lock_acquired
2.88% 2.88% ls [kernel.kallsyms] [k] link_path_walk
2.63% 2.63% ls [kernel.kallsyms] [k] __d_lookup
2.38% 2.38% ls [kernel.kallsyms] [k] __d_lookup_rcu
2.04% 2.04% ls [kernel.kallsyms] [k] ___might_sleep
1.83% 1.83% ls [kernel.kallsyms] [k] debug_lockdep_rcu_enabled
1.44% 1.44% ls [kernel.kallsyms] [k] dput
....
Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
Reviewed-by: Hendrik Brueckner <brueckner@linux.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Link: http://lkml.kernel.org/r/20180802074622.13641-4-tmricht@linux.ibm.com
[ Use PRI[xd]64 to fix the build on debian:experimental-x-mips (gcc 8.1.0) and others ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add support for s390 auxiliary traces.
Use 'perf record -e rbd000' to create the perf.data file. The event
also has the symbolic name SF_CYCLES_BASIC_DIAG, using 'perf record -e
SF_CYCLES_BASIC_DIAG' is equivalent.
Use 'perf report -D' to display the auxiliary trace data.
Output before:
0 0 0x25a66 [0x30]: PERF_RECORD_AUXTRACE size: 0x40000
offset: 0 ref: 0 idx: 4 tid: -1 cpu: 4
Nothing else
Output after:
0 0 0x25a66 [0x30]: PERF_RECORD_AUXTRACE size: 0x40000
offset: 0 ref: 0 idx: 4 tid: -1 cpu: 4
.
. ... s390 AUX data: size 262144 bytes
[00000000] Basic Def:0001 Inst:0000 TW AS:3 ASN:0xffff IA:0x0000000000c2f1bc
CL:1 HPP:0x8000000000000000 GPP:000000000000000000
[0x000020] Diag Def:8005
[0x0000bf] Basic Def:0001 Inst:0000 TW AS:3 ASN:0xffff IA:0x0000000000c2f1bc
CL:1 HPP:0x8000000000000000 GPP:000000000000000000
[0x0000df] Diag Def:8005
[0x00017e] Basic Def:0001 Inst:0000 TW AS:3 ASN:0xffff IA:0x0000000000c2f1bc
CL:1 HPP:0x8000000000000000 GPP:000000000000000000
....
[0x000fc0] Trailer F T bsdes:32 dsdes:159 Overflow:0 Time:0xd4ab59a8450fa108
C:1 TOD:0xd4ab4ec98ceb3832 1:0x8000000000000000 2:0xd4ab4ec98ceb3832
This output is shown for every sampled data block. The
output contains the
- basic-sampling data entry
- diagnostic-sampling data entry
- trailer entry
The basic sampling entry and diagnostic sampling entry sizes can be
extracted using the trailer entries in the SDB. On older hardware these
values (bsdes and dsdes in the trailer entry) are reserved and zero.
Older hardware uses hard-coded values based on the s390 machine type.
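A rough sketch of that fallback logic (not the actual perf s390 code;
the trailer layout and default sizes below are placeholders):

/* Pick the entry sizes from the SDB trailer when present, otherwise
 * fall back to defaults derived from the machine type. */
struct sdb_trailer {
	unsigned short bsdes;	/* basic-sampling data entry size      */
	unsigned short dsdes;	/* diagnostic-sampling data entry size */
	/* flags, overflow counter and timestamps omitted */
};

static void get_entry_sizes(const struct sdb_trailer *te,
			    unsigned int machine_type,
			    unsigned short *bsdes, unsigned short *dsdes)
{
	if (te->bsdes) {		/* newer hardware fills these in */
		*bsdes = te->bsdes;
		*dsdes = te->dsdes;
		return;
	}
	/* Reserved/zero on older hardware: use hard-coded defaults keyed
	 * by machine type (values here are placeholders). */
	switch (machine_type) {
	default:
		*bsdes = 32;
		*dsdes = 0;
		break;
	}
}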
Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
Reviewed-by: Hendrik Brueckner <brueckner@linux.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Link: http://lkml.kernel.org/r/20180802074622.13641-3-tmricht@linux.ibm.com
Link: http://lkml.kernel.org/r/eda2632e-7919-5ffd-5f68-821e77d216fa@linux.ibm.com
[ Merged a fix for a 'type punned' problem reported by Michael Ellerman, see last Link tag. ]
[ Removed __packed from two structs, they're already naturally packed and having that ]
[ attribute breaks the build in gcc 8.1.1 mips, 4.4.7 x86_64, 7.1.1 ARCompact ISA, etc. ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
There are some powerpc selftests, such as tm/tm-unavailable, that run
for a long period (>120 seconds), and if one is interrupted, e.g. by
pressing CTRL-C (SIGINT), the foreground process (the harness) dies but
the child processes and threads continue to execute (now with PPID = 1)
in the background.
In this case, you'd think the whole test exited, but there are
remaining threads and processes still being executed in the background.
Sometimes these zombie processes do annoying things, such as consuming
the whole CPU or dumping things to STDOUT.
This patch fixes the problem by attaching an empty signal handler to
SIGINT in the harness process. This handler will interrupt (EINTR) the
parent process' waitpid() call, letting the code follow the normal
flow, which will kill all the processes in the child process group.
This patch also fixes a typo.
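A minimal sketch of the idea (not the harness code itself):

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* The empty handler makes a pending CTRL-C interrupt waitpid() (EINTR)
 * instead of killing the harness, so the normal cleanup path runs and
 * kills the whole child process group. */
static void sig_handler(int signum)
{
	(void)signum;	/* intentionally empty */
}

static int run_and_reap(pid_t child)
{
	struct sigaction sa = { .sa_handler = sig_handler };
	int status = 0;

	sigaction(SIGINT, &sa, NULL);

	/* Returns when the child exits, or early with EINTR on CTRL-C;
	 * either way, fall through and clean up. */
	waitpid(child, &status, 0);

	kill(-child, SIGTERM);	/* kill the child's process group */
	return status;
}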
Signed-off-by: Breno Leitao <leitao@debian.org>
Signed-off-by: Gustavo Romero <gromero@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The gre_multipath test was using egress vlan_id matching on flows to
collect next-hop statistics, later to be compared against the
configured weights.
As matching on vlan_id in the egress direction is not supported on all
HW devices, change the match criteria to use the destination IP.
Signed-off-by: Nir Dotan <nird@mellanox.com>
Acked-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann says:
====================
pull-request: bpf-next 2018-08-07
The following pull-request contains BPF updates for your *net-next* tree.
The main changes are:
1) Add cgroup local storage for BPF programs, which provides a fast
accessible memory for storing various per-cgroup data like number
of transmitted packets, etc, from Roman.
2) Support bpf_get_socket_cookie() BPF helper in several more program
types that have a full socket available, from Andrey.
3) Significantly improve the performance of perf events which are
reported from BPF offload. Also convert a couple of BPF AF_XDP
samples over to use libbpf, both from Jakub.
4) seg6local LWT provides the End.DT6 action, which allows decapsulating
an outer IPv6 header containing a Segment Routing Header. Add this
action to the seg6local BPF interface, from Mathieu.
5) Do not mark dst register as unbounded in MOV64 instruction when
both src and dst register are the same, from Arthur.
6) Define u_smp_rmb() and u_smp_wmb() to their respective barrier
instructions on arm64 for the AF_XDP sample code, from Brian.
7) Convert the tcp_client.py and tcp_server.py BPF selftest scripts
over from Python 2 to Python 3, from Jeremy.
8) Enable BTF build flags to the BPF sample code Makefile, from Taeung.
9) Remove an unnecessary rcu_read_lock() in run_lwt_bpf(), from Taehee.
10) Several improvements to the README.rst from the BPF documentation
to make it more consistent with RST format, from Tobin.
11) Replace all occurrences of strerror() by calls to strerror_r()
in libbpf and fix a FORTIFY_SOURCE build error along with it,
from Thomas.
12) Fix a bug in bpftool's get_btf() function to correctly propagate
an error via PTR_ERR(), from Yue.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
This adds a set of test cases to test the behaviour of
copy_tofrom_user when exceptions are encountered accessing the
source or destination. Currently, copy_tofrom_user does not always
copy as many bytes as possible when an exception occurs on a store
to the destination, and that is reflected in failures in these tests.
Based on a test program from Anton Blanchard.
[paulus@ozlabs.org - test all three paths, wrote commit description,
made EX_TABLE create an exception table.]
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The hand-coded assembler 64-bit copy routines include feature sections
that select one code path or another depending on which CPU we are
executing on. The self-tests for these copy routines end up testing
just one path. This adds a mechanism for selecting any desired code
path at compile time, and makes 2 or 3 versions of each test, each
using a different code path, so as to cover all the possible paths.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
[mpe: Add -mcpu=power4 to CFLAGS for older compilers]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The alignment_handler is documented to only work on Power8/Power9, but
we can make it run on older CPUs by guarding more of the tests with
feature checks.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Currently the alignment_handler test prints "Can't open /dev/fb0"
about 80 times per run, which is a little annoying.
Refactor it to check earlier whether it can open /dev/fb0 and skip if
not. This results in each test printing something like:
test: test_alignment_handler_vsx_206
tags: git_version:v4.18-rc3-134-gfb21a48904aa
[SKIP] Test skipped on line 291
skip: test_alignment_handler_vsx_206
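The early check can be as simple as probing the device once up front
(a sketch, not the actual selftest code):

#include <fcntl.h>
#include <unistd.h>

/* Probe /dev/fb0 once and let the caller turn a failure into a single
 * SKIP instead of one "Can't open /dev/fb0" message per test case. */
static int can_open_fb0(void)
{
	int fd = open("/dev/fb0", O_RDWR);

	if (fd < 0)
		return 0;
	close(fd);
	return 1;
}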
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
This patch adds a test for the new assembly strlen() for PPC32.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Fix 64-bit build]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This patch adds a test for strlen().
string.c contains a copy of strlen() from lib/string.c.
The test first tests the correctness of strlen() by comparing the
result with the libc strlen(). It tests all cases of alignment.
It then tests the duration of an aligned strlen() on a 4-byte string,
on a 16-byte string and on a 256-byte string.
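A minimal sketch of the correctness part, assuming the assembly version
is exported as test_strlen() (the actual symbol name in the selftest
may differ):

#include <stdio.h>
#include <string.h>

size_t test_strlen(const char *s);	/* assembly version under test */

/* Compare the version under test with the libc strlen() for every
 * starting alignment within a 16-byte window. */
static int check_strlen_alignments(void)
{
	static char buf[1024];	/* zero-initialized, so buf[1023] == '\0' */
	int failed = 0;

	memset(buf, 'x', sizeof(buf) - 1);

	for (int align = 0; align < 16; align++) {
		const char *s = buf + align;

		if (test_strlen(s) != strlen(s)) {
			printf("mismatch at alignment %d\n", align);
			failed++;
		}
	}
	return failed;
}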
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Drop change log from copy of string.c]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This patch renames the memcmp test to memcmp_64 and adds a memcmp_32
test for testing the 32-bit version of memcmp().
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Fix 64-bit build by adding build_32bit test]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
These tests are currently failing on (some) big endian systems. Until
we can fix that, skip them unless we're on ppc64le.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>