Bye-bye Performance Counters, welcome Performance Events!
In the past few months the perfcounters subsystem has grown out of its
initial role of counting hardware events, and is becoming a much broader
generic event enumeration, reporting, logging, monitoring and analysis
facility.
Naming its core object 'perf_counter' and naming the subsystem
'perfcounters' has become more and more of a misnomer. With pending
code like hw-breakpoints support the 'counter' name is less and
less appropriate.
All in all, we've decided to rename the subsystem to 'performance
events' and to propagate this rename through all fields, variables
and API names. (in an ABI-compatible fashion)
The word 'event' is also a bit shorter than 'counter' - which makes
it slightly more convenient to write/handle as well.
Thanks go to Stephane Eranian, who first observed this misnomer and
suggested a rename.
User-space tooling and ABI compatibility are not affected - this patch
should be function-invariant. (Also, defconfigs were not touched to
keep the size down.)
This patch has been generated via the following script:
FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')
sed -i \
  -e 's/PERF_EVENT_/PERF_RECORD_/g' \
  -e 's/PERF_COUNTER/PERF_EVENT/g' \
  -e 's/perf_counter/perf_event/g' \
  -e 's/nb_counters/nb_events/g' \
  -e 's/swcounter/swevent/g' \
  -e 's/tpcounter_event/tp_event/g' \
  $FILES
for N in $(find . -name perf_counter.[ch]); do
  M=$(echo $N | sed 's/perf_counter/perf_event/g')
  mv $N $M
done
FILES=$(find . -name perf_event.*)
sed -i \
  -e 's/COUNTER_MASK/REG_MASK/g' \
  -e 's/COUNTER/EVENT/g' \
  -e 's/\<event\>/event_id/g' \
  -e 's/counter/event/g' \
  -e 's/Counter/Event/g' \
  $FILES
... to keep it as correct as possible. This script can also be
used by anyone who has pending perfcounters patches - it converts
a Linux kernel tree over to the new naming. We tried to time this
change for the point where the number of pending patches is
smallest: the end of the merge window.
Namespace clashes were fixed up in a preparatory patch - and some
stylistic fallout will be fixed up in a subsequent patch.
( NOTE: 'counters' are still the proper terminology when we deal
with hardware registers - and these sed scripts are a bit
over-eager in renaming them. I've undone some of that, but
in case there's something left where 'counter' would be
better than 'event' we can undo that on an individual basis
instead of touching an otherwise nicely automated patch. )
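To illustrate what the rename means for user space, here is a minimal
sketch of a consumer of the renamed API (the busy loop is just a
stand-in workload and error handling is trimmed; the comments note
which identifiers this rename actually touched):
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>		/* was <linux/perf_counter.h> */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	/* the syscall slot is unchanged - only the name was renamed
	 * from __NR_perf_counter_open to __NR_perf_event_open */
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}
int main(void)
{
	struct perf_event_attr attr;	/* was struct perf_counter_attr */
	long long count;
	volatile int i;
	int fd;
	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HARDWARE;		/* untouched by the rename */
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_HW_CPU_CYCLES;	/* untouched by the rename */
	attr.disabled = 1;
	attr.exclude_kernel = 1;
	fd = perf_event_open(&attr, 0, -1, -1, 0);	/* current task, any CPU */
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);	/* was PERF_COUNTER_IOC_ENABLE */
	for (i = 0; i < 1000000; i++)
		;				/* some work to measure */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
	read(fd, &count, sizeof(count));
	printf("cycles: %lld\n", count);
	close(fd);
	return 0;
}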
Suggested-by: Stephane Eranian <eranian@google.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <linux-arch@vger.kernel.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
This fixes up the kmap_coherent/kunmap_coherent() interface for recent
changes both in the page fault path and the shared cache flushers, as
well as adding in some optimizations.
One of the key things to note here is that the TLB flush itself is
deferred until the unmap, and the call in to update_mmu_cache() itself
goes away, relying on the regular page fault path to handle the lazy
dcache writeback if necessary.
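As a rough sketch of the deferred flush (illustrative only: the fixmap
slot selection and PTE bookkeeping of the real arch/sh code are hidden
behind a hypothetical kmap_coherent_ptep() helper here), the unmap side
now tears down the mapping and flushes the single TLB entry itself, so
the map side no longer needs to call update_mmu_cache():
void kunmap_coherent(void *kvaddr)
{
	unsigned long vaddr = (unsigned long)kvaddr & PAGE_MASK;
	enum fixed_addresses idx = __virt_to_fix(vaddr);
	/* tear down the fixmap PTE set up by kmap_coherent() ... */
	pte_clear(&init_mm, vaddr, kmap_coherent_ptep(idx));
	/* ... and do the TLB flush here, deferred from map time */
	local_flush_tlb_one(get_asid(), vaddr);
	pagefault_enable();
}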
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This only bothers with the TLB entry flush in the case of the initial
page write exception, as it is unnecessary in the case of the load/store
exceptions.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds a bit of rework to have the TLB protection violations skip the
TLB miss fastpath and go directly into do_page_fault(), as these require
slow path handling.
Based on an earlier patch by SUGIOKA Toshinobu.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The last commit changed the behaviour on kernel faults when we were
doing something other than syncing the page tables. vmalloc_sync_one()
needs to return NULL if the page tables are up to date, because the
reason for the fault was not a missing/inconsistent page table entry. By
returning NULL if the page tables are sync'd we signal to the calling
function that further work must be done to resolve this fault.
Also, remove the superfluous __va() around the first argument to
vmalloc_sync_one(). The value of pgd_k is already a virtual address and
using it with __va() causes a NULL dereference.
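For reference, with this fix the helper ends up roughly like the sketch
below (it follows the x86 pattern the sh code mirrors; details are
simplified):
static pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
{
	pgd_t *pgd_k;
	pud_t *pud, *pud_k;
	pmd_t *pmd, *pmd_k;
	pgd += pgd_index(address);
	pgd_k = init_mm.pgd + pgd_index(address);
	if (!pgd_present(*pgd_k))
		return NULL;
	pud = pud_offset(pgd, address);
	pud_k = pud_offset(pgd_k, address);
	if (!pud_present(*pud_k))
		return NULL;
	pmd = pmd_offset(pud, address);
	pmd_k = pmd_offset(pud_k, address);
	if (!pmd_present(*pmd_k))
		return NULL;
	if (!pmd_present(*pmd)) {
		/* stale user copy: sync it from the kernel tables */
		set_pmd(pmd, *pmd_k);
	} else {
		/*
		 * Already in sync, so the fault must have some other
		 * cause: return NULL so the caller keeps handling it.
		 */
		return NULL;
	}
	return pmd_k;
}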
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This rewrites the vmalloc fault handling as per x86, which subsequently
allows for easy future tie-in for vmalloc_sync_all().
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds page fault instrumentation for the software performance
counters. Follows the x86 and powerpc changes.
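The x86/powerpc pattern being followed looks roughly like this, hooked
into do_page_fault() (names as in the 2.6.31-era code, before the big
perf rename earlier in this series; 'tsk' is the current task):
	/* before looking up the vma: */
	perf_swcounter_event(PERF_COUNT_SW_PAGE_FAULTS, 1, 0, regs, address);
	/* ... and after handle_mm_fault() has resolved the fault: */
	if (fault & VM_FAULT_MAJOR) {
		tsk->maj_flt++;
		perf_swcounter_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
				     1, 0, regs, address);
	} else {
		tsk->min_flt++;
		perf_swcounter_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
				     1, 0, regs, address);
	}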
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This allows the callers to now pass down the full set of FAULT_FLAG_xyz
flags to handle_mm_fault(). All callers have been (mechanically)
converted to the new calling convention; there's almost certainly room
for architectures to clean up their code and then add FAULT_FLAG_RETRY
when that support is added.
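The mechanical conversion in each architecture's fault handler amounts
to the following (a sketch; 'write' is the usual local flag indicating
a write access):
	/* before: the fourth argument was a write_access boolean */
	fault = handle_mm_fault(mm, vma, address, write);
	/* after: callers pass FAULT_FLAG_* bits instead */
	fault = handle_mm_fault(mm, vma, address, write ? FAULT_FLAG_WRITE : 0);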
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
arch/sh has a couple of stray markers, introduced in commit
3d58695edb, that have no users. Remove them in preparation for
removing the markers in favour of the TRACE_EVENT macro (and also
because we don't keep dead code around).
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This migrates away from the old bitrotted kgdb stub implementation to
the generic stub. In the process, support for SH-2/SH-2A is also added,
which the old stub never provided.
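For orientation, these are roughly the hooks an architecture supplies
to the generic stub (prototypes approximated from include/linux/kgdb.h
of that era; arch_kgdb_ops carries the breakpoint instruction and
optional hooks):
extern struct kgdb_arch arch_kgdb_ops;
extern int  kgdb_arch_init(void);
extern void kgdb_arch_exit(void);
extern int  kgdb_arch_handle_exception(int vector, int signo,
					int err_code,
					char *remcom_in_buffer,
					char *remcom_out_buffer,
					struct pt_regs *regs);
extern void pt_regs_to_gdb_regs(unsigned long *gdb_regs,
				struct pt_regs *regs);
extern void gdb_regs_to_pt_regs(unsigned long *gdb_regs,
				struct pt_regs *regs);
extern void sleeping_thread_to_gdb_regs(unsigned long *gdb_regs,
					struct task_struct *p);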
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch adds a pass-through case when ioremapping P4 addresses.
Addresses passed to ioremap() should be physical addresses, so the
best option is usually to convert the virtual address to a physical
address before calling ioremap. This will give you a virtual address
in P2 which matches the physical address and this works well for
most internal hardware blocks on the SuperH architecture.
However, some hardware blocks must be accessed through P4. Converting
the P4 address to a physical address and then back to P2 does not work.
One example of this is the sh7722 TMU block, which must be accessed
through P4. Without this patch, P4 addresses will be mapped using PTEs,
which requires the page allocator to be up and running.
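The pass-through amounts to something like the following (a sketch; the
wrapper name is illustrative, the real check sits inside the sh ioremap
implementation):
#include <asm/addrspace.h>	/* P4SEG */
#include <asm/io.h>
static void __iomem *ioremap_p4_passthrough(unsigned long offset,
					    unsigned long size)
{
	/* P4 is always mapped by the hardware: hand the address back 1:1 */
	if (offset >= P4SEG)
		return (void __iomem *)offset;
	/* everything else takes the normal PTE-backed ioremap() path */
	return ioremap(offset, size);
}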
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements a number of trace points across events that are deemed
interesting:
- The page fault handler / TLB miss
- IPC calls
- Kernel thread creation
The original LTTng patch had the slow-path instrumented, which
fails to account for the vast majority of events. In general
placing this in the fast-path is not a huge performance hit, as
we don't take page faults for kernel addresses.
The other bits of interest are some of the other trap handlers, as
well as the syscall entry/exit (which is better off being handled
through the tracehook API). Most of the other trap handlers are corner
cases where alternate means of notification exist, so there is little
value in placing extra trace points in these locations.
Based on top of the points provided both by the LTTng instrumentation
patch and by the patch shipping in the ST-Linux tree, albeit in a
stripped down form.
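The instrumentation itself uses the marker API from <linux/marker.h>;
the shape of such a probe point looks like the sketch below (the marker
name and format string are hypothetical, not the exact ones added by
this patch):
	/* in the page fault path: */
	trace_mark(mm_page_fault, "address %lu write %lu",
		   address, writeaccess);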
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This fixes a problem in the code which copies the vmalloc portion of the
kernel's page table into the current user space page table. The addition
of the four level page table code breaks on folded page tables, because
the pud level is always present (although folded). This updates the code
to use the same style of updates for the pud as is used for the pgd
level.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The current kernel behaviour is to reenable interrupts unconditionally
when taking a page fault. This patch changes this to only enable them
if interrupts were previously enabled.
It also fixes a problem seen with this fix in place: the kernel previously
flushed the vsyscall page when handling a signal, which is not only
unnecessary, but caused a possible sleep with interrupts disabled.
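The conditional enable amounts to roughly the following in the fault
handler (a sketch; the test reads the interrupt mask bits of the saved
SR, with SR_IMASK as used elsewhere in arch/sh):
	/* Only re-enable interrupts if the faulting context had them on */
	if ((regs->sr & SR_IMASK) != SR_IMASK)
		local_irq_enable();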
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This acts as a reversion of 1c6b2ca5e0 in
the case of UP SH-4, where we still have the risk of a multiple hit
between the slow and fast paths, as seen on SH7780.
Signed-off-by: Hideo Saito <saito@densan.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The idea is that we want to get rid of the in/out/readb/writeb callbacks from
the machvec and replace them with simple inline read and write operations to
memory. Fast and simple for most hardware devices (think PCI).
Some devices require special treatment though - like 16-bit only CF devices -
so we need to have some method to hook in callbacks.
This patch makes it possible to add a per-device trap-generating filter. This
way we can get maximum performance out of sane hardware - which doesn't need
this filter - and crappy hardware works but gets punished by a performance hit.
V2 changes things around a bit and replaces the I/O access callbacks with a
simple minimum_bus_width value. In the future we can add stride as well.
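As an illustration of the V2 interface, a board file might describe a
16-bit-only CF register window roughly like this (resource values are
invented; the struct trapped_io / register_trapped_io() names and the
header path follow the trapped I/O code as it ended up in arch/sh and
should be treated as assumptions here):
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/ioport.h>
#include <asm/io_trapped.h>	/* header name assumed */
/* one 16-bit-only CF register window; addresses are invented */
static struct resource cf_ide_resources[] = {
	[0] = {
		.start	= 0x14000000,
		.end	= 0x1400000f,
		.flags	= IORESOURCE_MEM,
	},
};
static struct trapped_io cf_trapped_io = {
	.resource		= cf_ide_resources,
	.num_resources		= ARRAY_SIZE(cf_ide_resources),
	.minimum_bus_width	= 16,	/* trap and widen narrower accesses */
};
static int __init board_setup_trapped_io(void)
{
	return register_trapped_io(&cf_trapped_io);
}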
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>