Merge tag 'trace-v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull tracing updates from Steven Rostedt:

 - Add options to the osnoise tracer:
      - 'panic_on_stop' option that panics the kernel if osnoise is
        greater than some user defined threshold.
      - 'preempt' option, to test noise while preemption is disabled
      - 'irq' option, to test noise when interrupts are disabled

 - Add .percent and .graph suffix to histograms to give different
   outputs

 - Add nohitcount to disable showing hitcount in histogram output

 - Add new __cpumask() to trace event fields to annotate that an unsigned
   long array is a cpumask to user space and should be treated as one.

 - Add trace_trigger kernel command line parameter to enable trace event
   triggers at boot up. Useful to trace stack traces, disable tracing
   and take snapshots.

 - Fix x86/kmmio mmio tracer to work with the updates to lockdep

 - Unify the panic and die notifiers

 - Add back ftrace_expect reference that is used to extract more
   information in the ftrace_bug() code.

 - Have trigger filter parsing errors show up in the tracing error log.

 - Updated MAINTAINERS file to add kernel tracing mailing list and
   patchwork info

 - Use IDA to keep track of event type numbers.

 - And minor fixes and clean ups

* tag 'trace-v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (44 commits)
  tracing: Fix cpumask() example typo
  tracing: Improve panic/die notifiers
  ftrace: Prevent RCU stall on PREEMPT_VOLUNTARY kernels
  tracing: Do not synchronize freeing of trigger filter on boot up
  tracing: Remove pointer (asterisk) and brackets from cpumask_t field
  tracing: Have trigger filter parsing errors show up in error_log
  x86/mm/kmmio: Remove redundant preempt_disable()
  tracing: Fix infinite loop in tracing_read_pipe on overflowed print_trace_line
  Documentation/osnoise: Add osnoise/options documentation
  tracing/osnoise: Add preempt and/or irq disabled options
  tracing/osnoise: Add PANIC_ON_STOP option
  Documentation/osnoise: Escape underscore of NO_ prefix
  tracing: Fix some checker warnings
  tracing/osnoise: Make osnoise_options static
  tracing: remove unnecessary trace_trigger ifdef
  ring-buffer: Handle resize in early boot up
  tracing/hist: Fix issue of losting command info in error_log
  tracing: Fix issue of missing one synthetic field
  tracing/hist: Fix out-of-bound write on 'action_data.var_ref_idx'
  tracing/hist: Fix wrong return value in parse_action_params()
  ...
Commit fe36bb8736, Linus Torvalds, 2022-12-15 18:01:16 -08:00
36 changed files with 843 additions and 219 deletions


@ -6266,6 +6266,25 @@
See also Documentation/trace/ftrace.rst "trace options"
section.
trace_trigger=[trigger-list]
[FTRACE] Add an event trigger on specific events.
Set a trigger on top of a specific event, with an optional
filter.
The format is "trace_trigger=<event>.<trigger>[ if <filter>],..."
More than one trigger may be specified, comma delimited.
For example:
trace_trigger="sched_switch.stacktrace if prev_state == 2"
The above will enable the "stacktrace" trigger on the "sched_switch"
event but only trigger it if the "prev_state" of the "sched_switch"
event is "2" (TASK_UNINTERUPTIBLE).
See also "Event triggers" in Documentation/trace/events.rst
traceoff_on_warning
[FTRACE] enable this option to disable tracing when a
warning is hit. This turns off "tracing_on". Tracing can


@ -25,7 +25,7 @@ Documentation written by Tom Zanussi
hist:keys=<field1[,field2,...]>[:values=<field1[,field2,...]>]
[:sort=<field1[,field2,...]>][:size=#entries][:pause][:continue]
[:clear][:name=histname1][:<handler>.<action>] [if <filter>]
[:clear][:name=histname1][:nohitcount][:<handler>.<action>] [if <filter>]
When a matching event is hit, an entry is added to a hash table
using the key(s) and value(s) named. Keys and values correspond to
@ -79,6 +79,8 @@ Documentation written by Tom Zanussi
.log2 display log2 value rather than raw number
.buckets=size display grouping of values rather than raw number
.usecs display a common_timestamp in microseconds
.percent display the value as a percentage
.graph display a bar-graph of a value
============= =================================================
Note that in general the semantics of a given field aren't
@ -137,6 +139,12 @@ Documentation written by Tom Zanussi
existing trigger, rather than via the '>' operator, which will cause
the trigger to be removed through truncation.
The 'nohitcount' (or NOHC) parameter will suppress display of
raw hitcount in the histogram. This option requires at least one
value field which is not a 'raw hitcount'. For example,
'hist:...:vals=hitcount:nohitcount' is rejected, but
'hist:...:vals=hitcount.percent:nohitcount' is OK.
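As a rough user-space sketch of how the new modifiers combine (the event, key, and
trigger string below are illustrative only, patterned on the hitcount.percent example
above; this is not part of the patch):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        /* Show each key's share of hits as a percentage and hide the raw hitcount. */
        const char *trigger =
                "hist:keys=prev_comm:vals=hitcount.percent:nohitcount";
        int fd = open("/sys/kernel/tracing/events/sched/sched_switch/trigger",
                      O_WRONLY | O_APPEND);

        if (fd < 0)
                return 1;
        if (write(fd, trigger, strlen(trigger)) < 0) {
                close(fd);
                return 1;
        }
        close(fd);
        return 0;
}

This mirrors "echo 'hist:...' >> events/sched/sched_switch/trigger"; opening the file
without O_TRUNC matters, since truncation removes existing triggers (see the note above).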
- enable_hist/disable_hist
The enable_hist and disable_hist triggers can be used to have one


@ -92,8 +92,8 @@ Note that the example above shows a high number of HW noise samples.
The reason is that this sample was taken on a virtual machine,
and the host interference is detected as a hardware interference.
Tracer options
---------------------
Tracer Configuration
--------------------
The tracer has a set of options inside the osnoise directory, they are:
@ -109,6 +109,27 @@ The tracer has a set of options inside the osnoise directory, they are:
- tracing_threshold: the minimum delta between two time() reads to be
considered as noise, in us. When set to 0, the default value will
be used, which is currently 5 us.
- osnoise/options: a set of on/off options that can be enabled by
writing the option name to the file or disabled by writing the option
name preceded with the 'NO\_' prefix. For example, writing
NO_OSNOISE_WORKLOAD disables the OSNOISE_WORKLOAD option. The
special DEFAULTS option resets all options to the default value.
Tracer Options
--------------
The osnoise/options file exposes a set of on/off configuration options for
the osnoise tracer. These options are:
- DEFAULTS: reset the options to the default value.
- OSNOISE_WORKLOAD: do not dispatch osnoise workload (see dedicated
section below).
- PANIC_ON_STOP: call panic() if the tracer stops. This option serves to
capture a vmcore.
- OSNOISE_PREEMPT_DISABLE: disable preemption while running the osnoise
workload, allowing only IRQ and hardware-related noise.
- OSNOISE_IRQ_DISABLE: disable IRQs while running the osnoise workload,
allowing only NMIs and hardware-related noise, like hwlat tracer.
Additional Tracing
------------------
@ -150,3 +171,10 @@ tracepoints is smaller than eight us reported in the sample_threshold.
The reason roots in the overhead of the entry and exit code that happens
before and after any interference execution. This justifies the dual
approach: measuring thread and tracing.
Running osnoise tracer without workload
---------------------------------------
By enabling the osnoise tracer with the NO_OSNOISE_WORKLOAD option set,
the osnoise: tracepoints serve to measure the execution time of
any type of Linux task, free from the interference of other tasks.
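For illustration, the interface above can be driven from user space like this (a
minimal sketch; the path assumes tracefs is mounted at /sys/kernel/tracing and the
helper name is made up):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write one option name (or its NO_-prefixed form) to osnoise/options. */
static int osnoise_set_option(const char *opt)
{
        int fd = open("/sys/kernel/tracing/osnoise/options", O_WRONLY);
        ssize_t n;

        if (fd < 0)
                return -1;
        n = write(fd, opt, strlen(opt));
        close(fd);
        return n < 0 ? -1 : 0;
}

int main(void)
{
        /* Capture a vmcore if the tracer hits its stop condition. */
        osnoise_set_option("PANIC_ON_STOP");
        /* Measure an existing task instead of the dispatched workload. */
        osnoise_set_option("NO_OSNOISE_WORKLOAD");
        return 0;
}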


@ -8528,6 +8528,9 @@ FUNCTION HOOKS (FTRACE)
M: Steven Rostedt <rostedt@goodmis.org>
M: Masami Hiramatsu <mhiramat@kernel.org>
R: Mark Rutland <mark.rutland@arm.com>
L: linux-kernel@vger.kernel.org
L: linux-trace-kernel@vger.kernel.org
Q: https://patchwork.kernel.org/project/linux-trace-kernel/list/
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
F: Documentation/trace/ftrace*
@ -11606,6 +11609,9 @@ M: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
M: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
M: "David S. Miller" <davem@davemloft.net>
M: Masami Hiramatsu <mhiramat@kernel.org>
L: linux-kernel@vger.kernel.org
L: linux-trace-kernel@vger.kernel.org
Q: https://patchwork.kernel.org/project/linux-trace-kernel/list/
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
F: Documentation/trace/kprobes.rst
@ -21079,6 +21085,9 @@ F: drivers/hwmon/pmbus/tps546d24.c
TRACING
M: Steven Rostedt <rostedt@goodmis.org>
M: Masami Hiramatsu <mhiramat@kernel.org>
L: linux-kernel@vger.kernel.org
L: linux-trace-kernel@vger.kernel.org
Q: https://patchwork.kernel.org/project/linux-trace-kernel/list/
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
F: Documentation/trace/*


@ -221,7 +221,9 @@ void ftrace_replace_code(int enable)
ret = ftrace_verify_code(rec->ip, old);
if (ret) {
ftrace_expected = old;
ftrace_bug(ret, rec);
ftrace_expected = NULL;
return;
}
}
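/*
 * Note (added for clarity; not part of the patch): with ftrace_expected set,
 * ftrace_bug() can print the instruction bytes that were expected at rec->ip
 * next to the bytes actually found there, instead of only the "actual" dump,
 * which makes failed text modifications easier to diagnose.
 */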


@ -62,7 +62,13 @@ struct kmmio_context {
int active;
};
static DEFINE_SPINLOCK(kmmio_lock);
/*
* The kmmio_lock is taken in int3 context, which is treated as NMI context.
* This causes lockdep to complain about it being in both NMI and normal
* context. Hide it from lockdep, as it should not have any other locks
* taken under it, and this is only enabled for debugging mmio anyway.
*/
static arch_spinlock_t kmmio_lock = __ARCH_SPIN_LOCK_UNLOCKED;
/* Protected by kmmio_lock */
unsigned int kmmio_count;
@ -240,15 +246,14 @@ int kmmio_handler(struct pt_regs *regs, unsigned long addr)
page_base &= page_level_mask(l);
/*
* Preemption is now disabled to prevent process switch during
* single stepping. We can only handle one active kmmio trace
* Hold the RCU read lock over single stepping to avoid looking
* up the probe and kmmio_fault_page again. The rcu_read_lock_sched()
* also disables preemption and prevents process switch during
* the single stepping. We can only handle one active kmmio trace
* per cpu, so ensure that we finish it before something else
* gets to run. We also hold the RCU read lock over single
* stepping to avoid looking up the probe and kmmio_fault_page
* again.
* gets to run.
*/
preempt_disable();
rcu_read_lock();
rcu_read_lock_sched_notrace();
faultpage = get_kmmio_fault_page(page_base);
if (!faultpage) {
@ -317,8 +322,7 @@ int kmmio_handler(struct pt_regs *regs, unsigned long addr)
return 1; /* fault handled */
no_kmmio:
rcu_read_unlock();
preempt_enable_no_resched();
rcu_read_unlock_sched_notrace();
return ret;
}
@ -346,10 +350,10 @@ static int post_kmmio_handler(unsigned long condition, struct pt_regs *regs)
ctx->probe->post_handler(ctx->probe, condition, regs);
/* Prevent racing against release_kmmio_fault_page(). */
spin_lock(&kmmio_lock);
arch_spin_lock(&kmmio_lock);
if (ctx->fpage->count)
arm_kmmio_fault_page(ctx->fpage);
spin_unlock(&kmmio_lock);
arch_spin_unlock(&kmmio_lock);
regs->flags &= ~X86_EFLAGS_TF;
regs->flags |= ctx->saved_flags;
@ -357,8 +361,7 @@ static int post_kmmio_handler(unsigned long condition, struct pt_regs *regs)
/* These were acquired in kmmio_handler(). */
ctx->active--;
BUG_ON(ctx->active);
rcu_read_unlock();
preempt_enable_no_resched();
rcu_read_unlock_sched_notrace();
/*
* if somebody else is singlestepping across a probe point, flags
@ -440,7 +443,8 @@ int register_kmmio_probe(struct kmmio_probe *p)
unsigned int l;
pte_t *pte;
spin_lock_irqsave(&kmmio_lock, flags);
local_irq_save(flags);
arch_spin_lock(&kmmio_lock);
if (get_kmmio_probe(addr)) {
ret = -EEXIST;
goto out;
@ -460,7 +464,9 @@ int register_kmmio_probe(struct kmmio_probe *p)
size += page_level_size(l);
}
out:
spin_unlock_irqrestore(&kmmio_lock, flags);
arch_spin_unlock(&kmmio_lock);
local_irq_restore(flags);
/*
* XXX: What should I do here?
* Here was a call to global_flush_tlb(), but it does not exist
@ -494,7 +500,8 @@ static void remove_kmmio_fault_pages(struct rcu_head *head)
struct kmmio_fault_page **prevp = &dr->release_list;
unsigned long flags;
spin_lock_irqsave(&kmmio_lock, flags);
local_irq_save(flags);
arch_spin_lock(&kmmio_lock);
while (f) {
if (!f->count) {
list_del_rcu(&f->list);
@ -506,7 +513,8 @@ static void remove_kmmio_fault_pages(struct rcu_head *head)
}
f = *prevp;
}
spin_unlock_irqrestore(&kmmio_lock, flags);
arch_spin_unlock(&kmmio_lock);
local_irq_restore(flags);
/* This is the real RCU destroy call. */
call_rcu(&dr->rcu, rcu_free_kmmio_fault_pages);
@ -540,14 +548,16 @@ void unregister_kmmio_probe(struct kmmio_probe *p)
if (!pte)
return;
spin_lock_irqsave(&kmmio_lock, flags);
local_irq_save(flags);
arch_spin_lock(&kmmio_lock);
while (size < size_lim) {
release_kmmio_fault_page(addr + size, &release_list);
size += page_level_size(l);
}
list_del_rcu(&p->list);
kmmio_count--;
spin_unlock_irqrestore(&kmmio_lock, flags);
arch_spin_unlock(&kmmio_lock);
local_irq_restore(flags);
if (!release_list)
return;


@ -113,8 +113,7 @@ void ring_buffer_change_overwrite(struct trace_buffer *buffer, int val);
struct ring_buffer_event *ring_buffer_lock_reserve(struct trace_buffer *buffer,
unsigned long length);
int ring_buffer_unlock_commit(struct trace_buffer *buffer,
struct ring_buffer_event *event);
int ring_buffer_unlock_commit(struct trace_buffer *buffer);
int ring_buffer_write(struct trace_buffer *buffer,
unsigned long length, void *data);


@ -136,7 +136,6 @@ struct trace_event_functions {
struct trace_event {
struct hlist_node node;
struct list_head list;
int type;
struct trace_event_functions *funcs;
};
@ -235,7 +234,8 @@ void tracing_record_taskinfo_sched_switch(struct task_struct *prev,
void tracing_record_cmdline(struct task_struct *task);
void tracing_record_tgid(struct task_struct *task);
int trace_output_call(struct trace_iterator *iter, char *name, char *fmt, ...);
int trace_output_call(struct trace_iterator *iter, char *name, char *fmt, ...)
__printf(3, 4);
struct event_filter;


@ -97,7 +97,8 @@ extern int trace_seq_hex_dump(struct trace_seq *s, const char *prefix_str,
const void *buf, size_t len, bool ascii);
#else /* CONFIG_TRACING */
static inline void trace_seq_printf(struct trace_seq *s, const char *fmt, ...)
static inline __printf(2, 3)
void trace_seq_printf(struct trace_seq *s, const char *fmt, ...)
{
}
static inline void


@ -21,6 +21,9 @@
#undef __get_bitmask
#define __get_bitmask(field) (char *)__get_dynamic_array(field)
#undef __get_cpumask
#define __get_cpumask(field) (char *)__get_dynamic_array(field)
#undef __get_sockaddr
#define __get_sockaddr(field) ((struct sockaddr *)__get_dynamic_array(field))
@ -40,6 +43,9 @@
#undef __get_rel_bitmask
#define __get_rel_bitmask(field) (char *)__get_rel_dynamic_array(field)
#undef __get_rel_cpumask
#define __get_rel_cpumask(field) (char *)__get_rel_dynamic_array(field)
#undef __get_rel_sockaddr
#define __get_rel_sockaddr(field) ((struct sockaddr *)__get_rel_dynamic_array(field))


@ -21,6 +21,9 @@
#undef __get_bitmask
#define __get_bitmask(field) (char *)__get_dynamic_array(field)
#undef __get_cpumask
#define __get_cpumask(field) (char *)__get_dynamic_array(field)
#undef __get_sockaddr
#define __get_sockaddr(field) ((struct sockaddr *)__get_dynamic_array(field))
@ -41,6 +44,9 @@
#undef __get_rel_bitmask
#define __get_rel_bitmask(field) (char *)__get_rel_dynamic_array(field)
#undef __get_rel_cpumask
#define __get_rel_cpumask(field) (char *)__get_rel_dynamic_array(field)
#undef __get_rel_sockaddr
#define __get_rel_sockaddr(field) ((struct sockaddr *)__get_rel_dynamic_array(field))


@ -32,6 +32,9 @@
#undef __bitmask
#define __bitmask(item, nr_bits) __dynamic_array(char, item, -1)
#undef __cpumask
#define __cpumask(item) __dynamic_array(char, item, -1)
#undef __sockaddr
#define __sockaddr(field, len) __dynamic_array(u8, field, len)
@ -47,6 +50,9 @@
#undef __rel_bitmask
#define __rel_bitmask(item, nr_bits) __rel_dynamic_array(char, item, -1)
#undef __rel_cpumask
#define __rel_cpumask(item) __rel_dynamic_array(char, item, -1)
#undef __rel_sockaddr
#define __rel_sockaddr(field, len) __rel_dynamic_array(u8, field, len)


@ -38,6 +38,9 @@
#undef __bitmask
#define __bitmask(item, nr_bits) __dynamic_array(unsigned long, item, -1)
#undef __cpumask
#define __cpumask(item) __dynamic_array(unsigned long, item, -1)
#undef __sockaddr
#define __sockaddr(field, len) __dynamic_array(u8, field, len)
@ -53,5 +56,8 @@
#undef __rel_bitmask
#define __rel_bitmask(item, nr_bits) __rel_dynamic_array(unsigned long, item, -1)
#undef __rel_cpumask
#define __rel_cpumask(item) __rel_dynamic_array(unsigned long, item, -1)
#undef __rel_sockaddr
#define __rel_sockaddr(field, len) __rel_dynamic_array(u8, field, len)


@ -42,6 +42,9 @@
trace_print_bitmask_seq(p, __bitmask, __bitmask_size); \
})
#undef __get_cpumask
#define __get_cpumask(field) __get_bitmask(field)
#undef __get_rel_bitmask
#define __get_rel_bitmask(field) \
({ \
@ -51,6 +54,9 @@
trace_print_bitmask_seq(p, __bitmask, __bitmask_size); \
})
#undef __get_rel_cpumask
#define __get_rel_cpumask(field) __get_rel_bitmask(field)
#undef __get_sockaddr
#define __get_sockaddr(field) ((struct sockaddr *)__get_dynamic_array(field))


@ -46,6 +46,12 @@
#undef __bitmask
#define __bitmask(item, nr_bits) __dynamic_array(unsigned long, item, -1)
#undef __cpumask
#define __cpumask(item) { \
.type = "__data_loc cpumask_t", .name = #item, \
.size = 4, .align = 4, \
.is_signed = 0, .filter_type = FILTER_OTHER },
#undef __sockaddr
#define __sockaddr(field, len) __dynamic_array(u8, field, len)
@ -64,5 +70,11 @@
#undef __rel_bitmask
#define __rel_bitmask(item, nr_bits) __rel_dynamic_array(unsigned long, item, -1)
#undef __rel_cpumask
#define __rel_cpumask(item) { \
.type = "__rel_loc cpumask_t", .name = #item, \
.size = 4, .align = 4, \
.is_signed = 0, .filter_type = FILTER_OTHER },
#undef __rel_sockaddr
#define __rel_sockaddr(field, len) __rel_dynamic_array(u8, field, len)


@ -82,10 +82,16 @@
#define __bitmask(item, nr_bits) __dynamic_array(unsigned long, item, \
__bitmask_size_in_longs(nr_bits))
#undef __cpumask
#define __cpumask(item) __bitmask(item, nr_cpumask_bits)
#undef __rel_bitmask
#define __rel_bitmask(item, nr_bits) __rel_dynamic_array(unsigned long, item, \
__bitmask_size_in_longs(nr_bits))
#undef __rel_cpumask
#define __rel_cpumask(item) __rel_bitmask(item, nr_cpumask_bits)
#undef __sockaddr
#define __sockaddr(field, len) __dynamic_array(u8, field, len)


@ -57,6 +57,16 @@
#define __assign_bitmask(dst, src, nr_bits) \
memcpy(__get_bitmask(dst), (src), __bitmask_size_in_bytes(nr_bits))
#undef __cpumask
#define __cpumask(item) __dynamic_array(unsigned long, item, -1)
#undef __get_cpumask
#define __get_cpumask(field) (char *)__get_dynamic_array(field)
#undef __assign_cpumask
#define __assign_cpumask(dst, src) \
memcpy(__get_cpumask(dst), (src), __bitmask_size_in_bytes(nr_cpumask_bits))
#undef __sockaddr
#define __sockaddr(field, len) __dynamic_array(u8, field, len)
@ -98,6 +108,16 @@
#define __assign_rel_bitmask(dst, src, nr_bits) \
memcpy(__get_rel_bitmask(dst), (src), __bitmask_size_in_bytes(nr_bits))
#undef __rel_cpumask
#define __rel_cpumask(item) __rel_dynamic_array(unsigned long, item, -1)
#undef __get_rel_cpumask
#define __get_rel_cpumask(field) (char *)__get_rel_dynamic_array(field)
#undef __assign_rel_cpumask
#define __assign_rel_cpumask(dst, src) \
memcpy(__get_rel_cpumask(dst), (src), __bitmask_size_in_bytes(nr_cpumask_bits))
#undef __rel_sockaddr
#define __rel_sockaddr(field, len) __rel_dynamic_array(u8, field, len)
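Putting the new helpers together, a trace event could carry a cpumask roughly like
this (a minimal sketch modeled on the __bitmask pattern; the event name and arguments
are invented, and the usual trace header boilerplate is omitted):

TRACE_EVENT(foo_mask_update,

        TP_PROTO(const struct cpumask *mask),

        TP_ARGS(mask),

        TP_STRUCT__entry(
                __cpumask(cpus)
        ),

        TP_fast_assign(
                __assign_cpumask(cpus, cpumask_bits(mask));
        ),

        TP_printk("cpus=%s", __get_cpumask(cpus))
);

User space then sees the field typed as "__data_loc cpumask_t" rather than a bare
unsigned long array, so tools can format it as a CPU list.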


@ -13,11 +13,13 @@
#undef __get_dynamic_array_len
#undef __get_str
#undef __get_bitmask
#undef __get_cpumask
#undef __get_sockaddr
#undef __get_rel_dynamic_array
#undef __get_rel_dynamic_array_len
#undef __get_rel_str
#undef __get_rel_bitmask
#undef __get_rel_cpumask
#undef __get_rel_sockaddr
#undef __print_array
#undef __print_hex_dump


@ -375,6 +375,7 @@ config SCHED_TRACER
config HWLAT_TRACER
bool "Tracer to detect hardware latencies (like SMIs)"
select GENERIC_TRACER
select TRACER_MAX_TRACE
help
This tracer, when enabled will create one or more kernel threads,
depending on what the cpumask file is set to, which each thread
@ -410,6 +411,7 @@ config HWLAT_TRACER
config OSNOISE_TRACER
bool "OS Noise tracer"
select GENERIC_TRACER
select TRACER_MAX_TRACE
help
In the context of high-performance computing (HPC), the Operating
System Noise (osnoise) refers to the interference experienced by an


@ -163,7 +163,7 @@ static void ftrace_sync_ipi(void *data)
static ftrace_func_t ftrace_ops_get_list_func(struct ftrace_ops *ops)
{
/*
* If this is a dynamic, RCU, or per CPU ops, or we force list func,
* If this is a dynamic or RCU ops, or we force list func,
* then it needs to call the list anyway.
*/
if (ops->flags & (FTRACE_OPS_FL_DYNAMIC | FTRACE_OPS_FL_RCU) ||
@ -2762,6 +2762,19 @@ void __weak ftrace_arch_code_modify_post_process(void)
{
}
static int update_ftrace_func(ftrace_func_t func)
{
static ftrace_func_t save_func;
/* Avoid updating if it hasn't changed */
if (func == save_func)
return 0;
save_func = func;
return ftrace_update_ftrace_func(func);
}
void ftrace_modify_all_code(int command)
{
int update = command & FTRACE_UPDATE_TRACE_FUNC;
@ -2782,7 +2795,7 @@ void ftrace_modify_all_code(int command)
* traced.
*/
if (update) {
err = ftrace_update_ftrace_func(ftrace_ops_list_func);
err = update_ftrace_func(ftrace_ops_list_func);
if (FTRACE_WARN_ON(err))
return;
}
@ -2798,7 +2811,7 @@ void ftrace_modify_all_code(int command)
/* If irqs are disabled, we are in stop machine */
if (!irqs_disabled())
smp_call_function(ftrace_sync_ipi, NULL, 1);
err = ftrace_update_ftrace_func(ftrace_trace_function);
err = update_ftrace_func(ftrace_trace_function);
if (FTRACE_WARN_ON(err))
return;
}
@ -3070,8 +3083,6 @@ out:
/*
* Dynamic ops may be freed, we must make sure that all
* callers are done before leaving this function.
* The same goes for freeing the per_cpu data of the per_cpu
* ops.
*/
if (ops->flags & FTRACE_OPS_FL_DYNAMIC) {
/*
@ -4192,6 +4203,7 @@ match_records(struct ftrace_hash *hash, char *func, int len, char *mod)
}
found = 1;
}
cond_resched();
} while_for_each_ftrace_rec();
out_unlock:
mutex_unlock(&ftrace_lock);
@ -7518,8 +7530,6 @@ __ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
/*
* Check the following for each ops before calling their func:
* if RCU flag is set, then rcu_is_watching() must be true
* if PER_CPU is set, then ftrace_function_local_disable()
* must be false
* Otherwise test if the ip matches the ops filter
*
* If any of the above fails then the op->func() is not executed.
@ -7569,8 +7579,8 @@ NOKPROBE_SYMBOL(arch_ftrace_ops_list_func);
/*
* If there's only one function registered but it does not support
* recursion, needs RCU protection and/or requires per cpu handling, then
* this function will be called by the mcount trampoline.
* recursion, needs RCU protection, then this function will be called
* by the mcount trampoline.
*/
static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)


@ -2062,8 +2062,10 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
{
struct list_head *pages = &cpu_buffer->new_pages;
int retries, success;
unsigned long flags;
raw_spin_lock_irq(&cpu_buffer->reader_lock);
/* Can be called at early boot up, where interrupts must not be enabled */
raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
/*
* We are holding the reader lock, so the reader page won't be swapped
* in the ring buffer. Now we are racing with the writer trying to
@ -2120,7 +2122,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
* tracing
*/
RB_WARN_ON(cpu_buffer, !success);
raw_spin_unlock_irq(&cpu_buffer->reader_lock);
raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
/* free pages if they weren't inserted */
if (!success) {
@ -2248,8 +2250,16 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
rb_update_pages(cpu_buffer);
cpu_buffer->nr_pages_to_update = 0;
} else {
schedule_work_on(cpu,
&cpu_buffer->update_pages_work);
/* Run directly if possible. */
migrate_disable();
if (cpu != smp_processor_id()) {
migrate_enable();
schedule_work_on(cpu,
&cpu_buffer->update_pages_work);
} else {
update_pages_handler(&cpu_buffer->update_pages_work);
migrate_enable();
}
}
}
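/*
 * Note (added for clarity; not from the patch): running the update in place
 * when we are already on the target CPU lets a resize issued very early in
 * boot complete without queueing work and waiting for it, which appears to
 * be the point of the "Handle resize in early boot up" change in the shortlog.
 */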
@ -2298,9 +2308,17 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
if (!cpu_online(cpu_id))
rb_update_pages(cpu_buffer);
else {
schedule_work_on(cpu_id,
&cpu_buffer->update_pages_work);
wait_for_completion(&cpu_buffer->update_done);
/* Run directly if possible. */
migrate_disable();
if (cpu_id == smp_processor_id()) {
rb_update_pages(cpu_buffer);
migrate_enable();
} else {
migrate_enable();
schedule_work_on(cpu_id,
&cpu_buffer->update_pages_work);
wait_for_completion(&cpu_buffer->update_done);
}
}
cpu_buffer->nr_pages_to_update = 0;
@ -3180,8 +3198,7 @@ static inline void rb_event_discard(struct ring_buffer_event *event)
event->time_delta = 1;
}
static void rb_commit(struct ring_buffer_per_cpu *cpu_buffer,
struct ring_buffer_event *event)
static void rb_commit(struct ring_buffer_per_cpu *cpu_buffer)
{
local_inc(&cpu_buffer->entries);
rb_end_commit(cpu_buffer);
@ -3383,15 +3400,14 @@ void ring_buffer_nest_end(struct trace_buffer *buffer)
*
* Must be paired with ring_buffer_lock_reserve.
*/
int ring_buffer_unlock_commit(struct trace_buffer *buffer,
struct ring_buffer_event *event)
int ring_buffer_unlock_commit(struct trace_buffer *buffer)
{
struct ring_buffer_per_cpu *cpu_buffer;
int cpu = raw_smp_processor_id();
cpu_buffer = buffer->buffers[cpu];
rb_commit(cpu_buffer, event);
rb_commit(cpu_buffer);
rb_wakeups(buffer, cpu_buffer);
@ -3977,7 +3993,7 @@ int ring_buffer_write(struct trace_buffer *buffer,
memcpy(body, data, length);
rb_commit(cpu_buffer, event);
rb_commit(cpu_buffer);
rb_wakeups(buffer, cpu_buffer);
@ -5998,7 +6014,7 @@ static __init int rb_write_something(struct rb_test_data *data, bool nested)
}
out:
ring_buffer_unlock_commit(data->buffer, event);
ring_buffer_unlock_commit(data->buffer);
return 0;
}


@ -258,7 +258,7 @@ static void ring_buffer_producer(void)
hit++;
entry = ring_buffer_event_data(event);
*entry = smp_processor_id();
ring_buffer_unlock_commit(buffer, event);
ring_buffer_unlock_commit(buffer);
}
}
end_time = ktime_get();


@ -19,7 +19,6 @@
#include <linux/kallsyms.h>
#include <linux/security.h>
#include <linux/seq_file.h>
#include <linux/notifier.h>
#include <linux/irqflags.h>
#include <linux/debugfs.h>
#include <linux/tracefs.h>
@ -85,7 +84,7 @@ void __init disable_tracing_selftest(const char *reason)
#endif
/* Pipe tracepoints to printk */
struct trace_iterator *tracepoint_print_iter;
static struct trace_iterator *tracepoint_print_iter;
int tracepoint_printk;
static bool tracepoint_printk_stop_on_boot __initdata;
static DEFINE_STATIC_KEY_FALSE(tracepoint_printk_key);
@ -999,7 +998,7 @@ __buffer_unlock_commit(struct trace_buffer *buffer, struct ring_buffer_event *ev
/* ring_buffer_unlock_commit() enables preemption */
preempt_enable_notrace();
} else
ring_buffer_unlock_commit(buffer, event);
ring_buffer_unlock_commit(buffer);
}
/**
@ -1421,6 +1420,7 @@ int tracing_snapshot_cond_disable(struct trace_array *tr)
return false;
}
EXPORT_SYMBOL_GPL(tracing_snapshot_cond_disable);
#define free_snapshot(tr) do { } while (0)
#endif /* CONFIG_TRACER_SNAPSHOT */
void tracer_tracing_off(struct trace_array *tr)
@ -1692,6 +1692,8 @@ static ssize_t trace_seq_to_buffer(struct trace_seq *s, void *buf, size_t cnt)
}
unsigned long __read_mostly tracing_thresh;
#ifdef CONFIG_TRACER_MAX_TRACE
static const struct file_operations tracing_max_lat_fops;
#ifdef LATENCY_FS_NOTIFY
@ -1748,18 +1750,14 @@ void latency_fsnotify(struct trace_array *tr)
irq_work_queue(&tr->fsnotify_irqwork);
}
#elif defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER) \
|| defined(CONFIG_OSNOISE_TRACER)
#else /* !LATENCY_FS_NOTIFY */
#define trace_create_maxlat_file(tr, d_tracer) \
trace_create_file("tracing_max_latency", TRACE_MODE_WRITE, \
d_tracer, &tr->max_latency, &tracing_max_lat_fops)
#else
#define trace_create_maxlat_file(tr, d_tracer) do { } while (0)
#endif
#ifdef CONFIG_TRACER_MAX_TRACE
/*
* Copy the new maximum trace into the separate maximum-trace
* structure. (this way the maximum trace is permanently saved,
@ -1834,14 +1832,15 @@ update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu,
ring_buffer_record_off(tr->max_buffer.buffer);
#ifdef CONFIG_TRACER_SNAPSHOT
if (tr->cond_snapshot && !tr->cond_snapshot->update(tr, cond_data))
goto out_unlock;
if (tr->cond_snapshot && !tr->cond_snapshot->update(tr, cond_data)) {
arch_spin_unlock(&tr->max_lock);
return;
}
#endif
swap(tr->array_buffer.buffer, tr->max_buffer.buffer);
__update_max_tr(tr, tsk, cpu);
out_unlock:
arch_spin_unlock(&tr->max_lock);
}
@ -1888,6 +1887,7 @@ update_max_tr_single(struct trace_array *tr, struct task_struct *tsk, int cpu)
__update_max_tr(tr, tsk, cpu);
arch_spin_unlock(&tr->max_lock);
}
#endif /* CONFIG_TRACER_MAX_TRACE */
static int wait_on_pipe(struct trace_iterator *iter, int full)
@ -5678,6 +5678,7 @@ static const char readme_msg[] =
"\t [:size=#entries]\n"
"\t [:pause][:continue][:clear]\n"
"\t [:name=histname1]\n"
"\t [:nohitcount]\n"
"\t [:<handler>.<action>]\n"
"\t [if <filter>]\n\n"
"\t Note, special fields can be used as well:\n"
@ -5724,7 +5725,9 @@ static const char readme_msg[] =
"\t .syscall display a syscall id as a syscall name\n"
"\t .log2 display log2 value rather than raw number\n"
"\t .buckets=size display values in groups of size rather than raw number\n"
"\t .usecs display a common_timestamp in microseconds\n\n"
"\t .usecs display a common_timestamp in microseconds\n"
"\t .percent display a number of percentage value\n"
"\t .graph display a bar-graph of a value\n\n"
"\t The 'pause' parameter can be used to pause an existing hist\n"
"\t trigger or to start a hist trigger but not log any events\n"
"\t until told to do so. 'continue' can be used to start or\n"
@ -5732,6 +5735,8 @@ static const char readme_msg[] =
"\t The 'clear' parameter will clear the contents of a running\n"
"\t hist trigger and leave its current paused/active state\n"
"\t unchanged.\n\n"
"\t The 'nohitcount' (or NOHC) parameter will suppress display of\n"
"\t raw hitcount in the histogram.\n\n"
"\t The enable_hist and disable_hist triggers can be used to\n"
"\t have one event conditionally start and stop another event's\n"
"\t already-attached hist trigger. The syntax is analogous to\n"
@ -6572,7 +6577,7 @@ out:
return ret;
}
#if defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)
#ifdef CONFIG_TRACER_MAX_TRACE
static ssize_t
tracing_max_lat_read(struct file *filp, char __user *ubuf,
@ -6796,7 +6801,20 @@ waitagain:
ret = print_trace_line(iter);
if (ret == TRACE_TYPE_PARTIAL_LINE) {
/* don't print partial lines */
/*
* If one print_trace_line() fills the entire trace_seq in one shot,
* trace_seq_to_user() will return -EBUSY because save_len == 0.
* In this case, we need to consume it; otherwise, the loop will peek
* at this event next time, resulting in an infinite loop.
*/
if (save_len == 0) {
iter->seq.full = 0;
trace_seq_puts(&iter->seq, "[LINE TOO BIG]\n");
trace_consume(iter);
break;
}
/* In other cases, don't print partial lines */
iter->seq.seq.len = save_len;
break;
}
@ -7587,7 +7605,7 @@ static const struct file_operations tracing_thresh_fops = {
.llseek = generic_file_llseek,
};
#if defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)
#ifdef CONFIG_TRACER_MAX_TRACE
static const struct file_operations tracing_max_lat_fops = {
.open = tracing_open_generic,
.read = tracing_max_lat_read,
@ -9601,7 +9619,9 @@ init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer)
create_trace_options_dir(tr);
#ifdef CONFIG_TRACER_MAX_TRACE
trace_create_maxlat_file(tr, d_tracer);
#endif
if (ftrace_create_function_files(tr, d_tracer))
MEM_FAIL(1, "Could not allocate function filter files");
@ -9855,40 +9875,40 @@ static __init int tracer_init_tracefs(void)
fs_initcall(tracer_init_tracefs);
static int trace_panic_handler(struct notifier_block *this,
unsigned long event, void *unused)
{
if (ftrace_dump_on_oops)
ftrace_dump(ftrace_dump_on_oops);
return NOTIFY_OK;
}
static int trace_die_panic_handler(struct notifier_block *self,
unsigned long ev, void *unused);
static struct notifier_block trace_panic_notifier = {
.notifier_call = trace_panic_handler,
.next = NULL,
.priority = 150 /* priority: INT_MAX >= x >= 0 */
.notifier_call = trace_die_panic_handler,
.priority = INT_MAX - 1,
};
static int trace_die_handler(struct notifier_block *self,
unsigned long val,
void *data)
{
switch (val) {
case DIE_OOPS:
if (ftrace_dump_on_oops)
ftrace_dump(ftrace_dump_on_oops);
break;
default:
break;
}
return NOTIFY_OK;
}
static struct notifier_block trace_die_notifier = {
.notifier_call = trace_die_handler,
.priority = 200
.notifier_call = trace_die_panic_handler,
.priority = INT_MAX - 1,
};
/*
* The idea is to execute the following die/panic callback early, in order
* to avoid showing irrelevant information in the trace (like other panic
* notifier functions); we are the 2nd to run, after hung_task/rcu_stall
* warnings get disabled (to prevent potential log flooding).
*/
static int trace_die_panic_handler(struct notifier_block *self,
unsigned long ev, void *unused)
{
if (!ftrace_dump_on_oops)
return NOTIFY_DONE;
/* The die notifier requires DIE_OOPS to trigger */
if (self == &trace_die_notifier && ev != DIE_OOPS)
return NOTIFY_DONE;
ftrace_dump(ftrace_dump_on_oops);
return NOTIFY_DONE;
}
/*
* printk is set to max of 1024, we really don't need it that big.
* Nothing should be printing 1000 characters anyway.


@ -308,8 +308,7 @@ struct trace_array {
struct array_buffer max_buffer;
bool allocated_snapshot;
#endif
#if defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER) \
|| defined(CONFIG_OSNOISE_TRACER)
#ifdef CONFIG_TRACER_MAX_TRACE
unsigned long max_latency;
#ifdef CONFIG_FSNOTIFY
struct dentry *d_max_latency;
@ -615,7 +614,7 @@ void trace_buffer_unlock_commit_nostack(struct trace_buffer *buffer,
bool trace_is_tracepoint_string(const char *str);
const char *trace_event_format(struct trace_iterator *iter, const char *fmt);
void trace_check_vprintf(struct trace_iterator *iter, const char *fmt,
va_list ap);
va_list ap) __printf(2, 0);
int trace_empty(struct trace_iterator *iter);
@ -688,12 +687,11 @@ void update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu,
void *cond_data);
void update_max_tr_single(struct trace_array *tr,
struct task_struct *tsk, int cpu);
#endif /* CONFIG_TRACER_MAX_TRACE */
#if (defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER) \
|| defined(CONFIG_OSNOISE_TRACER)) && defined(CONFIG_FSNOTIFY)
#ifdef CONFIG_FSNOTIFY
#define LATENCY_FS_NOTIFY
#endif
#endif /* CONFIG_TRACER_MAX_TRACE */
#ifdef LATENCY_FS_NOTIFY
void latency_fsnotify(struct trace_array *tr);
@ -1942,8 +1940,6 @@ static inline void tracer_hardirqs_on(unsigned long a0, unsigned long a1) { }
static inline void tracer_hardirqs_off(unsigned long a0, unsigned long a1) { }
#endif
extern struct trace_iterator *tracepoint_print_iter;
/*
* Reset the state of the trace_iterator so that it can read consumed data.
* Normally, the trace_iterator is used for reading the data when it is not
@ -1956,17 +1952,30 @@ static __always_inline void trace_iterator_reset(struct trace_iterator *iter)
}
/* Check the name is good for event/group/fields */
static inline bool is_good_name(const char *name)
static inline bool __is_good_name(const char *name, bool hash_ok)
{
if (!isalpha(*name) && *name != '_')
if (!isalpha(*name) && *name != '_' && (!hash_ok || *name != '-'))
return false;
while (*++name != '\0') {
if (!isalpha(*name) && !isdigit(*name) && *name != '_')
if (!isalpha(*name) && !isdigit(*name) && *name != '_' &&
(!hash_ok || *name != '-'))
return false;
}
return true;
}
/* Check the name is good for event/group/fields */
static inline bool is_good_name(const char *name)
{
return __is_good_name(name, false);
}
/* Check the name is good for system */
static inline bool is_good_system_name(const char *name)
{
return __is_good_name(name, true);
}
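/*
 * Examples (added for clarity; not from the patch): is_good_name("sched_switch")
 * and is_good_name("_map1") are true, is_good_name("9probe") is false, and a
 * hyphenated name such as "probe-libc" passes only is_good_system_name().
 */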
/* Convert certain expected symbols into '_' when generating event names */
static inline void sanitize_event_name(char *name)
{


@ -251,16 +251,12 @@ int perf_kprobe_init(struct perf_event *p_event, bool is_retprobe)
struct trace_event_call *tp_event;
if (p_event->attr.kprobe_func) {
func = kzalloc(KSYM_NAME_LEN, GFP_KERNEL);
if (!func)
return -ENOMEM;
ret = strncpy_from_user(
func, u64_to_user_ptr(p_event->attr.kprobe_func),
KSYM_NAME_LEN);
if (ret == KSYM_NAME_LEN)
ret = -E2BIG;
if (ret < 0)
goto out;
func = strndup_user(u64_to_user_ptr(p_event->attr.kprobe_func),
KSYM_NAME_LEN);
if (IS_ERR(func)) {
ret = PTR_ERR(func);
return (ret == -EINVAL) ? -E2BIG : ret;
}
if (func[0] == '\0') {
kfree(func);


@ -2796,6 +2796,42 @@ trace_create_new_event(struct trace_event_call *call,
return file;
}
#define MAX_BOOT_TRIGGERS 32
static struct boot_triggers {
const char *event;
char *trigger;
} bootup_triggers[MAX_BOOT_TRIGGERS];
static char bootup_trigger_buf[COMMAND_LINE_SIZE];
static int nr_boot_triggers;
static __init int setup_trace_triggers(char *str)
{
char *trigger;
char *buf;
int i;
strlcpy(bootup_trigger_buf, str, COMMAND_LINE_SIZE);
ring_buffer_expanded = true;
disable_tracing_selftest("running event triggers");
buf = bootup_trigger_buf;
for (i = 0; i < MAX_BOOT_TRIGGERS; i++) {
trigger = strsep(&buf, ",");
if (!trigger)
break;
bootup_triggers[i].event = strsep(&trigger, ".");
bootup_triggers[i].trigger = strsep(&trigger, ".");
if (!bootup_triggers[i].trigger)
break;
}
nr_boot_triggers = i;
return 1;
}
__setup("trace_trigger=", setup_trace_triggers);
/* Add an event to a trace directory */
static int
__trace_add_new_event(struct trace_event_call *call, struct trace_array *tr)
@ -2812,6 +2848,24 @@ __trace_add_new_event(struct trace_event_call *call, struct trace_array *tr)
return event_define_fields(call);
}
static void trace_early_triggers(struct trace_event_file *file, const char *name)
{
int ret;
int i;
for (i = 0; i < nr_boot_triggers; i++) {
if (strcmp(name, bootup_triggers[i].event))
continue;
mutex_lock(&event_mutex);
ret = trigger_process_regex(file, bootup_triggers[i].trigger);
mutex_unlock(&event_mutex);
if (ret)
pr_err("Failed to register trigger '%s' on event %s\n",
bootup_triggers[i].trigger,
bootup_triggers[i].event);
}
}
/*
* Just create a descriptor for early init. A descriptor is required
* for enabling events at boot. We want to enable events before
@ -2822,12 +2876,19 @@ __trace_early_add_new_event(struct trace_event_call *call,
struct trace_array *tr)
{
struct trace_event_file *file;
int ret;
file = trace_create_new_event(call, tr);
if (!file)
return -ENOMEM;
return event_define_fields(call);
ret = event_define_fields(call);
if (ret)
return ret;
trace_early_triggers(file, trace_event_name(call));
return 0;
}
struct ftrace_module_file_ops;
@ -3735,6 +3796,8 @@ static __init int event_trace_enable(void)
list_add(&call->list, &ftrace_events);
}
register_trigger_cmds();
/*
* We need the top trace array to have a working set of trace
* points at early init, before the debug files and directories
@ -3749,7 +3812,6 @@ static __init int event_trace_enable(void)
register_event_cmds();
register_trigger_cmds();
return 0;
}
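To make the boot-string handling above concrete, here is a small user-space mock (not
kernel code) that splits a trace_trigger= value the same way setup_trace_triggers()
does, first on ',' and then on '.'. The sample string extends the example from the
kernel-parameters documentation with a second, made-up trigger:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>

int main(void)
{
        char cmdline[] =
                "sched_switch.stacktrace if prev_state == 2,initcall_finish.traceoff";
        char *buf = cmdline;
        char *entry;

        while ((entry = strsep(&buf, ",")) != NULL) {
                const char *event = strsep(&entry, ".");
                const char *trigger = strsep(&entry, ".");

                /* prints: event=sched_switch trigger=stacktrace if prev_state == 2
                 *         event=initcall_finish trigger=traceoff */
                printf("event=%s trigger=%s\n", event, trigger);
        }
        return 0;
}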


@ -69,7 +69,8 @@
C(INVALID_STR_OPERAND, "String type can not be an operand in expression"), \
C(EXPECT_NUMBER, "Expecting numeric literal"), \
C(UNARY_MINUS_SUBEXPR, "Unary minus not supported in sub-expressions"), \
C(DIVISION_BY_ZERO, "Division by zero"),
C(DIVISION_BY_ZERO, "Division by zero"), \
C(NEED_NOHC_VAL, "Non-hitcount value is required for 'nohitcount'"),
#undef C
#define C(a, b) HIST_ERR_##a
@ -506,6 +507,8 @@ enum hist_field_flags {
HIST_FIELD_FL_ALIAS = 1 << 16,
HIST_FIELD_FL_BUCKET = 1 << 17,
HIST_FIELD_FL_CONST = 1 << 18,
HIST_FIELD_FL_PERCENT = 1 << 19,
HIST_FIELD_FL_GRAPH = 1 << 20,
};
struct var_defs {
@ -524,6 +527,7 @@ struct hist_trigger_attrs {
bool cont;
bool clear;
bool ts_in_usecs;
bool no_hitcount;
unsigned int map_bits;
char *assignment_str[TRACING_MAP_VARS_MAX];
@ -617,7 +621,7 @@ struct action_data {
* event param, and is passed to the synthetic event
* invocation.
*/
unsigned int var_ref_idx[TRACING_MAP_VARS_MAX];
unsigned int var_ref_idx[SYNTH_FIELDS_MAX];
struct synth_event *synth_event;
bool use_trace_keyword;
char *synth_event_name;
@ -1356,6 +1360,8 @@ static const char *hist_field_name(struct hist_field *field,
field_name = field->name;
} else if (field->flags & HIST_FIELD_FL_TIMESTAMP)
field_name = "common_timestamp";
else if (field->flags & HIST_FIELD_FL_HITCOUNT)
field_name = "hitcount";
if (field_name == NULL)
field_name = "";
@ -1546,7 +1552,10 @@ parse_hist_trigger_attrs(struct trace_array *tr, char *trigger_str)
ret = parse_assignment(tr, str, attrs);
if (ret)
goto free;
} else if (strcmp(str, "pause") == 0)
} else if (strcmp(str, "nohitcount") == 0 ||
strcmp(str, "NOHC") == 0)
attrs->no_hitcount = true;
else if (strcmp(str, "pause") == 0)
attrs->pause = true;
else if ((strcmp(str, "cont") == 0) ||
(strcmp(str, "continue") == 0))
@ -1705,6 +1714,10 @@ static const char *get_hist_field_flags(struct hist_field *hist_field)
flags_str = "buckets";
else if (hist_field->flags & HIST_FIELD_FL_TIMESTAMP_USECS)
flags_str = "usecs";
else if (hist_field->flags & HIST_FIELD_FL_PERCENT)
flags_str = "percent";
else if (hist_field->flags & HIST_FIELD_FL_GRAPH)
flags_str = "graph";
return flags_str;
}
@ -2173,7 +2186,9 @@ static struct hist_field *create_var_ref(struct hist_trigger_data *hist_data,
return ref_field;
}
}
/* Sanity check to avoid out-of-bound write on 'hist_data->var_refs' */
if (hist_data->n_var_refs >= TRACING_MAP_VARS_MAX)
return NULL;
ref_field = create_hist_field(var_field->hist_data, NULL, flags, NULL);
if (ref_field) {
if (init_var_ref(ref_field, var_field, system, event_name)) {
@ -2313,6 +2328,14 @@ parse_field(struct hist_trigger_data *hist_data, struct trace_event_file *file,
if (ret || !(*buckets))
goto error;
*flags |= HIST_FIELD_FL_BUCKET;
} else if (strncmp(modifier, "percent", 7) == 0) {
if (*flags & (HIST_FIELD_FL_VAR | HIST_FIELD_FL_KEY))
goto error;
*flags |= HIST_FIELD_FL_PERCENT;
} else if (strncmp(modifier, "graph", 5) == 0) {
if (*flags & (HIST_FIELD_FL_VAR | HIST_FIELD_FL_KEY))
goto error;
*flags |= HIST_FIELD_FL_GRAPH;
} else {
error:
hist_err(tr, HIST_ERR_BAD_FIELD_MODIFIER, errpos(modifier));
@ -2328,6 +2351,8 @@ parse_field(struct hist_trigger_data *hist_data, struct trace_event_file *file,
hist_data->attrs->ts_in_usecs = true;
} else if (strcmp(field_name, "common_cpu") == 0)
*flags |= HIST_FIELD_FL_CPU;
else if (strcmp(field_name, "hitcount") == 0)
*flags |= HIST_FIELD_FL_HITCOUNT;
else {
field = trace_find_event_field(file->event_call, field_name);
if (!field || !field->size) {
@ -3586,6 +3611,7 @@ static int parse_action_params(struct trace_array *tr, char *params,
while (params) {
if (data->n_params >= SYNTH_FIELDS_MAX) {
hist_err(tr, HIST_ERR_TOO_MANY_PARAMS, 0);
ret = -EINVAL;
goto out;
}
@ -3922,6 +3948,10 @@ static int trace_action_create(struct hist_trigger_data *hist_data,
lockdep_assert_held(&event_mutex);
/* Sanity check to avoid out-of-bound write on 'data->var_ref_idx' */
if (data->n_params > SYNTH_FIELDS_MAX)
return -EINVAL;
if (data->use_trace_keyword)
synth_event_name = data->synth_event_name;
else
@ -4328,8 +4358,8 @@ static int create_var_field(struct hist_trigger_data *hist_data,
static int create_val_fields(struct hist_trigger_data *hist_data,
struct trace_event_file *file)
{
unsigned int i, j = 1, n_hitcount = 0;
char *fields_str, *field_str;
unsigned int i, j = 1;
int ret;
ret = create_hitcount_val(hist_data);
@ -4346,8 +4376,10 @@ static int create_val_fields(struct hist_trigger_data *hist_data,
if (!field_str)
break;
if (strcmp(field_str, "hitcount") == 0)
continue;
if (strcmp(field_str, "hitcount") == 0) {
if (!n_hitcount++)
continue;
}
ret = create_val_field(hist_data, j++, file, field_str);
if (ret)
@ -4357,6 +4389,12 @@ static int create_val_fields(struct hist_trigger_data *hist_data,
if (fields_str && (strcmp(fields_str, "hitcount") != 0))
ret = -EINVAL;
out:
/* There is only raw hitcount but nohitcount suppresses it. */
if (j == 1 && hist_data->attrs->no_hitcount) {
hist_err(hist_data->event_file->tr, HIST_ERR_NEED_NOHC_VAL, 0);
ret = -ENOENT;
}
return ret;
}
@ -5285,33 +5323,101 @@ static void hist_trigger_print_key(struct seq_file *m,
seq_puts(m, "}");
}
/* Return the percentage of @val in @total, scaled by 100 (10000 == 100%) */
static inline unsigned int __get_percentage(u64 val, u64 total)
{
if (!total)
goto div0;
if (val < (U64_MAX / 10000))
return (unsigned int)div64_ul(val * 10000, total);
total = div64_u64(total, 10000);
if (!total)
goto div0;
return (unsigned int)div64_ul(val, total);
div0:
return val ? UINT_MAX : 0;
}
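/*
 * Worked example (added for clarity; not from the patch): the return value is
 * the percentage scaled by 100, e.g. __get_percentage(3, 8) == 3750, which
 * hist_trigger_print_val() prints as "37.50" via pc / 100 and pc % 100. The
 * second branch trades precision for safety when val * 10000 would overflow
 * a u64, and a zero total with a non-zero val yields UINT_MAX ("[ERROR]").
 */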
#define BAR_CHAR '#'
static inline const char *__fill_bar_str(char *buf, int size, u64 val, u64 max)
{
unsigned int len = __get_percentage(val, max);
int i;
if (len == UINT_MAX) {
snprintf(buf, size, "[ERROR]");
return buf;
}
len = len * size / 10000;
for (i = 0; i < len && i < size; i++)
buf[i] = BAR_CHAR;
while (i < size)
buf[i++] = ' ';
buf[size] = '\0';
return buf;
}
struct hist_val_stat {
u64 max;
u64 total;
};
static void hist_trigger_print_val(struct seq_file *m, unsigned int idx,
const char *field_name, unsigned long flags,
struct hist_val_stat *stats,
struct tracing_map_elt *elt)
{
u64 val = tracing_map_read_sum(elt, idx);
unsigned int pc;
char bar[21];
if (flags & HIST_FIELD_FL_PERCENT) {
pc = __get_percentage(val, stats[idx].total);
if (pc == UINT_MAX)
seq_printf(m, " %s (%%):[ERROR]", field_name);
else
seq_printf(m, " %s (%%): %3u.%02u", field_name,
pc / 100, pc % 100);
} else if (flags & HIST_FIELD_FL_GRAPH) {
seq_printf(m, " %s: %20s", field_name,
__fill_bar_str(bar, 20, val, stats[idx].max));
} else if (flags & HIST_FIELD_FL_HEX) {
seq_printf(m, " %s: %10llx", field_name, val);
} else {
seq_printf(m, " %s: %10llu", field_name, val);
}
}
static void hist_trigger_entry_print(struct seq_file *m,
struct hist_trigger_data *hist_data,
struct hist_val_stat *stats,
void *key,
struct tracing_map_elt *elt)
{
const char *field_name;
unsigned int i;
unsigned int i = HITCOUNT_IDX;
unsigned long flags;
hist_trigger_print_key(m, hist_data, key, elt);
seq_printf(m, " hitcount: %10llu",
tracing_map_read_sum(elt, HITCOUNT_IDX));
/* At first, show the raw hitcount if !nohitcount */
if (!hist_data->attrs->no_hitcount)
hist_trigger_print_val(m, i, "hitcount", 0, stats, elt);
for (i = 1; i < hist_data->n_vals; i++) {
field_name = hist_field_name(hist_data->fields[i], 0);
if (hist_data->fields[i]->flags & HIST_FIELD_FL_VAR ||
hist_data->fields[i]->flags & HIST_FIELD_FL_EXPR)
flags = hist_data->fields[i]->flags;
if (flags & HIST_FIELD_FL_VAR || flags & HIST_FIELD_FL_EXPR)
continue;
if (hist_data->fields[i]->flags & HIST_FIELD_FL_HEX) {
seq_printf(m, " %s: %10llx", field_name,
tracing_map_read_sum(elt, i));
} else {
seq_printf(m, " %s: %10llu", field_name,
tracing_map_read_sum(elt, i));
}
seq_puts(m, " ");
hist_trigger_print_val(m, i, field_name, flags, stats, elt);
}
print_actions(m, hist_data, elt);
@ -5324,7 +5430,9 @@ static int print_entries(struct seq_file *m,
{
struct tracing_map_sort_entry **sort_entries = NULL;
struct tracing_map *map = hist_data->map;
int i, n_entries;
int i, j, n_entries;
struct hist_val_stat *stats = NULL;
u64 val;
n_entries = tracing_map_sort_entries(map, hist_data->sort_keys,
hist_data->n_sort_keys,
@ -5332,11 +5440,34 @@ static int print_entries(struct seq_file *m,
if (n_entries < 0)
return n_entries;
/* Calculate the max and the total for each field if needed. */
for (j = 0; j < hist_data->n_vals; j++) {
if (!(hist_data->fields[j]->flags &
(HIST_FIELD_FL_PERCENT | HIST_FIELD_FL_GRAPH)))
continue;
if (!stats) {
stats = kcalloc(hist_data->n_vals, sizeof(*stats),
GFP_KERNEL);
if (!stats) {
n_entries = -ENOMEM;
goto out;
}
}
for (i = 0; i < n_entries; i++) {
val = tracing_map_read_sum(sort_entries[i]->elt, j);
stats[j].total += val;
if (stats[j].max < val)
stats[j].max = val;
}
}
for (i = 0; i < n_entries; i++)
hist_trigger_entry_print(m, hist_data,
hist_trigger_entry_print(m, hist_data, stats,
sort_entries[i]->key,
sort_entries[i]->elt);
kfree(stats);
out:
tracing_map_destroy_sort_entries(sort_entries, n_entries);
return n_entries;
@ -5726,6 +5857,7 @@ static int event_hist_trigger_print(struct seq_file *m,
struct hist_trigger_data *hist_data = data->private_data;
struct hist_field *field;
bool have_var = false;
bool show_val = false;
unsigned int i;
seq_puts(m, HIST_PREFIX);
@ -5756,12 +5888,16 @@ static int event_hist_trigger_print(struct seq_file *m,
continue;
}
if (i == HITCOUNT_IDX)
if (i == HITCOUNT_IDX) {
if (hist_data->attrs->no_hitcount)
continue;
seq_puts(m, "hitcount");
else {
seq_puts(m, ",");
} else {
if (show_val)
seq_puts(m, ",");
hist_field_print(m, field);
}
show_val = true;
}
if (have_var) {
@ -5812,6 +5948,8 @@ static int event_hist_trigger_print(struct seq_file *m,
seq_printf(m, ":size=%u", (1 << hist_data->map->map_bits));
if (hist_data->enable_timestamps)
seq_printf(m, ":clock=%s", hist_data->attrs->clock);
if (hist_data->attrs->no_hitcount)
seq_puts(m, ":nohitcount");
print_actions_spec(m, hist_data);
@ -6438,7 +6576,7 @@ enable:
if (se)
se->ref++;
out:
if (ret == 0)
if (ret == 0 && glob[0])
hist_err_clear();
return ret;


@ -1282,12 +1282,12 @@ static int __create_synth_event(const char *name, const char *raw_fields)
goto err_free_arg;
}
fields[n_fields++] = field;
if (n_fields == SYNTH_FIELDS_MAX) {
synth_err(SYNTH_ERR_TOO_MANY_FIELDS, 0);
ret = -EINVAL;
goto err_free_arg;
}
fields[n_fields++] = field;
n_fields_this_loop++;
}


@ -1067,7 +1067,14 @@ int set_trigger_filter(char *filter_str,
/* The filter is for the 'trigger' event, not the triggered event */
ret = create_event_filter(file->tr, file->event_call,
filter_str, false, &filter);
filter_str, true, &filter);
/* Only enable set_str for error handling */
if (filter) {
kfree(filter->filter_string);
filter->filter_string = NULL;
}
/*
* If create_event_filter() fails, filter still needs to be freed.
* Which the calling code will do with data->filter.
@ -1078,8 +1085,14 @@ int set_trigger_filter(char *filter_str,
rcu_assign_pointer(data->filter, filter);
if (tmp) {
/* Make sure the call is done with the filter */
tracepoint_synchronize_unregister();
/*
* Make sure the call is done with the filter.
* It is possible that a filter could fail at boot up,
* and then this path will be called. Avoid the synchronization
* in that case.
*/
if (system_state != SYSTEM_BOOTING)
tracepoint_synchronize_unregister();
free_event_filter(tmp);
}


@ -1359,6 +1359,7 @@ put_user_lock:
put_user:
user_event_destroy_fields(user);
user_event_destroy_validators(user);
kfree(user->call.print_fmt);
kfree(user);
return ret;
}


@ -1344,7 +1344,6 @@ __kprobe_trace_func(struct trace_kprobe *tk, struct pt_regs *regs,
return;
fbuffer.regs = regs;
entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event);
entry->ip = (unsigned long)tk->rp.kp.addr;
store_trace_args(&entry[1], &tk->tp, regs, sizeof(*entry), dsize);
@ -1385,7 +1384,6 @@ __kretprobe_trace_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
return;
fbuffer.regs = regs;
entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event);
entry->func = (unsigned long)tk->rp.kp.addr;
entry->ret_ip = get_kretprobe_retaddr(ri);
store_trace_args(&entry[1], &tk->tp, regs, sizeof(*entry), dsize);


@ -48,6 +48,28 @@
#define DEFAULT_TIMERLAT_PERIOD 1000 /* 1ms */
#define DEFAULT_TIMERLAT_PRIO 95 /* FIFO 95 */
/*
* osnoise/options entries.
*/
enum osnoise_options_index {
OSN_DEFAULTS = 0,
OSN_WORKLOAD,
OSN_PANIC_ON_STOP,
OSN_PREEMPT_DISABLE,
OSN_IRQ_DISABLE,
OSN_MAX
};
static const char * const osnoise_options_str[OSN_MAX] = {
"DEFAULTS",
"OSNOISE_WORKLOAD",
"PANIC_ON_STOP",
"OSNOISE_PREEMPT_DISABLE",
"OSNOISE_IRQ_DISABLE" };
#define OSN_DEFAULT_OPTIONS 0x2
static unsigned long osnoise_options = OSN_DEFAULT_OPTIONS;
/*
* trace_array of the enabled osnoise/timerlat instances.
*/
@ -1173,11 +1195,12 @@ trace_sched_switch_callback(void *data, bool preempt,
unsigned int prev_state)
{
struct osnoise_variables *osn_var = this_cpu_osn_var();
int workload = test_bit(OSN_WORKLOAD, &osnoise_options);
if (p->pid != osn_var->pid)
if ((p->pid != osn_var->pid) || !workload)
thread_exit(osn_var, p);
if (n->pid != osn_var->pid)
if ((n->pid != osn_var->pid) || !workload)
thread_entry(osn_var, n);
}
@ -1255,6 +1278,9 @@ static __always_inline void osnoise_stop_tracing(void)
trace_array_printk_buf(tr->array_buffer.buffer, _THIS_IP_,
"stop tracing hit on cpu %d\n", smp_processor_id());
if (test_bit(OSN_PANIC_ON_STOP, &osnoise_options))
panic("tracer hit stop condition on CPU %d\n", smp_processor_id());
tracer_tracing_off(tr);
}
rcu_read_unlock();
@ -1289,18 +1315,26 @@ static void notify_new_max_latency(u64 latency)
*/
static int run_osnoise(void)
{
bool disable_irq = test_bit(OSN_IRQ_DISABLE, &osnoise_options);
struct osnoise_variables *osn_var = this_cpu_osn_var();
u64 start, sample, last_sample;
u64 last_int_count, int_count;
s64 noise = 0, max_noise = 0;
s64 total, last_total = 0;
struct osnoise_sample s;
bool disable_preemption;
unsigned int threshold;
u64 runtime, stop_in;
u64 sum_noise = 0;
int hw_count = 0;
int ret = -1;
/*
* Disabling preemption is only required if IRQs are enabled,
* and the options is set on.
*/
disable_preemption = !disable_irq && test_bit(OSN_PREEMPT_DISABLE, &osnoise_options);
/*
* Considers the current thread as the workload.
*/
@ -1316,6 +1350,15 @@ static int run_osnoise(void)
*/
threshold = tracing_thresh ? : 5000;
/*
* Apply PREEMPT and IRQ disabled options.
*/
if (disable_irq)
local_irq_disable();
if (disable_preemption)
preempt_disable();
/*
* Make sure NMIs see sampling first
*/
@ -1403,16 +1446,21 @@ static int run_osnoise(void)
* cond_resched()
*/
if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
local_irq_disable();
if (!disable_irq)
local_irq_disable();
rcu_momentary_dyntick_idle();
local_irq_enable();
if (!disable_irq)
local_irq_enable();
}
/*
* For the non-preemptive kernel config: let threads runs, if
* they so wish.
* they so wish, unless set not do to so.
*/
cond_resched();
if (!disable_irq && !disable_preemption)
cond_resched();
last_sample = sample;
last_int_count = int_count;
@ -1431,6 +1479,15 @@ static int run_osnoise(void)
*/
barrier();
/*
* Return to the preemptive state.
*/
if (disable_preemption)
preempt_enable();
if (disable_irq)
local_irq_enable();
/*
* Save noise info.
*/
@ -1710,9 +1767,16 @@ static void stop_kthread(unsigned int cpu)
struct task_struct *kthread;
kthread = per_cpu(per_cpu_osnoise_var, cpu).kthread;
if (kthread)
if (kthread) {
kthread_stop(kthread);
per_cpu(per_cpu_osnoise_var, cpu).kthread = NULL;
per_cpu(per_cpu_osnoise_var, cpu).kthread = NULL;
} else {
if (!test_bit(OSN_WORKLOAD, &osnoise_options)) {
per_cpu(per_cpu_osnoise_var, cpu).sampling = false;
barrier();
return;
}
}
}
/*
@ -1746,6 +1810,13 @@ static int start_kthread(unsigned int cpu)
snprintf(comm, 24, "timerlat/%d", cpu);
main = timerlat_main;
} else {
/* if no workload, just return */
if (!test_bit(OSN_WORKLOAD, &osnoise_options)) {
per_cpu(per_cpu_osnoise_var, cpu).sampling = true;
barrier();
return 0;
}
snprintf(comm, 24, "osnoise/%d", cpu);
}
@ -1860,6 +1931,150 @@ static void osnoise_init_hotplug_support(void)
}
#endif /* CONFIG_HOTPLUG_CPU */
/*
* seq file functions for the osnoise/options file.
*/
static void *s_options_start(struct seq_file *s, loff_t *pos)
{
int option = *pos;
mutex_lock(&interface_lock);
if (option >= OSN_MAX)
return NULL;
return pos;
}
static void *s_options_next(struct seq_file *s, void *v, loff_t *pos)
{
int option = ++(*pos);
if (option >= OSN_MAX)
return NULL;
return pos;
}
static int s_options_show(struct seq_file *s, void *v)
{
loff_t *pos = v;
int option = *pos;
if (option == OSN_DEFAULTS) {
if (osnoise_options == OSN_DEFAULT_OPTIONS)
seq_printf(s, "%s", osnoise_options_str[option]);
else
seq_printf(s, "NO_%s", osnoise_options_str[option]);
goto out;
}
if (test_bit(option, &osnoise_options))
seq_printf(s, "%s", osnoise_options_str[option]);
else
seq_printf(s, "NO_%s", osnoise_options_str[option]);
out:
if (option != OSN_MAX)
seq_puts(s, " ");
return 0;
}
static void s_options_stop(struct seq_file *s, void *v)
{
seq_puts(s, "\n");
mutex_unlock(&interface_lock);
}
static const struct seq_operations osnoise_options_seq_ops = {
.start = s_options_start,
.next = s_options_next,
.show = s_options_show,
.stop = s_options_stop
};
static int osnoise_options_open(struct inode *inode, struct file *file)
{
return seq_open(file, &osnoise_options_seq_ops);
};
/**
* osnoise_options_write - Write function for "options" entry
* @filp: The active open file structure
* @ubuf: The user buffer that contains the value to write
* @cnt: The maximum number of bytes to write to "file"
* @ppos: The current position in @file
*
* Writing the option name sets the option, writing the "NO_"
* prefix in front of the option name disables it.
*
* Writing "DEFAULTS" resets the option values to the default ones.
*/
static ssize_t osnoise_options_write(struct file *filp, const char __user *ubuf,
size_t cnt, loff_t *ppos)
{
int running, option, enable, retval;
char buf[256], *option_str;
if (cnt >= 256)
return -EINVAL;
if (copy_from_user(buf, ubuf, cnt))
return -EFAULT;
buf[cnt] = 0;
if (strncmp(buf, "NO_", 3)) {
option_str = strstrip(buf);
enable = true;
} else {
option_str = strstrip(&buf[3]);
enable = false;
}
option = match_string(osnoise_options_str, OSN_MAX, option_str);
if (option < 0)
return -EINVAL;
/*
* trace_types_lock is taken to avoid concurrency on start/stop.
*/
mutex_lock(&trace_types_lock);
running = osnoise_has_registered_instances();
if (running)
stop_per_cpu_kthreads();
mutex_lock(&interface_lock);
/*
* avoid CPU hotplug operations that might read options.
*/
cpus_read_lock();
retval = cnt;
if (enable) {
if (option == OSN_DEFAULTS)
osnoise_options = OSN_DEFAULT_OPTIONS;
else
set_bit(option, &osnoise_options);
} else {
if (option == OSN_DEFAULTS)
retval = -EINVAL;
else
clear_bit(option, &osnoise_options);
}
cpus_read_unlock();
mutex_unlock(&interface_lock);
if (running)
start_per_cpu_kthreads();
mutex_unlock(&trace_types_lock);
return retval;
}
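
In short, each write handles a single token: a bare option name sets the bit, the "NO_" prefix clears it, and "DEFAULTS" restores OSN_DEFAULT_OPTIONS (writing "NO_DEFAULTS" is rejected with -EINVAL), while a read returns the whole option set on one space-separated line. A hedged user-space sketch of the round trip follows; the tracefs path and the "PANIC_ON_STOP" option string are assumptions, not shown in this hunk.

    /*
     * Illustrative only: set PANIC_ON_STOP, read the options line back,
     * then reset everything to the defaults.  The path assumes tracefs
     * is mounted at /sys/kernel/tracing; error handling is omitted.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define OPTIONS_FILE "/sys/kernel/tracing/osnoise/options"

    static void option_write(const char *val)
    {
            int fd = open(OPTIONS_FILE, O_WRONLY);
            ssize_t ret;

            if (fd < 0)
                    return;
            ret = write(fd, val, strlen(val));      /* one token per write */
            (void)ret;
            close(fd);
    }

    int main(void)
    {
            char line[256] = { 0 };
            ssize_t ret;
            int fd;

            option_write("PANIC_ON_STOP");          /* set the bit */

            fd = open(OPTIONS_FILE, O_RDONLY);
            if (fd >= 0) {
                    ret = read(fd, line, sizeof(line) - 1);
                    if (ret > 0)
                            printf("%s", line);     /* space-separated option list */
                    close(fd);
            }

            option_write("NO_PANIC_ON_STOP");       /* clear it again */
            option_write("DEFAULTS");               /* back to OSN_DEFAULT_OPTIONS */
            return 0;
    }
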
/*
* osnoise_cpus_read - Read function for reading the "cpus" file
* @filp: The active open file structure
@@ -2042,6 +2257,14 @@ static const struct file_operations cpus_fops = {
.llseek = generic_file_llseek,
};
static const struct file_operations osnoise_options_fops = {
.open = osnoise_options_open,
.read = seq_read,
.llseek = seq_lseek,
.release = seq_release,
.write = osnoise_options_write
};
#ifdef CONFIG_TIMERLAT_TRACER
#ifdef CONFIG_STACKTRACE
static int init_timerlat_stack_tracefs(struct dentry *top_dir)
@@ -2128,6 +2351,11 @@ static int init_tracefs(void)
if (!tmp)
goto err;
tmp = trace_create_file("options", TRACE_MODE_WRITE, top_dir, NULL,
&osnoise_options_fops);
if (!tmp)
goto err;
ret = init_timerlat_tracefs(top_dir);
if (ret)
goto err;


@@ -11,6 +11,7 @@
#include <linux/kprobes.h>
#include <linux/sched/clock.h>
#include <linux/sched/mm.h>
#include <linux/idr.h>
#include "trace_output.h"
@@ -21,8 +22,6 @@ DECLARE_RWSEM(trace_event_sem);
static struct hlist_head event_hash[EVENT_HASHSIZE] __read_mostly;
static int next_event_type = __TRACE_LAST_TYPE;
enum print_line_t trace_print_bputs_msg_only(struct trace_iterator *iter)
{
struct trace_seq *s = &iter->seq;
@@ -323,8 +322,9 @@ void trace_event_printf(struct trace_iterator *iter, const char *fmt, ...)
}
EXPORT_SYMBOL(trace_event_printf);
static int trace_output_raw(struct trace_iterator *iter, char *name,
char *fmt, va_list ap)
static __printf(3, 0)
int trace_output_raw(struct trace_iterator *iter, char *name,
char *fmt, va_list ap)
{
struct trace_seq *s = &iter->seq;
@@ -688,38 +688,23 @@ struct trace_event *ftrace_find_event(int type)
return NULL;
}
static LIST_HEAD(ftrace_event_list);
static DEFINE_IDA(trace_event_ida);
static int trace_search_list(struct list_head **list)
static void free_trace_event_type(int type)
{
struct trace_event *e = NULL, *iter;
int next = __TRACE_LAST_TYPE;
if (type >= __TRACE_LAST_TYPE)
ida_free(&trace_event_ida, type);
}
if (list_empty(&ftrace_event_list)) {
*list = &ftrace_event_list;
return next;
}
static int alloc_trace_event_type(void)
{
int next;
/*
* We used up all possible max events,
* lets see if somebody freed one.
*/
list_for_each_entry(iter, &ftrace_event_list, list) {
if (iter->type != next) {
e = iter;
break;
}
next++;
}
/* Did we used up all 65 thousand events??? */
if (next > TRACE_EVENT_TYPE_MAX)
/* Skip statically defined type numbers */
next = ida_alloc_range(&trace_event_ida, __TRACE_LAST_TYPE,
TRACE_EVENT_TYPE_MAX, GFP_KERNEL);
if (next < 0)
return 0;
if (e)
*list = &e->list;
else
*list = &ftrace_event_list;
return next;
}
@@ -761,28 +746,10 @@ int register_trace_event(struct trace_event *event)
if (WARN_ON(!event->funcs))
goto out;
INIT_LIST_HEAD(&event->list);
if (!event->type) {
struct list_head *list = NULL;
if (next_event_type > TRACE_EVENT_TYPE_MAX) {
event->type = trace_search_list(&list);
if (!event->type)
goto out;
} else {
event->type = next_event_type++;
list = &ftrace_event_list;
}
if (WARN_ON(ftrace_find_event(event->type)))
event->type = alloc_trace_event_type();
if (!event->type)
goto out;
list_add_tail(&event->list, list);
} else if (WARN(event->type > __TRACE_LAST_TYPE,
"Need to add type to trace.h")) {
goto out;
@@ -819,7 +786,7 @@ EXPORT_SYMBOL_GPL(register_trace_event);
int __unregister_trace_event(struct trace_event *event)
{
hlist_del(&event->node);
list_del(&event->list);
free_trace_event_type(event->type);
return 0;
}
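
With the IDA in place, dynamic event type numbers above __TRACE_LAST_TYPE are handed out lowest-free-first and returned on unregister, which is why the ftrace_event_list walk and the event->list linkage could be removed. For reference, a minimal sketch of the ida_alloc_range()/ida_free() pattern the new code relies on; the ida name and the range values are illustrative only.

    #include <linux/gfp.h>
    #include <linux/idr.h>

    static DEFINE_IDA(example_type_ida);

    /*
     * ida_alloc_range() returns the smallest unused ID in [min, max],
     * or a negative errno if the range is exhausted; ida_free() releases
     * the ID so a later allocation may reuse it.
     */
    static int example_get_type(void)
    {
            return ida_alloc_range(&example_type_ida, 16, 1000, GFP_KERNEL);
    }

    static void example_put_type(int type)
    {
            ida_free(&example_type_ida, type);
    }
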


@@ -246,7 +246,7 @@ int traceprobe_parse_event_name(const char **pevent, const char **pgroup,
return -EINVAL;
}
strlcpy(buf, event, slash - event + 1);
if (!is_good_name(buf)) {
if (!is_good_system_name(buf)) {
trace_probe_log_err(offset, BAD_GROUP_NAME);
return -EINVAL;
}


@@ -50,7 +50,7 @@ static void do_simple_thread_func(int cnt, const char *fmt, ...)
trace_foo_with_template_print("I have to be different", cnt);
trace_foo_rel_loc("Hello __rel_loc", cnt, bitmask);
trace_foo_rel_loc("Hello __rel_loc", cnt, bitmask, current->cpus_ptr);
}
static void simple_thread_func(int cnt)


@@ -200,6 +200,16 @@
*
* __assign_bitmask(target_cpus, cpumask_bits(bar), nr_cpumask_bits);
*
* __cpumask: This is pretty much the same as __bitmask but is specific for
* CPU masks. The type displayed to the user via the format files will
* be "cpumaks_t" such that user space may deal with them differently
* if they choose to do so, and the bits is always set to nr_cpumask_bits.
*
* __cpumask(target_cpu)
*
* To assign a cpumask, use the __assign_cpumask() helper macro.
*
* __assign_cpumask(target_cpus, cpumask_bits(bar));
*
* fast_assign: This is a C like function that is used to store the items
* into the ring buffer. A special variable called "__entry" will be the
@@ -212,8 +222,8 @@
* This is also used to print out the data from the trace files.
* Again, the __entry macro is used to access the data from the ring buffer.
*
* Note, __dynamic_array, __string, and __bitmask require special helpers
* to access the data.
* Note, __dynamic_array, __string, __bitmask and __cpumask require special
* helpers to access the data.
*
* For __dynamic_array(int, foo, bar) use __get_dynamic_array(foo)
* Use __get_dynamic_array_len(foo) to get the length of the array
@@ -226,6 +236,8 @@
*
* For __bitmask(target_cpus, nr_cpumask_bits) use __get_bitmask(target_cpus)
*
* For __cpumask(target_cpus) use __get_cpumask(target_cpus)
*
*
* Note, that for both the assign and the printk, __entry is the handler
* to the data structure in the ring buffer, and is defined by the
@@ -288,6 +300,7 @@ TRACE_EVENT(foo_bar,
__dynamic_array(int, list, __length_of(lst))
__string( str, string )
__bitmask( cpus, num_possible_cpus() )
__cpumask( cpum )
__vstring( vstr, fmt, va )
),
@@ -299,9 +312,10 @@ TRACE_EVENT(foo_bar,
__assign_str(str, string);
__assign_vstr(vstr, fmt, va);
__assign_bitmask(cpus, cpumask_bits(mask), num_possible_cpus());
__assign_cpumask(cpum, cpumask_bits(mask));
),
TP_printk("foo %s %d %s %s %s %s (%s) %s", __entry->foo, __entry->bar,
TP_printk("foo %s %d %s %s %s %s (%s) (%s) %s", __entry->foo, __entry->bar,
/*
* Notice here the use of some helper functions. This includes:
@@ -345,7 +359,8 @@ TRACE_EVENT(foo_bar,
__print_array(__get_dynamic_array(list),
__get_dynamic_array_len(list) / sizeof(int),
sizeof(int)),
__get_str(str), __get_bitmask(cpus), __get_str(vstr))
__get_str(str), __get_bitmask(cpus), __get_cpumask(cpum),
__get_str(vstr))
);
/*
@@ -542,15 +557,16 @@ DEFINE_EVENT_PRINT(foo_template, foo_with_template_print,
TRACE_EVENT(foo_rel_loc,
TP_PROTO(const char *foo, int bar, unsigned long *mask),
TP_PROTO(const char *foo, int bar, unsigned long *mask, const cpumask_t *cpus),
TP_ARGS(foo, bar, mask),
TP_ARGS(foo, bar, mask, cpus),
TP_STRUCT__entry(
__rel_string( foo, foo )
__field( int, bar )
__rel_bitmask( bitmask,
BITS_PER_BYTE * sizeof(unsigned long) )
__rel_cpumask( cpumask )
),
TP_fast_assign(
@@ -558,10 +574,12 @@ TRACE_EVENT(foo_rel_loc,
__entry->bar = bar;
__assign_rel_bitmask(bitmask, mask,
BITS_PER_BYTE * sizeof(unsigned long));
__assign_rel_cpumask(cpumask, cpus);
),
TP_printk("foo_rel_loc %s, %d, %s", __get_rel_str(foo), __entry->bar,
__get_rel_bitmask(bitmask))
TP_printk("foo_rel_loc %s, %d, %s, %s", __get_rel_str(foo), __entry->bar,
__get_rel_bitmask(bitmask),
__get_rel_cpumask(cpumask))
);
#endif
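
Tying the __cpumask() additions above together, here is a minimal hedged sketch of an event that records only a CPU mask. The event name is made up for illustration, and the usual trace header boilerplate (TRACE_SYSTEM, the final #include <trace/define_trace.h>, and so on) is omitted.

    /* Hypothetical event, shown only to group the new helpers in one place. */
    TRACE_EVENT(foo_cpumask_only,

            TP_PROTO(const struct cpumask *mask),

            TP_ARGS(mask),

            TP_STRUCT__entry(
                    __cpumask(      cpus    )       /* exported to user space as cpumask_t */
            ),

            TP_fast_assign(
                    __assign_cpumask(cpus, cpumask_bits(mask));
            ),

            TP_printk("cpus=%s", __get_cpumask(cpus))
    );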