Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull perf updates from Ingo Molnar:
 "The main updates in this cycle were:

   - Lots of perf tooling changes too voluminous to list (big perf trace
     and perf stat improvements, lots of libtraceevent reorganization,
     etc.), so I'll list the authors and refer to the changelog for
     details:

       Benjamin Peterson, Jérémie Galarneau, Kim Phillips, Peter
       Zijlstra, Ravi Bangoria, Sangwon Hong, Sean V Kelley, Steven
       Rostedt, Thomas Gleixner, Ding Xiang, Eduardo Habkost, Thomas
       Richter, Andi Kleen, Sanskriti Sharma, Adrian Hunter, Tzvetomir
       Stoyanov, Arnaldo Carvalho de Melo, Jiri Olsa.

     ... with the bulk of the changes written by Jiri Olsa, Tzvetomir
     Stoyanov and Arnaldo Carvalho de Melo.

   - Continued intel_rdt work with a focus on playing well with perf
     events. This also imported some non-perf RDT work due to
     dependencies. (Reinette Chatre)

   - Implement counter freezing for Arch Perfmon v4 (Skylake and newer).
     This speeds up the PMI handler by avoiding unnecessary MSR writes
     and makes it more accurate. (Andi Kleen)

   - kprobes cleanups and simplification (Masami Hiramatsu)

   - Intel Goldmont PMU updates (Kan Liang)

   - ... plus misc other fixes and updates"

* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (155 commits)
  kprobes/x86: Use preempt_enable() in optimized_callback()
  x86/intel_rdt: Prevent pseudo-locking from using stale pointers
  kprobes, x86/ptrace.h: Make regs_get_kernel_stack_nth() not fault on bad stack
  perf/x86/intel: Export mem events only if there's PEBS support
  x86/cpu: Drop pointless static qualifier in punit_dev_state_show()
  x86/intel_rdt: Fix initial allocation to consider CDP
  x86/intel_rdt: CBM overlap should also check for overlap with CDP peer
  x86/intel_rdt: Introduce utility to obtain CDP peer
  tools lib traceevent, perf tools: Move struct tep_handler definition in a local header file
  tools lib traceevent: Separate out tep_strerror() for strerror_r() issues
  perf python: More portable way to make CFLAGS work with clang
  perf python: Make clang_has_option() work on Python 3
  perf tools: Free temporary 'sys' string in read_event_files()
  perf tools: Avoid double free in read_event_file()
  perf tools: Free 'printk' string in parse_ftrace_printk()
  perf tools: Cleanup trace-event-info 'tdata' leak
  perf strbuf: Match va_{add,copy} with va_end
  perf test: S390 does not support watchpoints in test 22
  perf auxtrace: Include missing asm/bitsperlong.h to get BITS_PER_LONG
  tools include: Adopt linux/bits.h
  ...
Linus Torvalds, 2018-10-23 13:32:18 +01:00
commit c05f3642f4
141 changed files with 6259 additions and 3868 deletions

Documentation/admin-guide/kernel-parameters.txt

@@ -856,6 +856,11 @@
 			causing system reset or hang due to sending
 			INIT from AP to BSP.
 
+	disable_counter_freezing [HW]
+			Disable Intel PMU counter freezing feature.
+			The feature only exists starting from
+			Arch Perfmon v4 (Skylake and newer).
+
 	disable_ddw		[PPC/PSERIES]
 			Disable Dynamic DMA Window support. Use this if
 			to workaround buggy firmware.

Documentation/x86/intel_rdt_ui.txt

@@ -520,18 +520,24 @@ the pseudo-locked region:
 2) Cache hit and miss measurements using model specific precision counters if
    available. Depending on the levels of cache on the system the pseudo_lock_l2
    and pseudo_lock_l3 tracepoints are available.
-   WARNING: triggering this measurement uses from two (for just L2
-   measurements) to four (for L2 and L3 measurements) precision counters on
-   the system, if any other measurements are in progress the counters and
-   their corresponding event registers will be clobbered.
 
 When a pseudo-locked region is created a new debugfs directory is created for
 it in debugfs as /sys/kernel/debug/resctrl/<newdir>. A single
 write-only file, pseudo_lock_measure, is present in this directory. The
-measurement on the pseudo-locked region depends on the number, 1 or 2,
-written to this debugfs file. Since the measurements are recorded with the
-tracing infrastructure the relevant tracepoints need to be enabled before the
-measurement is triggered.
+measurement of the pseudo-locked region depends on the number written to this
+debugfs file:
+1 - writing "1" to the pseudo_lock_measure file will trigger the latency
+    measurement captured in the pseudo_lock_mem_latency tracepoint. See
+    example below.
+2 - writing "2" to the pseudo_lock_measure file will trigger the L2 cache
+    residency (cache hits and misses) measurement captured in the
+    pseudo_lock_l2 tracepoint. See example below.
+3 - writing "3" to the pseudo_lock_measure file will trigger the L3 cache
+    residency (cache hits and misses) measurement captured in the
+    pseudo_lock_l3 tracepoint.
+
+All measurements are recorded with the tracing infrastructure. This requires
+the relevant tracepoints to be enabled before the measurement is triggered.
 
 Example of latency debugging interface:
 In this example a pseudo-locked region named "newlock" was created. Here is
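As a concrete illustration of the interface documented above, here is a
minimal userspace sketch (not part of this commit; the region name "newlock"
and the error handling are assumptions) that triggers the latency measurement:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Assumes a pseudo-locked region named "newlock" exists. */
	const char *path =
		"/sys/kernel/debug/resctrl/newlock/pseudo_lock_measure";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* "1" = latency, "2" = L2 residency, "3" = L3 residency. */
	if (write(fd, "1", 1) != 1)
		perror("write");
	close(fd);
	return 0;
}

The corresponding tracepoint (e.g. pseudo_lock_mem_latency) must be enabled
beforehand for the results to be recorded.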

arch/x86/events/core.c

@@ -1033,6 +1033,27 @@ static inline void x86_assign_hw_event(struct perf_event *event,
 	}
 }
 
+/**
+ * x86_perf_rdpmc_index - Return PMC counter used for event
+ * @event: the perf_event to which the PMC counter was assigned
+ *
+ * The counter assigned to this performance event may change if interrupts
+ * are enabled. This counter should thus never be used while interrupts are
+ * enabled. Before this function is used to obtain the assigned counter the
+ * event should be checked for validity using, for example,
+ * perf_event_read_local(), within the same interrupt disabled section in
+ * which this counter is planned to be used.
+ *
+ * Return: The index of the performance monitoring counter assigned to
+ * @perf_event.
+ */
+int x86_perf_rdpmc_index(struct perf_event *event)
+{
+	lockdep_assert_irqs_disabled();
+
+	return event->hw.event_base_rdpmc;
+}
+
 static inline int match_prev_assignment(struct hw_perf_event *hwc,
 					struct cpu_hw_events *cpuc,
 					int i)
@@ -1584,7 +1605,7 @@ static void __init pmu_check_apic(void)
 }
 
-static struct attribute_group x86_pmu_format_group = {
+static struct attribute_group x86_pmu_format_group __ro_after_init = {
 	.name = "format",
 	.attrs = NULL,
 };
@@ -1631,9 +1652,9 @@ __init struct attribute **merge_attr(struct attribute **a, struct attribute **b)
 	struct attribute **new;
 	int j, i;
 
-	for (j = 0; a[j]; j++)
+	for (j = 0; a && a[j]; j++)
 		;
-	for (i = 0; b[i]; i++)
+	for (i = 0; b && b[i]; i++)
 		j++;
 	j++;
@@ -1642,9 +1663,9 @@ __init struct attribute **merge_attr(struct attribute **a, struct attribute **b)
 		return NULL;
 
 	j = 0;
-	for (i = 0; a[i]; i++)
+	for (i = 0; a && a[i]; i++)
 		new[j++] = a[i];
-	for (i = 0; b[i]; i++)
+	for (i = 0; b && b[i]; i++)
 		new[j++] = b[i];
 	new[j] = NULL;
@@ -1715,7 +1736,7 @@ static struct attribute *events_attr[] = {
 	NULL,
 };
 
-static struct attribute_group x86_pmu_events_group = {
+static struct attribute_group x86_pmu_events_group __ro_after_init = {
 	.name = "events",
 	.attrs = events_attr,
 };
@@ -2230,7 +2251,7 @@ static struct attribute *x86_pmu_attrs[] = {
 	NULL,
 };
 
-static struct attribute_group x86_pmu_attr_group = {
+static struct attribute_group x86_pmu_attr_group __ro_after_init = {
 	.attrs = x86_pmu_attrs,
 };
@@ -2248,7 +2269,7 @@ static struct attribute *x86_pmu_caps_attrs[] = {
 	NULL
 };
 
-static struct attribute_group x86_pmu_caps_group = {
+static struct attribute_group x86_pmu_caps_group __ro_after_init = {
 	.name = "caps",
 	.attrs = x86_pmu_caps_attrs,
 };

arch/x86/events/intel/core.c

@@ -242,7 +242,7 @@ EVENT_ATTR_STR(mem-loads, mem_ld_nhm, "event=0x0b,umask=0x10,ldlat=3");
 EVENT_ATTR_STR(mem-loads, mem_ld_snb, "event=0xcd,umask=0x1,ldlat=3");
 EVENT_ATTR_STR(mem-stores, mem_st_snb, "event=0xcd,umask=0x2");
 
-static struct attribute *nhm_events_attrs[] = {
+static struct attribute *nhm_mem_events_attrs[] = {
 	EVENT_PTR(mem_ld_nhm),
 	NULL,
 };
@@ -278,8 +278,6 @@ EVENT_ATTR_STR_HT(topdown-recovery-bubbles.scale, td_recovery_bubbles_scale,
 	"4", "2");
 
 static struct attribute *snb_events_attrs[] = {
-	EVENT_PTR(mem_ld_snb),
-	EVENT_PTR(mem_st_snb),
 	EVENT_PTR(td_slots_issued),
 	EVENT_PTR(td_slots_retired),
 	EVENT_PTR(td_fetch_bubbles),
@@ -290,6 +288,12 @@ static struct attribute *snb_events_attrs[] = {
 	NULL,
 };
 
+static struct attribute *snb_mem_events_attrs[] = {
+	EVENT_PTR(mem_ld_snb),
+	EVENT_PTR(mem_st_snb),
+	NULL,
+};
+
 static struct event_constraint intel_hsw_event_constraints[] = {
 	FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
 	FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
@@ -1995,6 +1999,18 @@ static void intel_pmu_nhm_enable_all(int added)
 	intel_pmu_enable_all(added);
 }
 
+static void enable_counter_freeze(void)
+{
+	update_debugctlmsr(get_debugctlmsr() |
+			DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI);
+}
+
+static void disable_counter_freeze(void)
+{
+	update_debugctlmsr(get_debugctlmsr() &
+			~DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI);
+}
+
 static inline u64 intel_pmu_get_status(void)
 {
 	u64 status;
@@ -2200,59 +2216,15 @@ static void intel_pmu_reset(void)
 	local_irq_restore(flags);
 }
 
-/*
- * This handler is triggered by the local APIC, so the APIC IRQ handling
- * rules apply:
- */
-static int intel_pmu_handle_irq(struct pt_regs *regs)
+static int handle_pmi_common(struct pt_regs *regs, u64 status)
 {
 	struct perf_sample_data data;
-	struct cpu_hw_events *cpuc;
-	int bit, loops;
-	u64 status;
-	int handled;
-	int pmu_enabled;
-
-	cpuc = this_cpu_ptr(&cpu_hw_events);
-
-	/*
-	 * Save the PMU state.
-	 * It needs to be restored when leaving the handler.
-	 */
-	pmu_enabled = cpuc->enabled;
-	/*
-	 * No known reason to not always do late ACK,
-	 * but just in case do it opt-in.
-	 */
-	if (!x86_pmu.late_ack)
-		apic_write(APIC_LVTPC, APIC_DM_NMI);
-	intel_bts_disable_local();
-	cpuc->enabled = 0;
-	__intel_pmu_disable_all();
-	handled = intel_pmu_drain_bts_buffer();
-	handled += intel_bts_interrupt();
-	status = intel_pmu_get_status();
-	if (!status)
-		goto done;
-
-	loops = 0;
-again:
-	intel_pmu_lbr_read();
-	intel_pmu_ack_status(status);
-	if (++loops > 100) {
-		static bool warned = false;
-		if (!warned) {
-			WARN(1, "perfevents: irq loop stuck!\n");
-			perf_event_print_debug();
-			warned = true;
-		}
-		intel_pmu_reset();
-		goto done;
-	}
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+	int bit;
+	int handled = 0;
 
 	inc_irq_stat(apic_perf_irqs);
+
 	/*
 	 * Ignore a range of extra bits in status that do not indicate
 	 * overflow by themselves.
@@ -2261,7 +2233,7 @@ again:
 		  GLOBAL_STATUS_ASIF |
 		  GLOBAL_STATUS_LBRS_FROZEN);
 	if (!status)
-		goto done;
+		return 0;
 	/*
 	 * In case multiple PEBS events are sampled at the same time,
 	 * it is possible to have GLOBAL_STATUS bit 62 set indicating
@@ -2331,6 +2303,146 @@ again:
 			x86_pmu_stop(event, 0);
 	}
 
+	return handled;
+}
+
+static bool disable_counter_freezing;
+static int __init intel_perf_counter_freezing_setup(char *s)
+{
+	disable_counter_freezing = true;
+	pr_info("Intel PMU Counter freezing feature disabled\n");
+	return 1;
+}
+__setup("disable_counter_freezing", intel_perf_counter_freezing_setup);
+
+/*
+ * Simplified handler for Arch Perfmon v4:
+ * - We rely on counter freezing/unfreezing to enable/disable the PMU.
+ * This is done automatically on PMU ack.
+ * - Ack the PMU only after the APIC.
+ */
+static int intel_pmu_handle_irq_v4(struct pt_regs *regs)
+{
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+	int handled = 0;
+	bool bts = false;
+	u64 status;
+	int pmu_enabled = cpuc->enabled;
+	int loops = 0;
+
+	/* PMU has been disabled because of counter freezing */
+	cpuc->enabled = 0;
+	if (test_bit(INTEL_PMC_IDX_FIXED_BTS, cpuc->active_mask)) {
+		bts = true;
+		intel_bts_disable_local();
+		handled = intel_pmu_drain_bts_buffer();
+		handled += intel_bts_interrupt();
+	}
+	status = intel_pmu_get_status();
+	if (!status)
+		goto done;
+again:
+	intel_pmu_lbr_read();
+	if (++loops > 100) {
+		static bool warned;
+
+		if (!warned) {
+			WARN(1, "perfevents: irq loop stuck!\n");
+			perf_event_print_debug();
+			warned = true;
+		}
+		intel_pmu_reset();
+		goto done;
+	}
+	handled += handle_pmi_common(regs, status);
+done:
+	/* Ack the PMI in the APIC */
+	apic_write(APIC_LVTPC, APIC_DM_NMI);
+
+	/*
+	 * The counters start counting immediately while ack the status.
+	 * Make it as close as possible to IRET. This avoids bogus
+	 * freezing on Skylake CPUs.
+	 */
+	if (status) {
+		intel_pmu_ack_status(status);
+	} else {
+		/*
+		 * CPU may issues two PMIs very close to each other.
+		 * When the PMI handler services the first one, the
+		 * GLOBAL_STATUS is already updated to reflect both.
+		 * When it IRETs, the second PMI is immediately
+		 * handled and it sees clear status. At the meantime,
+		 * there may be a third PMI, because the freezing bit
+		 * isn't set since the ack in first PMI handlers.
+		 * Double check if there is more work to be done.
+		 */
+		status = intel_pmu_get_status();
+		if (status)
+			goto again;
+	}
+
+	if (bts)
+		intel_bts_enable_local();
+	cpuc->enabled = pmu_enabled;
+	return handled;
+}
+
+/*
+ * This handler is triggered by the local APIC, so the APIC IRQ handling
+ * rules apply:
+ */
+static int intel_pmu_handle_irq(struct pt_regs *regs)
+{
+	struct cpu_hw_events *cpuc;
+	int loops;
+	u64 status;
+	int handled;
+	int pmu_enabled;
+
+	cpuc = this_cpu_ptr(&cpu_hw_events);
+
+	/*
+	 * Save the PMU state.
+	 * It needs to be restored when leaving the handler.
+	 */
+	pmu_enabled = cpuc->enabled;
+	/*
+	 * No known reason to not always do late ACK,
+	 * but just in case do it opt-in.
+	 */
+	if (!x86_pmu.late_ack)
+		apic_write(APIC_LVTPC, APIC_DM_NMI);
+	intel_bts_disable_local();
+	cpuc->enabled = 0;
+	__intel_pmu_disable_all();
+	handled = intel_pmu_drain_bts_buffer();
+	handled += intel_bts_interrupt();
+	status = intel_pmu_get_status();
+	if (!status)
+		goto done;
+
+	loops = 0;
+again:
+	intel_pmu_lbr_read();
+	intel_pmu_ack_status(status);
+	if (++loops > 100) {
+		static bool warned;
+
+		if (!warned) {
+			WARN(1, "perfevents: irq loop stuck!\n");
+			perf_event_print_debug();
+			warned = true;
+		}
+		intel_pmu_reset();
+		goto done;
+	}
+
+	handled += handle_pmi_common(regs, status);
+
 	/*
 	 * Repeat if there is more work to be done:
 	 */
@@ -3350,6 +3462,9 @@ static void intel_pmu_cpu_starting(int cpu)
 	if (x86_pmu.version > 1)
 		flip_smm_bit(&x86_pmu.attr_freeze_on_smi);
 
+	if (x86_pmu.counter_freezing)
+		enable_counter_freeze();
+
 	if (!cpuc->shared_regs)
 		return;
@@ -3421,6 +3536,9 @@ static void intel_pmu_cpu_dying(int cpu)
 	free_excl_cntrs(cpu);
 
 	fini_debug_store_on_cpu(cpu);
+
+	if (x86_pmu.counter_freezing)
+		disable_counter_freeze();
 }
 
 static void intel_pmu_sched_task(struct perf_event_context *ctx,
@@ -3725,6 +3843,40 @@ static __init void intel_nehalem_quirk(void)
 	}
 }
 
+static bool intel_glp_counter_freezing_broken(int cpu)
+{
+	u32 rev = UINT_MAX; /* default to broken for unknown stepping */
+
+	switch (cpu_data(cpu).x86_stepping) {
+	case 1:
+		rev = 0x28;
+		break;
+	case 8:
+		rev = 0x6;
+		break;
+	}
+
+	return (cpu_data(cpu).microcode < rev);
+}
+
+static __init void intel_glp_counter_freezing_quirk(void)
+{
+	/* Check if it's already disabled */
+	if (disable_counter_freezing)
+		return;
+
+	/*
+	 * If the system starts with the wrong ucode, leave the
+	 * counter-freezing feature permanently disabled.
+	 */
+	if (intel_glp_counter_freezing_broken(raw_smp_processor_id())) {
+		pr_info("PMU counter freezing disabled due to CPU errata,"
+			"please upgrade microcode\n");
+		x86_pmu.counter_freezing = false;
+		x86_pmu.handle_irq = intel_pmu_handle_irq;
+	}
+}
+
 /*
  * enable software workaround for errata:
  * SNB: BJ122
@@ -3764,8 +3916,6 @@ EVENT_ATTR_STR(cycles-t, cycles_t, "event=0x3c,in_tx=1");
 EVENT_ATTR_STR(cycles-ct, cycles_ct, "event=0x3c,in_tx=1,in_tx_cp=1");
 
 static struct attribute *hsw_events_attrs[] = {
-	EVENT_PTR(mem_ld_hsw),
-	EVENT_PTR(mem_st_hsw),
 	EVENT_PTR(td_slots_issued),
 	EVENT_PTR(td_slots_retired),
 	EVENT_PTR(td_fetch_bubbles),
@@ -3776,6 +3926,12 @@ static struct attribute *hsw_events_attrs[] = {
 	NULL
 };
 
+static struct attribute *hsw_mem_events_attrs[] = {
+	EVENT_PTR(mem_ld_hsw),
+	EVENT_PTR(mem_st_hsw),
+	NULL,
+};
+
 static struct attribute *hsw_tsx_events_attrs[] = {
 	EVENT_PTR(tx_start),
 	EVENT_PTR(tx_commit),
@@ -3792,13 +3948,6 @@ static struct attribute *hsw_tsx_events_attrs[] = {
 	NULL
 };
 
-static __init struct attribute **get_hsw_events_attrs(void)
-{
-	return boot_cpu_has(X86_FEATURE_RTM) ?
-		merge_attr(hsw_events_attrs, hsw_tsx_events_attrs) :
-		hsw_events_attrs;
-}
-
 static ssize_t freeze_on_smi_show(struct device *cdev,
 				  struct device_attribute *attr,
 				  char *buf)
@@ -3875,9 +4024,32 @@ static struct attribute *intel_pmu_attrs[] = {
 	NULL,
 };
 
+static __init struct attribute **
+get_events_attrs(struct attribute **base,
+		 struct attribute **mem,
+		 struct attribute **tsx)
+{
+	struct attribute **attrs = base;
+	struct attribute **old;
+
+	if (mem && x86_pmu.pebs)
+		attrs = merge_attr(attrs, mem);
+
+	if (tsx && boot_cpu_has(X86_FEATURE_RTM)) {
+		old = attrs;
+		attrs = merge_attr(attrs, tsx);
+		if (old != base)
+			kfree(old);
+	}
+
+	return attrs;
+}
+
 __init int intel_pmu_init(void)
 {
 	struct attribute **extra_attr = NULL;
+	struct attribute **mem_attr = NULL;
+	struct attribute **tsx_attr = NULL;
 	struct attribute **to_free = NULL;
 	union cpuid10_edx edx;
 	union cpuid10_eax eax;
@@ -3935,6 +4107,9 @@ __init int intel_pmu_init(void)
 			max((int)edx.split.num_counters_fixed, assume);
 	}
 
+	if (version >= 4)
+		x86_pmu.counter_freezing = !disable_counter_freezing;
+
 	if (boot_cpu_has(X86_FEATURE_PDCM)) {
 		u64 capabilities;
@@ -3986,7 +4161,7 @@ __init int intel_pmu_init(void)
 		x86_pmu.enable_all = intel_pmu_nhm_enable_all;
 		x86_pmu.extra_regs = intel_nehalem_extra_regs;
 
-		x86_pmu.cpu_events = nhm_events_attrs;
+		mem_attr = nhm_mem_events_attrs;
 
 		/* UOPS_ISSUED.STALLED_CYCLES */
 		intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] =
@@ -4004,11 +4179,11 @@ __init int intel_pmu_init(void)
 		name = "nehalem";
 		break;
 
-	case INTEL_FAM6_ATOM_PINEVIEW:
-	case INTEL_FAM6_ATOM_LINCROFT:
-	case INTEL_FAM6_ATOM_PENWELL:
-	case INTEL_FAM6_ATOM_CLOVERVIEW:
-	case INTEL_FAM6_ATOM_CEDARVIEW:
+	case INTEL_FAM6_ATOM_BONNELL:
+	case INTEL_FAM6_ATOM_BONNELL_MID:
+	case INTEL_FAM6_ATOM_SALTWELL:
+	case INTEL_FAM6_ATOM_SALTWELL_MID:
+	case INTEL_FAM6_ATOM_SALTWELL_TABLET:
 		memcpy(hw_cache_event_ids, atom_hw_cache_event_ids,
 		       sizeof(hw_cache_event_ids));
@@ -4021,9 +4196,11 @@ __init int intel_pmu_init(void)
 		name = "bonnell";
 		break;
 
-	case INTEL_FAM6_ATOM_SILVERMONT1:
-	case INTEL_FAM6_ATOM_SILVERMONT2:
+	case INTEL_FAM6_ATOM_SILVERMONT:
+	case INTEL_FAM6_ATOM_SILVERMONT_X:
+	case INTEL_FAM6_ATOM_SILVERMONT_MID:
 	case INTEL_FAM6_ATOM_AIRMONT:
+	case INTEL_FAM6_ATOM_AIRMONT_MID:
 		memcpy(hw_cache_event_ids, slm_hw_cache_event_ids,
 		       sizeof(hw_cache_event_ids));
 		memcpy(hw_cache_extra_regs, slm_hw_cache_extra_regs,
@@ -4042,7 +4219,7 @@ __init int intel_pmu_init(void)
 		break;
 
 	case INTEL_FAM6_ATOM_GOLDMONT:
-	case INTEL_FAM6_ATOM_DENVERTON:
+	case INTEL_FAM6_ATOM_GOLDMONT_X:
 		memcpy(hw_cache_event_ids, glm_hw_cache_event_ids,
 		       sizeof(hw_cache_event_ids));
 		memcpy(hw_cache_extra_regs, glm_hw_cache_extra_regs,
@@ -4068,7 +4245,8 @@ __init int intel_pmu_init(void)
 		name = "goldmont";
 		break;
 
-	case INTEL_FAM6_ATOM_GEMINI_LAKE:
+	case INTEL_FAM6_ATOM_GOLDMONT_PLUS:
+		x86_add_quirk(intel_glp_counter_freezing_quirk);
 		memcpy(hw_cache_event_ids, glp_hw_cache_event_ids,
 		       sizeof(hw_cache_event_ids));
 		memcpy(hw_cache_extra_regs, glp_hw_cache_extra_regs,
@@ -4112,7 +4290,7 @@ __init int intel_pmu_init(void)
 		x86_pmu.extra_regs = intel_westmere_extra_regs;
 		x86_pmu.flags |= PMU_FL_HAS_RSP_1;
 
-		x86_pmu.cpu_events = nhm_events_attrs;
+		mem_attr = nhm_mem_events_attrs;
 
 		/* UOPS_ISSUED.STALLED_CYCLES */
 		intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] =
@@ -4152,6 +4330,7 @@ __init int intel_pmu_init(void)
 		x86_pmu.flags |= PMU_FL_NO_HT_SHARING;
 
 		x86_pmu.cpu_events = snb_events_attrs;
+		mem_attr = snb_mem_events_attrs;
 
 		/* UOPS_ISSUED.ANY,c=1,i=1 to count stall cycles */
 		intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] =
@@ -4192,6 +4371,7 @@ __init int intel_pmu_init(void)
 		x86_pmu.flags |= PMU_FL_NO_HT_SHARING;
 
 		x86_pmu.cpu_events = snb_events_attrs;
+		mem_attr = snb_mem_events_attrs;
 
 		/* UOPS_ISSUED.ANY,c=1,i=1 to count stall cycles */
 		intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] =
@@ -4226,10 +4406,12 @@ __init int intel_pmu_init(void)
 		x86_pmu.hw_config = hsw_hw_config;
 		x86_pmu.get_event_constraints = hsw_get_event_constraints;
-		x86_pmu.cpu_events = get_hsw_events_attrs();
+		x86_pmu.cpu_events = hsw_events_attrs;
 		x86_pmu.lbr_double_abort = true;
 		extra_attr = boot_cpu_has(X86_FEATURE_RTM) ?
 			hsw_format_attr : nhm_format_attr;
+		mem_attr = hsw_mem_events_attrs;
+		tsx_attr = hsw_tsx_events_attrs;
 		pr_cont("Haswell events, ");
 		name = "haswell";
 		break;
@@ -4265,10 +4447,12 @@ __init int intel_pmu_init(void)
 		x86_pmu.hw_config = hsw_hw_config;
 		x86_pmu.get_event_constraints = hsw_get_event_constraints;
-		x86_pmu.cpu_events = get_hsw_events_attrs();
+		x86_pmu.cpu_events = hsw_events_attrs;
 		x86_pmu.limit_period = bdw_limit_period;
 		extra_attr = boot_cpu_has(X86_FEATURE_RTM) ?
 			hsw_format_attr : nhm_format_attr;
+		mem_attr = hsw_mem_events_attrs;
+		tsx_attr = hsw_tsx_events_attrs;
 		pr_cont("Broadwell events, ");
 		name = "broadwell";
 		break;
@@ -4324,7 +4508,9 @@ __init int intel_pmu_init(void)
 			hsw_format_attr : nhm_format_attr;
 		extra_attr = merge_attr(extra_attr, skl_format_attr);
 		to_free = extra_attr;
-		x86_pmu.cpu_events = get_hsw_events_attrs();
+		x86_pmu.cpu_events = hsw_events_attrs;
+		mem_attr = hsw_mem_events_attrs;
+		tsx_attr = hsw_tsx_events_attrs;
 		intel_pmu_pebs_data_source_skl(
 			boot_cpu_data.x86_model == INTEL_FAM6_SKYLAKE_X);
 		pr_cont("Skylake events, ");
@@ -4357,6 +4543,9 @@ __init int intel_pmu_init(void)
 		WARN_ON(!x86_pmu.format_attrs);
 	}
 
+	x86_pmu.cpu_events = get_events_attrs(x86_pmu.cpu_events,
+					      mem_attr, tsx_attr);
+
 	if (x86_pmu.num_counters > INTEL_PMC_MAX_GENERIC) {
 		WARN(1, KERN_ERR "hw perf events %d > max(%d), clipping!",
 		     x86_pmu.num_counters, INTEL_PMC_MAX_GENERIC);
@@ -4431,6 +4620,13 @@ __init int intel_pmu_init(void)
 		pr_cont("full-width counters, ");
 	}
 
+	/*
+	 * For arch perfmon 4 use counter freezing to avoid
+	 * several MSR accesses in the PMI.
+	 */
+	if (x86_pmu.counter_freezing)
+		x86_pmu.handle_irq = intel_pmu_handle_irq_v4;
+
 	kfree(to_free);
 	return 0;
 }
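For orientation: on v4-capable CPUs the feature is armed per logical CPU by
setting bit 12 of IA32_DEBUGCTL, as enable_counter_freeze() above does. A
minimal sketch (hypothetical helper, not part of this series) for checking
whether it is currently armed on the local CPU:

/* Hypothetical: true if freeze-on-PMI is armed in IA32_DEBUGCTL. */
static bool counter_freezing_armed(void)
{
	return !!(get_debugctlmsr() & DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI);
}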

arch/x86/events/intel/cstate.c

@@ -559,8 +559,8 @@ static const struct x86_cpu_id intel_cstates_match[] __initconst = {
 	X86_CSTATES_MODEL(INTEL_FAM6_HASWELL_ULT, hswult_cstates),
 
-	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_SILVERMONT1, slm_cstates),
-	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_SILVERMONT2, slm_cstates),
+	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_SILVERMONT, slm_cstates),
+	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_SILVERMONT_X, slm_cstates),
 	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_AIRMONT, slm_cstates),
 
 	X86_CSTATES_MODEL(INTEL_FAM6_BROADWELL_CORE, snb_cstates),
@@ -581,9 +581,9 @@ static const struct x86_cpu_id intel_cstates_match[] __initconst = {
 	X86_CSTATES_MODEL(INTEL_FAM6_XEON_PHI_KNM, knl_cstates),
 
 	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_GOLDMONT, glm_cstates),
-	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_DENVERTON, glm_cstates),
+	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_GOLDMONT_X, glm_cstates),
 
-	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_GEMINI_LAKE, glm_cstates),
+	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_GOLDMONT_PLUS, glm_cstates),
 	{ },
 };
 MODULE_DEVICE_TABLE(x86cpu, intel_cstates_match);

arch/x86/events/intel/pt.c

@@ -95,7 +95,7 @@ static ssize_t pt_cap_show(struct device *cdev,
 	return snprintf(buf, PAGE_SIZE, "%x\n", pt_cap_get(cap));
 }
 
-static struct attribute_group pt_cap_group = {
+static struct attribute_group pt_cap_group __ro_after_init = {
 	.name = "caps",
 };

arch/x86/events/intel/rapl.c

@@ -777,9 +777,9 @@ static const struct x86_cpu_id rapl_cpu_match[] __initconst = {
 	X86_RAPL_MODEL_MATCH(INTEL_FAM6_CANNONLAKE_MOBILE, skl_rapl_init),
 
 	X86_RAPL_MODEL_MATCH(INTEL_FAM6_ATOM_GOLDMONT, hsw_rapl_init),
-	X86_RAPL_MODEL_MATCH(INTEL_FAM6_ATOM_DENVERTON, hsw_rapl_init),
+	X86_RAPL_MODEL_MATCH(INTEL_FAM6_ATOM_GOLDMONT_X, hsw_rapl_init),
 
-	X86_RAPL_MODEL_MATCH(INTEL_FAM6_ATOM_GEMINI_LAKE, hsw_rapl_init),
+	X86_RAPL_MODEL_MATCH(INTEL_FAM6_ATOM_GOLDMONT_PLUS, hsw_rapl_init),
 	{},
 };

arch/x86/events/msr.c

@@ -69,14 +69,14 @@ static bool test_intel(int idx)
 	case INTEL_FAM6_BROADWELL_GT3E:
 	case INTEL_FAM6_BROADWELL_X:
 
-	case INTEL_FAM6_ATOM_SILVERMONT1:
-	case INTEL_FAM6_ATOM_SILVERMONT2:
+	case INTEL_FAM6_ATOM_SILVERMONT:
+	case INTEL_FAM6_ATOM_SILVERMONT_X:
 	case INTEL_FAM6_ATOM_AIRMONT:
 
 	case INTEL_FAM6_ATOM_GOLDMONT:
-	case INTEL_FAM6_ATOM_DENVERTON:
+	case INTEL_FAM6_ATOM_GOLDMONT_X:
 
-	case INTEL_FAM6_ATOM_GEMINI_LAKE:
+	case INTEL_FAM6_ATOM_GOLDMONT_PLUS:
 
 	case INTEL_FAM6_XEON_PHI_KNL:
 	case INTEL_FAM6_XEON_PHI_KNM:

arch/x86/events/perf_event.h

@@ -560,9 +560,11 @@ struct x86_pmu {
 	struct event_constraint *event_constraints;
 	struct x86_pmu_quirk *quirks;
 	int		perfctr_second_write;
-	bool		late_ack;
 	u64		(*limit_period)(struct perf_event *event, u64 l);
 
+	/* PMI handler bits */
+	unsigned int	late_ack		:1,
+			counter_freezing	:1;
 	/*
 	 * sysfs attrs
 	 */

arch/x86/include/asm/intel-family.h

@@ -8,9 +8,6 @@
  * The "_X" parts are generally the EP and EX Xeons, or the
  * "Extreme" ones, like Broadwell-E.
  *
- * Things ending in "2" are usually because we have no better
- * name for them. There's no processor called "SILVERMONT2".
- *
  * While adding a new CPUID for a new microarchitecture, add a new
  * group to keep logically sorted out in chronological order. Within
  * that group keep the CPUID for the variants sorted by model number.
@@ -57,19 +54,23 @@
 
 /* "Small Core" Processors (Atom) */
 
-#define INTEL_FAM6_ATOM_PINEVIEW	0x1C
-#define INTEL_FAM6_ATOM_LINCROFT	0x26
-#define INTEL_FAM6_ATOM_PENWELL		0x27
-#define INTEL_FAM6_ATOM_CLOVERVIEW	0x35
-#define INTEL_FAM6_ATOM_CEDARVIEW	0x36
-#define INTEL_FAM6_ATOM_SILVERMONT1	0x37 /* BayTrail/BYT / Valleyview */
-#define INTEL_FAM6_ATOM_SILVERMONT2	0x4D /* Avaton/Rangely */
-#define INTEL_FAM6_ATOM_AIRMONT		0x4C /* CherryTrail / Braswell */
-#define INTEL_FAM6_ATOM_MERRIFIELD	0x4A /* Tangier */
-#define INTEL_FAM6_ATOM_MOOREFIELD	0x5A /* Anniedale */
-#define INTEL_FAM6_ATOM_GOLDMONT	0x5C
-#define INTEL_FAM6_ATOM_DENVERTON	0x5F /* Goldmont Microserver */
-#define INTEL_FAM6_ATOM_GEMINI_LAKE	0x7A
+#define INTEL_FAM6_ATOM_BONNELL		0x1C /* Diamondville, Pineview */
+#define INTEL_FAM6_ATOM_BONNELL_MID	0x26 /* Silverthorne, Lincroft */
+
+#define INTEL_FAM6_ATOM_SALTWELL	0x36 /* Cedarview */
+#define INTEL_FAM6_ATOM_SALTWELL_MID	0x27 /* Penwell */
+#define INTEL_FAM6_ATOM_SALTWELL_TABLET	0x35 /* Cloverview */
+
+#define INTEL_FAM6_ATOM_SILVERMONT	0x37 /* Bay Trail, Valleyview */
+#define INTEL_FAM6_ATOM_SILVERMONT_X	0x4D /* Avaton, Rangely */
+#define INTEL_FAM6_ATOM_SILVERMONT_MID	0x4A /* Merriefield */
+
+#define INTEL_FAM6_ATOM_AIRMONT		0x4C /* Cherry Trail, Braswell */
+#define INTEL_FAM6_ATOM_AIRMONT_MID	0x5A /* Moorefield */
+
+#define INTEL_FAM6_ATOM_GOLDMONT	0x5C /* Apollo Lake */
+#define INTEL_FAM6_ATOM_GOLDMONT_X	0x5F /* Denverton */
+#define INTEL_FAM6_ATOM_GOLDMONT_PLUS	0x7A /* Gemini Lake */
 
 /* Xeon Phi */

arch/x86/include/asm/msr-index.h

@@ -164,6 +164,7 @@
 #define DEBUGCTLMSR_BTS_OFF_OS		(1UL <<  9)
 #define DEBUGCTLMSR_BTS_OFF_USR		(1UL << 10)
 #define DEBUGCTLMSR_FREEZE_LBRS_ON_PMI	(1UL << 11)
+#define DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI	(1UL << 12)
 #define DEBUGCTLMSR_FREEZE_IN_SMM_BIT	14
 #define DEBUGCTLMSR_FREEZE_IN_SMM	(1UL << DEBUGCTLMSR_FREEZE_IN_SMM_BIT)

arch/x86/include/asm/perf_event.h

@@ -278,6 +278,7 @@ struct perf_guest_switch_msr {
 extern struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr);
 extern void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap);
 extern void perf_check_microcode(void);
+extern int x86_perf_rdpmc_index(struct perf_event *event);
 #else
 static inline struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr)
 {

arch/x86/include/asm/ptrace.h

@@ -238,24 +238,52 @@ static inline int regs_within_kernel_stack(struct pt_regs *regs,
 		(kernel_stack_pointer(regs) & ~(THREAD_SIZE - 1)));
 }
 
+/**
+ * regs_get_kernel_stack_nth_addr() - get the address of the Nth entry on stack
+ * @regs:	pt_regs which contains kernel stack pointer.
+ * @n:		stack entry number.
+ *
+ * regs_get_kernel_stack_nth() returns the address of the @n th entry of the
+ * kernel stack which is specified by @regs. If the @n th entry is NOT in
+ * the kernel stack, this returns NULL.
+ */
+static inline unsigned long *regs_get_kernel_stack_nth_addr(struct pt_regs *regs, unsigned int n)
+{
+	unsigned long *addr = (unsigned long *)kernel_stack_pointer(regs);
+
+	addr += n;
+	if (regs_within_kernel_stack(regs, (unsigned long)addr))
+		return addr;
+	else
+		return NULL;
+}
+
+/* To avoid include hell, we can't include uaccess.h */
+extern long probe_kernel_read(void *dst, const void *src, size_t size);
+
 /**
  * regs_get_kernel_stack_nth() - get Nth entry of the stack
  * @regs:	pt_regs which contains kernel stack pointer.
  * @n:		stack entry number.
  *
  * regs_get_kernel_stack_nth() returns @n th entry of the kernel stack which
- * is specified by @regs. If the @n th entry is NOT in the kernel stack,
+ * is specified by @regs. If the @n th entry is NOT in the kernel stack
  * this returns 0.
  */
 static inline unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs,
 						      unsigned int n)
 {
-	unsigned long *addr = (unsigned long *)kernel_stack_pointer(regs);
-	addr += n;
-	if (regs_within_kernel_stack(regs, (unsigned long)addr))
-		return *addr;
-	else
-		return 0;
+	unsigned long *addr;
+	unsigned long val;
+	long ret;
+
+	addr = regs_get_kernel_stack_nth_addr(regs, n);
+	if (addr) {
+		ret = probe_kernel_read(&val, addr, sizeof(val));
+		if (!ret)
+			return val;
+	}
+
+	return 0;
 }
 
 #define arch_has_single_step()	(1)
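To see the fix from a consumer's perspective, a minimal sketch (hypothetical
probe, not part of this commit) of a kprobes pre-handler using the accessor;
with the probe_kernel_read()-based implementation above, a bogus stack
pointer now yields 0 instead of faulting:

#include <linux/kprobes.h>

/* Hypothetical pre-handler: dump the 2nd entry of the probed kernel stack. */
static int sample_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	unsigned long entry = regs_get_kernel_stack_nth(regs, 2);

	pr_debug("stack[2] = 0x%lx\n", entry);
	return 0;
}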

arch/x86/kernel/cpu/common.c

@@ -949,11 +949,11 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
 }
 
 static const __initconst struct x86_cpu_id cpu_no_speculation[] = {
-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_CEDARVIEW,	X86_FEATURE_ANY },
-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_CLOVERVIEW,	X86_FEATURE_ANY },
-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_LINCROFT,	X86_FEATURE_ANY },
-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_PENWELL,	X86_FEATURE_ANY },
-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_PINEVIEW,	X86_FEATURE_ANY },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SALTWELL,	X86_FEATURE_ANY },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SALTWELL_TABLET,	X86_FEATURE_ANY },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_BONNELL_MID,	X86_FEATURE_ANY },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SALTWELL_MID,	X86_FEATURE_ANY },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_BONNELL,	X86_FEATURE_ANY },
 	{ X86_VENDOR_CENTAUR,	5 },
 	{ X86_VENDOR_INTEL,	5 },
 	{ X86_VENDOR_NSC,	5 },
@@ -968,10 +968,10 @@ static const __initconst struct x86_cpu_id cpu_no_meltdown[] = {
 
 /* Only list CPUs which speculate but are non susceptible to SSB */
 static const __initconst struct x86_cpu_id cpu_no_spec_store_bypass[] = {
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT1	},
+	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT	},
 	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_AIRMONT		},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT2	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_MERRIFIELD	},
+	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_X	},
+	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_MID	},
 	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_CORE_YONAH		},
 	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNL		},
 	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNM		},
@@ -984,14 +984,14 @@ static const __initconst struct x86_cpu_id cpu_no_spec_store_bypass[] = {
 
 static const __initconst struct x86_cpu_id cpu_no_l1tf[] = {
 	/* in addition to cpu_no_speculation */
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT1	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT2	},
+	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT	},
+	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_X	},
 	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_AIRMONT		},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_MERRIFIELD	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_MOOREFIELD	},
+	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_MID	},
+	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_AIRMONT_MID	},
 	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_DENVERTON	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GEMINI_LAKE	},
+	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT_X	},
+	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT_PLUS	},
 	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNL		},
 	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNM		},
 	{}

arch/x86/kernel/cpu/intel_rdt.c

@@ -485,9 +485,7 @@ static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
 	size_t tsize;
 
 	if (is_llc_occupancy_enabled()) {
-		d->rmid_busy_llc = kcalloc(BITS_TO_LONGS(r->num_rmid),
-					   sizeof(unsigned long),
-					   GFP_KERNEL);
+		d->rmid_busy_llc = bitmap_zalloc(r->num_rmid, GFP_KERNEL);
 		if (!d->rmid_busy_llc)
 			return -ENOMEM;
 		INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo);
@@ -496,7 +494,7 @@ static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
 		tsize = sizeof(*d->mbm_total);
 		d->mbm_total = kcalloc(r->num_rmid, tsize, GFP_KERNEL);
 		if (!d->mbm_total) {
-			kfree(d->rmid_busy_llc);
+			bitmap_free(d->rmid_busy_llc);
 			return -ENOMEM;
 		}
 	}
@@ -504,7 +502,7 @@ static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
 		tsize = sizeof(*d->mbm_local);
 		d->mbm_local = kcalloc(r->num_rmid, tsize, GFP_KERNEL);
 		if (!d->mbm_local) {
-			kfree(d->rmid_busy_llc);
+			bitmap_free(d->rmid_busy_llc);
 			kfree(d->mbm_total);
 			return -ENOMEM;
 		}
@@ -610,9 +608,16 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
 			cancel_delayed_work(&d->cqm_limbo);
 		}
 
+		/*
+		 * rdt_domain "d" is going to be freed below, so clear
+		 * its pointer from pseudo_lock_region struct.
+		 */
+		if (d->plr)
+			d->plr->d = NULL;
+
 		kfree(d->ctrl_val);
 		kfree(d->mbps_val);
-		kfree(d->rmid_busy_llc);
+		bitmap_free(d->rmid_busy_llc);
 		kfree(d->mbm_total);
 		kfree(d->mbm_local);
 		kfree(d);
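For reference, bitmap_zalloc()/bitmap_free() from <linux/bitmap.h>
encapsulate exactly the open-coded allocation removed above; roughly (a
sketch for illustration, not the verbatim library code):

/* Equivalent of the replaced kcalloc() call, for illustration only. */
static inline unsigned long *sketch_bitmap_zalloc(unsigned int nbits, gfp_t flags)
{
	return kcalloc(BITS_TO_LONGS(nbits), sizeof(unsigned long), flags);
}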

arch/x86/kernel/cpu/intel_rdt_ctrlmondata.c

@@ -404,8 +404,16 @@ int rdtgroup_schemata_show(struct kernfs_open_file *of,
 		for_each_alloc_enabled_rdt_resource(r)
 			seq_printf(s, "%s:uninitialized\n", r->name);
 	} else if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED) {
-		seq_printf(s, "%s:%d=%x\n", rdtgrp->plr->r->name,
-			   rdtgrp->plr->d->id, rdtgrp->plr->cbm);
+		if (!rdtgrp->plr->d) {
+			rdt_last_cmd_clear();
+			rdt_last_cmd_puts("Cache domain offline\n");
+			ret = -ENODEV;
+		} else {
+			seq_printf(s, "%s:%d=%x\n",
+				   rdtgrp->plr->r->name,
+				   rdtgrp->plr->d->id,
+				   rdtgrp->plr->cbm);
+		}
 	} else {
 		closid = rdtgrp->closid;
 		for_each_alloc_enabled_rdt_resource(r) {

arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c

@ -17,6 +17,7 @@
#include <linux/debugfs.h> #include <linux/debugfs.h>
#include <linux/kthread.h> #include <linux/kthread.h>
#include <linux/mman.h> #include <linux/mman.h>
#include <linux/perf_event.h>
#include <linux/pm_qos.h> #include <linux/pm_qos.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/uaccess.h> #include <linux/uaccess.h>
@ -26,6 +27,7 @@
#include <asm/intel_rdt_sched.h> #include <asm/intel_rdt_sched.h>
#include <asm/perf_event.h> #include <asm/perf_event.h>
#include "../../events/perf_event.h" /* For X86_CONFIG() */
#include "intel_rdt.h" #include "intel_rdt.h"
#define CREATE_TRACE_POINTS #define CREATE_TRACE_POINTS
@ -91,7 +93,7 @@ static u64 get_prefetch_disable_bits(void)
*/ */
return 0xF; return 0xF;
case INTEL_FAM6_ATOM_GOLDMONT: case INTEL_FAM6_ATOM_GOLDMONT:
case INTEL_FAM6_ATOM_GEMINI_LAKE: case INTEL_FAM6_ATOM_GOLDMONT_PLUS:
/* /*
* SDM defines bits of MSR_MISC_FEATURE_CONTROL register * SDM defines bits of MSR_MISC_FEATURE_CONTROL register
* as: * as:
@ -106,16 +108,6 @@ static u64 get_prefetch_disable_bits(void)
return 0; return 0;
} }
/*
* Helper to write 64bit value to MSR without tracing. Used when
* use of the cache should be restricted and use of registers used
* for local variables avoided.
*/
static inline void pseudo_wrmsrl_notrace(unsigned int msr, u64 val)
{
__wrmsr(msr, (u32)(val & 0xffffffffULL), (u32)(val >> 32));
}
/** /**
* pseudo_lock_minor_get - Obtain available minor number * pseudo_lock_minor_get - Obtain available minor number
* @minor: Pointer to where new minor number will be stored * @minor: Pointer to where new minor number will be stored
@ -888,31 +880,14 @@ static int measure_cycles_lat_fn(void *_plr)
struct pseudo_lock_region *plr = _plr; struct pseudo_lock_region *plr = _plr;
unsigned long i; unsigned long i;
u64 start, end; u64 start, end;
#ifdef CONFIG_KASAN
/*
* The registers used for local register variables are also used
* when KASAN is active. When KASAN is active we use a regular
* variable to ensure we always use a valid pointer to access memory.
* The cost is that accessing this pointer, which could be in
* cache, will be included in the measurement of memory read latency.
*/
void *mem_r; void *mem_r;
#else
#ifdef CONFIG_X86_64
register void *mem_r asm("rbx");
#else
register void *mem_r asm("ebx");
#endif /* CONFIG_X86_64 */
#endif /* CONFIG_KASAN */
local_irq_disable(); local_irq_disable();
/* /*
* The wrmsr call may be reordered with the assignment below it. * Disable hardware prefetchers.
* Call wrmsr as directly as possible to avoid tracing clobbering
* local register variable used for memory pointer.
*/ */
__wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0); wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0);
mem_r = plr->kmem; mem_r = READ_ONCE(plr->kmem);
/* /*
* Dummy execute of the time measurement to load the needed * Dummy execute of the time measurement to load the needed
* instructions into the L1 instruction cache. * instructions into the L1 instruction cache.
@ -934,157 +909,240 @@ static int measure_cycles_lat_fn(void *_plr)
return 0; return 0;
} }
static int measure_cycles_perf_fn(void *_plr) /*
* Create a perf_event_attr for the hit and miss perf events that will
* be used during the performance measurement. A perf_event maintains
* a pointer to its perf_event_attr so a unique attribute structure is
* created for each perf_event.
*
* The actual configuration of the event is set right before use in order
* to use the X86_CONFIG macro.
*/
static struct perf_event_attr perf_miss_attr = {
.type = PERF_TYPE_RAW,
.size = sizeof(struct perf_event_attr),
.pinned = 1,
.disabled = 0,
.exclude_user = 1,
};
static struct perf_event_attr perf_hit_attr = {
.type = PERF_TYPE_RAW,
.size = sizeof(struct perf_event_attr),
.pinned = 1,
.disabled = 0,
.exclude_user = 1,
};
struct residency_counts {
u64 miss_before, hits_before;
u64 miss_after, hits_after;
};
static int measure_residency_fn(struct perf_event_attr *miss_attr,
struct perf_event_attr *hit_attr,
struct pseudo_lock_region *plr,
struct residency_counts *counts)
{ {
unsigned long long l3_hits = 0, l3_miss = 0; u64 hits_before = 0, hits_after = 0, miss_before = 0, miss_after = 0;
u64 l3_hit_bits = 0, l3_miss_bits = 0; struct perf_event *miss_event, *hit_event;
struct pseudo_lock_region *plr = _plr; int hit_pmcnum, miss_pmcnum;
unsigned long long l2_hits, l2_miss;
u64 l2_hit_bits, l2_miss_bits;
unsigned long i;
#ifdef CONFIG_KASAN
/*
* The registers used for local register variables are also used
* when KASAN is active. When KASAN is active we use regular variables
* at the cost of including cache access latency to these variables
* in the measurements.
*/
unsigned int line_size; unsigned int line_size;
unsigned int size; unsigned int size;
unsigned long i;
void *mem_r; void *mem_r;
#else u64 tmp;
register unsigned int line_size asm("esi");
register unsigned int size asm("edi");
#ifdef CONFIG_X86_64
register void *mem_r asm("rbx");
#else
register void *mem_r asm("ebx");
#endif /* CONFIG_X86_64 */
#endif /* CONFIG_KASAN */
/* miss_event = perf_event_create_kernel_counter(miss_attr, plr->cpu,
* Non-architectural event for the Goldmont Microarchitecture NULL, NULL, NULL);
* from Intel x86 Architecture Software Developer Manual (SDM): if (IS_ERR(miss_event))
* MEM_LOAD_UOPS_RETIRED D1H (event number)
* Umask values:
* L1_HIT 01H
* L2_HIT 02H
* L1_MISS 08H
* L2_MISS 10H
*
* On Broadwell Microarchitecture the MEM_LOAD_UOPS_RETIRED event
* has two "no fix" errata associated with it: BDM35 and BDM100. On
* this platform we use the following events instead:
* L2_RQSTS 24H (Documented in https://download.01.org/perfmon/BDW/)
* REFERENCES FFH
* MISS 3FH
* LONGEST_LAT_CACHE 2EH (Documented in SDM)
* REFERENCE 4FH
* MISS 41H
*/
/*
* Start by setting flags for IA32_PERFEVTSELx:
* OS (Operating system mode) 0x2
* INT (APIC interrupt enable) 0x10
* EN (Enable counter) 0x40
*
* Then add the Umask value and event number to select performance
* event.
*/
switch (boot_cpu_data.x86_model) {
case INTEL_FAM6_ATOM_GOLDMONT:
case INTEL_FAM6_ATOM_GEMINI_LAKE:
l2_hit_bits = (0x52ULL << 16) | (0x2 << 8) | 0xd1;
l2_miss_bits = (0x52ULL << 16) | (0x10 << 8) | 0xd1;
break;
case INTEL_FAM6_BROADWELL_X:
/* On BDW the l2_hit_bits count references, not hits */
l2_hit_bits = (0x52ULL << 16) | (0xff << 8) | 0x24;
l2_miss_bits = (0x52ULL << 16) | (0x3f << 8) | 0x24;
/* On BDW the l3_hit_bits count references, not hits */
l3_hit_bits = (0x52ULL << 16) | (0x4f << 8) | 0x2e;
l3_miss_bits = (0x52ULL << 16) | (0x41 << 8) | 0x2e;
break;
default:
goto out; goto out;
}
hit_event = perf_event_create_kernel_counter(hit_attr, plr->cpu,
NULL, NULL, NULL);
if (IS_ERR(hit_event))
goto out_miss;
local_irq_disable(); local_irq_disable();
/* /*
* Call wrmsr direcly to avoid the local register variables from * Check any possible error state of events used by performing
* being overwritten due to reordering of their assignment with * one local read.
* the wrmsr calls.
*/ */
__wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0); if (perf_event_read_local(miss_event, &tmp, NULL, NULL)) {
/* Disable events and reset counters */ local_irq_enable();
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0, 0x0); goto out_hit;
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 1, 0x0);
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_PERFCTR0, 0x0);
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_PERFCTR0 + 1, 0x0);
if (l3_hit_bits > 0) {
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 2, 0x0);
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 3, 0x0);
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_PERFCTR0 + 2, 0x0);
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_PERFCTR0 + 3, 0x0);
} }
/* Set and enable the L2 counters */ if (perf_event_read_local(hit_event, &tmp, NULL, NULL)) {
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0, l2_hit_bits); local_irq_enable();
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 1, l2_miss_bits); goto out_hit;
if (l3_hit_bits > 0) {
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 2,
l3_hit_bits);
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 3,
l3_miss_bits);
} }
mem_r = plr->kmem;
size = plr->size; /*
line_size = plr->line_size; * Disable hardware prefetchers.
*/
wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0);
/* Initialize rest of local variables */
/*
* Performance event has been validated right before this with
* interrupts disabled - it is thus safe to read the counter index.
*/
miss_pmcnum = x86_perf_rdpmc_index(miss_event);
hit_pmcnum = x86_perf_rdpmc_index(hit_event);
line_size = READ_ONCE(plr->line_size);
mem_r = READ_ONCE(plr->kmem);
size = READ_ONCE(plr->size);
/*
* Read counter variables twice - first to load the instructions
* used in L1 cache, second to capture accurate value that does not
* include cache misses incurred because of instruction loads.
*/
rdpmcl(hit_pmcnum, hits_before);
rdpmcl(miss_pmcnum, miss_before);
/*
* From SDM: Performing back-to-back fast reads are not guaranteed
* to be monotonic.
* Use LFENCE to ensure all previous instructions are retired
* before proceeding.
*/
rmb();
rdpmcl(hit_pmcnum, hits_before);
rdpmcl(miss_pmcnum, miss_before);
/*
* Use LFENCE to ensure all previous instructions are retired
* before proceeding.
*/
rmb();
for (i = 0; i < size; i += line_size) { for (i = 0; i < size; i += line_size) {
/*
* Add a barrier to prevent speculative execution of this
* loop reading beyond the end of the buffer.
*/
rmb();
asm volatile("mov (%0,%1,1), %%eax\n\t" asm volatile("mov (%0,%1,1), %%eax\n\t"
: :
: "r" (mem_r), "r" (i) : "r" (mem_r), "r" (i)
: "%eax", "memory"); : "%eax", "memory");
} }
/* /*
* Call wrmsr directly (no tracing) to not influence * Use LFENCE to ensure all previous instructions are retired
* the cache access counters as they are disabled. * before proceeding.
*/ */
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0, rmb();
l2_hit_bits & ~(0x40ULL << 16)); rdpmcl(hit_pmcnum, hits_after);
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 1, rdpmcl(miss_pmcnum, miss_after);
l2_miss_bits & ~(0x40ULL << 16)); /*
if (l3_hit_bits > 0) { * Use LFENCE to ensure all previous instructions are retired
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 2, * before proceeding.
l3_hit_bits & ~(0x40ULL << 16)); */
pseudo_wrmsrl_notrace(MSR_ARCH_PERFMON_EVENTSEL0 + 3, rmb();
l3_miss_bits & ~(0x40ULL << 16)); /* Re-enable hardware prefetchers */
}
l2_hits = native_read_pmc(0);
l2_miss = native_read_pmc(1);
if (l3_hit_bits > 0) {
l3_hits = native_read_pmc(2);
l3_miss = native_read_pmc(3);
}
wrmsr(MSR_MISC_FEATURE_CONTROL, 0x0, 0x0); wrmsr(MSR_MISC_FEATURE_CONTROL, 0x0, 0x0);
local_irq_enable(); local_irq_enable();
out_hit:
perf_event_release_kernel(hit_event);
out_miss:
perf_event_release_kernel(miss_event);
out:
/* /*
* On BDW we count references and misses, need to adjust. Sometimes * All counts will be zero on failure.
* the "hits" counter is a bit more than the references, for
* example, x references but x + 1 hits. To not report invalid
* hit values in this case we treat that as misses eaqual to
* references.
*/ */
if (boot_cpu_data.x86_model == INTEL_FAM6_BROADWELL_X) counts->miss_before = miss_before;
l2_hits -= (l2_miss > l2_hits ? l2_hits : l2_miss); counts->hits_before = hits_before;
trace_pseudo_lock_l2(l2_hits, l2_miss); counts->miss_after = miss_after;
if (l3_hit_bits > 0) { counts->hits_after = hits_after;
if (boot_cpu_data.x86_model == INTEL_FAM6_BROADWELL_X) return 0;
l3_hits -= (l3_miss > l3_hits ? l3_hits : l3_miss); }
trace_pseudo_lock_l3(l3_hits, l3_miss);
static int measure_l2_residency(void *_plr)
{
struct pseudo_lock_region *plr = _plr;
struct residency_counts counts = {0};
/*
* Non-architectural event for the Goldmont Microarchitecture
* from Intel x86 Architecture Software Developer Manual (SDM):
* MEM_LOAD_UOPS_RETIRED D1H (event number)
* Umask values:
* L2_HIT 02H
* L2_MISS 10H
*/
switch (boot_cpu_data.x86_model) {
case INTEL_FAM6_ATOM_GOLDMONT:
case INTEL_FAM6_ATOM_GOLDMONT_PLUS:
perf_miss_attr.config = X86_CONFIG(.event = 0xd1,
.umask = 0x10);
perf_hit_attr.config = X86_CONFIG(.event = 0xd1,
.umask = 0x2);
break;
default:
goto out;
} }
measure_residency_fn(&perf_miss_attr, &perf_hit_attr, plr, &counts);
/*
* If a failure prevented the measurements from succeeding
* tracepoints will still be written and all counts will be zero.
*/
trace_pseudo_lock_l2(counts.hits_after - counts.hits_before,
counts.miss_after - counts.miss_before);
out:
plr->thread_done = 1;
wake_up_interruptible(&plr->lock_thread_wq);
return 0;
}
static int measure_l3_residency(void *_plr)
{
struct pseudo_lock_region *plr = _plr;
struct residency_counts counts = {0};
/*
* On Broadwell Microarchitecture the MEM_LOAD_UOPS_RETIRED event
* has two "no fix" errata associated with it: BDM35 and BDM100. On
* this platform the following events are used instead:
* LONGEST_LAT_CACHE 2EH (Documented in SDM)
* REFERENCE 4FH
* MISS 41H
*/
switch (boot_cpu_data.x86_model) {
case INTEL_FAM6_BROADWELL_X:
/* On BDW the hit event counts references, not hits */
perf_hit_attr.config = X86_CONFIG(.event = 0x2e,
.umask = 0x4f);
perf_miss_attr.config = X86_CONFIG(.event = 0x2e,
.umask = 0x41);
break;
default:
goto out;
}
measure_residency_fn(&perf_miss_attr, &perf_hit_attr, plr, &counts);
/*
* If a failure prevented the measurements from succeeding
* tracepoints will still be written and all counts will be zero.
*/
counts.miss_after -= counts.miss_before;
if (boot_cpu_data.x86_model == INTEL_FAM6_BROADWELL_X) {
/*
* On BDW references and misses are counted, need to adjust.
* Sometimes the "hits" counter is a bit more than the
* references, for example, x references but x + 1 hits.
* To not report invalid hit values in this case we treat
* that as misses equal to references.
*/
/* First compute the number of cache references measured */
counts.hits_after -= counts.hits_before;
/* Next convert references to cache hits */
counts.hits_after -= min(counts.miss_after, counts.hits_after);
} else {
counts.hits_after -= counts.hits_before;
}
trace_pseudo_lock_l3(counts.hits_after, counts.miss_after);
out: out:
plr->thread_done = 1; plr->thread_done = 1;
wake_up_interruptible(&plr->lock_thread_wq); wake_up_interruptible(&plr->lock_thread_wq);
@@ -1116,6 +1174,11 @@ static int pseudo_lock_measure_cycles(struct rdtgroup *rdtgrp, int sel)
goto out;
}
if (!plr->d) {
ret = -ENODEV;
goto out;
}
plr->thread_done = 0;
cpu = cpumask_first(&plr->d->cpu_mask);
if (!cpu_online(cpu)) {
@@ -1123,13 +1186,20 @@ static int pseudo_lock_measure_cycles(struct rdtgroup *rdtgrp, int sel)
goto out;
}
plr->cpu = cpu;
if (sel == 1)
thread = kthread_create_on_node(measure_cycles_lat_fn, plr,
cpu_to_node(cpu),
"pseudo_lock_measure/%u",
cpu);
else if (sel == 2)
-thread = kthread_create_on_node(measure_cycles_perf_fn, plr,
+thread = kthread_create_on_node(measure_l2_residency, plr,
+cpu_to_node(cpu),
+"pseudo_lock_measure/%u",
+cpu);
+else if (sel == 3)
+thread = kthread_create_on_node(measure_l3_residency, plr,
cpu_to_node(cpu),
"pseudo_lock_measure/%u",
cpu);
@@ -1173,7 +1243,7 @@ static ssize_t pseudo_lock_measure_trigger(struct file *file,
buf[buf_size] = '\0';
ret = kstrtoint(buf, 10, &sel);
if (ret == 0) {
-if (sel != 1)
+if (sel != 1 && sel != 2 && sel != 3)
return -EINVAL;
ret = debugfs_file_get(file->f_path.dentry);
if (ret)
@@ -1429,6 +1499,11 @@ static int pseudo_lock_dev_mmap(struct file *filp, struct vm_area_struct *vma)
plr = rdtgrp->plr;
if (!plr->d) {
mutex_unlock(&rdtgroup_mutex);
return -ENODEV;
}
/*
 * Task is required to run with affinity to the cpus associated
 * with the pseudo-locked region. If this is not the case the task


@@ -268,17 +268,27 @@ static int rdtgroup_cpus_show(struct kernfs_open_file *of,
struct seq_file *s, void *v)
{
struct rdtgroup *rdtgrp;
+struct cpumask *mask;
int ret = 0;
rdtgrp = rdtgroup_kn_lock_live(of->kn);
if (rdtgrp) {
-if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED)
-seq_printf(s, is_cpu_list(of) ? "%*pbl\n" : "%*pb\n",
-cpumask_pr_args(&rdtgrp->plr->d->cpu_mask));
-else
+if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED) {
+if (!rdtgrp->plr->d) {
+rdt_last_cmd_clear();
+rdt_last_cmd_puts("Cache domain offline\n");
+ret = -ENODEV;
+} else {
+mask = &rdtgrp->plr->d->cpu_mask;
+seq_printf(s, is_cpu_list(of) ?
+"%*pbl\n" : "%*pb\n",
+cpumask_pr_args(mask));
+}
+} else {
seq_printf(s, is_cpu_list(of) ? "%*pbl\n" : "%*pb\n",
cpumask_pr_args(&rdtgrp->cpu_mask));
+}
} else {
ret = -ENOENT;
}
@@ -961,7 +971,78 @@ static int rdtgroup_mode_show(struct kernfs_open_file *of,
}
/**
- * rdtgroup_cbm_overlaps - Does CBM for intended closid overlap with other
+ * rdt_cdp_peer_get - Retrieve CDP peer if it exists
* @r: RDT resource to which RDT domain @d belongs
* @d: Cache instance for which a CDP peer is requested
* @r_cdp: RDT resource that shares hardware with @r (RDT resource peer)
* Used to return the result.
* @d_cdp: RDT domain that shares hardware with @d (RDT domain peer)
* Used to return the result.
*
* RDT resources are managed independently and by extension the RDT domains
* (RDT resource instances) are managed independently also. The Code and
* Data Prioritization (CDP) RDT resources, while managed independently,
* could refer to the same underlying hardware. For example,
* RDT_RESOURCE_L2CODE and RDT_RESOURCE_L2DATA both refer to the L2 cache.
*
* When provided with an RDT resource @r and an instance of that RDT
* resource @d rdt_cdp_peer_get() will return if there is a peer RDT
* resource and the exact instance that shares the same hardware.
*
* Return: 0 if a CDP peer was found, <0 on error or if no CDP peer exists.
* If a CDP peer was found, @r_cdp will point to the peer RDT resource
* and @d_cdp will point to the peer RDT domain.
*/
static int rdt_cdp_peer_get(struct rdt_resource *r, struct rdt_domain *d,
struct rdt_resource **r_cdp,
struct rdt_domain **d_cdp)
{
struct rdt_resource *_r_cdp = NULL;
struct rdt_domain *_d_cdp = NULL;
int ret = 0;
switch (r->rid) {
case RDT_RESOURCE_L3DATA:
_r_cdp = &rdt_resources_all[RDT_RESOURCE_L3CODE];
break;
case RDT_RESOURCE_L3CODE:
_r_cdp = &rdt_resources_all[RDT_RESOURCE_L3DATA];
break;
case RDT_RESOURCE_L2DATA:
_r_cdp = &rdt_resources_all[RDT_RESOURCE_L2CODE];
break;
case RDT_RESOURCE_L2CODE:
_r_cdp = &rdt_resources_all[RDT_RESOURCE_L2DATA];
break;
default:
ret = -ENOENT;
goto out;
}
/*
* When a new CPU comes online and CDP is enabled then the new
* RDT domains (if any) associated with both CDP RDT resources
* are added in the same CPU online routine while the
* rdtgroup_mutex is held. It should thus not happen for one
* RDT domain to exist and be associated with its RDT CDP
* resource but there is no RDT domain associated with the
* peer RDT CDP resource. Hence the WARN.
*/
_d_cdp = rdt_find_domain(_r_cdp, d->id, NULL);
if (WARN_ON(!_d_cdp)) {
_r_cdp = NULL;
ret = -EINVAL;
}
out:
*r_cdp = _r_cdp;
*d_cdp = _d_cdp;
return ret;
}
/**
* __rdtgroup_cbm_overlaps - Does CBM for intended closid overlap with other
* @r: Resource to which domain instance @d belongs. * @r: Resource to which domain instance @d belongs.
* @d: The domain instance for which @closid is being tested. * @d: The domain instance for which @closid is being tested.
* @cbm: Capacity bitmask being tested. * @cbm: Capacity bitmask being tested.
@ -980,8 +1061,8 @@ static int rdtgroup_mode_show(struct kernfs_open_file *of,
* *
* Return: false if CBM does not overlap, true if it does. * Return: false if CBM does not overlap, true if it does.
*/ */
bool rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d, static bool __rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d,
unsigned long cbm, int closid, bool exclusive) unsigned long cbm, int closid, bool exclusive)
{ {
enum rdtgrp_mode mode; enum rdtgrp_mode mode;
unsigned long ctrl_b; unsigned long ctrl_b;
@ -1016,6 +1097,41 @@ bool rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d,
return false; return false;
} }
/**
* rdtgroup_cbm_overlaps - Does CBM overlap with other use of hardware
* @r: Resource to which domain instance @d belongs.
* @d: The domain instance for which @closid is being tested.
* @cbm: Capacity bitmask being tested.
* @closid: Intended closid for @cbm.
* @exclusive: Only check if overlaps with exclusive resource groups
*
* Resources that can be allocated using a CBM can use the CBM to control
* the overlap of these allocations. rdtgroup_cbm_overlaps() is the test
* for overlap. Overlap test is not limited to the specific resource for
* which the CBM is intended though - when dealing with CDP resources that
* share the underlying hardware the overlap check should be performed on
* the CDP resource sharing the hardware also.
*
* Refer to description of __rdtgroup_cbm_overlaps() for the details of the
* overlap test.
*
* Return: true if CBM overlap detected, false if there is no overlap
*/
bool rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d,
unsigned long cbm, int closid, bool exclusive)
{
struct rdt_resource *r_cdp;
struct rdt_domain *d_cdp;
if (__rdtgroup_cbm_overlaps(r, d, cbm, closid, exclusive))
return true;
if (rdt_cdp_peer_get(r, d, &r_cdp, &d_cdp) < 0)
return false;
return __rdtgroup_cbm_overlaps(r_cdp, d_cdp, cbm, closid, exclusive);
}
/**
 * rdtgroup_mode_test_exclusive - Test if this resource group can be exclusive
 *
@@ -1176,6 +1292,7 @@ static int rdtgroup_size_show(struct kernfs_open_file *of,
struct rdt_resource *r;
struct rdt_domain *d;
unsigned int size;
+int ret = 0;
bool sep;
u32 ctrl;
@@ -1186,11 +1303,18 @@ static int rdtgroup_size_show(struct kernfs_open_file *of,
}
if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED) {
-seq_printf(s, "%*s:", max_name_width, rdtgrp->plr->r->name);
-size = rdtgroup_cbm_to_size(rdtgrp->plr->r,
-rdtgrp->plr->d,
-rdtgrp->plr->cbm);
-seq_printf(s, "%d=%u\n", rdtgrp->plr->d->id, size);
+if (!rdtgrp->plr->d) {
+rdt_last_cmd_clear();
+rdt_last_cmd_puts("Cache domain offline\n");
+ret = -ENODEV;
+} else {
+seq_printf(s, "%*s:", max_name_width,
+rdtgrp->plr->r->name);
+size = rdtgroup_cbm_to_size(rdtgrp->plr->r,
+rdtgrp->plr->d,
+rdtgrp->plr->cbm);
+seq_printf(s, "%d=%u\n", rdtgrp->plr->d->id, size);
+}
goto out;
}
@@ -1220,7 +1344,7 @@ static int rdtgroup_size_show(struct kernfs_open_file *of,
out:
rdtgroup_kn_unlock(of->kn);
-return 0;
+return ret;
}
/* rdtgroup information files for one cache resource. */
@@ -2354,14 +2478,16 @@ static void cbm_ensure_valid(u32 *_val, struct rdt_resource *r)
 */
static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
{
+struct rdt_resource *r_cdp = NULL;
+struct rdt_domain *d_cdp = NULL;
u32 used_b = 0, unused_b = 0;
u32 closid = rdtgrp->closid;
struct rdt_resource *r;
unsigned long tmp_cbm;
enum rdtgrp_mode mode;
struct rdt_domain *d;
+u32 peer_ctl, *ctrl;
int i, ret;
-u32 *ctrl;
for_each_alloc_enabled_rdt_resource(r) {
/*
@@ -2371,6 +2497,7 @@ static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
if (r->rid == RDT_RESOURCE_MBA)
continue;
list_for_each_entry(d, &r->domains, list) {
+rdt_cdp_peer_get(r, d, &r_cdp, &d_cdp);
d->have_new_ctrl = false;
d->new_ctrl = r->cache.shareable_bits;
used_b = r->cache.shareable_bits;
@@ -2380,9 +2507,19 @@ static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
mode = rdtgroup_mode_by_closid(i);
if (mode == RDT_MODE_PSEUDO_LOCKSETUP)
break;
-used_b |= *ctrl;
+/*
+ * If CDP is active include peer
+ * domain's usage to ensure there
+ * is no overlap with an exclusive
+ * group.
+ */
+if (d_cdp)
+peer_ctl = d_cdp->ctrl_val[i];
+else
+peer_ctl = 0;
+used_b |= *ctrl | peer_ctl;
if (mode == RDT_MODE_SHAREABLE)
-d->new_ctrl |= *ctrl;
+d->new_ctrl |= *ctrl | peer_ctl;
}
}
if (d->plr && d->plr->cbm > 0)
@@ -2805,6 +2942,13 @@ static int rdtgroup_show_options(struct seq_file *seq, struct kernfs_root *kf)
{
if (rdt_resources_all[RDT_RESOURCE_L3DATA].alloc_enabled)
seq_puts(seq, ",cdp");
+if (rdt_resources_all[RDT_RESOURCE_L2DATA].alloc_enabled)
+seq_puts(seq, ",cdpl2");
+if (is_mba_sc(&rdt_resources_all[RDT_RESOURCE_MBA]))
+seq_puts(seq, ",mba_MBps");
return 0;
}


@@ -179,7 +179,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
opt_pre_handler(&op->kp, regs);
__this_cpu_write(current_kprobe, NULL);
}
-preempt_enable_no_resched();
+preempt_enable();
}
NOKPROBE_SYMBOL(optimized_callback);


@@ -636,7 +636,7 @@ unsigned long native_calibrate_tsc(void)
case INTEL_FAM6_KABYLAKE_DESKTOP:
crystal_khz = 24000; /* 24.0 MHz */
break;
-case INTEL_FAM6_ATOM_DENVERTON:
+case INTEL_FAM6_ATOM_GOLDMONT_X:
crystal_khz = 25000; /* 25.0 MHz */
break;
case INTEL_FAM6_ATOM_GOLDMONT:


@@ -59,12 +59,12 @@ static const struct freq_desc freq_desc_ann = {
};
static const struct x86_cpu_id tsc_msr_cpu_ids[] = {
-INTEL_CPU_FAM6(ATOM_PENWELL, freq_desc_pnw),
+INTEL_CPU_FAM6(ATOM_SALTWELL_MID, freq_desc_pnw),
-INTEL_CPU_FAM6(ATOM_CLOVERVIEW, freq_desc_clv),
+INTEL_CPU_FAM6(ATOM_SALTWELL_TABLET, freq_desc_clv),
-INTEL_CPU_FAM6(ATOM_SILVERMONT1, freq_desc_byt),
+INTEL_CPU_FAM6(ATOM_SILVERMONT, freq_desc_byt),
+INTEL_CPU_FAM6(ATOM_SILVERMONT_MID, freq_desc_tng),
INTEL_CPU_FAM6(ATOM_AIRMONT, freq_desc_cht),
-INTEL_CPU_FAM6(ATOM_MERRIFIELD, freq_desc_tng),
+INTEL_CPU_FAM6(ATOM_AIRMONT_MID, freq_desc_ann),
-INTEL_CPU_FAM6(ATOM_MOOREFIELD, freq_desc_ann),
{}
};


@@ -115,7 +115,7 @@ static struct dentry *punit_dbg_file;
static int punit_dbgfs_register(struct punit_device *punit_device)
{
-static struct dentry *dev_state;
+struct dentry *dev_state;
punit_dbg_file = debugfs_create_dir("punit_atom", NULL);
if (!punit_dbg_file)
@@ -143,8 +143,8 @@ static void punit_dbgfs_unregister(void)
(kernel_ulong_t)&drv_data }
static const struct x86_cpu_id intel_punit_cpu_ids[] = {
-ICPU(INTEL_FAM6_ATOM_SILVERMONT1, punit_device_byt),
+ICPU(INTEL_FAM6_ATOM_SILVERMONT, punit_device_byt),
-ICPU(INTEL_FAM6_ATOM_MERRIFIELD, punit_device_tng),
+ICPU(INTEL_FAM6_ATOM_SILVERMONT_MID, punit_device_tng),
ICPU(INTEL_FAM6_ATOM_AIRMONT, punit_device_cht),
{}
};


@@ -68,7 +68,7 @@ static struct bt_sfi_data tng_bt_sfi_data __initdata = {
{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, (kernel_ulong_t)&ddata }
static const struct x86_cpu_id bt_sfi_cpu_ids[] = {
-ICPU(INTEL_FAM6_ATOM_MERRIFIELD, tng_bt_sfi_data),
+ICPU(INTEL_FAM6_ATOM_SILVERMONT_MID, tng_bt_sfi_data),
{}
};


@@ -314,7 +314,7 @@ static const struct lpss_device_desc bsw_spi_dev_desc = {
#define ICPU(model) { X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, }
static const struct x86_cpu_id lpss_cpu_ids[] = {
-ICPU(INTEL_FAM6_ATOM_SILVERMONT1), /* Valleyview, Bay Trail */
+ICPU(INTEL_FAM6_ATOM_SILVERMONT), /* Valleyview, Bay Trail */
ICPU(INTEL_FAM6_ATOM_AIRMONT), /* Braswell, Cherry Trail */
{}
};


@@ -54,7 +54,7 @@ static const struct always_present_id always_present_ids[] = {
 * Bay / Cherry Trail PWM directly poked by GPU driver in win10,
 * but Linux uses a separate PWM driver, harmless if not used.
 */
-ENTRY("80860F09", "1", ICPU(INTEL_FAM6_ATOM_SILVERMONT1), {}),
+ENTRY("80860F09", "1", ICPU(INTEL_FAM6_ATOM_SILVERMONT), {}),
ENTRY("80862288", "1", ICPU(INTEL_FAM6_ATOM_AIRMONT), {}),
/*
 * The INT0002 device is necessary to clear wakeup interrupt sources


@@ -1816,7 +1816,7 @@ static const struct pstate_funcs knl_funcs = {
static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
ICPU(INTEL_FAM6_SANDYBRIDGE, core_funcs),
ICPU(INTEL_FAM6_SANDYBRIDGE_X, core_funcs),
-ICPU(INTEL_FAM6_ATOM_SILVERMONT1, silvermont_funcs),
+ICPU(INTEL_FAM6_ATOM_SILVERMONT, silvermont_funcs),
ICPU(INTEL_FAM6_IVYBRIDGE, core_funcs),
ICPU(INTEL_FAM6_HASWELL_CORE, core_funcs),
ICPU(INTEL_FAM6_BROADWELL_CORE, core_funcs),
@@ -1833,7 +1833,7 @@ static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
ICPU(INTEL_FAM6_XEON_PHI_KNL, knl_funcs),
ICPU(INTEL_FAM6_XEON_PHI_KNM, knl_funcs),
ICPU(INTEL_FAM6_ATOM_GOLDMONT, core_funcs),
-ICPU(INTEL_FAM6_ATOM_GEMINI_LAKE, core_funcs),
+ICPU(INTEL_FAM6_ATOM_GOLDMONT_PLUS, core_funcs),
ICPU(INTEL_FAM6_SKYLAKE_X, core_funcs),
{}
};


@@ -1541,7 +1541,7 @@ static struct dunit_ops dnv_ops = {
static const struct x86_cpu_id pnd2_cpuids[] = {
{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_GOLDMONT, 0, (kernel_ulong_t)&apl_ops },
-{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_DENVERTON, 0, (kernel_ulong_t)&dnv_ops },
+{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_GOLDMONT_X, 0, (kernel_ulong_t)&dnv_ops },
{ }
};
MODULE_DEVICE_TABLE(x86cpu, pnd2_cpuids);


@@ -1073,14 +1073,14 @@ static const struct x86_cpu_id intel_idle_ids[] __initconst = {
INTEL_CPU_FAM6(WESTMERE, idle_cpu_nehalem),
INTEL_CPU_FAM6(WESTMERE_EP, idle_cpu_nehalem),
INTEL_CPU_FAM6(NEHALEM_EX, idle_cpu_nehalem),
-INTEL_CPU_FAM6(ATOM_PINEVIEW, idle_cpu_atom),
+INTEL_CPU_FAM6(ATOM_BONNELL, idle_cpu_atom),
-INTEL_CPU_FAM6(ATOM_LINCROFT, idle_cpu_lincroft),
+INTEL_CPU_FAM6(ATOM_BONNELL_MID, idle_cpu_lincroft),
INTEL_CPU_FAM6(WESTMERE_EX, idle_cpu_nehalem),
INTEL_CPU_FAM6(SANDYBRIDGE, idle_cpu_snb),
INTEL_CPU_FAM6(SANDYBRIDGE_X, idle_cpu_snb),
-INTEL_CPU_FAM6(ATOM_CEDARVIEW, idle_cpu_atom),
+INTEL_CPU_FAM6(ATOM_SALTWELL, idle_cpu_atom),
-INTEL_CPU_FAM6(ATOM_SILVERMONT1, idle_cpu_byt),
+INTEL_CPU_FAM6(ATOM_SILVERMONT, idle_cpu_byt),
-INTEL_CPU_FAM6(ATOM_MERRIFIELD, idle_cpu_tangier),
+INTEL_CPU_FAM6(ATOM_SILVERMONT_MID, idle_cpu_tangier),
INTEL_CPU_FAM6(ATOM_AIRMONT, idle_cpu_cht),
INTEL_CPU_FAM6(IVYBRIDGE, idle_cpu_ivb),
INTEL_CPU_FAM6(IVYBRIDGE_X, idle_cpu_ivt),
@@ -1088,7 +1088,7 @@ static const struct x86_cpu_id intel_idle_ids[] __initconst = {
INTEL_CPU_FAM6(HASWELL_X, idle_cpu_hsw),
INTEL_CPU_FAM6(HASWELL_ULT, idle_cpu_hsw),
INTEL_CPU_FAM6(HASWELL_GT3E, idle_cpu_hsw),
-INTEL_CPU_FAM6(ATOM_SILVERMONT2, idle_cpu_avn),
+INTEL_CPU_FAM6(ATOM_SILVERMONT_X, idle_cpu_avn),
INTEL_CPU_FAM6(BROADWELL_CORE, idle_cpu_bdw),
INTEL_CPU_FAM6(BROADWELL_GT3E, idle_cpu_bdw),
INTEL_CPU_FAM6(BROADWELL_X, idle_cpu_bdw),
@@ -1101,8 +1101,8 @@ static const struct x86_cpu_id intel_idle_ids[] __initconst = {
INTEL_CPU_FAM6(XEON_PHI_KNL, idle_cpu_knl),
INTEL_CPU_FAM6(XEON_PHI_KNM, idle_cpu_knl),
INTEL_CPU_FAM6(ATOM_GOLDMONT, idle_cpu_bxt),
-INTEL_CPU_FAM6(ATOM_GEMINI_LAKE, idle_cpu_bxt),
+INTEL_CPU_FAM6(ATOM_GOLDMONT_PLUS, idle_cpu_bxt),
-INTEL_CPU_FAM6(ATOM_DENVERTON, idle_cpu_dnv),
+INTEL_CPU_FAM6(ATOM_GOLDMONT_X, idle_cpu_dnv),
{}
};
@@ -1319,7 +1319,7 @@ static void intel_idle_state_table_update(void)
ivt_idle_state_table_update();
break;
case INTEL_FAM6_ATOM_GOLDMONT:
-case INTEL_FAM6_ATOM_GEMINI_LAKE:
+case INTEL_FAM6_ATOM_GOLDMONT_PLUS:
bxt_idle_state_table_update();
break;
case INTEL_FAM6_SKYLAKE_DESKTOP:


@@ -247,7 +247,7 @@ static const struct sdhci_acpi_chip sdhci_acpi_chip_int = {
static bool sdhci_acpi_byt(void)
{
static const struct x86_cpu_id byt[] = {
-{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_SILVERMONT1 },
+{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_SILVERMONT },
{}
};


@@ -62,8 +62,8 @@ static const struct pci_platform_pm_ops mid_pci_platform_pm = {
 * arch/x86/platform/intel-mid/pwr.c.
 */
static const struct x86_cpu_id lpss_cpu_ids[] = {
-ICPU(INTEL_FAM6_ATOM_PENWELL),
+ICPU(INTEL_FAM6_ATOM_SALTWELL_MID),
-ICPU(INTEL_FAM6_ATOM_MERRIFIELD),
+ICPU(INTEL_FAM6_ATOM_SILVERMONT_MID),
{}
};


@@ -60,7 +60,7 @@ static const struct x86_cpu_id int0002_cpu_ids[] = {
/*
 * Limit ourselves to Cherry Trail for now, until testing shows we
 * need to handle the INT0002 device on Baytrail too.
- * ICPU(INTEL_FAM6_ATOM_SILVERMONT1), * Valleyview, Bay Trail *
+ * ICPU(INTEL_FAM6_ATOM_SILVERMONT), * Valleyview, Bay Trail *
 */
ICPU(INTEL_FAM6_ATOM_AIRMONT), /* Braswell, Cherry Trail */
{}


@@ -125,8 +125,8 @@ static const struct mid_pb_ddata mrfld_ddata = {
{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, (kernel_ulong_t)&ddata }
static const struct x86_cpu_id mid_pb_cpu_ids[] = {
-ICPU(INTEL_FAM6_ATOM_PENWELL, mfld_ddata),
+ICPU(INTEL_FAM6_ATOM_SALTWELL_MID, mfld_ddata),
-ICPU(INTEL_FAM6_ATOM_MERRIFIELD, mrfld_ddata),
+ICPU(INTEL_FAM6_ATOM_SILVERMONT_MID, mrfld_ddata),
{}
};


@@ -320,7 +320,7 @@ static struct telemetry_debugfs_conf telem_apl_debugfs_conf = {
static const struct x86_cpu_id telemetry_debugfs_cpu_ids[] = {
TELEM_DEBUGFS_CPU(INTEL_FAM6_ATOM_GOLDMONT, telem_apl_debugfs_conf),
-TELEM_DEBUGFS_CPU(INTEL_FAM6_ATOM_GEMINI_LAKE, telem_apl_debugfs_conf),
+TELEM_DEBUGFS_CPU(INTEL_FAM6_ATOM_GOLDMONT_PLUS, telem_apl_debugfs_conf),
{}
};


@@ -192,7 +192,7 @@ static struct telemetry_plt_config telem_glk_config = {
static const struct x86_cpu_id telemetry_cpu_ids[] = {
TELEM_CPU(INTEL_FAM6_ATOM_GOLDMONT, telem_apl_config),
-TELEM_CPU(INTEL_FAM6_ATOM_GEMINI_LAKE, telem_glk_config),
+TELEM_CPU(INTEL_FAM6_ATOM_GOLDMONT_PLUS, telem_glk_config),
{}
};


@@ -1157,13 +1157,13 @@ static const struct x86_cpu_id rapl_ids[] __initconst = {
INTEL_CPU_FAM6(KABYLAKE_DESKTOP, rapl_defaults_core),
INTEL_CPU_FAM6(CANNONLAKE_MOBILE, rapl_defaults_core),
-INTEL_CPU_FAM6(ATOM_SILVERMONT1, rapl_defaults_byt),
+INTEL_CPU_FAM6(ATOM_SILVERMONT, rapl_defaults_byt),
INTEL_CPU_FAM6(ATOM_AIRMONT, rapl_defaults_cht),
-INTEL_CPU_FAM6(ATOM_MERRIFIELD, rapl_defaults_tng),
+INTEL_CPU_FAM6(ATOM_SILVERMONT_MID, rapl_defaults_tng),
-INTEL_CPU_FAM6(ATOM_MOOREFIELD, rapl_defaults_ann),
+INTEL_CPU_FAM6(ATOM_AIRMONT_MID, rapl_defaults_ann),
INTEL_CPU_FAM6(ATOM_GOLDMONT, rapl_defaults_core),
-INTEL_CPU_FAM6(ATOM_GEMINI_LAKE, rapl_defaults_core),
+INTEL_CPU_FAM6(ATOM_GOLDMONT_PLUS, rapl_defaults_core),
-INTEL_CPU_FAM6(ATOM_DENVERTON, rapl_defaults_core),
+INTEL_CPU_FAM6(ATOM_GOLDMONT_X, rapl_defaults_core),
INTEL_CPU_FAM6(XEON_PHI_KNL, rapl_defaults_hsw_server),
INTEL_CPU_FAM6(XEON_PHI_KNM, rapl_defaults_hsw_server),


@@ -45,7 +45,7 @@ static irqreturn_t soc_irq_thread_fn(int irq, void *dev_data)
}
static const struct x86_cpu_id soc_thermal_ids[] = {
-{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_SILVERMONT1, 0,
+{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_SILVERMONT, 0,
BYT_SOC_DTS_APIC_IRQ},
{}
};


@@ -459,10 +459,20 @@ void perf_aux_output_end(struct perf_output_handle *handle, unsigned long size)
if (size || handle->aux_flags) {
/*
 * Only send RECORD_AUX if we have something useful to communicate
+ *
+ * Note: the OVERWRITE records by themselves are not considered
+ * useful, as they don't communicate any *new* information,
+ * aside from the short-lived offset, that becomes history at
+ * the next event sched-in and therefore isn't useful.
+ * The userspace that needs to copy out AUX data in overwrite
+ * mode should know to use user_page::aux_head for the actual
+ * offset. So, from now on we don't output AUX records that
+ * have *only* OVERWRITE flag set.
 */
-perf_event_aux_event(handle->event, aux_head, size,
-handle->aux_flags);
+if (handle->aux_flags & ~(u64)PERF_AUX_FLAG_OVERWRITE)
+perf_event_aux_event(handle->event, aux_head, size,
+handle->aux_flags);
}
rb->user_page->aux_head = rb->aux_head;
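As the new comment says, overwrite-mode consumers are expected to read user_page::aux_head themselves rather than wait for RECORD_AUX. A hedged user-space sketch of that read; the helper is hypothetical, while the field and ordering follow the documented perf mmap ABI:

#include <linux/perf_event.h>
#include <stdatomic.h>
#include <stdint.h>

/* 'pc' points at the mmap'ed perf_event control page. */
static uint64_t read_aux_head(volatile struct perf_event_mmap_page *pc)
{
	uint64_t head = pc->aux_head;

	/* Pairs with the kernel's update; read AUX data only after this. */
	atomic_thread_fence(memory_order_acquire);
	return head;
}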


@@ -546,8 +546,14 @@ static void do_free_cleaned_kprobes(void)
struct optimized_kprobe *op, *tmp;
list_for_each_entry_safe(op, tmp, &freeing_list, list) {
-BUG_ON(!kprobe_unused(&op->kp));
list_del_init(&op->list);
+if (WARN_ON_ONCE(!kprobe_unused(&op->kp))) {
+/*
+ * This must not happen, but if there is a kprobe
+ * still in use, keep it on kprobes hash list.
+ */
+continue;
+}
free_aggr_kprobe(&op->kp);
}
}
@@ -700,11 +706,11 @@ static void unoptimize_kprobe(struct kprobe *p, bool force)
}
/* Cancel unoptimizing for reusing */
-static void reuse_unused_kprobe(struct kprobe *ap)
+static int reuse_unused_kprobe(struct kprobe *ap)
{
struct optimized_kprobe *op;
+int ret;
-BUG_ON(!kprobe_unused(ap));
/*
 * Unused kprobe MUST be on the way of delayed unoptimizing (means
 * there is still a relative jump) and disabled.
@@ -714,8 +720,12 @@ static void reuse_unused_kprobe(struct kprobe *ap)
/* Enable the probe again */
ap->flags &= ~KPROBE_FLAG_DISABLED;
/* Optimize it again (remove from op->list) */
-BUG_ON(!kprobe_optready(ap));
+ret = kprobe_optready(ap);
+if (ret)
+return ret;
optimize_kprobe(ap);
+return 0;
}
/* Remove optimized instructions */
@@ -940,11 +950,16 @@ static void __disarm_kprobe(struct kprobe *p, bool reopt)
#define kprobe_disarmed(p) kprobe_disabled(p)
#define wait_for_kprobe_optimizer() do {} while (0)
-/* There should be no unused kprobes can be reused without optimization */
-static void reuse_unused_kprobe(struct kprobe *ap)
+static int reuse_unused_kprobe(struct kprobe *ap)
{
+/*
+ * If the optimized kprobe is NOT supported, the aggr kprobe is
+ * released at the same time that the last aggregated kprobe is
+ * unregistered.
+ * Thus there should be no chance to reuse unused kprobe.
+ */
printk(KERN_ERR "Error: There should be no unused kprobe here.\n");
-BUG_ON(kprobe_unused(ap));
+return -EINVAL;
}
static void free_aggr_kprobe(struct kprobe *p)
@@ -1259,8 +1274,6 @@ NOKPROBE_SYMBOL(cleanup_rp_inst);
/* Add the new probe to ap->list */
static int add_new_kprobe(struct kprobe *ap, struct kprobe *p)
{
-BUG_ON(kprobe_gone(ap) || kprobe_gone(p));
if (p->post_handler)
unoptimize_kprobe(ap, true); /* Fall back to normal kprobe */
@@ -1318,9 +1331,12 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
goto out;
}
init_aggr_kprobe(ap, orig_p);
-} else if (kprobe_unused(ap))
+} else if (kprobe_unused(ap)) {
/* This probe is going to die. Rescue it */
-reuse_unused_kprobe(ap);
+ret = reuse_unused_kprobe(ap);
+if (ret)
+goto out;
+}
if (kprobe_gone(ap)) {
/*
@@ -1704,7 +1720,6 @@ noclean:
return 0;
disarmed:
-BUG_ON(!kprobe_disarmed(ap));
hlist_del_rcu(&ap->hlist);
return 0;
}


@@ -787,7 +787,9 @@
};
static const struct x86_cpu_id baytrail_cpu_ids[] = {
-{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_SILVERMONT1 }, /* Valleyview */
+{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_SILVERMONT }, /* Valleyview */
{}
};


@@ -3,8 +3,6 @@
#define _TOOLS_LINUX_BITOPS_H_
#include <asm/types.h>
-#include <linux/compiler.h>
#ifndef __WORDSIZE
#define __WORDSIZE (__SIZEOF_LONG__ * 8)
#endif
@@ -12,10 +10,9 @@
#ifndef BITS_PER_LONG
# define BITS_PER_LONG __WORDSIZE
#endif
+#include <linux/bits.h>
+#include <linux/compiler.h>
-#define BIT_MASK(nr) (1UL << ((nr) % BITS_PER_LONG))
-#define BIT_WORD(nr) ((nr) / BITS_PER_LONG)
-#define BITS_PER_BYTE 8
#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
#define BITS_TO_U64(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(u64))
#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(u32))


@@ -0,0 +1,26 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LINUX_BITS_H
#define __LINUX_BITS_H
#include <asm/bitsperlong.h>
#define BIT(nr) (1UL << (nr))
#define BIT_ULL(nr) (1ULL << (nr))
#define BIT_MASK(nr) (1UL << ((nr) % BITS_PER_LONG))
#define BIT_WORD(nr) ((nr) / BITS_PER_LONG)
#define BIT_ULL_MASK(nr) (1ULL << ((nr) % BITS_PER_LONG_LONG))
#define BIT_ULL_WORD(nr) ((nr) / BITS_PER_LONG_LONG)
#define BITS_PER_BYTE 8
/*
* Create a contiguous bitmask starting at bit position @l and ending at
* position @h. For example
* GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000.
*/
#define GENMASK(h, l) \
(((~0UL) - (1UL << (l)) + 1) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
#define GENMASK_ULL(h, l) \
(((~0ULL) - (1ULL << (l)) + 1) & \
(~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
#endif /* __LINUX_BITS_H */
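A quick self-contained check of the macro against the example in the comment, assuming a 64-bit host (illustrative only):

#include <stdio.h>

#define BITS_PER_LONG_LONG 64
#define GENMASK_ULL(h, l) \
	(((~0ULL) - (1ULL << (l)) + 1) & \
	 (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))

int main(void)
{
	printf("0x%016llx\n", GENMASK_ULL(39, 21));	/* prints 0x000000ffffe00000 */
	return 0;
}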


@@ -52,4 +52,11 @@ static inline bool __must_check IS_ERR_OR_NULL(__force const void *ptr)
return unlikely(!ptr) || IS_ERR_VALUE((unsigned long)ptr);
}
static inline int __must_check PTR_ERR_OR_ZERO(__force const void *ptr)
{
if (IS_ERR(ptr))
return PTR_ERR(ptr);
else
return 0;
}
#endif /* _LINUX_ERR_H */
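A sketch of the new helper's intended use, with the kernel's ERR_PTR convention emulated so the example compiles standalone; everything here is illustrative:

#include <stdio.h>

#define MAX_ERRNO 4095
#define IS_ERR_VALUE(x) ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)

static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr) { return IS_ERR_VALUE((unsigned long)ptr); }

static inline int PTR_ERR_OR_ZERO(const void *ptr)
{
	return IS_ERR(ptr) ? (int)PTR_ERR(ptr) : 0;
}

int main(void)
{
	static int obj;
	void *ok = &obj;
	void *bad = (void *)(long)-12;	/* like ERR_PTR(-ENOMEM) */

	printf("%d %d\n", PTR_ERR_OR_ZERO(ok), PTR_ERR_OR_ZERO(bad));	/* 0 -12 */
	return 0;
}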


@@ -23,6 +23,13 @@ void pager_init(const char *pager_env)
subcmd_config.pager_env = pager_env;
}
static const char *forced_pager;
void force_pager(const char *pager)
{
forced_pager = pager;
}
static void pager_preexec(void)
{
/*
@@ -66,7 +73,9 @@ void setup_pager(void)
const char *pager = getenv(subcmd_config.pager_env);
struct winsize sz;
-if (!isatty(1))
+if (forced_pager)
+pager = forced_pager;
+if (!isatty(1) && !forced_pager)
return;
if (ioctl(1, TIOCGWINSZ, &sz) == 0)
pager_columns = sz.ws_col;


@@ -7,5 +7,6 @@ extern void pager_init(const char *pager_env);
extern void setup_pager(void);
extern int pager_in_use(void);
extern int pager_get_columns(void);
+extern void force_pager(const char *);
#endif /* __SUBCMD_PAGER_H */
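A hypothetical caller of the new hook, forcing output through a pager even when stdout is not a tty; the tool body and pager choice are illustrative:

#include <subcmd/pager.h>

int main(void)
{
	force_pager("less");	/* bypasses the usual isatty(1) check */
	setup_pager();
	/* ... emit the report through stdout as usual ... */
	return 0;
}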


@@ -4,6 +4,8 @@ libtraceevent-y += trace-seq.o
libtraceevent-y += parse-filter.o
libtraceevent-y += parse-utils.o
libtraceevent-y += kbuffer-parse.o
+libtraceevent-y += tep_strerror.o
+libtraceevent-y += event-parse-api.o
plugin_jbd2-y += plugin_jbd2.o
plugin_hrtimer-y += plugin_hrtimer.o


@@ -0,0 +1,275 @@
// SPDX-License-Identifier: LGPL-2.1
/*
* Copyright (C) 2009, 2010 Red Hat Inc, Steven Rostedt <srostedt@redhat.com>
*
*/
#include "event-parse.h"
#include "event-parse-local.h"
#include "event-utils.h"
/**
* tep_get_first_event - returns the first event in the events array
* @tep: a handle to the tep_handle
*
* This returns pointer to the first element of the events array
* If @tep is NULL, NULL is returned.
*/
struct tep_event_format *tep_get_first_event(struct tep_handle *tep)
{
if (tep && tep->events)
return tep->events[0];
return NULL;
}
/**
* tep_get_events_count - get the number of defined events
* @tep: a handle to the tep_handle
*
* This returns number of elements in event array
* If @tep is NULL, 0 is returned.
*/
int tep_get_events_count(struct tep_handle *tep)
{
if(tep)
return tep->nr_events;
return 0;
}
/**
* tep_set_flag - set event parser flag
* @tep: a handle to the tep_handle
* @flag: flag, or combination of flags to be set
* can be any combination from enum tep_flag
*
* This sets a flag or combination of flags from enum tep_flag
*/
void tep_set_flag(struct tep_handle *tep, int flag)
{
if(tep)
tep->flags |= flag;
}
unsigned short __tep_data2host2(struct tep_handle *pevent, unsigned short data)
{
unsigned short swap;
if (!pevent || pevent->host_bigendian == pevent->file_bigendian)
return data;
swap = ((data & 0xffULL) << 8) |
((data & (0xffULL << 8)) >> 8);
return swap;
}
unsigned int __tep_data2host4(struct tep_handle *pevent, unsigned int data)
{
unsigned int swap;
if (!pevent || pevent->host_bigendian == pevent->file_bigendian)
return data;
swap = ((data & 0xffULL) << 24) |
((data & (0xffULL << 8)) << 8) |
((data & (0xffULL << 16)) >> 8) |
((data & (0xffULL << 24)) >> 24);
return swap;
}
unsigned long long
__tep_data2host8(struct tep_handle *pevent, unsigned long long data)
{
unsigned long long swap;
if (!pevent || pevent->host_bigendian == pevent->file_bigendian)
return data;
swap = ((data & 0xffULL) << 56) |
((data & (0xffULL << 8)) << 40) |
((data & (0xffULL << 16)) << 24) |
((data & (0xffULL << 24)) << 8) |
((data & (0xffULL << 32)) >> 8) |
((data & (0xffULL << 40)) >> 24) |
((data & (0xffULL << 48)) >> 40) |
((data & (0xffULL << 56)) >> 56);
return swap;
}
/**
* tep_get_header_page_size - get size of the header page
* @pevent: a handle to the tep_handle
*
* This returns size of the header page
* If @pevent is NULL, 0 is returned.
*/
int tep_get_header_page_size(struct tep_handle *pevent)
{
if(pevent)
return pevent->header_page_size_size;
return 0;
}
/**
* tep_get_cpus - get the number of CPUs
* @pevent: a handle to the tep_handle
*
* This returns the number of CPUs
* If @pevent is NULL, 0 is returned.
*/
int tep_get_cpus(struct tep_handle *pevent)
{
if(pevent)
return pevent->cpus;
return 0;
}
/**
* tep_set_cpus - set the number of CPUs
* @pevent: a handle to the tep_handle
*
* This sets the number of CPUs
*/
void tep_set_cpus(struct tep_handle *pevent, int cpus)
{
if(pevent)
pevent->cpus = cpus;
}
/**
* tep_get_long_size - get the size of a long integer on the current machine
* @pevent: a handle to the tep_handle
*
* This returns the size of a long integer on the current machine
* If @pevent is NULL, 0 is returned.
*/
int tep_get_long_size(struct tep_handle *pevent)
{
if(pevent)
return pevent->long_size;
return 0;
}
/**
* tep_set_long_size - set the size of a long integer on the current machine
* @pevent: a handle to the tep_handle
* @size: size, in bytes, of a long integer
*
* This sets the size of a long integer on the current machine
*/
void tep_set_long_size(struct tep_handle *pevent, int long_size)
{
if(pevent)
pevent->long_size = long_size;
}
/**
* tep_get_page_size - get the size of a memory page on the current machine
* @pevent: a handle to the tep_handle
*
* This returns the size of a memory page on the current machine
* If @pevent is NULL, 0 is returned.
*/
int tep_get_page_size(struct tep_handle *pevent)
{
if(pevent)
return pevent->page_size;
return 0;
}
/**
* tep_set_page_size - set the size of a memory page on the current machine
* @pevent: a handle to the tep_handle
* @_page_size: size of a memory page, in bytes
*
* This sets the size of a memory page on the current machine
*/
void tep_set_page_size(struct tep_handle *pevent, int _page_size)
{
if(pevent)
pevent->page_size = _page_size;
}
/**
* tep_is_file_bigendian - get if the file is in big endian order
* @pevent: a handle to the tep_handle
*
* This returns if the file is in big endian order
* If @pevent is NULL, 0 is returned.
*/
int tep_is_file_bigendian(struct tep_handle *pevent)
{
if(pevent)
return pevent->file_bigendian;
return 0;
}
/**
* tep_set_file_bigendian - set if the file is in big endian order
* @pevent: a handle to the tep_handle
* @endian: non zero, if the file is in big endian order
*
* This sets if the file is in big endian order
*/
void tep_set_file_bigendian(struct tep_handle *pevent, enum tep_endian endian)
{
if(pevent)
pevent->file_bigendian = endian;
}
/**
* tep_is_host_bigendian - get if the order of the current host is big endian
* @pevent: a handle to the tep_handle
*
* This gets if the order of the current host is big endian
* If @pevent is NULL, 0 is returned.
*/
int tep_is_host_bigendian(struct tep_handle *pevent)
{
if(pevent)
return pevent->host_bigendian;
return 0;
}
/**
* tep_set_host_bigendian - set the order of the local host
* @pevent: a handle to the tep_handle
* @endian: non zero, if the local host has big endian order
*
* This sets the order of the local host
*/
void tep_set_host_bigendian(struct tep_handle *pevent, enum tep_endian endian)
{
if(pevent)
pevent->host_bigendian = endian;
}
/**
* tep_is_latency_format - get if the latency output format is configured
* @pevent: a handle to the tep_handle
*
* This gets if the latency output format is configured
* If @pevent is NULL, 0 is returned.
*/
int tep_is_latency_format(struct tep_handle *pevent)
{
if(pevent)
return pevent->latency_format;
return 0;
}
/**
* tep_set_latency_format - set the latency output format
* @pevent: a handle to the tep_handle
* @lat: non zero for latency output format
*
* This sets the latency output format
*/
void tep_set_latency_format(struct tep_handle *pevent, int lat)
{
if(pevent)
pevent->latency_format = lat;
}
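With struct tep_handle now private to the library, configuration goes through these accessors. A sketch of the expected pattern; the values are illustrative and the TEP_LITTLE_ENDIAN constant is assumed from event-parse.h:

#include "event-parse.h"

static void configure(struct tep_handle *tep)
{
	tep_set_cpus(tep, 8);
	tep_set_long_size(tep, 8);
	tep_set_page_size(tep, 4096);
	tep_set_file_bigendian(tep, TEP_LITTLE_ENDIAN);
	tep_set_host_bigendian(tep, TEP_LITTLE_ENDIAN);
}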


@@ -0,0 +1,92 @@
// SPDX-License-Identifier: LGPL-2.1
/*
* Copyright (C) 2009, 2010 Red Hat Inc, Steven Rostedt <srostedt@redhat.com>
*
*/
#ifndef _PARSE_EVENTS_INT_H
#define _PARSE_EVENTS_INT_H
struct cmdline;
struct cmdline_list;
struct func_map;
struct func_list;
struct event_handler;
struct func_resolver;
struct tep_handle {
int ref_count;
int header_page_ts_offset;
int header_page_ts_size;
int header_page_size_offset;
int header_page_size_size;
int header_page_data_offset;
int header_page_data_size;
int header_page_overwrite;
enum tep_endian file_bigendian;
enum tep_endian host_bigendian;
int latency_format;
int old_format;
int cpus;
int long_size;
int page_size;
struct cmdline *cmdlines;
struct cmdline_list *cmdlist;
int cmdline_count;
struct func_map *func_map;
struct func_resolver *func_resolver;
struct func_list *funclist;
unsigned int func_count;
struct printk_map *printk_map;
struct printk_list *printklist;
unsigned int printk_count;
struct tep_event_format **events;
int nr_events;
struct tep_event_format **sort_events;
enum tep_event_sort_type last_type;
int type_offset;
int type_size;
int pid_offset;
int pid_size;
int pc_offset;
int pc_size;
int flags_offset;
int flags_size;
int ld_offset;
int ld_size;
int print_raw;
int test_filters;
int flags;
struct tep_format_field *bprint_ip_field;
struct tep_format_field *bprint_fmt_field;
struct tep_format_field *bprint_buf_field;
struct event_handler *handlers;
struct tep_function_handler *func_handlers;
/* cache */
struct tep_event_format *last_event;
char *trace_clock;
};
#endif /* _PARSE_EVENTS_INT_H */

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -14,7 +14,9 @@
#include <unistd.h>
#include <dirent.h>
#include "event-parse.h"
+#include "event-parse-local.h"
#include "event-utils.h"
+#include "trace-seq.h"
#define LOCAL_PLUGIN_DIR ".traceevent/plugins"
@@ -30,8 +32,8 @@ static struct trace_plugin_options {
char *value;
} *trace_plugin_options;
-struct plugin_list {
-struct plugin_list *next;
+struct tep_plugin_list {
+struct tep_plugin_list *next;
char *name;
void *handle;
};
@@ -258,7 +260,7 @@ void tep_plugin_remove_options(struct tep_plugin_option *options)
 */
void tep_print_plugins(struct trace_seq *s,
const char *prefix, const char *suffix,
-const struct plugin_list *list)
+const struct tep_plugin_list *list)
{
while (list) {
trace_seq_printf(s, "%s%s%s", prefix, list->name, suffix);
@@ -270,9 +272,9 @@ static void
load_plugin(struct tep_handle *pevent, const char *path,
const char *file, void *data)
{
-struct plugin_list **plugin_list = data;
+struct tep_plugin_list **plugin_list = data;
tep_plugin_load_func func;
-struct plugin_list *list;
+struct tep_plugin_list *list;
const char *alias;
char *plugin;
void *handle;
@@ -416,20 +418,20 @@ load_plugins(struct tep_handle *pevent, const char *suffix,
free(path);
}
-struct plugin_list*
+struct tep_plugin_list*
tep_load_plugins(struct tep_handle *pevent)
{
-struct plugin_list *list = NULL;
+struct tep_plugin_list *list = NULL;
load_plugins(pevent, ".so", load_plugin, &list);
return list;
}
void
-tep_unload_plugins(struct plugin_list *plugin_list, struct tep_handle *pevent)
+tep_unload_plugins(struct tep_plugin_list *plugin_list, struct tep_handle *pevent)
{
tep_plugin_unload_func func;
-struct plugin_list *list;
+struct tep_plugin_list *list;
while (plugin_list) {
list = plugin_list;

File diff suppressed because it is too large


@@ -23,6 +23,7 @@
#include "event-parse.h"
#include "event-utils.h"
+#include "trace-seq.h"
static struct func_stack {
int size;
@@ -123,7 +124,7 @@ static int add_and_get_index(const char *parent, const char *child, int cpu)
}
static int function_handler(struct trace_seq *s, struct tep_record *record,
-struct event_format *event, void *context)
+struct tep_event_format *event, void *context)
{
struct tep_handle *pevent = event->pevent;
unsigned long long function;


@@ -23,10 +23,11 @@
#include <string.h>
#include "event-parse.h"
+#include "trace-seq.h"
static int timer_expire_handler(struct trace_seq *s,
struct tep_record *record,
-struct event_format *event, void *context)
+struct tep_event_format *event, void *context)
{
trace_seq_printf(s, "hrtimer=");
@@ -46,7 +47,7 @@ static int timer_expire_handler(struct trace_seq *s,
static int timer_start_handler(struct trace_seq *s,
struct tep_record *record,
-struct event_format *event, void *context)
+struct tep_event_format *event, void *context)
{
trace_seq_printf(s, "hrtimer=");


@@ -22,6 +22,7 @@
#include <string.h>
#include "event-parse.h"
+#include "trace-seq.h"
#define MINORBITS 20
#define MINORMASK ((1U << MINORBITS) - 1)


@@ -22,11 +22,12 @@
#include <string.h>
#include "event-parse.h"
+#include "trace-seq.h"
static int call_site_handler(struct trace_seq *s, struct tep_record *record,
-struct event_format *event, void *context)
+struct tep_event_format *event, void *context)
{
-struct format_field *field;
+struct tep_format_field *field;
unsigned long long val, addr;
void *data = record->data;
const char *func;


@@ -23,6 +23,7 @@
#include <stdint.h>
#include "event-parse.h"
+#include "trace-seq.h"
#ifdef HAVE_UDIS86
@@ -248,7 +249,7 @@ static const char *find_exit_reason(unsigned isa, int val)
}
static int print_exit_reason(struct trace_seq *s, struct tep_record *record,
-struct event_format *event, const char *field)
+struct tep_event_format *event, const char *field)
{
unsigned long long isa;
unsigned long long val;
@@ -269,7 +270,7 @@ static int print_exit_reason(struct trace_seq *s, struct tep_record *record,
}
static int kvm_exit_handler(struct trace_seq *s, struct tep_record *record,
-struct event_format *event, void *context)
+struct tep_event_format *event, void *context)
{
unsigned long long info1 = 0, info2 = 0;
@@ -292,7 +293,7 @@ static int kvm_exit_handler(struct trace_seq *s, struct tep_record *record,
static int kvm_emulate_insn_handler(struct trace_seq *s,
struct tep_record *record,
-struct event_format *event, void *context)
+struct tep_event_format *event, void *context)
{
unsigned long long rip, csbase, len, flags, failed;
int llen;
@@ -331,7 +332,7 @@ static int kvm_emulate_insn_handler(struct trace_seq *s,
static int kvm_nested_vmexit_inject_handler(struct trace_seq *s, struct tep_record *record,
-struct event_format *event, void *context)
+struct tep_event_format *event, void *context)
{
if (print_exit_reason(s, record, event, "exit_code") < 0)
return -1;
@@ -345,7 +346,7 @@ static int kvm_nested_vmexit_inject_handler(struct trace_seq *s, struct tep_reco
}
static int kvm_nested_vmexit_handler(struct trace_seq *s, struct tep_record *record,
-struct event_format *event, void *context)
+struct tep_event_format *event, void *context)
{
tep_print_num_field(s, "rip %llx ", event, "rip", record, 1);
@@ -371,7 +372,7 @@ union kvm_mmu_page_role {
};
static int kvm_mmu_print_role(struct trace_seq *s, struct tep_record *record,
-struct event_format *event, void *context)
+struct tep_event_format *event, void *context)
{
unsigned long long val;
static const char *access_str[] = {
@@ -418,7 +419,7 @@ static int kvm_mmu_print_role(struct trace_seq *s, struct tep_record *record,
static int kvm_mmu_get_page_handler(struct trace_seq *s,
struct tep_record *record,
-struct event_format *event, void *context)
+struct tep_event_format *event, void *context)
{
unsigned long long val;


@@ -22,13 +22,14 @@
#include <string.h>
#include "event-parse.h"
+#include "trace-seq.h"
#define INDENT 65
-static void print_string(struct trace_seq *s, struct event_format *event,
+static void print_string(struct trace_seq *s, struct tep_event_format *event,
const char *name, const void *data)
{
-struct format_field *f = tep_find_field(event, name);
+struct tep_format_field *f = tep_find_field(event, name);
int offset;
int length;
@@ -59,7 +60,7 @@ static void print_string(struct trace_seq *s, struct event_format *event,
static int drv_bss_info_changed(struct trace_seq *s,
struct tep_record *record,
-struct event_format *event, void *context)
+struct tep_event_format *event, void *context)
{
void *data = record->data;


@@ -22,6 +22,7 @@
#include <string.h>
#include "event-parse.h"
+#include "trace-seq.h"
static void write_state(struct trace_seq *s, int val)
{
@@ -44,7 +45,7 @@ static void write_state(struct trace_seq *s, int val)
trace_seq_putc(s, 'R');
}
-static void write_and_save_comm(struct format_field *field,
+static void write_and_save_comm(struct tep_format_field *field,
struct tep_record *record,
struct trace_seq *s, int pid)
{
@@ -66,9 +67,9 @@ static void write_and_save_comm(struct format_field *field,
static int sched_wakeup_handler(struct trace_seq *s,
struct tep_record *record,
-struct event_format *event, void *context)
+struct tep_event_format *event, void *context)
{
-struct format_field *field;
+struct tep_format_field *field;
unsigned long long val;
if (tep_get_field_val(s, event, "pid", record, &val, 1))
@@ -95,9 +96,9 @@ static int sched_wakeup_handler(struct trace_seq *s,
static int sched_switch_handler(struct trace_seq *s,
struct tep_record *record,
-struct event_format *event, void *context)
+struct tep_event_format *event, void *context)
{
-struct format_field *field;
+struct tep_format_field *field;
unsigned long long val;
if (tep_get_field_val(s, event, "prev_pid", record, &val, 1))


@@ -3,6 +3,7 @@
#include <string.h>
#include <inttypes.h>
#include "event-parse.h"
+#include "trace-seq.h"
typedef unsigned long sector_t;
typedef uint64_t u64;


@@ -3,6 +3,7 @@
#include <stdlib.h>
#include <string.h>
#include "event-parse.h"
+#include "trace-seq.h"
#define __HYPERVISOR_set_trap_table 0
#define __HYPERVISOR_mmu_update 1


@@ -0,0 +1,53 @@
// SPDX-License-Identifier: LGPL-2.1
#undef _GNU_SOURCE
#include <string.h>
#include <stdio.h>
#include "event-parse.h"
#undef _PE
#define _PE(code, str) str
static const char * const tep_error_str[] = {
TEP_ERRORS
};
#undef _PE
/*
* The tools so far have been using the strerror_r() GNU variant, that returns
* a string, be it the buffer passed or something else.
*
* But that, besides being tricky in cases where we expect that the function
* using strerror_r() returns the error formatted in a provided buffer (we have
* to check if it returned something else and copy that instead), breaks the
* build on systems not using glibc, like Alpine Linux, where musl libc is
* used.
*
* So, introduce yet another wrapper, str_error_r(), that has the GNU
* interface, but uses the portable XSI variant of strerror_r(), so that users
* rest assured that the provided buffer is used and it is what is returned.
*/
int tep_strerror(struct tep_handle *tep __maybe_unused,
enum tep_errno errnum, char *buf, size_t buflen)
{
const char *msg;
int idx;
if (!buflen)
return 0;
if (errnum >= 0) {
int err = strerror_r(errnum, buf, buflen);
buf[buflen - 1] = 0;
return err;
}
if (errnum <= __TEP_ERRNO__START ||
errnum >= __TEP_ERRNO__END)
return -1;
idx = errnum - __TEP_ERRNO__START - 1;
msg = tep_error_str[idx];
snprintf(buf, buflen, "%s", msg);
return 0;
}
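A sketch of how a caller would use the wrapper for both plain errno values and the tep-specific codes; the reporting function is a hypothetical caller, not part of the patch:

#include <stdio.h>
#include "event-parse.h"

static void report_tep_error(struct tep_handle *tep, enum tep_errno errnum)
{
	char buf[128];

	if (tep_strerror(tep, errnum, buf, sizeof(buf)) == 0)
		fprintf(stderr, "libtraceevent: %s\n", buf);
}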


@@ -3,6 +3,8 @@
 * Copyright (C) 2009 Red Hat Inc, Steven Rostedt <srostedt@redhat.com>
 *
 */
+#include "trace-seq.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>


@@ -0,0 +1,55 @@
// SPDX-License-Identifier: LGPL-2.1
/*
* Copyright (C) 2009, 2010 Red Hat Inc, Steven Rostedt <srostedt@redhat.com>
*
*/
#ifndef _TRACE_SEQ_H
#define _TRACE_SEQ_H
#include <stdarg.h>
#include <stdio.h>
/* ----------------------- trace_seq ----------------------- */
#ifndef TRACE_SEQ_BUF_SIZE
#define TRACE_SEQ_BUF_SIZE 4096
#endif
enum trace_seq_fail {
TRACE_SEQ__GOOD,
TRACE_SEQ__BUFFER_POISONED,
TRACE_SEQ__MEM_ALLOC_FAILED,
};
/*
* Trace sequences are used to allow a function to call several other functions
* to create a string of data to use (up to a max of PAGE_SIZE).
*/
struct trace_seq {
char *buffer;
unsigned int buffer_size;
unsigned int len;
unsigned int readpos;
enum trace_seq_fail state;
};
void trace_seq_init(struct trace_seq *s);
void trace_seq_reset(struct trace_seq *s);
void trace_seq_destroy(struct trace_seq *s);
extern int trace_seq_printf(struct trace_seq *s, const char *fmt, ...)
__attribute__ ((format (printf, 2, 3)));
extern int trace_seq_vprintf(struct trace_seq *s, const char *fmt, va_list args)
__attribute__ ((format (printf, 2, 0)));
extern int trace_seq_puts(struct trace_seq *s, const char *str);
extern int trace_seq_putc(struct trace_seq *s, unsigned char c);
extern void trace_seq_terminate(struct trace_seq *s);
extern int trace_seq_do_fprintf(struct trace_seq *s, FILE *fp);
extern int trace_seq_do_printf(struct trace_seq *s);
#endif /* _TRACE_SEQ_H */
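The header above is self-contained; a minimal usage sketch built only from what it declares:

#include "trace-seq.h"

static void demo(void)
{
	struct trace_seq s;

	trace_seq_init(&s);
	trace_seq_printf(&s, "pid=%d comm=%s\n", 1, "init");
	trace_seq_terminate(&s);
	trace_seq_do_printf(&s);	/* write the accumulated text to stdout */
	trace_seq_destroy(&s);
}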

View File

@ -779,7 +779,9 @@ endif
ifndef NO_LIBBPF ifndef NO_LIBBPF
$(call QUIET_INSTALL, bpf-headers) \ $(call QUIET_INSTALL, bpf-headers) \
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perf_include_instdir_SQ)/bpf'; \ $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perf_include_instdir_SQ)/bpf'; \
$(INSTALL) include/bpf/*.h -t '$(DESTDIR_SQ)$(perf_include_instdir_SQ)/bpf' $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perf_include_instdir_SQ)/bpf/linux'; \
$(INSTALL) include/bpf/*.h -t '$(DESTDIR_SQ)$(perf_include_instdir_SQ)/bpf'; \
$(INSTALL) include/bpf/linux/*.h -t '$(DESTDIR_SQ)$(perf_include_instdir_SQ)/bpf/linux'
$(call QUIET_INSTALL, bpf-examples) \ $(call QUIET_INSTALL, bpf-examples) \
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perf_examples_instdir_SQ)/bpf'; \ $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perf_examples_instdir_SQ)/bpf'; \
$(INSTALL) examples/bpf/*.c -t '$(DESTDIR_SQ)$(perf_examples_instdir_SQ)/bpf' $(INSTALL) examples/bpf/*.c -t '$(DESTDIR_SQ)$(perf_examples_instdir_SQ)/bpf'

View File

@ -8,6 +8,63 @@ struct arm64_annotate {
jump_insn; jump_insn;
}; };
static int arm64_mov__parse(struct arch *arch __maybe_unused,
struct ins_operands *ops,
struct map_symbol *ms __maybe_unused)
{
char *s = strchr(ops->raw, ','), *target, *endptr;
if (s == NULL)
return -1;
*s = '\0';
ops->source.raw = strdup(ops->raw);
*s = ',';
if (ops->source.raw == NULL)
return -1;
target = ++s;
ops->target.raw = strdup(target);
if (ops->target.raw == NULL)
goto out_free_source;
ops->target.addr = strtoull(target, &endptr, 16);
if (endptr == target)
goto out_free_target;
s = strchr(endptr, '<');
if (s == NULL)
goto out_free_target;
endptr = strchr(s + 1, '>');
if (endptr == NULL)
goto out_free_target;
*endptr = '\0';
*s = ' ';
ops->target.name = strdup(s);
*s = '<';
*endptr = '>';
if (ops->target.name == NULL)
goto out_free_target;
return 0;
out_free_target:
zfree(&ops->target.raw);
out_free_source:
zfree(&ops->source.raw);
return -1;
}
static int mov__scnprintf(struct ins *ins, char *bf, size_t size,
struct ins_operands *ops);
static struct ins_ops arm64_mov_ops = {
.parse = arm64_mov__parse,
.scnprintf = mov__scnprintf,
};
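
To see what arm64_mov__parse() extracts, here is a standalone sketch of the same string surgery on a hypothetical operand string; the real code stores the pieces in ops->source/ops->target and restores the delimiters afterwards:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	char raw[] = "x0, 0x460628 <errno>";	/* hypothetical ops->raw */
	char *s = strchr(raw, ','), *endptr;

	*s = '\0';
	printf("source: %s\n", raw);		/* "x0" */

	unsigned long long addr = strtoull(s + 1, &endptr, 16);
	printf("target.addr: %#llx\n", addr);	/* strtoull skips the blank */

	s = strchr(endptr, '<');		/* "<errno>" holds the name */
	endptr = strchr(s + 1, '>');
	*endptr = '\0';
	printf("target.name: %s\n", s + 1);	/* "errno" */
	return 0;
}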
static struct ins_ops *arm64__associate_instruction_ops(struct arch *arch, const char *name) static struct ins_ops *arm64__associate_instruction_ops(struct arch *arch, const char *name)
{ {
struct arm64_annotate *arm = arch->priv; struct arm64_annotate *arm = arch->priv;
@ -21,7 +78,7 @@ static struct ins_ops *arm64__associate_instruction_ops(struct arch *arch, const
else if (!strcmp(name, "ret")) else if (!strcmp(name, "ret"))
ops = &ret_ops; ops = &ret_ops;
else else
return NULL; ops = &arm64_mov_ops;
arch__associate_ins_ops(arch, name, ops); arch__associate_ins_ops(arch, name, ops);
return ops; return ops;

View File

@ -100,8 +100,6 @@ out_free_source:
return -1; return -1;
} }
static int mov__scnprintf(struct ins *ins, char *bf, size_t size,
struct ins_operands *ops);
static struct ins_ops s390_mov_ops = { static struct ins_ops s390_mov_ops = {
.parse = s390_mov__parse, .parse = s390_mov__parse,

View File

@ -283,12 +283,11 @@ out_put:
return ret; return ret;
} }
static int process_feature_event(struct perf_tool *tool, static int process_feature_event(struct perf_session *session,
union perf_event *event, union perf_event *event)
struct perf_session *session)
{ {
if (event->feat.feat_id < HEADER_LAST_FEATURE) if (event->feat.feat_id < HEADER_LAST_FEATURE)
return perf_event__process_feature(tool, event, session); return perf_event__process_feature(session, event);
return 0; return 0;
} }
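
The same mechanical change repeats through the hunks that follow: 'op2'-style handlers drop their struct perf_tool parameter, take the perf_session first, and recover the tool via session->tool when they still need it. A sketch of the shape, abstracted from this diff rather than copied from a perf header:

/* before */
int (*op2)(struct perf_tool *tool, union perf_event *event,
	   struct perf_session *session);

/* after */
int (*op2)(struct perf_session *session, union perf_event *event);

/* handlers that still need the tool simply do: */
struct perf_tool *tool = session->tool;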

View File

@ -86,12 +86,10 @@ static int perf_event__drop_oe(struct perf_tool *tool __maybe_unused,
} }
#endif #endif
static int perf_event__repipe_op2_synth(struct perf_tool *tool, static int perf_event__repipe_op2_synth(struct perf_session *session,
union perf_event *event, union perf_event *event)
struct perf_session *session
__maybe_unused)
{ {
return perf_event__repipe_synth(tool, event); return perf_event__repipe_synth(session->tool, event);
} }
static int perf_event__repipe_attr(struct perf_tool *tool, static int perf_event__repipe_attr(struct perf_tool *tool,
@ -133,10 +131,10 @@ static int copy_bytes(struct perf_inject *inject, int fd, off_t size)
return 0; return 0;
} }
static s64 perf_event__repipe_auxtrace(struct perf_tool *tool, static s64 perf_event__repipe_auxtrace(struct perf_session *session,
union perf_event *event, union perf_event *event)
struct perf_session *session)
{ {
struct perf_tool *tool = session->tool;
struct perf_inject *inject = container_of(tool, struct perf_inject, struct perf_inject *inject = container_of(tool, struct perf_inject,
tool); tool);
int ret; int ret;
@ -174,9 +172,8 @@ static s64 perf_event__repipe_auxtrace(struct perf_tool *tool,
#else #else
static s64 static s64
perf_event__repipe_auxtrace(struct perf_tool *tool __maybe_unused, perf_event__repipe_auxtrace(struct perf_session *session __maybe_unused,
union perf_event *event __maybe_unused, union perf_event *event __maybe_unused)
struct perf_session *session __maybe_unused)
{ {
pr_err("AUX area tracing not supported\n"); pr_err("AUX area tracing not supported\n");
return -EINVAL; return -EINVAL;
@ -362,26 +359,24 @@ static int perf_event__repipe_exit(struct perf_tool *tool,
return err; return err;
} }
static int perf_event__repipe_tracing_data(struct perf_tool *tool, static int perf_event__repipe_tracing_data(struct perf_session *session,
union perf_event *event, union perf_event *event)
struct perf_session *session)
{ {
int err; int err;
perf_event__repipe_synth(tool, event); perf_event__repipe_synth(session->tool, event);
err = perf_event__process_tracing_data(tool, event, session); err = perf_event__process_tracing_data(session, event);
return err; return err;
} }
static int perf_event__repipe_id_index(struct perf_tool *tool, static int perf_event__repipe_id_index(struct perf_session *session,
union perf_event *event, union perf_event *event)
struct perf_session *session)
{ {
int err; int err;
perf_event__repipe_synth(tool, event); perf_event__repipe_synth(session->tool, event);
err = perf_event__process_id_index(tool, event, session); err = perf_event__process_id_index(session, event);
return err; return err;
} }
@ -803,7 +798,8 @@ int cmd_inject(int argc, const char **argv)
"kallsyms pathname"), "kallsyms pathname"),
OPT_BOOLEAN('f', "force", &data.force, "don't complain, do it"), OPT_BOOLEAN('f', "force", &data.force, "don't complain, do it"),
OPT_CALLBACK_OPTARG(0, "itrace", &inject.itrace_synth_opts, OPT_CALLBACK_OPTARG(0, "itrace", &inject.itrace_synth_opts,
NULL, "opts", "Instruction Tracing options", NULL, "opts", "Instruction Tracing options\n"
ITRACE_HELP,
itrace_parse_synth_opts), itrace_parse_synth_opts),
OPT_BOOLEAN(0, "strip", &inject.strip, OPT_BOOLEAN(0, "strip", &inject.strip,
"strip non-synthesized events (use with --itrace)"), "strip non-synthesized events (use with --itrace)"),

View File

@ -106,9 +106,12 @@ static bool switch_output_time(struct record *rec)
trigger_is_ready(&switch_output_trigger); trigger_is_ready(&switch_output_trigger);
} }
static int record__write(struct record *rec, void *bf, size_t size) static int record__write(struct record *rec, struct perf_mmap *map __maybe_unused,
void *bf, size_t size)
{ {
if (perf_data__write(rec->session->data, bf, size) < 0) { struct perf_data_file *file = &rec->session->data->file;
if (perf_data_file__write(file, bf, size) < 0) {
pr_err("failed to write perf data, error: %m\n"); pr_err("failed to write perf data, error: %m\n");
return -1; return -1;
} }
@ -127,15 +130,15 @@ static int process_synthesized_event(struct perf_tool *tool,
struct machine *machine __maybe_unused) struct machine *machine __maybe_unused)
{ {
struct record *rec = container_of(tool, struct record, tool); struct record *rec = container_of(tool, struct record, tool);
return record__write(rec, event, event->header.size); return record__write(rec, NULL, event, event->header.size);
} }
static int record__pushfn(void *to, void *bf, size_t size) static int record__pushfn(struct perf_mmap *map, void *to, void *bf, size_t size)
{ {
struct record *rec = to; struct record *rec = to;
rec->samples++; rec->samples++;
return record__write(rec, bf, size); return record__write(rec, map, bf, size);
} }
static volatile int done; static volatile int done;
@ -170,6 +173,7 @@ static void record__sig_exit(void)
#ifdef HAVE_AUXTRACE_SUPPORT #ifdef HAVE_AUXTRACE_SUPPORT
static int record__process_auxtrace(struct perf_tool *tool, static int record__process_auxtrace(struct perf_tool *tool,
struct perf_mmap *map,
union perf_event *event, void *data1, union perf_event *event, void *data1,
size_t len1, void *data2, size_t len2) size_t len1, void *data2, size_t len2)
{ {
@ -197,21 +201,21 @@ static int record__process_auxtrace(struct perf_tool *tool,
if (padding) if (padding)
padding = 8 - padding; padding = 8 - padding;
record__write(rec, event, event->header.size); record__write(rec, map, event, event->header.size);
record__write(rec, data1, len1); record__write(rec, map, data1, len1);
if (len2) if (len2)
record__write(rec, data2, len2); record__write(rec, map, data2, len2);
record__write(rec, &pad, padding); record__write(rec, map, &pad, padding);
return 0; return 0;
} }
static int record__auxtrace_mmap_read(struct record *rec, static int record__auxtrace_mmap_read(struct record *rec,
struct auxtrace_mmap *mm) struct perf_mmap *map)
{ {
int ret; int ret;
ret = auxtrace_mmap__read(mm, rec->itr, &rec->tool, ret = auxtrace_mmap__read(map, rec->itr, &rec->tool,
record__process_auxtrace); record__process_auxtrace);
if (ret < 0) if (ret < 0)
return ret; return ret;
@ -223,11 +227,11 @@ static int record__auxtrace_mmap_read(struct record *rec,
} }
static int record__auxtrace_mmap_read_snapshot(struct record *rec, static int record__auxtrace_mmap_read_snapshot(struct record *rec,
struct auxtrace_mmap *mm) struct perf_mmap *map)
{ {
int ret; int ret;
ret = auxtrace_mmap__read_snapshot(mm, rec->itr, &rec->tool, ret = auxtrace_mmap__read_snapshot(map, rec->itr, &rec->tool,
record__process_auxtrace, record__process_auxtrace,
rec->opts.auxtrace_snapshot_size); rec->opts.auxtrace_snapshot_size);
if (ret < 0) if (ret < 0)
@ -245,13 +249,12 @@ static int record__auxtrace_read_snapshot_all(struct record *rec)
int rc = 0; int rc = 0;
for (i = 0; i < rec->evlist->nr_mmaps; i++) { for (i = 0; i < rec->evlist->nr_mmaps; i++) {
struct auxtrace_mmap *mm = struct perf_mmap *map = &rec->evlist->mmap[i];
&rec->evlist->mmap[i].auxtrace_mmap;
if (!mm->base) if (!map->auxtrace_mmap.base)
continue; continue;
if (record__auxtrace_mmap_read_snapshot(rec, mm) != 0) { if (record__auxtrace_mmap_read_snapshot(rec, map) != 0) {
rc = -1; rc = -1;
goto out; goto out;
} }
@ -295,7 +298,7 @@ static int record__auxtrace_init(struct record *rec)
static inline static inline
int record__auxtrace_mmap_read(struct record *rec __maybe_unused, int record__auxtrace_mmap_read(struct record *rec __maybe_unused,
struct auxtrace_mmap *mm __maybe_unused) struct perf_mmap *map __maybe_unused)
{ {
return 0; return 0;
} }
@ -529,17 +532,17 @@ static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evli
return 0; return 0;
for (i = 0; i < evlist->nr_mmaps; i++) { for (i = 0; i < evlist->nr_mmaps; i++) {
struct auxtrace_mmap *mm = &maps[i].auxtrace_mmap; struct perf_mmap *map = &maps[i];
if (maps[i].base) { if (map->base) {
if (perf_mmap__push(&maps[i], rec, record__pushfn) != 0) { if (perf_mmap__push(map, rec, record__pushfn) != 0) {
rc = -1; rc = -1;
goto out; goto out;
} }
} }
if (mm->base && !rec->opts.auxtrace_snapshot_mode && if (map->auxtrace_mmap.base && !rec->opts.auxtrace_snapshot_mode &&
record__auxtrace_mmap_read(rec, mm) != 0) { record__auxtrace_mmap_read(rec, map) != 0) {
rc = -1; rc = -1;
goto out; goto out;
} }
@ -550,7 +553,7 @@ static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evli
* at least one event. * at least one event.
*/ */
if (bytes_written != rec->bytes_written) if (bytes_written != rec->bytes_written)
rc = record__write(rec, &finished_round_event, sizeof(finished_round_event)); rc = record__write(rec, NULL, &finished_round_event, sizeof(finished_round_event));
if (overwrite) if (overwrite)
perf_evlist__toggle_bkw_mmap(evlist, BKW_MMAP_EMPTY); perf_evlist__toggle_bkw_mmap(evlist, BKW_MMAP_EMPTY);
@ -758,7 +761,7 @@ static int record__synthesize(struct record *rec, bool tail)
* We need to synthesize events first, because some * We need to synthesize events first, because some
 * features work on top of them (on the report side). * features work on top of them (on the report side).
*/ */
err = perf_event__synthesize_attrs(tool, session, err = perf_event__synthesize_attrs(tool, rec->evlist,
process_synthesized_event); process_synthesized_event);
if (err < 0) { if (err < 0) {
pr_err("Couldn't synthesize attrs.\n"); pr_err("Couldn't synthesize attrs.\n");

View File

@ -201,14 +201,13 @@ static void setup_forced_leader(struct report *report,
perf_evlist__force_leader(evlist); perf_evlist__force_leader(evlist);
} }
static int process_feature_event(struct perf_tool *tool, static int process_feature_event(struct perf_session *session,
union perf_event *event, union perf_event *event)
struct perf_session *session __maybe_unused)
{ {
struct report *rep = container_of(tool, struct report, tool); struct report *rep = container_of(session->tool, struct report, tool);
if (event->feat.feat_id < HEADER_LAST_FEATURE) if (event->feat.feat_id < HEADER_LAST_FEATURE)
return perf_event__process_feature(tool, event, session); return perf_event__process_feature(session, event);
if (event->feat.feat_id != HEADER_LAST_FEATURE) { if (event->feat.feat_id != HEADER_LAST_FEATURE) {
pr_err("failed: wrong feature ID: %" PRIu64 "\n", pr_err("failed: wrong feature ID: %" PRIu64 "\n",
@ -1106,7 +1105,7 @@ int cmd_report(int argc, const char **argv)
OPT_CALLBACK(0, "percentage", NULL, "relative|absolute", OPT_CALLBACK(0, "percentage", NULL, "relative|absolute",
"how to display percentage of filtered entries", parse_filter_percentage), "how to display percentage of filtered entries", parse_filter_percentage),
OPT_CALLBACK_OPTARG(0, "itrace", &itrace_synth_opts, NULL, "opts", OPT_CALLBACK_OPTARG(0, "itrace", &itrace_synth_opts, NULL, "opts",
"Instruction Tracing options", "Instruction Tracing options\n" ITRACE_HELP,
itrace_parse_synth_opts), itrace_parse_synth_opts),
OPT_BOOLEAN(0, "full-source-path", &srcline_full_filename, OPT_BOOLEAN(0, "full-source-path", &srcline_full_filename,
"Show full source file name path for source lines"), "Show full source file name path for source lines"),

View File

@ -406,9 +406,10 @@ static int perf_evsel__check_attr(struct perf_evsel *evsel,
PERF_OUTPUT_WEIGHT)) PERF_OUTPUT_WEIGHT))
return -EINVAL; return -EINVAL;
if (PRINT_FIELD(SYM) && !PRINT_FIELD(IP) && !PRINT_FIELD(ADDR)) { if (PRINT_FIELD(SYM) &&
!(evsel->attr.sample_type & (PERF_SAMPLE_IP|PERF_SAMPLE_ADDR))) {
pr_err("Display of symbols requested but neither sample IP nor " pr_err("Display of symbols requested but neither sample IP nor "
"sample address\nis selected. Hence, no addresses to convert " "sample address\navailable. Hence, no addresses to convert "
"to symbols.\n"); "to symbols.\n");
return -EINVAL; return -EINVAL;
} }
@ -417,10 +418,9 @@ static int perf_evsel__check_attr(struct perf_evsel *evsel,
"selected.\n"); "selected.\n");
return -EINVAL; return -EINVAL;
} }
if (PRINT_FIELD(DSO) && !PRINT_FIELD(IP) && !PRINT_FIELD(ADDR) && if (PRINT_FIELD(DSO) &&
!PRINT_FIELD(BRSTACK) && !PRINT_FIELD(BRSTACKSYM) && !PRINT_FIELD(BRSTACKOFF)) { !(evsel->attr.sample_type & (PERF_SAMPLE_IP|PERF_SAMPLE_ADDR))) {
pr_err("Display of DSO requested but no address to convert. Select\n" pr_err("Display of DSO requested but no address to convert.\n");
"sample IP, sample address, brstack, brstacksym, or brstackoff.\n");
return -EINVAL; return -EINVAL;
} }
if (PRINT_FIELD(SRCLINE) && !PRINT_FIELD(IP)) { if (PRINT_FIELD(SRCLINE) && !PRINT_FIELD(IP)) {
@ -1115,6 +1115,7 @@ static int perf_sample__fprintf_callindent(struct perf_sample *sample,
const char *name = NULL; const char *name = NULL;
static int spacing; static int spacing;
int len = 0; int len = 0;
int dlen = 0;
u64 ip = 0; u64 ip = 0;
/* /*
@ -1141,6 +1142,12 @@ static int perf_sample__fprintf_callindent(struct perf_sample *sample,
ip = sample->ip; ip = sample->ip;
} }
if (PRINT_FIELD(DSO) && !(PRINT_FIELD(IP) || PRINT_FIELD(ADDR))) {
dlen += fprintf(fp, "(");
dlen += map__fprintf_dsoname(al->map, fp);
dlen += fprintf(fp, ")\t");
}
if (name) if (name)
len = fprintf(fp, "%*s%s", (int)depth * 4, "", name); len = fprintf(fp, "%*s%s", (int)depth * 4, "", name);
else if (ip) else if (ip)
@ -1159,7 +1166,7 @@ static int perf_sample__fprintf_callindent(struct perf_sample *sample,
if (len < spacing) if (len < spacing)
len += fprintf(fp, "%*s", spacing - len, ""); len += fprintf(fp, "%*s", spacing - len, "");
return len; return len + dlen;
} }
static int perf_sample__fprintf_insn(struct perf_sample *sample, static int perf_sample__fprintf_insn(struct perf_sample *sample,
@ -1255,6 +1262,18 @@ static struct {
{0, NULL} {0, NULL}
}; };
static const char *sample_flags_to_name(u32 flags)
{
int i;
for (i = 0; sample_flags[i].name ; i++) {
if (sample_flags[i].flags == flags)
return sample_flags[i].name;
}
return NULL;
}
static int perf_sample__fprintf_flags(u32 flags, FILE *fp) static int perf_sample__fprintf_flags(u32 flags, FILE *fp)
{ {
const char *chars = PERF_IP_FLAG_CHARS; const char *chars = PERF_IP_FLAG_CHARS;
@ -1264,11 +1283,20 @@ static int perf_sample__fprintf_flags(u32 flags, FILE *fp)
char str[33]; char str[33];
int i, pos = 0; int i, pos = 0;
for (i = 0; sample_flags[i].name ; i++) { name = sample_flags_to_name(flags & ~PERF_IP_FLAG_IN_TX);
if (sample_flags[i].flags == (flags & ~PERF_IP_FLAG_IN_TX)) { if (name)
name = sample_flags[i].name; return fprintf(fp, " %-15s%4s ", name, in_tx ? "(x)" : "");
break;
} if (flags & PERF_IP_FLAG_TRACE_BEGIN) {
name = sample_flags_to_name(flags & ~(PERF_IP_FLAG_IN_TX | PERF_IP_FLAG_TRACE_BEGIN));
if (name)
return fprintf(fp, " tr strt %-7s%4s ", name, in_tx ? "(x)" : "");
}
if (flags & PERF_IP_FLAG_TRACE_END) {
name = sample_flags_to_name(flags & ~(PERF_IP_FLAG_IN_TX | PERF_IP_FLAG_TRACE_END));
if (name)
return fprintf(fp, " tr end %-7s%4s ", name, in_tx ? "(x)" : "");
} }
for (i = 0; i < n; i++, flags >>= 1) { for (i = 0; i < n; i++, flags >>= 1) {
@ -1281,10 +1309,7 @@ static int perf_sample__fprintf_flags(u32 flags, FILE *fp)
} }
str[pos] = 0; str[pos] = 0;
if (name) return fprintf(fp, " %-19s ", str);
return fprintf(fp, " %-7s%4s ", name, in_tx ? "(x)" : "");
return fprintf(fp, " %-11s ", str);
} }
struct printer_data { struct printer_data {
@ -1544,7 +1569,8 @@ struct metric_ctx {
FILE *fp; FILE *fp;
}; };
static void script_print_metric(void *ctx, const char *color, static void script_print_metric(struct perf_stat_config *config __maybe_unused,
void *ctx, const char *color,
const char *fmt, const char *fmt,
const char *unit, double val) const char *unit, double val)
{ {
@ -1562,7 +1588,8 @@ static void script_print_metric(void *ctx, const char *color,
fprintf(mctx->fp, " %s\n", unit); fprintf(mctx->fp, " %s\n", unit);
} }
static void script_new_line(void *ctx) static void script_new_line(struct perf_stat_config *config __maybe_unused,
void *ctx)
{ {
struct metric_ctx *mctx = ctx; struct metric_ctx *mctx = ctx;
@ -1608,7 +1635,7 @@ static void perf_sample__fprint_metric(struct perf_script *script,
evsel_script(evsel)->val = val; evsel_script(evsel)->val = val;
if (evsel_script(evsel->leader)->gnum == evsel->leader->nr_members) { if (evsel_script(evsel->leader)->gnum == evsel->leader->nr_members) {
for_each_group_member (ev2, evsel->leader) { for_each_group_member (ev2, evsel->leader) {
perf_stat__print_shadow_stats(ev2, perf_stat__print_shadow_stats(&stat_config, ev2,
evsel_script(ev2)->val, evsel_script(ev2)->val,
sample->cpu, sample->cpu,
&ctx, &ctx,
@ -2489,6 +2516,8 @@ parse:
output[j].fields &= ~all_output_options[i].field; output[j].fields &= ~all_output_options[i].field;
else else
output[j].fields |= all_output_options[i].field; output[j].fields |= all_output_options[i].field;
output[j].user_set = true;
output[j].wildcard_set = true;
} }
} }
} else { } else {
@ -2499,7 +2528,8 @@ parse:
rc = -EINVAL; rc = -EINVAL;
goto out; goto out;
} }
output[type].fields |= all_output_options[i].field; output[type].user_set = true;
output[type].wildcard_set = true;
} }
} }
@ -2963,9 +2993,8 @@ static void script__setup_sample_type(struct perf_script *script)
} }
} }
static int process_stat_round_event(struct perf_tool *tool __maybe_unused, static int process_stat_round_event(struct perf_session *session,
union perf_event *event, union perf_event *event)
struct perf_session *session)
{ {
struct stat_round_event *round = &event->stat_round; struct stat_round_event *round = &event->stat_round;
struct perf_evsel *counter; struct perf_evsel *counter;
@ -2979,9 +3008,8 @@ static int process_stat_round_event(struct perf_tool *tool __maybe_unused,
return 0; return 0;
} }
static int process_stat_config_event(struct perf_tool *tool __maybe_unused, static int process_stat_config_event(struct perf_session *session __maybe_unused,
union perf_event *event, union perf_event *event)
struct perf_session *session __maybe_unused)
{ {
perf_event__read_stat_config(&stat_config, &event->stat_config); perf_event__read_stat_config(&stat_config, &event->stat_config);
return 0; return 0;
@ -3007,10 +3035,10 @@ static int set_maps(struct perf_script *script)
} }
static static
int process_thread_map_event(struct perf_tool *tool, int process_thread_map_event(struct perf_session *session,
union perf_event *event, union perf_event *event)
struct perf_session *session __maybe_unused)
{ {
struct perf_tool *tool = session->tool;
struct perf_script *script = container_of(tool, struct perf_script, tool); struct perf_script *script = container_of(tool, struct perf_script, tool);
if (script->threads) { if (script->threads) {
@ -3026,10 +3054,10 @@ int process_thread_map_event(struct perf_tool *tool,
} }
static static
int process_cpu_map_event(struct perf_tool *tool __maybe_unused, int process_cpu_map_event(struct perf_session *session,
union perf_event *event, union perf_event *event)
struct perf_session *session __maybe_unused)
{ {
struct perf_tool *tool = session->tool;
struct perf_script *script = container_of(tool, struct perf_script, tool); struct perf_script *script = container_of(tool, struct perf_script, tool);
if (script->cpus) { if (script->cpus) {
@ -3044,21 +3072,21 @@ int process_cpu_map_event(struct perf_tool *tool __maybe_unused,
return set_maps(script); return set_maps(script);
} }
static int process_feature_event(struct perf_tool *tool, static int process_feature_event(struct perf_session *session,
union perf_event *event, union perf_event *event)
struct perf_session *session)
{ {
if (event->feat.feat_id < HEADER_LAST_FEATURE) if (event->feat.feat_id < HEADER_LAST_FEATURE)
return perf_event__process_feature(tool, event, session); return perf_event__process_feature(session, event);
return 0; return 0;
} }
#ifdef HAVE_AUXTRACE_SUPPORT #ifdef HAVE_AUXTRACE_SUPPORT
static int perf_script__process_auxtrace_info(struct perf_tool *tool, static int perf_script__process_auxtrace_info(struct perf_session *session,
union perf_event *event, union perf_event *event)
struct perf_session *session)
{ {
int ret = perf_event__process_auxtrace_info(tool, event, session); struct perf_tool *tool = session->tool;
int ret = perf_event__process_auxtrace_info(session, event);
if (ret == 0) { if (ret == 0) {
struct perf_script *script = container_of(tool, struct perf_script, tool); struct perf_script *script = container_of(tool, struct perf_script, tool);
@ -3193,7 +3221,7 @@ int cmd_script(int argc, const char **argv)
OPT_BOOLEAN(0, "ns", &nanosecs, OPT_BOOLEAN(0, "ns", &nanosecs,
"Use 9 decimal places when displaying time"), "Use 9 decimal places when displaying time"),
OPT_CALLBACK_OPTARG(0, "itrace", &itrace_synth_opts, NULL, "opts", OPT_CALLBACK_OPTARG(0, "itrace", &itrace_synth_opts, NULL, "opts",
"Instruction Tracing options", "Instruction Tracing options\n" ITRACE_HELP,
itrace_parse_synth_opts), itrace_parse_synth_opts),
OPT_BOOLEAN(0, "full-source-path", &srcline_full_filename, OPT_BOOLEAN(0, "full-source-path", &srcline_full_filename,
"Show full source file name path for source lines"), "Show full source file name path for source lines"),

File diff suppressed because it is too large.

View File

@ -181,7 +181,7 @@ static int __tp_field__init_uint(struct tp_field *field, int size, int offset, b
return 0; return 0;
} }
static int tp_field__init_uint(struct tp_field *field, struct format_field *format_field, bool needs_swap) static int tp_field__init_uint(struct tp_field *field, struct tep_format_field *format_field, bool needs_swap)
{ {
return __tp_field__init_uint(field, format_field->size, format_field->offset, needs_swap); return __tp_field__init_uint(field, format_field->size, format_field->offset, needs_swap);
} }
@ -198,7 +198,7 @@ static int __tp_field__init_ptr(struct tp_field *field, int offset)
return 0; return 0;
} }
static int tp_field__init_ptr(struct tp_field *field, struct format_field *format_field) static int tp_field__init_ptr(struct tp_field *field, struct tep_format_field *format_field)
{ {
return __tp_field__init_ptr(field, format_field->offset); return __tp_field__init_ptr(field, format_field->offset);
} }
@ -214,7 +214,7 @@ static int perf_evsel__init_tp_uint_field(struct perf_evsel *evsel,
struct tp_field *field, struct tp_field *field,
const char *name) const char *name)
{ {
struct format_field *format_field = perf_evsel__field(evsel, name); struct tep_format_field *format_field = perf_evsel__field(evsel, name);
if (format_field == NULL) if (format_field == NULL)
return -1; return -1;
@ -230,7 +230,7 @@ static int perf_evsel__init_tp_ptr_field(struct perf_evsel *evsel,
struct tp_field *field, struct tp_field *field,
const char *name) const char *name)
{ {
struct format_field *format_field = perf_evsel__field(evsel, name); struct tep_format_field *format_field = perf_evsel__field(evsel, name);
if (format_field == NULL) if (format_field == NULL)
return -1; return -1;
@ -288,6 +288,13 @@ static int perf_evsel__init_augmented_syscall_tp_args(struct perf_evsel *evsel)
return __tp_field__init_ptr(&sc->args, sc->id.offset + sizeof(u64)); return __tp_field__init_ptr(&sc->args, sc->id.offset + sizeof(u64));
} }
static int perf_evsel__init_augmented_syscall_tp_ret(struct perf_evsel *evsel)
{
struct syscall_tp *sc = evsel->priv;
return __tp_field__init_uint(&sc->ret, sizeof(u64), sc->id.offset + sizeof(u64), evsel->needs_swap);
}
static int perf_evsel__init_raw_syscall_tp(struct perf_evsel *evsel, void *handler) static int perf_evsel__init_raw_syscall_tp(struct perf_evsel *evsel, void *handler)
{ {
evsel->priv = malloc(sizeof(struct syscall_tp)); evsel->priv = malloc(sizeof(struct syscall_tp));
@ -498,16 +505,6 @@ static const char *clockid[] = {
}; };
static DEFINE_STRARRAY(clockid); static DEFINE_STRARRAY(clockid);
static const char *socket_families[] = {
"UNSPEC", "LOCAL", "INET", "AX25", "IPX", "APPLETALK", "NETROM",
"BRIDGE", "ATMPVC", "X25", "INET6", "ROSE", "DECnet", "NETBEUI",
"SECURITY", "KEY", "NETLINK", "PACKET", "ASH", "ECONET", "ATMSVC",
"RDS", "SNA", "IRDA", "PPPOX", "WANPIPE", "LLC", "IB", "CAN", "TIPC",
"BLUETOOTH", "IUCV", "RXRPC", "ISDN", "PHONET", "IEEE802154", "CAIF",
"ALG", "NFC", "VSOCK",
};
static DEFINE_STRARRAY(socket_families);
static size_t syscall_arg__scnprintf_access_mode(char *bf, size_t size, static size_t syscall_arg__scnprintf_access_mode(char *bf, size_t size,
struct syscall_arg *arg) struct syscall_arg *arg)
{ {
@ -631,6 +628,8 @@ static struct syscall_fmt {
} syscall_fmts[] = { } syscall_fmts[] = {
{ .name = "access", { .name = "access",
.arg = { [1] = { .scnprintf = SCA_ACCMODE, /* mode */ }, }, }, .arg = { [1] = { .scnprintf = SCA_ACCMODE, /* mode */ }, }, },
{ .name = "bind",
.arg = { [1] = { .scnprintf = SCA_SOCKADDR, /* umyaddr */ }, }, },
{ .name = "bpf", { .name = "bpf",
.arg = { [0] = STRARRAY(cmd, bpf_cmd), }, }, .arg = { [0] = STRARRAY(cmd, bpf_cmd), }, },
{ .name = "brk", .hexret = true, { .name = "brk", .hexret = true,
@ -645,6 +644,8 @@ static struct syscall_fmt {
[4] = { .name = "tls", .scnprintf = SCA_HEX, }, }, }, [4] = { .name = "tls", .scnprintf = SCA_HEX, }, }, },
{ .name = "close", { .name = "close",
.arg = { [0] = { .scnprintf = SCA_CLOSE_FD, /* fd */ }, }, }, .arg = { [0] = { .scnprintf = SCA_CLOSE_FD, /* fd */ }, }, },
{ .name = "connect",
.arg = { [1] = { .scnprintf = SCA_SOCKADDR, /* servaddr */ }, }, },
{ .name = "epoll_ctl", { .name = "epoll_ctl",
.arg = { [1] = STRARRAY(op, epoll_ctl_ops), }, }, .arg = { [1] = STRARRAY(op, epoll_ctl_ops), }, },
{ .name = "eventfd2", { .name = "eventfd2",
@ -801,7 +802,8 @@ static struct syscall_fmt {
{ .name = "sendmsg", { .name = "sendmsg",
.arg = { [2] = { .scnprintf = SCA_MSG_FLAGS, /* flags */ }, }, }, .arg = { [2] = { .scnprintf = SCA_MSG_FLAGS, /* flags */ }, }, },
{ .name = "sendto", { .name = "sendto",
.arg = { [3] = { .scnprintf = SCA_MSG_FLAGS, /* flags */ }, }, }, .arg = { [3] = { .scnprintf = SCA_MSG_FLAGS, /* flags */ },
[4] = { .scnprintf = SCA_SOCKADDR, /* addr */ }, }, },
{ .name = "set_tid_address", .errpid = true, }, { .name = "set_tid_address", .errpid = true, },
{ .name = "setitimer", { .name = "setitimer",
.arg = { [0] = STRARRAY(which, itimers), }, }, .arg = { [0] = STRARRAY(which, itimers), }, },
@ -830,6 +832,7 @@ static struct syscall_fmt {
.arg = { [2] = { .scnprintf = SCA_SIGNUM, /* sig */ }, }, }, .arg = { [2] = { .scnprintf = SCA_SIGNUM, /* sig */ }, }, },
{ .name = "tkill", { .name = "tkill",
.arg = { [1] = { .scnprintf = SCA_SIGNUM, /* sig */ }, }, }, .arg = { [1] = { .scnprintf = SCA_SIGNUM, /* sig */ }, }, },
{ .name = "umount2", .alias = "umount", },
{ .name = "uname", .alias = "newuname", }, { .name = "uname", .alias = "newuname", },
{ .name = "unlinkat", { .name = "unlinkat",
.arg = { [0] = { .scnprintf = SCA_FDAT, /* dfd */ }, }, }, .arg = { [0] = { .scnprintf = SCA_FDAT, /* dfd */ }, }, },
@ -856,13 +859,15 @@ static struct syscall_fmt *syscall_fmt__find(const char *name)
/* /*
* is_exit: is this "exit" or "exit_group"? * is_exit: is this "exit" or "exit_group"?
* is_open: is this "open" or "openat"? To associate the fd returned in sys_exit with the pathname in sys_enter. * is_open: is this "open" or "openat"? To associate the fd returned in sys_exit with the pathname in sys_enter.
* args_size: sum of the sizes of the syscall arguments, anything after that is augmented stuff: pathname for openat, etc.
*/ */
struct syscall { struct syscall {
struct event_format *tp_format; struct tep_event_format *tp_format;
int nr_args; int nr_args;
int args_size;
bool is_exit; bool is_exit;
bool is_open; bool is_open;
struct format_field *args; struct tep_format_field *args;
const char *name; const char *name;
struct syscall_fmt *fmt; struct syscall_fmt *fmt;
struct syscall_arg_fmt *arg_fmt; struct syscall_arg_fmt *arg_fmt;
@ -1095,11 +1100,21 @@ static void thread__set_filename_pos(struct thread *thread, const char *bf,
ttrace->filename.entry_str_pos = bf - ttrace->entry_str; ttrace->filename.entry_str_pos = bf - ttrace->entry_str;
} }
static size_t syscall_arg__scnprintf_augmented_string(struct syscall_arg *arg, char *bf, size_t size)
{
struct augmented_arg *augmented_arg = arg->augmented.args;
return scnprintf(bf, size, "%.*s", augmented_arg->size, augmented_arg->value);
}
static size_t syscall_arg__scnprintf_filename(char *bf, size_t size, static size_t syscall_arg__scnprintf_filename(char *bf, size_t size,
struct syscall_arg *arg) struct syscall_arg *arg)
{ {
unsigned long ptr = arg->val; unsigned long ptr = arg->val;
if (arg->augmented.args)
return syscall_arg__scnprintf_augmented_string(arg, bf, size);
if (!arg->trace->vfs_getname) if (!arg->trace->vfs_getname)
return scnprintf(bf, size, "%#x", ptr); return scnprintf(bf, size, "%#x", ptr);
@ -1142,11 +1157,9 @@ static void sig_handler(int sig)
interrupted = sig == SIGINT; interrupted = sig == SIGINT;
} }
static size_t trace__fprintf_entry_head(struct trace *trace, struct thread *thread, static size_t trace__fprintf_comm_tid(struct trace *trace, struct thread *thread, FILE *fp)
u64 duration, bool duration_calculated, u64 tstamp, FILE *fp)
{ {
size_t printed = trace__fprintf_tstamp(trace, tstamp, fp); size_t printed = 0;
printed += fprintf_duration(duration, duration_calculated, fp);
if (trace->multiple_threads) { if (trace->multiple_threads) {
if (trace->show_comm) if (trace->show_comm)
@ -1157,6 +1170,14 @@ static size_t trace__fprintf_entry_head(struct trace *trace, struct thread *thre
return printed; return printed;
} }
static size_t trace__fprintf_entry_head(struct trace *trace, struct thread *thread,
u64 duration, bool duration_calculated, u64 tstamp, FILE *fp)
{
size_t printed = trace__fprintf_tstamp(trace, tstamp, fp);
printed += fprintf_duration(duration, duration_calculated, fp);
return printed + trace__fprintf_comm_tid(trace, thread, fp);
}
static int trace__process_event(struct trace *trace, struct machine *machine, static int trace__process_event(struct trace *trace, struct machine *machine,
union perf_event *event, struct perf_sample *sample) union perf_event *event, struct perf_sample *sample)
{ {
@ -1258,10 +1279,12 @@ static int syscall__alloc_arg_fmts(struct syscall *sc, int nr_args)
static int syscall__set_arg_fmts(struct syscall *sc) static int syscall__set_arg_fmts(struct syscall *sc)
{ {
struct format_field *field; struct tep_format_field *field, *last_field = NULL;
int idx = 0, len; int idx = 0, len;
for (field = sc->args; field; field = field->next, ++idx) { for (field = sc->args; field; field = field->next, ++idx) {
last_field = field;
if (sc->fmt && sc->fmt->arg[idx].scnprintf) if (sc->fmt && sc->fmt->arg[idx].scnprintf)
continue; continue;
@ -1270,7 +1293,7 @@ static int syscall__set_arg_fmts(struct syscall *sc)
strcmp(field->name, "path") == 0 || strcmp(field->name, "path") == 0 ||
strcmp(field->name, "pathname") == 0)) strcmp(field->name, "pathname") == 0))
sc->arg_fmt[idx].scnprintf = SCA_FILENAME; sc->arg_fmt[idx].scnprintf = SCA_FILENAME;
else if (field->flags & FIELD_IS_POINTER) else if (field->flags & TEP_FIELD_IS_POINTER)
sc->arg_fmt[idx].scnprintf = syscall_arg__scnprintf_hex; sc->arg_fmt[idx].scnprintf = syscall_arg__scnprintf_hex;
else if (strcmp(field->type, "pid_t") == 0) else if (strcmp(field->type, "pid_t") == 0)
sc->arg_fmt[idx].scnprintf = SCA_PID; sc->arg_fmt[idx].scnprintf = SCA_PID;
@ -1292,6 +1315,9 @@ static int syscall__set_arg_fmts(struct syscall *sc)
} }
} }
if (last_field)
sc->args_size = last_field->offset + last_field->size;
return 0; return 0;
} }
@ -1472,14 +1498,18 @@ static size_t syscall__scnprintf_val(struct syscall *sc, char *bf, size_t size,
} }
static size_t syscall__scnprintf_args(struct syscall *sc, char *bf, size_t size, static size_t syscall__scnprintf_args(struct syscall *sc, char *bf, size_t size,
unsigned char *args, struct trace *trace, unsigned char *args, void *augmented_args, int augmented_args_size,
struct thread *thread) struct trace *trace, struct thread *thread)
{ {
size_t printed = 0; size_t printed = 0;
unsigned long val; unsigned long val;
u8 bit = 1; u8 bit = 1;
struct syscall_arg arg = { struct syscall_arg arg = {
.args = args, .args = args,
.augmented = {
.size = augmented_args_size,
.args = augmented_args,
},
.idx = 0, .idx = 0,
.mask = 0, .mask = 0,
.trace = trace, .trace = trace,
@ -1495,7 +1525,7 @@ static size_t syscall__scnprintf_args(struct syscall *sc, char *bf, size_t size,
ttrace->ret_scnprintf = NULL; ttrace->ret_scnprintf = NULL;
if (sc->args != NULL) { if (sc->args != NULL) {
struct format_field *field; struct tep_format_field *field;
for (field = sc->args; field; for (field = sc->args; field;
field = field->next, ++arg.idx, bit <<= 1) { field = field->next, ++arg.idx, bit <<= 1) {
@ -1654,6 +1684,17 @@ static int trace__fprintf_sample(struct trace *trace, struct perf_evsel *evsel,
return printed; return printed;
} }
static void *syscall__augmented_args(struct syscall *sc, struct perf_sample *sample, int *augmented_args_size)
{
void *augmented_args = NULL;
*augmented_args_size = sample->raw_size - sc->args_size;
if (*augmented_args_size > 0)
augmented_args = sample->raw_data + sc->args_size;
return augmented_args;
}
static int trace__sys_enter(struct trace *trace, struct perf_evsel *evsel, static int trace__sys_enter(struct trace *trace, struct perf_evsel *evsel,
union perf_event *event __maybe_unused, union perf_event *event __maybe_unused,
struct perf_sample *sample) struct perf_sample *sample)
@ -1663,6 +1704,8 @@ static int trace__sys_enter(struct trace *trace, struct perf_evsel *evsel,
size_t printed = 0; size_t printed = 0;
struct thread *thread; struct thread *thread;
int id = perf_evsel__sc_tp_uint(evsel, id, sample), err = -1; int id = perf_evsel__sc_tp_uint(evsel, id, sample), err = -1;
int augmented_args_size = 0;
void *augmented_args = NULL;
struct syscall *sc = trace__syscall_info(trace, evsel, id); struct syscall *sc = trace__syscall_info(trace, evsel, id);
struct thread_trace *ttrace; struct thread_trace *ttrace;
@ -1686,13 +1729,24 @@ static int trace__sys_enter(struct trace *trace, struct perf_evsel *evsel,
if (!(trace->duration_filter || trace->summary_only || trace->min_stack)) if (!(trace->duration_filter || trace->summary_only || trace->min_stack))
trace__printf_interrupted_entry(trace); trace__printf_interrupted_entry(trace);
/*
 * If this is raw_syscalls.sys_enter, then it always comes with the 6 possible
 * arguments, even if the syscall being handled, say "openat", uses only 4.
 * This breaks the syscall__augmented_args() check for augmented args, as we
 * calculate syscall->args_size using each syscalls:sys_enter_NAME tracefs
 * format file: when handling, say, the openat syscall, we end up getting 6
 * args for the raw_syscalls:sys_enter event when we expected just 4, and
 * mistakenly take the extra 2 u64 args for the augmented filename. So just
 * check here and avoid using augmented args when the evsel is the raw_syscalls one.
*/
if (evsel != trace->syscalls.events.sys_enter)
augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size);
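
Rough arithmetic behind syscall__augmented_args(), with illustrative sizes (the real args_size comes from the tracefs format file of each syscalls:sys_enter_NAME tracepoint):

#include <stdio.h>

int main(void)
{
	unsigned int raw_size  = 68;	/* sample->raw_size, hypothetical      */
	unsigned int args_size = 48;	/* sc->args_size for sys_enter_openat  */
	int augmented_size = raw_size - args_size;

	if (augmented_size > 0)		/* 20: 8-byte header + "/etc/passwd\0" */
		printf("augmented payload: %d bytes\n", augmented_size);
	return 0;
}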
ttrace->entry_time = sample->time; ttrace->entry_time = sample->time;
msg = ttrace->entry_str; msg = ttrace->entry_str;
printed += scnprintf(msg + printed, trace__entry_str_size - printed, "%s(", sc->name); printed += scnprintf(msg + printed, trace__entry_str_size - printed, "%s(", sc->name);
printed += syscall__scnprintf_args(sc, msg + printed, trace__entry_str_size - printed, printed += syscall__scnprintf_args(sc, msg + printed, trace__entry_str_size - printed,
args, trace, thread); args, augmented_args, augmented_args_size, trace, thread);
if (sc->is_exit) { if (sc->is_exit) {
if (!(trace->duration_filter || trace->summary_only || trace->failure_only || trace->min_stack)) { if (!(trace->duration_filter || trace->summary_only || trace->failure_only || trace->min_stack)) {
@ -1723,7 +1777,8 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct perf_evsel *evse
int id = perf_evsel__sc_tp_uint(evsel, id, sample), err = -1; int id = perf_evsel__sc_tp_uint(evsel, id, sample), err = -1;
struct syscall *sc = trace__syscall_info(trace, evsel, id); struct syscall *sc = trace__syscall_info(trace, evsel, id);
char msg[1024]; char msg[1024];
void *args; void *args, *augmented_args = NULL;
int augmented_args_size;
if (sc == NULL) if (sc == NULL)
return -1; return -1;
@ -1738,7 +1793,8 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct perf_evsel *evse
goto out_put; goto out_put;
args = perf_evsel__sc_tp_ptr(evsel, args, sample); args = perf_evsel__sc_tp_ptr(evsel, args, sample);
syscall__scnprintf_args(sc, msg, sizeof(msg), args, trace, thread); augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size);
syscall__scnprintf_args(sc, msg, sizeof(msg), args, augmented_args, augmented_args_size, trace, thread);
fprintf(trace->output, "%s", msg); fprintf(trace->output, "%s", msg);
err = 0; err = 0;
out_put: out_put:
@ -2022,6 +2078,7 @@ static int trace__event_handler(struct trace *trace, struct perf_evsel *evsel,
union perf_event *event __maybe_unused, union perf_event *event __maybe_unused,
struct perf_sample *sample) struct perf_sample *sample)
{ {
struct thread *thread = machine__findnew_thread(trace->host, sample->pid, sample->tid);
int callchain_ret = 0; int callchain_ret = 0;
if (sample->callchain) { if (sample->callchain) {
@ -2039,13 +2096,31 @@ static int trace__event_handler(struct trace *trace, struct perf_evsel *evsel,
if (trace->trace_syscalls) if (trace->trace_syscalls)
fprintf(trace->output, "( ): "); fprintf(trace->output, "( ): ");
if (thread)
trace__fprintf_comm_tid(trace, thread, trace->output);
if (evsel == trace->syscalls.events.augmented) {
int id = perf_evsel__sc_tp_uint(evsel, id, sample);
struct syscall *sc = trace__syscall_info(trace, evsel, id);
if (sc) {
fprintf(trace->output, "%s(", sc->name);
trace__fprintf_sys_enter(trace, evsel, sample);
fputc(')', trace->output);
goto newline;
}
/*
* XXX: Not having the associated syscall info or not finding/adding
* the thread should never happen, but if it does...
* fall thru and print it as a bpf_output event.
*/
}
fprintf(trace->output, "%s:", evsel->name); fprintf(trace->output, "%s:", evsel->name);
if (perf_evsel__is_bpf_output(evsel)) { if (perf_evsel__is_bpf_output(evsel)) {
if (evsel == trace->syscalls.events.augmented) bpf_output__fprintf(trace, sample);
trace__fprintf_sys_enter(trace, evsel, sample);
else
bpf_output__fprintf(trace, sample);
} else if (evsel->tp_format) { } else if (evsel->tp_format) {
if (strncmp(evsel->tp_format->name, "sys_enter_", 10) || if (strncmp(evsel->tp_format->name, "sys_enter_", 10) ||
trace__fprintf_sys_enter(trace, evsel, sample)) { trace__fprintf_sys_enter(trace, evsel, sample)) {
@ -2055,12 +2130,14 @@ static int trace__event_handler(struct trace *trace, struct perf_evsel *evsel,
} }
} }
newline:
fprintf(trace->output, "\n"); fprintf(trace->output, "\n");
if (callchain_ret > 0) if (callchain_ret > 0)
trace__fprintf_callchain(trace, sample); trace__fprintf_callchain(trace, sample);
else if (callchain_ret < 0) else if (callchain_ret < 0)
pr_err("Problem processing %s callchain, skipping...\n", perf_evsel__name(evsel)); pr_err("Problem processing %s callchain, skipping...\n", perf_evsel__name(evsel));
thread__put(thread);
out: out:
return 0; return 0;
} }
@ -3276,12 +3353,8 @@ int cmd_trace(int argc, const char **argv)
goto out; goto out;
} }
if (evsel) { if (evsel)
if (perf_evsel__init_augmented_syscall_tp(evsel) ||
perf_evsel__init_augmented_syscall_tp_args(evsel))
goto out;
trace.syscalls.events.augmented = evsel; trace.syscalls.events.augmented = evsel;
}
err = bpf__setup_stdout(trace.evlist); err = bpf__setup_stdout(trace.evlist);
if (err) { if (err) {
@ -3326,6 +3399,34 @@ int cmd_trace(int argc, const char **argv)
} }
} }
/*
* If we are augmenting syscalls, then combine what we put in the
* __augmented_syscalls__ BPF map with what is in the
* syscalls:sys_exit_FOO tracepoints, i.e. just like we do without BPF,
* combining raw_syscalls:sys_enter with raw_syscalls:sys_exit.
*
* We'll switch to look at two BPF maps, one for sys_enter and the
* other for sys_exit when we start augmenting the sys_exit paths with
* buffers that are being copied from kernel to userspace, think 'read'
* syscall.
*/
if (trace.syscalls.events.augmented) {
evsel = trace.syscalls.events.augmented;
if (perf_evsel__init_augmented_syscall_tp(evsel) ||
perf_evsel__init_augmented_syscall_tp_args(evsel))
goto out;
evsel->handler = trace__sys_enter;
evlist__for_each_entry(trace.evlist, evsel) {
if (strstarts(perf_evsel__name(evsel), "syscalls:sys_exit_")) {
perf_evsel__init_augmented_syscall_tp(evsel);
perf_evsel__init_augmented_syscall_tp_ret(evsel);
evsel->handler = trace__sys_exit;
}
}
}
if ((argc >= 1) && (strcmp(argv[0], "record") == 0)) if ((argc >= 1) && (strcmp(argv[0], "record") == 0))
return trace__record(&trace, argc-1, &argv[1]); return trace__record(&trace, argc-1, &argv[1]);

View File

@ -14,6 +14,7 @@ include/uapi/linux/sched.h
include/uapi/linux/stat.h include/uapi/linux/stat.h
include/uapi/linux/vhost.h include/uapi/linux/vhost.h
include/uapi/sound/asound.h include/uapi/sound/asound.h
include/linux/bits.h
include/linux/hash.h include/linux/hash.h
include/uapi/linux/hw_breakpoint.h include/uapi/linux/hw_breakpoint.h
arch/x86/include/asm/disabled-features.h arch/x86/include/asm/disabled-features.h

View File

@ -30,3 +30,4 @@ perf-test mainporcelain common
perf-timechart mainporcelain common perf-timechart mainporcelain common
perf-top mainporcelain common perf-top mainporcelain common
perf-trace mainporcelain audit perf-trace mainporcelain audit
perf-version mainporcelain common

View File

@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 // SPDX-License-Identifier: GPL-2.0
/* /*
* Augment the openat syscall with the contents of the filename pointer argument. * Augment syscalls with the contents of the pointer arguments.
* *
* Test it with: * Test it with:
* *
@ -10,15 +10,14 @@
* the last one should be the one for '/etc/passwd'. * the last one should be the one for '/etc/passwd'.
* *
 * This matches what is marshalled into the raw_syscalls:sys_enter payload * This matches what is marshalled into the raw_syscalls:sys_enter payload
 * expected by the 'perf trace' beautifiers, and can be used by them unmodified, * expected by the 'perf trace' beautifiers, and can be used by them, which will
* which will be done as that feature is implemented in the next csets, for now * check if perf_sample->raw_data is more than what is expected for each
* it will appear in a dump done by the default tracepoint handler in 'perf trace', * syscalls:sys_{enter,exit}_SYSCALL tracepoint, uing the extra data as the
* that uses bpf_output__fprintf() to just dump those contents, as done with * contents of pointer arguments.
* the bpf-output event associated with the __bpf_output__ map declared in
* tools/perf/include/bpf/stdio.h.
*/ */
#include <stdio.h> #include <stdio.h>
#include <linux/socket.h>
struct bpf_map SEC("maps") __augmented_syscalls__ = { struct bpf_map SEC("maps") __augmented_syscalls__ = {
.type = BPF_MAP_TYPE_PERF_EVENT_ARRAY, .type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
@ -27,6 +26,44 @@ struct bpf_map SEC("maps") __augmented_syscalls__ = {
.max_entries = __NR_CPUS__, .max_entries = __NR_CPUS__,
}; };
struct syscall_exit_args {
unsigned long long common_tp_fields;
long syscall_nr;
long ret;
};
struct augmented_filename {
unsigned int size;
int reserved;
char value[256];
};
#define augmented_filename_syscall(syscall) \
struct augmented_enter_##syscall##_args { \
struct syscall_enter_##syscall##_args args; \
struct augmented_filename filename; \
}; \
int syscall_enter(syscall)(struct syscall_enter_##syscall##_args *args) \
{ \
struct augmented_enter_##syscall##_args augmented_args = { .filename.reserved = 0, }; \
unsigned int len = sizeof(augmented_args); \
probe_read(&augmented_args.args, sizeof(augmented_args.args), args); \
augmented_args.filename.size = probe_read_str(&augmented_args.filename.value, \
sizeof(augmented_args.filename.value), \
args->filename_ptr); \
if (augmented_args.filename.size < sizeof(augmented_args.filename.value)) { \
len -= sizeof(augmented_args.filename.value) - augmented_args.filename.size; \
len &= sizeof(augmented_args.filename.value) - 1; \
} \
perf_event_output(args, &__augmented_syscalls__, BPF_F_CURRENT_CPU, \
&augmented_args, len); \
return 0; \
} \
int syscall_exit(syscall)(struct syscall_exit_args *args) \
{ \
return 1; /* 0 as soon as we start copying data returned by the kernel, e.g. 'read' */ \
}
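
The length juggling in the macro above, with concrete but illustrative numbers for the openat case: the tracepoint args take 48 bytes and struct augmented_filename adds 8 + 256, so sizeof(augmented_args) is 312. A sketch of the shrink-to-fit step:

#include <stdio.h>

int main(void)
{
	unsigned int len = 312;	/* sizeof(augmented_args): 48 args + 8 header + 256 value[] */
	unsigned int str = 12;	/* probe_read_str("/etc/passwd"), NUL included */

	if (str < 256) {
		len -= 256 - str;	/* drop the unused tail of value[]: 68 */
		len &= 256 - 1;		/* bound it for the BPF verifier: still 68 */
	}
	printf("bytes sent to __augmented_syscalls__: %u\n", len);
	return 0;
}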
struct syscall_enter_openat_args { struct syscall_enter_openat_args {
unsigned long long common_tp_fields; unsigned long long common_tp_fields;
long syscall_nr; long syscall_nr;
@ -36,20 +73,101 @@ struct syscall_enter_openat_args {
long mode; long mode;
}; };
struct augmented_enter_openat_args { augmented_filename_syscall(openat);
struct syscall_enter_openat_args args;
char filename[64]; struct syscall_enter_open_args {
unsigned long long common_tp_fields;
long syscall_nr;
char *filename_ptr;
long flags;
long mode;
}; };
int syscall_enter(openat)(struct syscall_enter_openat_args *args) augmented_filename_syscall(open);
{
struct augmented_enter_openat_args augmented_args;
probe_read(&augmented_args.args, sizeof(augmented_args.args), args); struct syscall_enter_inotify_add_watch_args {
probe_read_str(&augmented_args.filename, sizeof(augmented_args.filename), args->filename_ptr); unsigned long long common_tp_fields;
perf_event_output(args, &__augmented_syscalls__, BPF_F_CURRENT_CPU, long syscall_nr;
&augmented_args, sizeof(augmented_args)); long fd;
return 1; char *filename_ptr;
long mask;
};
augmented_filename_syscall(inotify_add_watch);
struct statbuf;
struct syscall_enter_newstat_args {
unsigned long long common_tp_fields;
long syscall_nr;
char *filename_ptr;
struct stat *statbuf;
};
augmented_filename_syscall(newstat);
#ifndef _K_SS_MAXSIZE
#define _K_SS_MAXSIZE 128
#endif
#define augmented_sockaddr_syscall(syscall) \
struct augmented_enter_##syscall##_args { \
struct syscall_enter_##syscall##_args args; \
struct sockaddr_storage addr; \
}; \
int syscall_enter(syscall)(struct syscall_enter_##syscall##_args *args) \
{ \
struct augmented_enter_##syscall##_args augmented_args; \
unsigned long addrlen = sizeof(augmented_args.addr); \
probe_read(&augmented_args.args, sizeof(augmented_args.args), args); \
/* FIXME_CLANG_OPTIMIZATION_THAT_ACCESSES_USER_CONTROLLED_ADDRLEN_DESPITE_THIS_CHECK */ \
/* if (addrlen > augmented_args.args.addrlen) */ \
/* addrlen = augmented_args.args.addrlen; */ \
/* */ \
probe_read(&augmented_args.addr, addrlen, args->addr_ptr); \
perf_event_output(args, &__augmented_syscalls__, BPF_F_CURRENT_CPU, \
&augmented_args, \
sizeof(augmented_args) - sizeof(augmented_args.addr) + addrlen); \
return 0; \
} \
int syscall_exit(syscall)(struct syscall_exit_args *args) \
{ \
return 1; /* 0 as soon as we start copying data returned by the kernel, e.g. 'read' */ \
} }
struct sockaddr;
struct syscall_enter_bind_args {
unsigned long long common_tp_fields;
long syscall_nr;
long fd;
struct sockaddr *addr_ptr;
unsigned long addrlen;
};
augmented_sockaddr_syscall(bind);
struct syscall_enter_connect_args {
unsigned long long common_tp_fields;
long syscall_nr;
long fd;
struct sockaddr *addr_ptr;
unsigned long addrlen;
};
augmented_sockaddr_syscall(connect);
struct syscall_enter_sendto_args {
unsigned long long common_tp_fields;
long syscall_nr;
long fd;
void *buff;
long len;
unsigned long flags;
struct sockaddr *addr_ptr;
long addr_len;
};
augmented_sockaddr_syscall(sendto);
license(GPL); license(GPL);

View File

@ -0,0 +1,80 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Augment the filename syscalls with the contents of the filename pointer argument
 * filtering out those that do not start with /etc/.
*
* Test it with:
*
* perf trace -e tools/perf/examples/bpf/augmented_syscalls.c cat /etc/passwd > /dev/null
*
 * It'll catch some openat syscalls related to the dynamic linker and
* the last one should be the one for '/etc/passwd'.
*
 * This matches what is marshalled into the raw_syscalls:sys_enter payload
* expected by the 'perf trace' beautifiers, and can be used by them unmodified,
* which will be done as that feature is implemented in the next csets, for now
* it will appear in a dump done by the default tracepoint handler in 'perf trace',
* that uses bpf_output__fprintf() to just dump those contents, as done with
* the bpf-output event associated with the __bpf_output__ map declared in
* tools/perf/include/bpf/stdio.h.
*/
#include <stdio.h>
struct bpf_map SEC("maps") __augmented_syscalls__ = {
.type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
.key_size = sizeof(int),
.value_size = sizeof(u32),
.max_entries = __NR_CPUS__,
};
struct augmented_filename {
int size;
int reserved;
char value[64];
};
#define augmented_filename_syscall_enter(syscall) \
struct augmented_enter_##syscall##_args { \
struct syscall_enter_##syscall##_args args; \
struct augmented_filename filename; \
}; \
int syscall_enter(syscall)(struct syscall_enter_##syscall##_args *args) \
{ \
char etc[6] = "/etc/"; \
struct augmented_enter_##syscall##_args augmented_args = { .filename.reserved = 0, }; \
probe_read(&augmented_args.args, sizeof(augmented_args.args), args); \
augmented_args.filename.size = probe_read_str(&augmented_args.filename.value, \
sizeof(augmented_args.filename.value), \
args->filename_ptr); \
if (__builtin_memcmp(augmented_args.filename.value, etc, 4) != 0) \
return 0; \
perf_event_output(args, &__augmented_syscalls__, BPF_F_CURRENT_CPU, \
&augmented_args, \
(sizeof(augmented_args) - sizeof(augmented_args.filename.value) + \
augmented_args.filename.size)); \
return 0; \
}
struct syscall_enter_openat_args {
unsigned long long common_tp_fields;
long syscall_nr;
long dfd;
char *filename_ptr;
long flags;
long mode;
};
augmented_filename_syscall_enter(openat);
struct syscall_enter_open_args {
unsigned long long common_tp_fields;
long syscall_nr;
char *filename_ptr;
long flags;
long mode;
};
augmented_filename_syscall_enter(open);
license(GPL);

View File

@ -26,6 +26,9 @@ struct bpf_map {
#define syscall_enter(name) \ #define syscall_enter(name) \
SEC("syscalls:sys_enter_" #name) syscall_enter_ ## name SEC("syscalls:sys_enter_" #name) syscall_enter_ ## name
#define syscall_exit(name) \
SEC("syscalls:sys_exit_" #name) syscall_exit_ ## name
#define license(name) \ #define license(name) \
char _license[] SEC("license") = #name; \ char _license[] SEC("license") = #name; \
int _version SEC("version") = LINUX_VERSION_CODE; int _version SEC("version") = LINUX_VERSION_CODE;
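
What the new syscall_exit() helper (and the existing syscall_enter()) expand to, sketched for the openat hooks used in the examples above:

/* syscall_enter(openat) / syscall_exit(openat) expand to: */
SEC("syscalls:sys_enter_openat") syscall_enter_openat	/* (args) */
SEC("syscalls:sys_exit_openat")  syscall_exit_openat	/* (args) */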

View File

@ -0,0 +1,24 @@
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#ifndef _UAPI_LINUX_SOCKET_H
#define _UAPI_LINUX_SOCKET_H
/*
* Desired design of maximum size and alignment (see RFC2553)
*/
#define _K_SS_MAXSIZE 128 /* Implementation specific max size */
#define _K_SS_ALIGNSIZE (__alignof__ (struct sockaddr *))
/* Implementation specific desired alignment */
typedef unsigned short __kernel_sa_family_t;
struct __kernel_sockaddr_storage {
__kernel_sa_family_t ss_family; /* address family */
/* Following field(s) are implementation specific */
char __data[_K_SS_MAXSIZE - sizeof(unsigned short)];
/* space to achieve desired size, */
/* _SS_MAXSIZE value minus size of ss_family */
} __attribute__ ((aligned(_K_SS_ALIGNSIZE))); /* force desired alignment */
#define sockaddr_storage __kernel_sockaddr_storage
#endif /* _UAPI_LINUX_SOCKET_H */
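
A quick host-side sanity check of the size contract spelled out above (the include path is hypothetical; point it at wherever this copied header lands):

#include <assert.h>
#include <stdio.h>
#include "linux/socket.h"	/* the header above; the path is an assumption */

int main(void)
{
	static_assert(sizeof(struct sockaddr_storage) == _K_SS_MAXSIZE,
		      "ss_family + __data must add up to 128 bytes");
	printf("alignment: %zu\n", (size_t)__alignof__(struct sockaddr_storage));
	return 0;
}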

View File

@ -0,0 +1,23 @@
[
{
"ArchStdEvent": "BR_IMMED_SPEC",
},
{
"ArchStdEvent": "BR_RETURN_SPEC",
},
{
"ArchStdEvent": "BR_INDIRECT_SPEC",
},
{
"PublicDescription": "Mispredicted or not predicted branch speculatively executed",
"EventCode": "0x10",
"EventName": "BR_MIS_PRED",
"BriefDescription": "Branch mispredicted"
},
{
"PublicDescription": "Predictable branch speculatively executed",
"EventCode": "0x12",
"EventName": "BR_PRED",
"BriefDescription": "Predictable branch"
},
]

View File

@ -0,0 +1,26 @@
[
{
"ArchStdEvent": "BUS_ACCESS_RD",
},
{
"ArchStdEvent": "BUS_ACCESS_WR",
},
{
"ArchStdEvent": "BUS_ACCESS_SHARED",
},
{
"ArchStdEvent": "BUS_ACCESS_NOT_SHARED",
},
{
"ArchStdEvent": "BUS_ACCESS_NORMAL",
},
{
"ArchStdEvent": "BUS_ACCESS_PERIPH",
},
{
"PublicDescription": "Bus access",
"EventCode": "0x19",
"EventName": "BUS_ACCESS",
"BriefDescription": "Bus access"
},
]

View File

@ -0,0 +1,191 @@
[
{
"ArchStdEvent": "L1D_CACHE_RD",
},
{
"ArchStdEvent": "L1D_CACHE_WR",
},
{
"ArchStdEvent": "L1D_CACHE_REFILL_RD",
},
{
"ArchStdEvent": "L1D_CACHE_INVAL",
},
{
"ArchStdEvent": "L1D_TLB_REFILL_RD",
},
{
"ArchStdEvent": "L1D_TLB_REFILL_WR",
},
{
"ArchStdEvent": "L2D_CACHE_RD",
},
{
"ArchStdEvent": "L2D_CACHE_WR",
},
{
"ArchStdEvent": "L2D_CACHE_REFILL_RD",
},
{
"ArchStdEvent": "L2D_CACHE_REFILL_WR",
},
{
"ArchStdEvent": "L2D_CACHE_WB_VICTIM",
},
{
"ArchStdEvent": "L2D_CACHE_WB_CLEAN",
},
{
"ArchStdEvent": "L2D_CACHE_INVAL",
},
{
"PublicDescription": "Level 1 instruction cache refill",
"EventCode": "0x01",
"EventName": "L1I_CACHE_REFILL",
"BriefDescription": "L1I cache refill"
},
{
"PublicDescription": "Level 1 instruction TLB refill",
"EventCode": "0x02",
"EventName": "L1I_TLB_REFILL",
"BriefDescription": "L1I TLB refill"
},
{
"PublicDescription": "Level 1 data cache refill",
"EventCode": "0x03",
"EventName": "L1D_CACHE_REFILL",
"BriefDescription": "L1D cache refill"
},
{
"PublicDescription": "Level 1 data cache access",
"EventCode": "0x04",
"EventName": "L1D_CACHE_ACCESS",
"BriefDescription": "L1D cache access"
},
{
"PublicDescription": "Level 1 data TLB refill",
"EventCode": "0x05",
"EventName": "L1D_TLB_REFILL",
"BriefDescription": "L1D TLB refill"
},
{
"PublicDescription": "Level 1 instruction cache access",
"EventCode": "0x14",
"EventName": "L1I_CACHE_ACCESS",
"BriefDescription": "L1I cache access"
},
{
"PublicDescription": "Level 2 data cache access",
"EventCode": "0x16",
"EventName": "L2D_CACHE_ACCESS",
"BriefDescription": "L2D cache access"
},
{
"PublicDescription": "Level 2 data refill",
"EventCode": "0x17",
"EventName": "L2D_CACHE_REFILL",
"BriefDescription": "L2D cache refill"
},
{
"PublicDescription": "Level 2 data cache, Write-Back",
"EventCode": "0x18",
"EventName": "L2D_CACHE_WB",
"BriefDescription": "L2D cache Write-Back"
},
{
"PublicDescription": "Level 1 data TLB access. This event counts any load or store operation which accesses the data L1 TLB",
"EventCode": "0x25",
"EventName": "L1D_TLB_ACCESS",
"BriefDescription": "L1D TLB access"
},
{
"PublicDescription": "Level 1 instruction TLB access. This event counts any instruction fetch which accesses the instruction L1 TLB",
"EventCode": "0x26",
"EventName": "L1I_TLB_ACCESS",
"BriefDescription": "L1I TLB access"
},
{
"PublicDescription": "Level 2 access to data TLB that caused a page table walk. This event counts on any data access which causes L2D_TLB_REFILL to count",
"EventCode": "0x34",
"EventName": "L2D_TLB_ACCESS",
"BriefDescription": "L2D TLB access"
},
{
"PublicDescription": "Level 2 access to instruciton TLB that caused a page table walk. This event counts on any instruciton access which causes L2I_TLB_REFILL to count",
"EventCode": "0x35",
"EventName": "L2I_TLB_ACCESS",
"BriefDescription": "L2D TLB access"
},
{
"PublicDescription": "Branch target buffer misprediction",
"EventCode": "0x102",
"EventName": "BTB_MIS_PRED",
"BriefDescription": "BTB misprediction"
},
{
"PublicDescription": "ITB miss",
"EventCode": "0x103",
"EventName": "ITB_MISS",
"BriefDescription": "ITB miss"
},
{
"PublicDescription": "DTB miss",
"EventCode": "0x104",
"EventName": "DTB_MISS",
"BriefDescription": "DTB miss"
},
{
"PublicDescription": "Level 1 data cache late miss",
"EventCode": "0x105",
"EventName": "L1D_CACHE_LATE_MISS",
"BriefDescription": "L1D cache late miss"
},
{
"PublicDescription": "Level 1 data cache prefetch request",
"EventCode": "0x106",
"EventName": "L1D_CACHE_PREFETCH",
"BriefDescription": "L1D cache prefetch"
},
{
"PublicDescription": "Level 2 data cache prefetch request",
"EventCode": "0x107",
"EventName": "L2D_CACHE_PREFETCH",
"BriefDescription": "L2D cache prefetch"
},
{
"PublicDescription": "Level 1 stage 2 TLB refill",
"EventCode": "0x111",
"EventName": "L1_STAGE2_TLB_REFILL",
"BriefDescription": "L1 stage 2 TLB refill"
},
{
"PublicDescription": "Page walk cache level-0 stage-1 hit",
"EventCode": "0x112",
"EventName": "PAGE_WALK_L0_STAGE1_HIT",
"BriefDescription": "Page walk, L0 stage-1 hit"
},
{
"PublicDescription": "Page walk cache level-1 stage-1 hit",
"EventCode": "0x113",
"EventName": "PAGE_WALK_L1_STAGE1_HIT",
"BriefDescription": "Page walk, L1 stage-1 hit"
},
{
"PublicDescription": "Page walk cache level-2 stage-1 hit",
"EventCode": "0x114",
"EventName": "PAGE_WALK_L2_STAGE1_HIT",
"BriefDescription": "Page walk, L2 stage-1 hit"
},
{
"PublicDescription": "Page walk cache level-1 stage-2 hit",
"EventCode": "0x115",
"EventName": "PAGE_WALK_L1_STAGE2_HIT",
"BriefDescription": "Page walk, L1 stage-2 hit"
},
{
"PublicDescription": "Page walk cache level-2 stage-2 hit",
"EventCode": "0x116",
"EventName": "PAGE_WALK_L2_STAGE2_HIT",
"BriefDescription": "Page walk, L2 stage-2 hit"
},
]


@@ -0,0 +1,20 @@
[
{
"PublicDescription": "The number of core clock cycles",
"EventCode": "0x11",
"EventName": "CPU_CYCLES",
"BriefDescription": "Clock cycles"
},
{
"PublicDescription": "FSU clocking gated off cycle",
"EventCode": "0x101",
"EventName": "FSU_CLOCK_OFF_CYCLES",
"BriefDescription": "FSU clocking gated off cycle"
},
{
"PublicDescription": "Wait state cycle",
"EventCode": "0x110",
"EventName": "Wait_CYCLES",
"BriefDescription": "Wait state cycle"
},
]


@@ -1,32 +0,0 @@
[
{
"ArchStdEvent": "L1D_CACHE_RD",
},
{
"ArchStdEvent": "L1D_CACHE_WR",
},
{
"ArchStdEvent": "L1D_CACHE_REFILL_RD",
},
{
"ArchStdEvent": "L1D_CACHE_REFILL_WR",
},
{
"ArchStdEvent": "L1D_TLB_REFILL_RD",
},
{
"ArchStdEvent": "L1D_TLB_REFILL_WR",
},
{
"ArchStdEvent": "L1D_TLB_RD",
},
{
"ArchStdEvent": "L1D_TLB_WR",
},
{
"ArchStdEvent": "BUS_ACCESS_RD",
},
{
"ArchStdEvent": "BUS_ACCESS_WR",
}
]


@@ -0,0 +1,50 @@
[
{
"ArchStdEvent": "EXC_UNDEF",
},
{
"ArchStdEvent": "EXC_SVC",
},
{
"ArchStdEvent": "EXC_PABORT",
},
{
"ArchStdEvent": "EXC_DABORT",
},
{
"ArchStdEvent": "EXC_IRQ",
},
{
"ArchStdEvent": "EXC_FIQ",
},
{
"ArchStdEvent": "EXC_HVC",
},
{
"ArchStdEvent": "EXC_TRAP_PABORT",
},
{
"ArchStdEvent": "EXC_TRAP_DABORT",
},
{
"ArchStdEvent": "EXC_TRAP_OTHER",
},
{
"ArchStdEvent": "EXC_TRAP_IRQ",
},
{
"ArchStdEvent": "EXC_TRAP_FIQ",
},
{
"PublicDescription": "Exception taken",
"EventCode": "0x09",
"EventName": "EXC_TAKEN",
"BriefDescription": "Exception taken"
},
{
"PublicDescription": "Instruction architecturally executed, condition check pass, exception return",
"EventCode": "0x0a",
"EventName": "EXC_RETURN",
"BriefDescription": "Exception return"
},
]


@@ -0,0 +1,89 @@
[
{
"ArchStdEvent": "LD_SPEC",
},
{
"ArchStdEvent": "ST_SPEC",
},
{
"ArchStdEvent": "LDST_SPEC",
},
{
"ArchStdEvent": "DP_SPEC",
},
{
"ArchStdEvent": "ASE_SPEC",
},
{
"ArchStdEvent": "VFP_SPEC",
},
{
"ArchStdEvent": "PC_WRITE_SPEC",
},
{
"ArchStdEvent": "CRYPTO_SPEC",
},
{
"ArchStdEvent": "ISB_SPEC",
},
{
"ArchStdEvent": "DSB_SPEC",
},
{
"ArchStdEvent": "DMB_SPEC",
},
{
"ArchStdEvent": "RC_LD_SPEC",
},
{
"ArchStdEvent": "RC_ST_SPEC",
},
{
"PublicDescription": "Instruction architecturally executed, software increment",
"EventCode": "0x00",
"EventName": "SW_INCR",
"BriefDescription": "Software increment"
},
{
"PublicDescription": "Instruction architecturally executed",
"EventCode": "0x08",
"EventName": "INST_RETIRED",
"BriefDescription": "Instruction retired"
},
{
"PublicDescription": "Instruction architecturally executed, condition code check pass, write to CONTEXTIDR",
"EventCode": "0x0b",
"EventName": "CID_WRITE_RETIRED",
"BriefDescription": "Write to CONTEXTIDR"
},
{
"PublicDescription": "Operation speculatively executed",
"EventCode": "0x1b",
"EventName": "INST_SPEC",
"BriefDescription": "Speculatively executed"
},
{
"PublicDescription": "Instruction architecturally executed (condition check pass), write to TTBR",
"EventCode": "0x1c",
"EventName": "TTBR_WRITE_RETIRED",
"BriefDescription": "Instruction executed, TTBR write"
},
{
"PublicDescription": "Instruction architecturally executed, branch. This event counts all branches, taken or not. This excludes exception entries, debug entries and CCFAIL branches",
"EventCode": "0x21",
"EventName": "BR_RETIRED",
"BriefDescription": "Branch retired"
},
{
"PublicDescription": "Instruction architecturally executed, mispredicted branch. This event counts any branch counted by BR_RETIRED which is not correctly predicted and causes a pipeline flush",
"EventCode": "0x22",
"EventName": "BR_MISPRED_RETIRED",
"BriefDescription": "Mispredicted branch retired"
},
{
"PublicDescription": "Operation speculatively executed, NOP",
"EventCode": "0x100",
"EventName": "NOP_SPEC",
"BriefDescription": "Speculatively executed, NOP"
},
]


@@ -0,0 +1,14 @@
[
{
"ArchStdEvent": "LDREX_SPEC",
},
{
"ArchStdEvent": "STREX_PASS_SPEC",
},
{
"ArchStdEvent": "STREX_FAIL_SPEC",
},
{
"ArchStdEvent": "STREX_SPEC",
},
]


@@ -0,0 +1,29 @@
[
{
"ArchStdEvent": "MEM_ACCESS_RD",
},
{
"ArchStdEvent": "MEM_ACCESS_WR",
},
{
"ArchStdEvent": "UNALIGNED_LD_SPEC",
},
{
"ArchStdEvent": "UNALIGNED_ST_SPEC",
},
{
"ArchStdEvent": "UNALIGNED_LDST_SPEC",
},
{
"PublicDescription": "Data memory access",
"EventCode": "0x13",
"EventName": "MEM_ACCESS",
"BriefDescription": "Memory access"
},
{
"PublicDescription": "Local memory error. This event counts any correctable or uncorrectable memory error (ECC or parity) in the protected core RAMs",
"EventCode": "0x1a",
"EventName": "MEM_ERROR",
"BriefDescription": "Memory error"
},
]


@@ -0,0 +1,50 @@
[
{
"PublicDescription": "Decode starved for instruction cycle",
"EventCode": "0x108",
"EventName": "DECODE_STALL",
"BriefDescription": "Decode starved"
},
{
"PublicDescription": "Op dispatch stalled cycle",
"EventCode": "0x109",
"EventName": "DISPATCH_STALL",
"BriefDescription": "Dispatch stalled"
},
{
"PublicDescription": "IXA Op non-issue",
"EventCode": "0x10a",
"EventName": "IXA_STALL",
"BriefDescription": "IXA stalled"
},
{
"PublicDescription": "IXB Op non-issue",
"EventCode": "0x10b",
"EventName": "IXB_STALL",
"BriefDescription": "IXB stalled"
},
{
"PublicDescription": "BX Op non-issue",
"EventCode": "0x10c",
"EventName": "BX_STALL",
"BriefDescription": "BX stalled"
},
{
"PublicDescription": "LX Op non-issue",
"EventCode": "0x10d",
"EventName": "LX_STALL",
"BriefDescription": "LX stalled"
},
{
"PublicDescription": "SX Op non-issue",
"EventCode": "0x10e",
"EventName": "SX_STALL",
"BriefDescription": "SX stalled"
},
{
"PublicDescription": "FX Op non-issue",
"EventCode": "0x10f",
"EventName": "FX_STALL",
"BriefDescription": "FX stalled"
},
]


@@ -21,6 +21,7 @@ perf-y += python-use.o
 perf-y += bp_signal.o
 perf-y += bp_signal_overflow.o
 perf-y += bp_account.o
+perf-y += wp.o
 perf-y += task-exit.o
 perf-y += sw-clock.o
 perf-y += mmap-thread-lookup.o


@@ -120,6 +120,16 @@ static struct test generic_tests[] = {
 		.func = test__bp_accounting,
 		.is_supported = test__bp_signal_is_supported,
 	},
+	{
+		.desc = "Watchpoint",
+		.func = test__wp,
+		.is_supported = test__wp_is_supported,
+		.subtest = {
+			.skip_if_fail = false,
+			.get_nr = test__wp_subtest_get_nr,
+			.get_desc = test__wp_subtest_get_desc,
+		},
+	},
 	{
 		.desc = "Number of exit events of a simple workload",
 		.func = test__task_exit,


@@ -8,7 +8,7 @@
 static int perf_evsel__test_field(struct perf_evsel *evsel, const char *name,
 				  int size, bool should_be_signed)
 {
-	struct format_field *field = perf_evsel__field(evsel, name);
+	struct tep_format_field *field = perf_evsel__field(evsel, name);
 	int is_signed;
 	int ret = 0;
@@ -17,7 +17,7 @@ static int perf_evsel__test_field(struct perf_evsel *evsel, const char *name,
 		return -1;
 	}
 
-	is_signed = !!(field->flags | FIELD_IS_SIGNED);
+	is_signed = !!(field->flags | TEP_FIELD_IS_SIGNED);
 	if (should_be_signed && !is_signed) {
 		pr_debug("%s: \"%s\" signedness(%d) is wrong, should be %d\n",
 			 evsel->name, name, is_signed, should_be_signed);

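One detail worth flagging in the hunk above, unchanged by this rename: the signedness check uses bitwise OR, so field->flags | TEP_FIELD_IS_SIGNED is non-zero for every field and is_signed always evaluates to 1. A mask test is presumably what was intended, along the lines of:

	is_signed = !!(field->flags & TEP_FIELD_IS_SIGNED);	/* test the flag bit */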

@@ -48,7 +48,7 @@ trace_libc_inet_pton_backtrace() {
 	*)
 		eventattr='max-stack=3'
 		echo "getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
-		echo ".*\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" >> $expected
+		echo ".*(\+0x[[:xdigit:]]+|\[unknown\])[[:space:]]\(.*/bin/ping.*\)$" >> $expected
 		;;
 	esac


@@ -59,6 +59,9 @@ int test__python_use(struct test *test, int subtest);
 int test__bp_signal(struct test *test, int subtest);
 int test__bp_signal_overflow(struct test *test, int subtest);
 int test__bp_accounting(struct test *test, int subtest);
+int test__wp(struct test *test, int subtest);
+const char *test__wp_subtest_get_desc(int subtest);
+int test__wp_subtest_get_nr(void);
 int test__task_exit(struct test *test, int subtest);
 int test__mem(struct test *test, int subtest);
 int test__sw_clock_freq(struct test *test, int subtest);
@@ -106,6 +109,7 @@ int test__unit_number__scnprint(struct test *test, int subtest);
 int test__mem2node(struct test *t, int subtest);
 bool test__bp_signal_is_supported(void);
+bool test__wp_is_supported(void);
 
 #if defined(__arm__) || defined(__aarch64__)
 #ifdef HAVE_DWARF_UNWIND_SUPPORT

tools/perf/tests/wp.c (new file, 241 lines)

@@ -0,0 +1,241 @@
// SPDX-License-Identifier: GPL-2.0
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hw_breakpoint.h>
#include "tests.h"
#include "debug.h"
#include "cloexec.h"

#define WP_TEST_ASSERT_VAL(fd, text, val)	 \
do {						 \
	long long count;			 \
	wp_read(fd, &count, sizeof(long long));	 \
	TEST_ASSERT_VAL(text, count == val);	 \
} while (0)

volatile u64 data1;
volatile u8 data2[3];

static int wp_read(int fd, long long *count, int size)
{
	int ret = read(fd, count, size);

	if (ret != size) {
		pr_debug("failed to read: %d\n", ret);
		return -1;
	}
	return 0;
}

static void get__perf_event_attr(struct perf_event_attr *attr, int wp_type,
				 void *wp_addr, unsigned long wp_len)
{
	memset(attr, 0, sizeof(struct perf_event_attr));
	attr->type           = PERF_TYPE_BREAKPOINT;
	attr->size           = sizeof(struct perf_event_attr);
	attr->config         = 0;
	attr->bp_type        = wp_type;
	attr->bp_addr        = (unsigned long)wp_addr;
	attr->bp_len         = wp_len;
	attr->sample_period  = 1;
	attr->sample_type    = PERF_SAMPLE_IP;
	attr->exclude_kernel = 1;
	attr->exclude_hv     = 1;
}

static int __event(int wp_type, void *wp_addr, unsigned long wp_len)
{
	int fd;
	struct perf_event_attr attr;

	get__perf_event_attr(&attr, wp_type, wp_addr, wp_len);
	fd = sys_perf_event_open(&attr, 0, -1, -1,
				 perf_event_open_cloexec_flag());
	if (fd < 0)
		pr_debug("failed opening event %x\n", attr.bp_type);

	return fd;
}

static int wp_ro_test(void)
{
	int fd;
	unsigned long tmp, tmp1 = rand();

	fd = __event(HW_BREAKPOINT_R, (void *)&data1, sizeof(data1));
	if (fd < 0)
		return -1;

	tmp = data1;
	WP_TEST_ASSERT_VAL(fd, "RO watchpoint", 1);

	data1 = tmp1 + tmp;
	WP_TEST_ASSERT_VAL(fd, "RO watchpoint", 1);

	close(fd);
	return 0;
}

static int wp_wo_test(void)
{
	int fd;
	unsigned long tmp, tmp1 = rand();

	fd = __event(HW_BREAKPOINT_W, (void *)&data1, sizeof(data1));
	if (fd < 0)
		return -1;

	tmp = data1;
	WP_TEST_ASSERT_VAL(fd, "WO watchpoint", 0);

	data1 = tmp1 + tmp;
	WP_TEST_ASSERT_VAL(fd, "WO watchpoint", 1);

	close(fd);
	return 0;
}

static int wp_rw_test(void)
{
	int fd;
	unsigned long tmp, tmp1 = rand();

	fd = __event(HW_BREAKPOINT_R | HW_BREAKPOINT_W, (void *)&data1,
		     sizeof(data1));
	if (fd < 0)
		return -1;

	tmp = data1;
	WP_TEST_ASSERT_VAL(fd, "RW watchpoint", 1);

	data1 = tmp1 + tmp;
	WP_TEST_ASSERT_VAL(fd, "RW watchpoint", 2);

	close(fd);
	return 0;
}

static int wp_modify_test(void)
{
	int fd, ret;
	unsigned long tmp = rand();
	struct perf_event_attr new_attr;

	fd = __event(HW_BREAKPOINT_W, (void *)&data1, sizeof(data1));
	if (fd < 0)
		return -1;

	data1 = tmp;
	WP_TEST_ASSERT_VAL(fd, "Modify watchpoint", 1);

	/* Modify watchpoint with disabled = 1 */
	get__perf_event_attr(&new_attr, HW_BREAKPOINT_W, (void *)&data2[0],
			     sizeof(u8) * 2);
	new_attr.disabled = 1;
	ret = ioctl(fd, PERF_EVENT_IOC_MODIFY_ATTRIBUTES, &new_attr);
	if (ret < 0) {
		pr_debug("ioctl(PERF_EVENT_IOC_MODIFY_ATTRIBUTES) failed\n");
		close(fd);
		return ret;
	}

	data2[1] = tmp; /* Not Counted */
	WP_TEST_ASSERT_VAL(fd, "Modify watchpoint", 1);

	/* Enable the event */
	ret = ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	if (ret < 0) {
		pr_debug("Failed to enable event\n");
		close(fd);
		return ret;
	}

	data2[1] = tmp; /* Counted */
	WP_TEST_ASSERT_VAL(fd, "Modify watchpoint", 2);

	data2[2] = tmp; /* Not Counted */
	WP_TEST_ASSERT_VAL(fd, "Modify watchpoint", 2);

	close(fd);
	return 0;
}

static bool wp_ro_supported(void)
{
#if defined (__x86_64__) || defined (__i386__)
	return false;
#else
	return true;
#endif
}

static void wp_ro_skip_msg(void)
{
#if defined (__x86_64__) || defined (__i386__)
	pr_debug("Hardware does not support read only watchpoints.\n");
#endif
}

static struct {
	const char *desc;
	int (*target_func)(void);
	bool (*is_supported)(void);
	void (*skip_msg)(void);
} wp_testcase_table[] = {
	{
		.desc = "Read Only Watchpoint",
		.target_func = &wp_ro_test,
		.is_supported = &wp_ro_supported,
		.skip_msg = &wp_ro_skip_msg,
	},
	{
		.desc = "Write Only Watchpoint",
		.target_func = &wp_wo_test,
	},
	{
		.desc = "Read / Write Watchpoint",
		.target_func = &wp_rw_test,
	},
	{
		.desc = "Modify Watchpoint",
		.target_func = &wp_modify_test,
	},
};

int test__wp_subtest_get_nr(void)
{
	return (int)ARRAY_SIZE(wp_testcase_table);
}

const char *test__wp_subtest_get_desc(int i)
{
	if (i < 0 || i >= (int)ARRAY_SIZE(wp_testcase_table))
		return NULL;
	return wp_testcase_table[i].desc;
}

int test__wp(struct test *test __maybe_unused, int i)
{
	if (i < 0 || i >= (int)ARRAY_SIZE(wp_testcase_table))
		return TEST_FAIL;

	if (wp_testcase_table[i].is_supported &&
	    !wp_testcase_table[i].is_supported()) {
		wp_testcase_table[i].skip_msg();
		return TEST_SKIP;
	}

	return !wp_testcase_table[i].target_func() ? TEST_OK : TEST_FAIL;
}

/*
 * s390 does not yet support watchpoints via the
 * perf_event_open() system call.
 */
bool test__wp_is_supported(void)
{
#if defined(__s390x__)
	return false;
#else
	return true;
#endif
}


@@ -7,5 +7,6 @@ endif
 libperf-y += kcmp.o
 libperf-y += pkey_alloc.o
 libperf-y += prctl.o
+libperf-y += sockaddr.o
 libperf-y += socket.o
 libperf-y += statx.o


@@ -30,9 +30,36 @@ struct thread;
 
 size_t pid__scnprintf_fd(struct trace *trace, pid_t pid, int fd, char *bf, size_t size);
 
+extern struct strarray strarray__socket_families;
+
+/**
+ * augmented_arg: extra payload for syscall pointer arguments
+ *
+ * If perf_sample->raw_size is more than what a syscall sys_enter_FOO puts,
+ * then it's the argument's contents, so that we can show more than just a
+ * pointer. This will be done initially with eBPF, the start of that is at the
+ * tools/perf/examples/bpf/augmented_syscalls.c example for the openat, but
+ * will eventually be done automagically caching the running kernel tracefs
+ * events data into an eBPF C script, that then gets compiled and its .o file
+ * cached for subsequent use. For char pointers like the ones for 'open' like
+ * syscalls it's easy; for the rest we should use DWARF or, better, BTF, which
+ * is much more compact.
+ *
+ * @size: 8 if all we need is an integer, otherwise all of the augmented arg.
+ * @int_arg: will be used for integer like pointer contents, like 'accept's 'upeer_addrlen'
+ * @value: u64 aligned, for structs, pathnames
+ */
+struct augmented_arg {
+	int	size;
+	int	int_arg;
+	u64	value[];
+};
+
 /**
  * @val: value of syscall argument being formatted
  * @args: All the args, use syscall_args__val(arg, nth) to access one
+ * @augmented_args: Extra data that can be collected, for instance, with eBPF for expanding the pathname for open, etc
+ * @augmented_args_size: augmented_args total payload size
  * @thread: tid state (maps, pid, tid, etc)
  * @trace: 'perf trace' internals: all threads, etc
  * @parm: private area, may be an strarray, for instance
@@ -43,6 +70,10 @@ size_t pid__scnprintf_fd(struct trace *trace, pid_t pid, int fd, char *bf, size_
 struct syscall_arg {
 	unsigned long val;
 	unsigned char *args;
+	struct {
+		struct augmented_arg *args;
+		int		     size;
+	} augmented;
 	struct thread *thread;
 	struct trace  *trace;
 	void	      *parm;
@@ -106,6 +137,9 @@ size_t syscall_arg__scnprintf_prctl_arg2(char *bf, size_t size, struct syscall_a
 size_t syscall_arg__scnprintf_prctl_arg3(char *bf, size_t size, struct syscall_arg *arg);
 #define SCA_PRCTL_ARG3 syscall_arg__scnprintf_prctl_arg3
 
+size_t syscall_arg__scnprintf_sockaddr(char *bf, size_t size, struct syscall_arg *arg);
+#define SCA_SOCKADDR syscall_arg__scnprintf_sockaddr
+
 size_t syscall_arg__scnprintf_socket_protocol(char *bf, size_t size, struct syscall_arg *arg);
 #define SCA_SK_PROTO syscall_arg__scnprintf_socket_protocol

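To make the augmented_arg flow concrete, here is a minimal hypothetical beautifier in the same pattern as the sockaddr formatter added below: it prefers the eBPF-copied payload when one is present and falls back to printing the raw pointer value. The function name is illustrative and not part of this patch:

/* Sketch only: a string-argument beautifier using the augmented payload. */
static size_t syscall_arg__scnprintf_filename(char *bf, size_t size,
					      struct syscall_arg *arg)
{
	if (arg->augmented.args)	/* contents copied by the eBPF program */
		return scnprintf(bf, size, "\"%s\"",
				 (const char *)arg->augmented.args->value);

	return scnprintf(bf, size, "%#lx", arg->val);	/* just the pointer */
}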

@@ -0,0 +1,76 @@
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2018, Red Hat Inc, Arnaldo Carvalho de Melo <acme@redhat.com>

#include "trace/beauty/beauty.h"
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/un.h>
#include <arpa/inet.h>

static const char *socket_families[] = {
	"UNSPEC", "LOCAL", "INET", "AX25", "IPX", "APPLETALK", "NETROM",
	"BRIDGE", "ATMPVC", "X25", "INET6", "ROSE", "DECnet", "NETBEUI",
	"SECURITY", "KEY", "NETLINK", "PACKET", "ASH", "ECONET", "ATMSVC",
	"RDS", "SNA", "IRDA", "PPPOX", "WANPIPE", "LLC", "IB", "CAN", "TIPC",
	"BLUETOOTH", "IUCV", "RXRPC", "ISDN", "PHONET", "IEEE802154", "CAIF",
	"ALG", "NFC", "VSOCK",
};
DEFINE_STRARRAY(socket_families);

static size_t af_inet__scnprintf(struct sockaddr *sa, char *bf, size_t size)
{
	struct sockaddr_in *sin = (struct sockaddr_in *)sa;
	char tmp[16];
	return scnprintf(bf, size, ", port: %d, addr: %s", ntohs(sin->sin_port),
			 inet_ntop(sin->sin_family, &sin->sin_addr, tmp, sizeof(tmp)));
}

static size_t af_inet6__scnprintf(struct sockaddr *sa, char *bf, size_t size)
{
	struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)sa;
	u32 flowinfo = ntohl(sin6->sin6_flowinfo);
	char tmp[512];
	size_t printed = scnprintf(bf, size, ", port: %d, addr: %s", ntohs(sin6->sin6_port),
				   inet_ntop(sin6->sin6_family, &sin6->sin6_addr, tmp, sizeof(tmp)));
	if (flowinfo != 0)
		printed += scnprintf(bf + printed, size - printed, ", flowinfo: %u", flowinfo);
	if (sin6->sin6_scope_id != 0)
		printed += scnprintf(bf + printed, size - printed, ", scope_id: %u", sin6->sin6_scope_id);

	return printed;
}

static size_t af_local__scnprintf(struct sockaddr *sa, char *bf, size_t size)
{
	struct sockaddr_un *sun = (struct sockaddr_un *)sa;
	return scnprintf(bf, size, ", path: %s", sun->sun_path);
}

static size_t (*af_scnprintfs[])(struct sockaddr *sa, char *bf, size_t size) = {
	[AF_LOCAL] = af_local__scnprintf,
	[AF_INET]  = af_inet__scnprintf,
	[AF_INET6] = af_inet6__scnprintf,
};

static size_t syscall_arg__scnprintf_augmented_sockaddr(struct syscall_arg *arg, char *bf, size_t size)
{
	struct sockaddr *sa = (struct sockaddr *)arg->augmented.args;
	char family[32];
	size_t printed;

	strarray__scnprintf(&strarray__socket_families, family, sizeof(family), "%d", sa->sa_family);
	printed = scnprintf(bf, size, "{ .family: %s", family);

	if (sa->sa_family < ARRAY_SIZE(af_scnprintfs) && af_scnprintfs[sa->sa_family])
		printed += af_scnprintfs[sa->sa_family](sa, bf + printed, size - printed);

	return printed + scnprintf(bf + printed, size - printed, " }");
}

size_t syscall_arg__scnprintf_sockaddr(char *bf, size_t size, struct syscall_arg *arg)
{
	if (arg->augmented.args)
		return syscall_arg__scnprintf_augmented_sockaddr(arg, bf, size);

	return scnprintf(bf, size, "%#lx", arg->val);
}


@@ -73,6 +73,7 @@ libperf-y += vdso.o
 libperf-y += counts.o
 libperf-y += stat.o
 libperf-y += stat-shadow.o
+libperf-y += stat-display.o
 libperf-y += record.o
 libperf-y += srcline.o
 libperf-y += data.o


@@ -906,9 +906,8 @@ out_free:
 	return err;
 }
 
-int perf_event__process_auxtrace_info(struct perf_tool *tool __maybe_unused,
-				      union perf_event *event,
-				      struct perf_session *session)
+int perf_event__process_auxtrace_info(struct perf_session *session,
+				      union perf_event *event)
 {
 	enum auxtrace_type type = event->auxtrace_info.type;
@@ -932,9 +931,8 @@ int perf_event__process_auxtrace_info(struct perf_tool *tool __maybe_unused,
 	}
 }
 
-s64 perf_event__process_auxtrace(struct perf_tool *tool,
-				 union perf_event *event,
-				 struct perf_session *session)
+s64 perf_event__process_auxtrace(struct perf_session *session,
+				 union perf_event *event)
 {
 	s64 err;
@@ -950,7 +948,7 @@ s64 perf_event__process_auxtrace(struct perf_session *session,
 	if (!session->auxtrace || event->header.type != PERF_RECORD_AUXTRACE)
 		return -EINVAL;
 
-	err = session->auxtrace->process_auxtrace_event(session, event, tool);
+	err = session->auxtrace->process_auxtrace_event(session, event, session->tool);
 	if (err < 0)
 		return err;
@@ -1185,9 +1183,8 @@ void events_stats__auxtrace_error_warn(const struct events_stats *stats)
 	}
 }
 
-int perf_event__process_auxtrace_error(struct perf_tool *tool __maybe_unused,
-				       union perf_event *event,
-				       struct perf_session *session)
+int perf_event__process_auxtrace_error(struct perf_session *session,
+				       union perf_event *event)
 {
 	if (auxtrace__dont_decode(session))
 		return 0;
@@ -1196,11 +1193,12 @@ int perf_event__process_auxtrace_error(struct perf_session *session,
 	return 0;
 }
 
-static int __auxtrace_mmap__read(struct auxtrace_mmap *mm,
+static int __auxtrace_mmap__read(struct perf_mmap *map,
 				 struct auxtrace_record *itr,
 				 struct perf_tool *tool, process_auxtrace_t fn,
 				 bool snapshot, size_t snapshot_size)
 {
+	struct auxtrace_mmap *mm = &map->auxtrace_mmap;
 	u64 head, old = mm->prev, offset, ref;
 	unsigned char *data = mm->base;
 	size_t size, head_off, old_off, len1, len2, padding;
@@ -1287,7 +1285,7 @@ static int __auxtrace_mmap__read(struct auxtrace_mmap *mm,
 	ev.auxtrace.tid = mm->tid;
 	ev.auxtrace.cpu = mm->cpu;
 
-	if (fn(tool, &ev, data1, len1, data2, len2))
+	if (fn(tool, map, &ev, data1, len1, data2, len2))
 		return -1;
 
 	mm->prev = head;
@@ -1306,18 +1304,18 @@ static int __auxtrace_mmap__read(struct auxtrace_mmap *mm,
 	return 1;
 }
 
-int auxtrace_mmap__read(struct auxtrace_mmap *mm, struct auxtrace_record *itr,
+int auxtrace_mmap__read(struct perf_mmap *map, struct auxtrace_record *itr,
 			struct perf_tool *tool, process_auxtrace_t fn)
 {
-	return __auxtrace_mmap__read(mm, itr, tool, fn, false, 0);
+	return __auxtrace_mmap__read(map, itr, tool, fn, false, 0);
 }
 
-int auxtrace_mmap__read_snapshot(struct auxtrace_mmap *mm,
+int auxtrace_mmap__read_snapshot(struct perf_mmap *map,
 				 struct auxtrace_record *itr,
 				 struct perf_tool *tool, process_auxtrace_t fn,
 				 size_t snapshot_size)
 {
-	return __auxtrace_mmap__read(mm, itr, tool, fn, true, snapshot_size);
+	return __auxtrace_mmap__read(map, itr, tool, fn, true, snapshot_size);
 }
 
 /**

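The net effect of the auxtrace changes above is that these handlers no longer take a struct perf_tool argument; callers pass the session and the tool is recovered via session->tool. A sketch of a handler in the new style, with an illustrative name and body that are not part of this patch:

/* Illustrative only: new-style auxtrace handler signature. */
static int my_process_auxtrace_info(struct perf_session *session,
				    union perf_event *event)
{
	struct perf_tool *tool = session->tool;	/* tool now reached via the session */

	pr_debug("auxtrace_info type %d (tool %p)\n",
		 event->auxtrace_info.type, tool);
	return 0;
}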
Some files were not shown because too many files have changed in this diff.