License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it.
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied to
a file was done in a spreadsheet of side-by-side results from the output
of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
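As a rough illustration (not taken from the patch itself), the two comment
styles such a script has to distinguish look like this, following the
kernel's SPDX placement convention (headers keep C-style comments, .c
source files use a C++-style comment on the first line):

        /* SPDX-License-Identifier: GPL-2.0 */      (headers, as at the top of this file)
        // SPDX-License-Identifier: GPL-2.0         (.c source files)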
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
RAS: Add a tracepoint for reporting memory controller events
Add a new tracepoint-based hardware events report method for
reporting Memory Controller events.
Part of the description below is shamelessly copied from Tony
Luck's notes about the Hardware Error BoF during LPC 2010 [1].
Tony, thanks for your notes and discussions to generate the
h/w error reporting requirements.
[1] http://lwn.net/Articles/416669/
We have several subsystems & methods for reporting hardware errors:
1) EDAC ("Error Detection and Correction"). In its original form
this consisted of a platform specific driver that read topology
information and error counts from chipset registers and reported
the results via a sysfs interface.
2) mcelog - x86 specific decoding of machine check bank registers
reporting in binary form via /dev/mcelog. Recent additions make use
of the APEI extensions that were documented in version 4.0a of the
ACPI specification to acquire more information about errors without
having to rely on reading chipset registers directly. A user-level
program decodes these into a somewhat human-readable format.
3) drivers/edac/mce_amd.c - this driver hooks into the mcelog path and
decodes errors reported via machine check bank registers in AMD
processors to the console log using printk();
Each of these mechanisms has a band of followers ... and none
of them appear to meet all the needs of all users.
As part of a RAS subsystem, let's encapsulate the memory error hardware
events into a trace facility.
The tracepoint printk will be displayed like:
mc_event: [quant] (Corrected|Uncorrected|Fatal) error:[error msg] on [label] ([location] [edac_mc detail] [driver_detail])
Where:
[quant] is the quantity of errors
[error msg] is the driver-specific error message
(e. g. "memory read", "bus error", ...);
[location] is the location in terms of memory controller and
branch/channel/slot, channel/slot or csrow/channel;
[label] is the memory stick label;
[edac_mc detail] describes the address location of the error
and the syndrome;
[driver detail] is driver-specific error message details,
when needed/provided (e. g. "area:DMA", ...)
For example:
mc_event: 1 Corrected error:memory read on memory stick DIMM_1A (mc:0 location:0:0:0 page:0x586b6e offset:0xa66 grain:32 syndrome:0x0 area:DMA)
Of course, any userspace tools meant to handle errors should not parse
the above data. They should, instead, use the binary fields provided by
the tracepoint, mapping them directly into their Management Information
Base.
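As a rough, hypothetical sketch (not part of this patch), a driver reporting
the corrected error above would hand the same information to the generated
trace_mc_event() helper as binary arguments, in the order given by the
tracepoint's TP_PROTO later in this file; all values are illustrative and
HW_EVENT_ERR_CORRECTED is the corrected-error type from <linux/edac.h>:

        trace_mc_event(HW_EVENT_ERR_CORRECTED,  /* err_type */
                       "memory read",           /* driver-specific error message */
                       "DIMM_1A",               /* memory stick label */
                       1,                       /* error count */
                       0,                       /* mc_index */
                       0, 0, 0,                 /* top/middle/low layer */
                       0x586b6e,                /* address */
                       5,                       /* grain_bits (grain = 32) */
                       0x0,                     /* syndrome */
                       "area:DMA");             /* driver_detail */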
NOTE: The original patch was providing an additional mechanism for
MCA-based trace events that also contained MCA error register data.
However, as no agreement was reached so far for the MCA-based trace
events, for now, let's add events only for memory errors.
A later patch is planned to change the tracepoint for those types
of events.
Cc: Aristeu Rozanski <arozansk@redhat.com>
Cc: Doug Thompson <norsk5@yahoo.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
#undef TRACE_SYSTEM
#define TRACE_SYSTEM ras
#define TRACE_INCLUDE_FILE ras_event

#if !defined(_TRACE_HW_EVENT_MC_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_HW_EVENT_MC_H

#include <linux/tracepoint.h>
#include <linux/edac.h>
#include <linux/ktime.h>
#include <linux/pci.h>
#include <linux/aer.h>
#include <linux/cper.h>
#include <linux/mm.h>

/*
 * MCE Extended Error Log trace event
 *
 * These events are generated when hardware detects a corrected or
 * uncorrected event.
 */

/* memory trace event */

#if defined(CONFIG_ACPI_EXTLOG) || defined(CONFIG_ACPI_EXTLOG_MODULE)
TRACE_EVENT(extlog_mem_event,
        TP_PROTO(struct cper_sec_mem_err *mem,
                 u32 err_seq,
                 const guid_t *fru_id,
                 const char *fru_text,
                 u8 sev),

        TP_ARGS(mem, err_seq, fru_id, fru_text, sev),

        TP_STRUCT__entry(
                __field(u32, err_seq)
                __field(u8, etype)
                __field(u8, sev)
                __field(u64, pa)
                __field(u8, pa_mask_lsb)
                __field_struct(guid_t, fru_id)
                __string(fru_text, fru_text)
                __field_struct(struct cper_mem_err_compact, data)
        ),

        TP_fast_assign(
                __entry->err_seq = err_seq;
                if (mem->validation_bits & CPER_MEM_VALID_ERROR_TYPE)
                        __entry->etype = mem->error_type;
                else
                        __entry->etype = ~0;
                __entry->sev = sev;
                if (mem->validation_bits & CPER_MEM_VALID_PA)
                        __entry->pa = mem->physical_addr;
                else
                        __entry->pa = ~0ull;

                if (mem->validation_bits & CPER_MEM_VALID_PA_MASK)
                        __entry->pa_mask_lsb = (u8)__ffs64(mem->physical_addr_mask);
                else
                        __entry->pa_mask_lsb = ~0;
                __entry->fru_id = *fru_id;
                __assign_str(fru_text, fru_text);
                cper_mem_err_pack(mem, &__entry->data);
        ),

        TP_printk("{%d} %s error: %s physical addr: %016llx (mask lsb: %x) %sFRU: %pUl %.20s",
                  __entry->err_seq,
                  cper_severity_str(__entry->sev),
                  cper_mem_err_type_str(__entry->etype),
                  __entry->pa,
                  __entry->pa_mask_lsb,
                  cper_mem_err_unpack(p, &__entry->data),
                  &__entry->fru_id,
                  __get_str(fru_text))
);
#endif
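/*
 * Hypothetical usage sketch: an ACPI extended error log consumer that has
 * decoded a CPER memory error section could emit this event roughly as
 * below; all variables are hypothetical:
 *
 *	struct cper_sec_mem_err *mem = decoded_section;
 *	trace_extlog_mem_event(mem, err_seq, &fru_id, fru_text, severity);
 */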

/*
 * Hardware Events Report
 *
 * These events are generated when hardware detects a corrected or
 * uncorrected event, and are meant to replace the current API to report
 * errors defined on both EDAC and MCE subsystems.
 *
 * FIXME: Add events for handling memory errors originating from the
 * MCE subsystem.
 */

/*
 * Hardware-independent Memory Controller specific events
 */

/*
 * Default error mechanisms for Memory Controller errors (CE and UE)
 */
TRACE_EVENT(mc_event,

        TP_PROTO(const unsigned int err_type,
                 const char *error_msg,
                 const char *label,
                 const int error_count,
                 const u8 mc_index,
                 const s8 top_layer,
                 const s8 mid_layer,
                 const s8 low_layer,
                 unsigned long address,
                 const u8 grain_bits,
                 unsigned long syndrome,
                 const char *driver_detail),

        TP_ARGS(err_type, error_msg, label, error_count, mc_index,
                top_layer, mid_layer, low_layer, address, grain_bits,
                syndrome, driver_detail),

        TP_STRUCT__entry(
                __field(unsigned int, error_type)
                __string(msg, error_msg)
                __string(label, label)
                __field(u16, error_count)
                __field(u8, mc_index)
                __field(s8, top_layer)
                __field(s8, middle_layer)
                __field(s8, lower_layer)
                __field(long, address)
                __field(u8, grain_bits)
                __field(long, syndrome)
                __string(driver_detail, driver_detail)
        ),

        TP_fast_assign(
                __entry->error_type = err_type;
                __assign_str(msg, error_msg);
                __assign_str(label, label);
                __entry->error_count = error_count;
                __entry->mc_index = mc_index;
                __entry->top_layer = top_layer;
                __entry->middle_layer = mid_layer;
                __entry->lower_layer = low_layer;
                __entry->address = address;
                __entry->grain_bits = grain_bits;
                __entry->syndrome = syndrome;
                __assign_str(driver_detail, driver_detail);
        ),

        TP_printk("%d %s error%s:%s%s on %s (mc:%d location:%d:%d:%d address:0x%08lx grain:%d syndrome:0x%08lx%s%s)",
                  __entry->error_count,
                  mc_event_error_type(__entry->error_type),
                  __entry->error_count > 1 ? "s" : "",
                  __get_str(msg)[0] ? " " : "",
                  __get_str(msg),
                  __get_str(label),
                  __entry->mc_index,
                  __entry->top_layer,
                  __entry->middle_layer,
                  __entry->lower_layer,
                  __entry->address,
                  1 << __entry->grain_bits,
                  __entry->syndrome,
                  __get_str(driver_detail)[0] ? " " : "",
                  __get_str(driver_detail))
);

/*
 * ARM Processor Events Report
 *
 * This event is generated when hardware detects an ARM processor error
 * has occurred. UEFI 2.6 spec section N.2.4.4.
 */
TRACE_EVENT(arm_event,

        TP_PROTO(const struct cper_sec_proc_arm *proc),

        TP_ARGS(proc),

        TP_STRUCT__entry(
                __field(u64, mpidr)
                __field(u64, midr)
                __field(u32, running_state)
                __field(u32, psci_state)
                __field(u8, affinity)
        ),

        TP_fast_assign(
                if (proc->validation_bits & CPER_ARM_VALID_AFFINITY_LEVEL)
                        __entry->affinity = proc->affinity_level;
                else
                        __entry->affinity = ~0;
                if (proc->validation_bits & CPER_ARM_VALID_MPIDR)
                        __entry->mpidr = proc->mpidr;
                else
                        __entry->mpidr = 0ULL;
                __entry->midr = proc->midr;
                if (proc->validation_bits & CPER_ARM_VALID_RUNNING_STATE) {
                        __entry->running_state = proc->running_state;
                        __entry->psci_state = proc->psci_state;
                } else {
                        __entry->running_state = ~0;
                        __entry->psci_state = ~0;
                }
        ),

        TP_printk("affinity level: %d; MPIDR: %016llx; MIDR: %016llx; "
                  "running state: %d; PSCI state: %d",
                  __entry->affinity, __entry->mpidr, __entry->midr,
                  __entry->running_state, __entry->psci_state)
);
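/*
 * Hypothetical usage sketch: a CPER/GHES consumer that has parsed an ARM
 * processor error section could simply forward the decoded section, e.g.:
 *
 *	const struct cper_sec_proc_arm *proc = decoded_section;
 *	trace_arm_event(proc);
 */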

/*
 * Non-Standard Section Report
 *
 * This event is generated when hardware detects a hardware error event,
 * which may be of a non-standard section as defined in the UEFI spec
 * appendix "Common Platform Error Record", or of a section for which no
 * TRACE_EVENT is defined.
 */
TRACE_EVENT(non_standard_event,

        TP_PROTO(const guid_t *sec_type,
                 const guid_t *fru_id,
                 const char *fru_text,
                 const u8 sev,
                 const u8 *err,
                 const u32 len),

        TP_ARGS(sec_type, fru_id, fru_text, sev, err, len),

        TP_STRUCT__entry(
                __array(char, sec_type, UUID_SIZE)
                __array(char, fru_id, UUID_SIZE)
                __string(fru_text, fru_text)
                __field(u8, sev)
                __field(u32, len)
                __dynamic_array(u8, buf, len)
        ),

        TP_fast_assign(
                memcpy(__entry->sec_type, sec_type, UUID_SIZE);
                memcpy(__entry->fru_id, fru_id, UUID_SIZE);
                __assign_str(fru_text, fru_text);
                __entry->sev = sev;
                __entry->len = len;
                memcpy(__get_dynamic_array(buf), err, len);
        ),

        TP_printk("severity: %d; sec type:%pU; FRU: %pU %s; data len:%d; raw data:%s",
                  __entry->sev, __entry->sec_type,
                  __entry->fru_id, __get_str(fru_text),
                  __entry->len,
                  __print_hex(__get_dynamic_array(buf), __entry->len))
);
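/*
 * Hypothetical usage sketch: for a section type without a dedicated event,
 * a handler could forward the raw payload; all variables below are
 * hypothetical:
 *
 *	trace_non_standard_event(&sec_type, &fru_id, fru_text, sev, err, len);
 */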

/*
 * PCIe AER Trace event
 *
 * These events are generated when hardware detects a corrected or
 * uncorrected event on a PCIe device. The event report has
 * the following structure:
 *
 * char *dev_name - The name of the slot where the device resides
 *                  ([domain:]bus:device.function).
 * u32 status    - Either the correctable or uncorrectable register
 *                 indicating what error or errors have been seen
 * u8 severity   - error severity 0:NONFATAL 1:FATAL 2:CORRECTED
 */

#define aer_correctable_errors                                  \
        {PCI_ERR_COR_RCVR,      "Receiver Error"},              \
        {PCI_ERR_COR_BAD_TLP,   "Bad TLP"},                     \
        {PCI_ERR_COR_BAD_DLLP,  "Bad DLLP"},                    \
        {PCI_ERR_COR_REP_ROLL,  "RELAY_NUM Rollover"},          \
        {PCI_ERR_COR_REP_TIMER, "Replay Timer Timeout"},        \
        {PCI_ERR_COR_ADV_NFAT,  "Advisory Non-Fatal Error"},    \
        {PCI_ERR_COR_INTERNAL,  "Corrected Internal Error"},    \
        {PCI_ERR_COR_LOG_OVER,  "Header Log Overflow"}

#define aer_uncorrectable_errors                                \
        {PCI_ERR_UNC_UND,       "Undefined"},                   \
        {PCI_ERR_UNC_DLP,       "Data Link Protocol Error"},    \
        {PCI_ERR_UNC_SURPDN,    "Surprise Down Error"},         \
        {PCI_ERR_UNC_POISON_TLP, "Poisoned TLP"},               \
        {PCI_ERR_UNC_FCP,       "Flow Control Protocol Error"}, \
        {PCI_ERR_UNC_COMP_TIME, "Completion Timeout"},          \
        {PCI_ERR_UNC_COMP_ABORT, "Completer Abort"},            \
        {PCI_ERR_UNC_UNX_COMP,  "Unexpected Completion"},       \
        {PCI_ERR_UNC_RX_OVER,   "Receiver Overflow"},           \
        {PCI_ERR_UNC_MALF_TLP,  "Malformed TLP"},               \
        {PCI_ERR_UNC_ECRC,      "ECRC Error"},                  \
        {PCI_ERR_UNC_UNSUP,     "Unsupported Request Error"},   \
        {PCI_ERR_UNC_ACSV,      "ACS Violation"},               \
        {PCI_ERR_UNC_INTN,      "Uncorrectable Internal Error"},\
        {PCI_ERR_UNC_MCBTLP,    "MC Blocked TLP"},              \
        {PCI_ERR_UNC_ATOMEG,    "AtomicOp Egress Blocked"},     \
        {PCI_ERR_UNC_TLPPRE,    "TLP Prefix Blocked Error"}

TRACE_EVENT(aer_event,
        TP_PROTO(const char *dev_name,
                 const u32 status,
                 const u8 severity,
                 const u8 tlp_header_valid,
                 struct aer_header_log_regs *tlp),

        TP_ARGS(dev_name, status, severity, tlp_header_valid, tlp),

        TP_STRUCT__entry(
                __string(dev_name, dev_name)
                __field(u32, status)
                __field(u8, severity)
                __field(u8, tlp_header_valid)
                __array(u32, tlp_header, 4)
        ),

        TP_fast_assign(
                __assign_str(dev_name, dev_name);
                __entry->status = status;
                __entry->severity = severity;
                __entry->tlp_header_valid = tlp_header_valid;
                if (tlp_header_valid) {
                        __entry->tlp_header[0] = tlp->dw0;
                        __entry->tlp_header[1] = tlp->dw1;
                        __entry->tlp_header[2] = tlp->dw2;
                        __entry->tlp_header[3] = tlp->dw3;
                }
        ),

        TP_printk("%s PCIe Bus Error: severity=%s, %s, TLP Header=%s\n",
                  __get_str(dev_name),
                  __entry->severity == AER_CORRECTABLE ? "Corrected" :
                          __entry->severity == AER_FATAL ?
                          "Fatal" : "Uncorrected, non-fatal",
                  __entry->severity == AER_CORRECTABLE ?
                          __print_flags(__entry->status, "|", aer_correctable_errors) :
                          __print_flags(__entry->status, "|", aer_uncorrectable_errors),
                  __entry->tlp_header_valid ?
                          __print_array(__entry->tlp_header, 4, 4) :
                          "Not available")
);
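/*
 * Hypothetical usage sketch: the AER handling path could report a
 * corrected error that carries no TLP header log roughly as below;
 * dev_name and status are hypothetical:
 *
 *	trace_aer_event(dev_name, status, AER_CORRECTABLE, 0, NULL);
 */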

/*
 * memory-failure recovery action result event
 *
 * unsigned long pfn - Page Frame Number of the corrupted page
 * int type          - Page types of the corrupted page
 * int result        - Result of recovery action
 */

#ifdef CONFIG_MEMORY_FAILURE
#define MF_ACTION_RESULT                \
        EM ( MF_IGNORED, "Ignored" )    \
        EM ( MF_FAILED,  "Failed" )     \
        EM ( MF_DELAYED, "Delayed" )    \
        EMe ( MF_RECOVERED, "Recovered" )

#define MF_PAGE_TYPE                                                    \
        EM ( MF_MSG_KERNEL, "reserved kernel page" )                    \
        EM ( MF_MSG_KERNEL_HIGH_ORDER, "high-order kernel page" )       \
        EM ( MF_MSG_SLAB, "kernel slab page" )                          \
        EM ( MF_MSG_DIFFERENT_COMPOUND, "different compound page after locking" ) \
        EM ( MF_MSG_POISONED_HUGE, "huge page already hardware poisoned" ) \
        EM ( MF_MSG_HUGE, "huge page" )                                 \
        EM ( MF_MSG_FREE_HUGE, "free huge page" )                       \
        EM ( MF_MSG_NON_PMD_HUGE, "non-pmd-sized huge page" )           \
        EM ( MF_MSG_UNMAP_FAILED, "unmapping failed page" )             \
        EM ( MF_MSG_DIRTY_SWAPCACHE, "dirty swapcache page" )           \
        EM ( MF_MSG_CLEAN_SWAPCACHE, "clean swapcache page" )           \
        EM ( MF_MSG_DIRTY_MLOCKED_LRU, "dirty mlocked LRU page" )       \
        EM ( MF_MSG_CLEAN_MLOCKED_LRU, "clean mlocked LRU page" )       \
        EM ( MF_MSG_DIRTY_UNEVICTABLE_LRU, "dirty unevictable LRU page" ) \
        EM ( MF_MSG_CLEAN_UNEVICTABLE_LRU, "clean unevictable LRU page" ) \
        EM ( MF_MSG_DIRTY_LRU, "dirty LRU page" )                       \
        EM ( MF_MSG_CLEAN_LRU, "clean LRU page" )                       \
        EM ( MF_MSG_TRUNCATED_LRU, "already truncated LRU page" )       \
        EM ( MF_MSG_BUDDY, "free buddy page" )                          \
        EM ( MF_MSG_BUDDY_2ND, "free buddy page (2nd try)" )            \
        EM ( MF_MSG_DAX, "dax page" )                                   \
        EM ( MF_MSG_UNSPLIT_THP, "unsplit thp" )                        \
        EMe ( MF_MSG_UNKNOWN, "unknown page" )

/*
 * First define the enums in MF_ACTION_RESULT to be exported to userspace
 * via TRACE_DEFINE_ENUM().
 */
#undef EM
#undef EMe
#define EM(a, b) TRACE_DEFINE_ENUM(a);
#define EMe(a, b) TRACE_DEFINE_ENUM(a);

MF_ACTION_RESULT
MF_PAGE_TYPE

/*
 * Now redefine the EM() and EMe() macros to map the enums to the strings
 * that will be printed in the output.
 */
#undef EM
#undef EMe
#define EM(a, b)  { a, b },
#define EMe(a, b) { a, b }

TRACE_EVENT(memory_failure_event,
        TP_PROTO(unsigned long pfn,
                 int type,
                 int result),

        TP_ARGS(pfn, type, result),

        TP_STRUCT__entry(
                __field(unsigned long, pfn)
                __field(int, type)
                __field(int, result)
        ),

        TP_fast_assign(
                __entry->pfn = pfn;
                __entry->type = type;
                __entry->result = result;
        ),

        TP_printk("pfn %#lx: recovery action for %s: %s",
                  __entry->pfn,
                  __print_symbolic(__entry->type, MF_PAGE_TYPE),
                  __print_symbolic(__entry->result, MF_ACTION_RESULT)
        )
);
#endif /* CONFIG_MEMORY_FAILURE */
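/*
 * Hypothetical usage sketch: the memory-failure recovery path could record
 * its outcome for a poisoned page as below (only available when
 * CONFIG_MEMORY_FAILURE is set); pfn is hypothetical:
 *
 *	trace_memory_failure_event(pfn, MF_MSG_DIRTY_LRU, MF_RECOVERED);
 */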
#endif /* _TRACE_HW_EVENT_MC_H */

/* This part must be outside protection */
#include <trace/define_trace.h>