Merge remote-tracking branch 'torvalds/master' into perf/core

To pick up BPF changes we'll need.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Arnaldo Carvalho de Melo 2019-11-26 11:06:19 -03:00
commit 2ea352d596
3187 changed files with 194867 additions and 59759 deletions


@ -0,0 +1,57 @@
What: /sys/kernel/debug/hisi_hpre/<bdf>/cluster[0-3]/regs
Date: Sep 2019
Contact: linux-crypto@vger.kernel.org
Description: Dump debug registers from the HPRE cluster.
Only available for PF.
What: /sys/kernel/debug/hisi_hpre/<bdf>/cluster[0-3]/cluster_ctrl
Date: Sep 2019
Contact: linux-crypto@vger.kernel.org
Description: Write the index of the HPRE core to select within the cluster to
this file; the debug information of that core can then be read.
Only available for PF.
What: /sys/kernel/debug/hisi_hpre/<bdf>/rdclr_en
Date: Sep 2019
Contact: linux-crypto@vger.kernel.org
Description: Read-clear control for the HPRE cores' debug registers. 1 means
read-clear is enabled, 0 means it is disabled. Writing to this file
has no other functional effect; it only enables or disables clearing
of the counters after these registers are read.
Only available for PF.
What: /sys/kernel/debug/hisi_hpre/<bdf>/current_qm
Date: Sep 2019
Contact: linux-crypto@vger.kernel.org
Description: One HPRE controller has one PF and multiple VFs; each function
has a QM. This file selects the QM that the qm directory below refers to.
Only available for PF.
What: /sys/kernel/debug/hisi_hpre/<bdf>/regs
Date: Sep 2019
Contact: linux-crypto@vger.kernel.org
Description: Dump debug registers from the HPRE.
Only available for PF.
What: /sys/kernel/debug/hisi_hpre/<bdf>/qm/qm_regs
Date: Sep 2019
Contact: linux-crypto@vger.kernel.org
Description: Dump debug registers from the QM.
Available for PF and VF in host. VF in guest currently only
has one debug register.
What: /sys/kernel/debug/hisi_hpre/<bdf>/qm/current_q
Date: Sep 2019
Contact: linux-crypto@vger.kernel.org
Description: One QM may contain multiple queues. Select a specific queue to
show its debug registers in qm_regs above.
Only available for PF.
What: /sys/kernel/debug/hisi_hpre/<bdf>/qm/clear_enable
Date: Sep 2019
Contact: linux-crypto@vger.kernel.org
Description: Read-clear control for the QM debug registers (qm_regs). 1 means
read-clear is enabled, 0 means it is disabled.
Writing to this file has no other functional effect; it only enables
or disables clearing of the counters after these registers are read.
Only available for PF.
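As a rough illustration of how these attributes are used together, the sketch
below selects a core in cluster 0 via cluster_ctrl and then dumps that core's
registers; the BDF 0000:79:00.0 and the core index are hypothetical
placeholders, and the exact values accepted by cluster_ctrl depend on the
driver::

#include <stdio.h>

#define HPRE_DBG "/sys/kernel/debug/hisi_hpre/0000:79:00.0"  /* hypothetical BDF */

int main(void)
{
        char line[256];
        FILE *f;

        /* Select a core (index 2 is only an example) in cluster 0. */
        f = fopen(HPRE_DBG "/cluster0/cluster_ctrl", "w");
        if (!f)
                return 1;
        fprintf(f, "2\n");
        fclose(f);

        /* Dump the debug registers of the selected core. */
        f = fopen(HPRE_DBG "/cluster0/regs", "r");
        if (!f)
                return 1;
        while (fgets(line, sizeof(line), f))
                fputs(line, stdout);
        fclose(f);
        return 0;
}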


@ -0,0 +1,43 @@
What: /sys/kernel/debug/hisi_sec/<bdf>/sec_dfx
Date: Oct 2019
Contact: linux-crypto@vger.kernel.org
Description: Dump the debug registers of SEC cores.
Only available for PF.
What: /sys/kernel/debug/hisi_sec/<bdf>/clear_enable
Date: Oct 2019
Contact: linux-crypto@vger.kernel.org
Description: Enabling/disabling of clear action after reading
the SEC debug registers.
0: disable, 1: enable.
Only available for PF; it has no other effect on SEC.
What: /sys/kernel/debug/hisi_sec/<bdf>/current_qm
Date: Oct 2019
Contact: linux-crypto@vger.kernel.org
Description: One SEC controller has one PF and multiple VFs; each function
has a QM. This file can be used to select the QM that the qm
directory below refers to.
Only available for PF.
What: /sys/kernel/debug/hisi_sec/<bdf>/qm/qm_regs
Date: Oct 2019
Contact: linux-crypto@vger.kernel.org
Description: Dump of QM related debug registers.
Available for PF and VF in host. VF in guest currently only
has one debug register.
What: /sys/kernel/debug/hisi_sec/<bdf>/qm/current_q
Date: Oct 2019
Contact: linux-crypto@vger.kernel.org
Description: One QM of SEC may contain multiple queues. Select a specific
queue to show its debug registers in 'qm_regs' above.
Only available for PF.
What: /sys/kernel/debug/hisi_sec/<bdf>/qm/clear_enable
Date: Oct 2019
Contact: linux-crypto@vger.kernel.org
Description: Enabling/disabling of clear action after reading
the SEC's QM debug registers.
0: disable, 1: enable.
Only available for PF; it has no other effect on SEC.


@ -29,4 +29,9 @@ Description:
17 - sectors discarded
18 - time spent discarding
Kernel 5.5+ appends two more fields for flush requests:
19 - flush requests completed successfully
20 - time spent flushing
For more details refer to Documentation/admin-guide/iostats.rst


@ -15,6 +15,12 @@ Description:
9 - I/Os currently in progress
10 - time spent doing I/Os (ms)
11 - weighted time spent doing I/Os (ms)
12 - discards completed
13 - discards merged
14 - sectors discarded
15 - time spent discarding (ms)
16 - flush requests completed
17 - time spent flushing (ms)
For more details refer to Documentation/admin-guide/iostats.rst


@ -51,6 +51,14 @@ Description:
packet processing. See the network driver for the exact
meaning of this value.
What: /sys/class/<iface>/statistics/rx_errors
Date: April 2005
KernelVersion: 2.6.12
Contact: netdev@vger.kernel.org
Description:
Indicates the number of receive errors on this network device.
See the network driver for the exact meaning of this value.
What: /sys/class/<iface>/statistics/rx_fifo_errors
Date: April 2005
KernelVersion: 2.6.12
@ -88,6 +96,14 @@ Description:
due to lack of capacity in the receive side. See the network
driver for the exact meaning of this value.
What: /sys/class/<iface>/statistics/rx_nohandler
Date: February 2016
KernelVersion: 4.6
Contact: netdev@vger.kernel.org
Description:
Indicates the number of received packets that were dropped on
an inactive device by the network core.
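To give a concrete usage example, these counters are plain text files that can
simply be read; a minimal sketch, assuming a hypothetical interface named eth0
and the usual /sys/class/net/<iface>/statistics/ location::

#include <stdio.h>

int main(void)
{
        unsigned long long rx_errors;
        FILE *f = fopen("/sys/class/net/eth0/statistics/rx_errors", "r");

        if (!f)
                return 1;
        if (fscanf(f, "%llu", &rx_errors) != 1) {
                fclose(f);
                return 1;
        }
        fclose(f);
        printf("rx_errors: %llu\n", rx_errors);
        return 0;
}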
What: /sys/class/<iface>/statistics/rx_over_errors
Date: April 2005
KernelVersion: 2.6.12


@ -1334,7 +1334,7 @@ PAGE_SIZE multiple when read back.
pgdeactivate
Amount of pages moved to the inactive LRU lis
Amount of pages moved to the inactive LRU list
pglazyfree


@ -177,6 +177,11 @@ bitmap_flush_interval:number
The bitmap flush interval in milliseconds. The metadata buffers
are synchronized when this interval expires.
fix_padding
Use a smaller padding of the tag area that is more
space-efficient. If this option is not present, large padding is
used - that is for compatibility with older kernels.
The journal mode (D/J), buffer_sectors, journal_watermark, commit_time can
be changed when reloading the target (load an inactive table and swap the


@ -417,3 +417,5 @@ Version History
deadlock/potential data corruption. Update superblock when
specific devices are requested via rebuild. Fix RAID leg
rebuild errors.
1.15.0 Fix size extensions not being synchronized in case of new MD bitmap
pages allocated; also fix those not occurring after previous reductions


@ -121,6 +121,15 @@ Field 15 -- # of milliseconds spent discarding
This is the total number of milliseconds spent by all discards (as
measured from __make_request() to end_that_request_last()).
Field 16 -- # of flush requests completed
This is the total number of flush requests completed successfully.
The block layer combines flush requests and executes at most one at a time.
This counts flush requests executed by the disk. Not tracked for partitions.
Field 17 -- # of milliseconds spent flushing
This is the total number of milliseconds spent by all flush requests.
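As an illustration only (not part of the fields themselves), the sketch below
pulls these two flush fields out of /proc/diskstats on a 5.5+ kernel; fields
1-17 above follow the three identification columns (major, minor, device
name), so Field 16 and Field 17 are the 19th and 20th whitespace-separated
columns::

#include <stdio.h>
#include <string.h>

int main(void)
{
        char line[512];
        FILE *f = fopen("/proc/diskstats", "r");

        if (!f)
                return 1;
        while (fgets(line, sizeof(line), f)) {
                char *field[20], *tok, *save;
                int n = 0;

                for (tok = strtok_r(line, " \t\n", &save); tok && n < 20;
                     tok = strtok_r(NULL, " \t\n", &save))
                        field[n++] = tok;
                if (n == 20)    /* line carries the flush fields */
                        printf("%s: %s flushes, %s ms flushing\n",
                               field[2], field[18], field[19]);
        }
        fclose(f);
        return 0;
}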
To avoid introducing performance bottlenecks, no locks are held while
modifying these counters. This implies that minor inaccuracies may be
introduced when changes collide, so (for instance) adding up all the


@ -3110,9 +3110,9 @@
[X86,PV_OPS] Disable paravirtualized VMware scheduler
clock and use the default one.
no-steal-acc [X86,KVM] Disable paravirtualized steal time accounting.
steal time is computed, but won't influence scheduler
behaviour
no-steal-acc [X86,KVM,ARM64] Disable paravirtualized steal time
accounting. steal time is computed, but won't
influence scheduler behaviour
nolapic [X86-32,APIC] Do not enable or use the local APIC.


@ -17,7 +17,8 @@ The "format" directory describes format of the config (event ID) and config1
(AXI filtering) fields of the perf_event_attr structure, see /sys/bus/event_source/
devices/imx8_ddr0/format/. The "events" directory describes the events types
hardware supported that can be used with perf tool, see /sys/bus/event_source/
devices/imx8_ddr0/events/.
devices/imx8_ddr0/events/. The "caps" directory describes the filter features implemented
in the DDR PMU, see /sys/bus/event_source/devices/imx8_ddr0/caps/.
e.g.::
perf stat -a -e imx8_ddr0/cycles/ cmd
perf stat -a -e imx8_ddr0/read/,imx8_ddr0/write/ cmd
@ -25,9 +26,12 @@ devices/imx8_ddr0/events/.
AXI filtering is only used by CSV modes 0x41 (axid-read) and 0x42 (axid-write)
to count reading or writing matches filter setting. Filter setting is various
from different DRAM controller implementations, which is distinguished by quirks
in the driver.
in the driver. You can also dump this information from userspace: the filter
attribute in the "caps" directory indicates whether the PMU supports the AXI ID
filter, and enhanced_filter indicates whether it supports the enhanced AXI ID
filter. A value of 0 means unsupported, a value of 1 means supported.
* With DDR_CAP_AXI_ID_FILTER quirk.
* With DDR_CAP_AXI_ID_FILTER quirk(filter: 1, enhanced_filter: 0).
Filter is defined with two configuration parts:
--AXI_ID defines AxID matching value.
--AXI_MASKING defines which bits of AxID are meaningful for the matching.
@ -50,3 +54,8 @@ in the driver.
axi_id to monitor a specific id, rather than having to specify axi_mask.
e.g.::
perf stat -a -e imx8_ddr0/axid-read,axi_id=0x12/ cmd, which will monitor ARID=0x12
* With DDR_CAP_AXI_ID_FILTER_ENHANCED quirk(filter: 1, enhanced_filter: 1).
This is an extension to the DDR_CAP_AXI_ID_FILTER quirk which permits
counting the number of bytes (as opposed to the number of bursts) from DDR
read and write transactions concurrently with another set of data counters.
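For completeness, the caps attributes can be read from userspace like any
other sysfs file; a minimal sketch, assuming the filter and enhanced_filter
attribute names mentioned above::

#include <stdio.h>

static int read_cap(const char *path)
{
        int val = -1;
        FILE *f = fopen(path, "r");

        if (f) {
                if (fscanf(f, "%d", &val) != 1)
                        val = -1;
                fclose(f);
        }
        return val;
}

int main(void)
{
        printf("filter: %d\n",
               read_cap("/sys/bus/event_source/devices/imx8_ddr0/caps/filter"));
        printf("enhanced_filter: %d\n",
               read_cap("/sys/bus/event_source/devices/imx8_ddr0/caps/enhanced_filter"));
        return 0;
}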


@ -3,24 +3,26 @@ Cavium ThunderX2 SoC Performance Monitoring Unit (PMU UNCORE)
=============================================================
The ThunderX2 SoC PMU consists of independent, system-wide, per-socket
PMUs such as the Level 3 Cache (L3C) and DDR4 Memory Controller (DMC).
PMUs such as the Level 3 Cache (L3C), DDR4 Memory Controller (DMC) and
Cavium Coherent Processor Interconnect (CCPI2).
The DMC has 8 interleaved channels and the L3C has 16 interleaved tiles.
Events are counted for the default channel (i.e. channel 0) and prorated
to the total number of channels/tiles.
The DMC and L3C support up to 4 counters. Counters are independently
programmable and can be started and stopped individually. Each counter
can be set to a different event. Counters are 32-bit and do not support
an overflow interrupt; they are read every 2 seconds.
The DMC and L3C support up to 4 counters, while the CCPI2 supports up to 8
counters. Counters are independently programmable to different events and
can be started and stopped individually. None of the counters support an
overflow interrupt. DMC and L3C counters are 32-bit and read every 2 seconds.
The CCPI2 counters are 64-bit and assumed not to overflow in normal operation.
PMU UNCORE (perf) driver:
The thunderx2_pmu driver registers per-socket perf PMUs for the DMC and
L3C devices. Each PMU can be used to count up to 4 events
simultaneously. The PMUs provide a description of their available events
and configuration options under sysfs, see
/sys/devices/uncore_<l3c_S/dmc_S/>; S is the socket id.
L3C devices. Each PMU can be used to count up to 4 (DMC/L3C) or up to 8
(CCPI2) events simultaneously. The PMUs provide a description of their
available events and configuration options under sysfs, see
/sys/devices/uncore_<l3c_S/dmc_S/ccpi2_S/>; S is the socket id.
The driver does not support sampling, therefore "perf record" will not
work. Per-task perf sessions are also not supported.
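Counting can also be driven directly through perf_event_open(); the sketch
below is a minimal, hedged example that reads the dynamic PMU type of the
socket 0 DMC from sysfs and counts a placeholder event code (0x1) system-wide
on CPU 0 for one second. The real event encoding must be taken from the
"events" directory of the PMU in question::

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
        struct perf_event_attr attr;
        unsigned long long count;
        FILE *f;
        int type;
        long fd;

        /* The PMU type id is dynamic; read it from sysfs (socket 0 DMC assumed). */
        f = fopen("/sys/devices/uncore_dmc_0/type", "r");
        if (!f)
                return EXIT_FAILURE;
        if (fscanf(f, "%d", &type) != 1) {
                fclose(f);
                return EXIT_FAILURE;
        }
        fclose(f);

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = type;
        attr.config = 0x1;      /* placeholder event code; see the "events" directory */

        /* Uncore counters are per-socket: count system-wide, pinned to one CPU. */
        fd = perf_event_open(&attr, -1, 0, -1, 0);
        if (fd < 0)
                return EXIT_FAILURE;
        sleep(1);
        if (read(fd, &count, sizeof(count)) != sizeof(count))
                return EXIT_FAILURE;
        printf("count: %llu\n", count);
        close(fd);
        return 0;
}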


@ -330,9 +330,12 @@ There can be multiple csrows and multiple channels.
.. [#f4] Nowadays, the term DIMM (Dual In-line Memory Module) is widely
used to refer to a memory module, although there are other memory
packaging alternatives, like SO-DIMM, SIMM, etc. Along this document,
and inside the EDAC system, the term "dimm" is used for all memory
modules, even when they use a different kind of packaging.
packaging alternatives, like SO-DIMM, SIMM, etc. The UEFI
specification (Version 2.7) defines a memory module in the Common
Platform Error Record (CPER) section to be an SMBIOS Memory Device
(Type 17). Along this document, and inside the EDAC subsystem, the term
"dimm" is used for all memory modules, even when they use a
different kind of packaging.
Memory controllers allow for several csrows, with 8 csrows being a
typical value. Yet, the actual number of csrows depends on the layout of
@ -349,12 +352,14 @@ controllers. The following example will assume 2 channels:
| | ``ch0`` | ``ch1`` |
+============+===========+===========+
| ``csrow0`` | DIMM_A0 | DIMM_B0 |
+------------+ | |
| ``csrow1`` | | |
| | rank0 | rank0 |
+------------+ - | - |
| ``csrow1`` | rank1 | rank1 |
+------------+-----------+-----------+
| ``csrow2`` | DIMM_A1 | DIMM_B1 |
+------------+ | |
| ``csrow3`` | | |
| | rank0 | rank0 |
+------------+ - | - |
| ``csrow3`` | rank1 | rank1 |
+------------+-----------+-----------+
In the above example, there are 4 physical slots on the motherboard
@ -374,11 +379,13 @@ which the memory DIMM is placed. Thus, when 1 DIMM is placed in each
Channel, the csrows cross both DIMMs.
Memory DIMMs come single or dual "ranked". A rank is a populated csrow.
Thus, 2 single ranked DIMMs, placed in slots DIMM_A0 and DIMM_B0 above
will have just one csrow (csrow0). csrow1 will be empty. On the other
hand, when 2 dual ranked DIMMs are similarly placed, then both csrow0
and csrow1 will be populated. The pattern repeats itself for csrow2 and
csrow3.
In the example above 2 dual ranked DIMMs are similarly placed. Thus,
both csrow0 and csrow1 are populated. On the other hand, when 2 single
ranked DIMMs are placed in slots DIMM_A0 and DIMM_B0, then they will
have just one csrow (csrow0) and csrow1 will be empty. The pattern
repeats itself for csrow2 and csrow3. Also note that some memory
controllers don't have any logic to identify the memory module, see
``rankX`` directories below.
The representation of the above is reflected in the directory
tree in EDAC's sysfs interface. Starting in directory


@ -213,6 +213,9 @@ Before jumping into the kernel, the following conditions must be met:
- ICC_SRE_EL3.Enable (bit 3) must be initialised to 0b1.
- ICC_SRE_EL3.SRE (bit 0) must be initialised to 0b1.
- ICC_CTLR_EL3.PMHE (bit 6) must be set to the same value across
all CPUs the kernel is executing on, and must stay constant
for the lifetime of the kernel.
- If the kernel is entered at EL1:


@ -168,8 +168,15 @@ infrastructure:
+------------------------------+---------+---------+
3) MIDR_EL1 - Main ID Register
3) ID_AA64PFR1_EL1 - Processor Feature Register 1
+------------------------------+---------+---------+
| Name | bits | visible |
+------------------------------+---------+---------+
| SSBS | [7-4] | y |
+------------------------------+---------+---------+
4) MIDR_EL1 - Main ID Register
+------------------------------+---------+---------+
| Name | bits | visible |
+------------------------------+---------+---------+
@ -188,11 +195,15 @@ infrastructure:
as available on the CPU where it is fetched and is not a system
wide safe value.
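For instance, with a toolchain that knows the architectural register names,
userspace can sample one of these registers directly with an mrs instruction,
which the kernel traps and emulates, exposing only the fields marked visible
in the tables below (aarch64 only; a sketch)::

#include <stdio.h>

int main(void)
{
        unsigned long isar1;

        /* Trapped and emulated by the kernel for EL0 reads. */
        asm volatile("mrs %0, ID_AA64ISAR1_EL1" : "=r" (isar1));

        /* SB is bits [39:36] in the ID_AA64ISAR1_EL1 table below. */
        printf("ID_AA64ISAR1_EL1.SB = %lu\n", (isar1 >> 36) & 0xf);
        return 0;
}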
4) ID_AA64ISAR1_EL1 - Instruction set attribute register 1
5) ID_AA64ISAR1_EL1 - Instruction set attribute register 1
+------------------------------+---------+---------+
| Name | bits | visible |
+------------------------------+---------+---------+
| SB | [39-36] | y |
+------------------------------+---------+---------+
| FRINTTS | [35-32] | y |
+------------------------------+---------+---------+
| GPI | [31-28] | y |
+------------------------------+---------+---------+
| GPA | [27-24] | y |
@ -210,7 +221,7 @@ infrastructure:
| DPB | [3-0] | y |
+------------------------------+---------+---------+
5) ID_AA64MMFR2_EL1 - Memory model feature register 2
6) ID_AA64MMFR2_EL1 - Memory model feature register 2
+------------------------------+---------+---------+
| Name | bits | visible |
@ -218,7 +229,7 @@ infrastructure:
| AT | [35-32] | y |
+------------------------------+---------+---------+
6) ID_AA64ZFR0_EL1 - SVE feature ID register 0
7) ID_AA64ZFR0_EL1 - SVE feature ID register 0
+------------------------------+---------+---------+
| Name | bits | visible |


@ -119,10 +119,6 @@ HWCAP_LRCPC
HWCAP_DCPOP
Functionality implied by ID_AA64ISAR1_EL1.DPB == 0b0001.
HWCAP2_DCPODP
Functionality implied by ID_AA64ISAR1_EL1.DPB == 0b0010.
HWCAP_SHA3
Functionality implied by ID_AA64ISAR0_EL1.SHA3 == 0b0001.
@ -141,6 +137,41 @@ HWCAP_SHA512
HWCAP_SVE
Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001.
HWCAP_ASIMDFHM
Functionality implied by ID_AA64ISAR0_EL1.FHM == 0b0001.
HWCAP_DIT
Functionality implied by ID_AA64PFR0_EL1.DIT == 0b0001.
HWCAP_USCAT
Functionality implied by ID_AA64MMFR2_EL1.AT == 0b0001.
HWCAP_ILRCPC
Functionality implied by ID_AA64ISAR1_EL1.LRCPC == 0b0010.
HWCAP_FLAGM
Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0001.
HWCAP_SSBS
Functionality implied by ID_AA64PFR1_EL1.SSBS == 0b0010.
HWCAP_SB
Functionality implied by ID_AA64ISAR1_EL1.SB == 0b0001.
HWCAP_PACA
Functionality implied by ID_AA64ISAR1_EL1.APA == 0b0001 or
ID_AA64ISAR1_EL1.API == 0b0001, as described by
Documentation/arm64/pointer-authentication.rst.
HWCAP_PACG
Functionality implied by ID_AA64ISAR1_EL1.GPA == 0b0001 or
ID_AA64ISAR1_EL1.GPI == 0b0001, as described by
Documentation/arm64/pointer-authentication.rst.
HWCAP2_DCPODP
Functionality implied by ID_AA64ISAR1_EL1.DPB == 0b0010.
HWCAP2_SVE2
Functionality implied by ID_AA64ZFR0_EL1.SVEVer == 0b0001.
@ -165,38 +196,10 @@ HWCAP2_SVESM4
Functionality implied by ID_AA64ZFR0_EL1.SM4 == 0b0001.
HWCAP_ASIMDFHM
Functionality implied by ID_AA64ISAR0_EL1.FHM == 0b0001.
HWCAP_DIT
Functionality implied by ID_AA64PFR0_EL1.DIT == 0b0001.
HWCAP_USCAT
Functionality implied by ID_AA64MMFR2_EL1.AT == 0b0001.
HWCAP_ILRCPC
Functionality implied by ID_AA64ISAR1_EL1.LRCPC == 0b0010.
HWCAP_FLAGM
Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0001.
HWCAP2_FLAGM2
Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0010.
HWCAP_SSBS
Functionality implied by ID_AA64PFR1_EL1.SSBS == 0b0010.
HWCAP_PACA
Functionality implied by ID_AA64ISAR1_EL1.APA == 0b0001 or
ID_AA64ISAR1_EL1.API == 0b0001, as described by
Documentation/arm64/pointer-authentication.rst.
HWCAP_PACG
Functionality implied by ID_AA64ISAR1_EL1.GPA == 0b0001 or
ID_AA64ISAR1_EL1.GPI == 0b0001, as described by
Documentation/arm64/pointer-authentication.rst.
HWCAP2_FRINT
Functionality implied by ID_AA64ISAR1_EL1.FRINTTS == 0b0001.
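These capabilities are reported through the auxiliary vector, so userspace can
probe them with getauxval(3); a small sketch (aarch64 only, assuming kernel
and libc headers new enough to define the constants used)::

#include <stdio.h>
#include <elf.h>
#include <sys/auxv.h>
#include <asm/hwcap.h>  /* HWCAP_* and HWCAP2_* bit definitions */

int main(void)
{
        unsigned long hwcap = getauxval(AT_HWCAP);
        unsigned long hwcap2 = getauxval(AT_HWCAP2);

        printf("SSBS:  %s\n", (hwcap & HWCAP_SSBS) ? "yes" : "no");
        printf("FRINT: %s\n", (hwcap2 & HWCAP2_FRINT) ? "yes" : "no");
        return 0;
}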


@ -70,8 +70,12 @@ stable kernels.
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A57 | #834220 | ARM64_ERRATUM_834220 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A57 | #1319537 | ARM64_ERRATUM_1319367 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A72 | #853709 | N/A |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A72 | #1319367 | ARM64_ERRATUM_1319367 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A73 | #858921 | ARM64_ERRATUM_858921 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A55 | #1024718 | ARM64_ERRATUM_1024718 |
@ -88,6 +92,8 @@ stable kernels.
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Neoverse-N1 | #1349291 | N/A |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Neoverse-N1 | #1542419 | ARM64_ERRATUM_1542419 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | MMU-500 | #841119,826419 | N/A |
+----------------+-----------------+-----------------+-----------------------------+
+----------------+-----------------+-----------------+-----------------------------+


@ -41,6 +41,8 @@ discard I/Os requests number of discard I/Os processed
discard merges requests number of discard I/Os merged with in-queue I/O
discard sectors sectors number of sectors discarded
discard ticks milliseconds total wait time for discard requests
flush I/Os requests number of flush I/Os processed
flush ticks milliseconds total wait time for flush requests
=============== ============= =================================================
read I/Os, write I/Os, discard I/Os
@ -48,6 +50,14 @@ read I/Os, write I/Os, discard I/0s
These values increment when an I/O request completes.
flush I/Os
==========
These values increment when a flush I/O request completes.
The block layer combines flush requests and executes at most one at a time.
This counts flush requests executed by the disk. Not tracked for partitions.
read merges, write merges, discard merges
=========================================
@ -62,8 +72,8 @@ discarded from this block device. The "sectors" in question are the
standard UNIX 512-byte sectors, not any device- or filesystem-specific
block size. The counters are incremented when the I/O completes.
read ticks, write ticks, discard ticks
======================================
read ticks, write ticks, discard ticks, flush ticks
===================================================
These values count the number of milliseconds that I/O requests have
waited on this block device. If there are multiple I/O requests waiting,


@ -47,6 +47,15 @@ Program types
prog_flow_dissector
Testing BPF
===========
.. toctree::
:maxdepth: 1
s390
.. Links:
.. _Documentation/networking/filter.txt: ../networking/filter.txt
.. _man-pages: https://www.kernel.org/doc/man-pages/


@ -142,3 +142,6 @@ BPF flow dissector doesn't support exporting all the metadata that in-kernel
C-based implementation can export. A notable example is single VLAN (802.1Q)
and double VLAN (802.1AD) tags. Please refer to the ``struct bpf_flow_keys``
for the set of information that can currently be exported from the BPF context.
When the BPF flow dissector is attached to the root network namespace (machine-wide
policy), users can't override it in their child network namespaces.

Documentation/bpf/s390.rst (new file, 205 lines)

@ -0,0 +1,205 @@
===================
Testing BPF on s390
===================
1. Introduction
***************
IBM Z is a family of mainframe computers descended from the IBM System/360,
introduced in 1964. They are supported by the Linux kernel under the name "s390". This
document describes how to test BPF in an s390 QEMU guest.
2. One-time setup
*****************
The following is required to build and run the test suite:
* s390 GCC
* s390 development headers and libraries
* Clang with BPF support
* QEMU with s390 support
* Disk image with s390 rootfs
Debian supports installing the compiler and libraries for s390 out of the box.
Users of other distros may use debootstrap in order to set up a Debian chroot::
sudo debootstrap \
--variant=minbase \
--include=sudo \
testing \
./s390-toolchain
sudo mount --rbind /dev ./s390-toolchain/dev
sudo mount --rbind /proc ./s390-toolchain/proc
sudo mount --rbind /sys ./s390-toolchain/sys
sudo chroot ./s390-toolchain
Once on Debian, the build prerequisites can be installed as follows::
sudo dpkg --add-architecture s390x
sudo apt-get update
sudo apt-get install \
bc \
bison \
cmake \
debootstrap \
dwarves \
flex \
g++ \
gcc \
g++-s390x-linux-gnu \
gcc-s390x-linux-gnu \
gdb-multiarch \
git \
make \
python3 \
qemu-system-misc \
qemu-utils \
rsync \
libcap-dev:s390x \
libelf-dev:s390x \
libncurses-dev
The latest Clang targeting BPF can be installed as follows::
git clone https://github.com/llvm/llvm-project.git
ln -s ../../clang llvm-project/llvm/tools/
mkdir llvm-project-build
cd llvm-project-build
cmake \
-DLLVM_TARGETS_TO_BUILD=BPF \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/opt/clang-bpf \
../llvm-project/llvm
make
sudo make install
export PATH=/opt/clang-bpf/bin:$PATH
The disk image can be prepared using a loopback mount and debootstrap::
qemu-img create -f raw ./s390.img 1G
sudo losetup -f ./s390.img
sudo mkfs.ext4 /dev/loopX
mkdir ./s390.rootfs
sudo mount /dev/loopX ./s390.rootfs
sudo debootstrap \
--foreign \
--arch=s390x \
--variant=minbase \
--include=" \
iproute2, \
iputils-ping, \
isc-dhcp-client, \
kmod, \
libcap2, \
libelf1, \
netcat, \
procps" \
testing \
./s390.rootfs
sudo umount ./s390.rootfs
sudo losetup -d /dev/loopX
3. Compilation
**************
In addition to the usual Kconfig options required to run the BPF test suite, it
is also helpful to select::
CONFIG_NET_9P=y
CONFIG_9P_FS=y
CONFIG_NET_9P_VIRTIO=y
CONFIG_VIRTIO_PCI=y
as that would enable a very easy way to share files with the s390 virtual
machine.
Compiling kernel, modules and testsuite, as well as preparing gdb scripts to
simplify debugging, can be done using the following commands::
make ARCH=s390 CROSS_COMPILE=s390x-linux-gnu- menuconfig
make ARCH=s390 CROSS_COMPILE=s390x-linux-gnu- bzImage modules scripts_gdb
make ARCH=s390 CROSS_COMPILE=s390x-linux-gnu- \
-C tools/testing/selftests \
TARGETS=bpf \
INSTALL_PATH=$PWD/tools/testing/selftests/kselftest_install \
install
4. Running the test suite
*************************
The virtual machine can be started as follows::
qemu-system-s390x \
-cpu max,zpci=on \
-smp 2 \
-m 4G \
-kernel linux/arch/s390/boot/compressed/vmlinux \
-drive file=./s390.img,if=virtio,format=raw \
-nographic \
-append 'root=/dev/vda rw console=ttyS1' \
-virtfs local,path=./linux,security_model=none,mount_tag=linux \
-object rng-random,filename=/dev/urandom,id=rng0 \
-device virtio-rng-ccw,rng=rng0 \
-netdev user,id=net0 \
-device virtio-net-ccw,netdev=net0
When using this on a real IBM Z, ``-enable-kvm`` may be added for better
performance. When starting the virtual machine for the first time, disk image
setup must be finalized using the following command::
/debootstrap/debootstrap --second-stage
The directory with the code built on the host, as well as ``/proc`` and ``/sys``,
needs to be mounted as follows::
mkdir -p /linux
mount -t 9p linux /linux
mount -t proc proc /proc
mount -t sysfs sys /sys
After that, the test suite can be run using the following commands::
cd /linux/tools/testing/selftests/kselftest_install
./run_kselftest.sh
As usual, tests can also be run individually::
cd /linux/tools/testing/selftests/bpf
./test_verifier
5. Debugging
************
It is possible to debug the s390 kernel using QEMU GDB stub, which is activated
by passing ``-s`` to QEMU.
It is preferable to turn KASLR off, so that gdb knows where to find the
kernel image in memory. This is done by building the kernel with::
RANDOMIZE_BASE=n
GDB can then be attached using the following command::
gdb-multiarch -ex 'target remote localhost:1234' ./vmlinux
6. Network
**********
In case one needs to use the network in the virtual machine in order to e.g.
install additional packages, it can be configured using::
dhclient eth0
7. Links
********
This document is a compilation of techniques, whose more comprehensive
descriptions can be found by following these links:
- `Debootstrap <https://wiki.debian.org/EmDebian/CrossDebootstrap>`_
- `Multiarch <https://wiki.debian.org/Multiarch/HOWTO>`_
- `Building LLVM <https://llvm.org/docs/CMake.html>`_
- `Cross-compiling the kernel <https://wiki.gentoo.org/wiki/Embedded_Handbook/General/Cross-compiling_the_kernel>`_
- `QEMU s390x Guest Support <https://wiki.qemu.org/Documentation/Platforms/S390X>`_
- `Plan 9 folder sharing over Virtio <https://wiki.qemu.org/Documentation/9psetup>`_
- `Using GDB with QEMU <https://wiki.osdev.org/Kernel_Debugging#Use_GDB_with_QEMU>`_


@ -79,6 +79,18 @@ has the added benefit of providing a unique identifier. On 64-bit machines
the first 32 bits are zeroed. The kernel will print ``(ptrval)`` until it
gathers enough entropy. If you *really* want the address see %px below.
Error Pointers
--------------
::
%pe -ENOSPC
For printing error pointers (i.e. a pointer for which IS_ERR() is true)
as a symbolic error name. Error values for which no symbolic name is
known are printed in decimal, while a non-ERR_PTR passed as the
argument to %pe gets treated as ordinary %p.
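A minimal in-kernel sketch of the intended use (the surrounding function is
hypothetical)::

#include <linux/err.h>
#include <linux/printk.h>

static void report_mapping(void *addr)
{
        if (IS_ERR(addr))
                pr_err("mapping failed: %pe\n", addr);  /* e.g. "mapping failed: -ENOSPC" */
        else
                pr_info("mapped at %p\n", addr);        /* hashed address, see %p above */
}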
Symbols/Function Pointers
-------------------------


@ -5,7 +5,7 @@ Block Cipher Algorithm Definitions
:doc: Block Cipher Algorithm Definitions
.. kernel-doc:: include/linux/crypto.h
:functions: crypto_alg ablkcipher_alg blkcipher_alg cipher_alg compress_alg
:functions: crypto_alg cipher_alg compress_alg
Symmetric Key Cipher API
------------------------
@ -33,30 +33,3 @@ Single Block Cipher API
.. kernel-doc:: include/linux/crypto.h
:functions: crypto_alloc_cipher crypto_free_cipher crypto_has_cipher crypto_cipher_blocksize crypto_cipher_setkey crypto_cipher_encrypt_one crypto_cipher_decrypt_one
Asynchronous Block Cipher API - Deprecated
------------------------------------------
.. kernel-doc:: include/linux/crypto.h
:doc: Asynchronous Block Cipher API
.. kernel-doc:: include/linux/crypto.h
:functions: crypto_free_ablkcipher crypto_has_ablkcipher crypto_ablkcipher_ivsize crypto_ablkcipher_blocksize crypto_ablkcipher_setkey crypto_ablkcipher_reqtfm crypto_ablkcipher_encrypt crypto_ablkcipher_decrypt
Asynchronous Cipher Request Handle - Deprecated
-----------------------------------------------
.. kernel-doc:: include/linux/crypto.h
:doc: Asynchronous Cipher Request Handle
.. kernel-doc:: include/linux/crypto.h
:functions: crypto_ablkcipher_reqsize ablkcipher_request_set_tfm ablkcipher_request_alloc ablkcipher_request_free ablkcipher_request_set_callback ablkcipher_request_set_crypt
Synchronous Block Cipher API - Deprecated
-----------------------------------------
.. kernel-doc:: include/linux/crypto.h
:doc: Synchronous Block Cipher API
.. kernel-doc:: include/linux/crypto.h
:functions: crypto_alloc_blkcipher crypto_free_blkcipher crypto_has_blkcipher crypto_blkcipher_name crypto_blkcipher_ivsize crypto_blkcipher_blocksize crypto_blkcipher_setkey crypto_blkcipher_encrypt crypto_blkcipher_encrypt_iv crypto_blkcipher_decrypt crypto_blkcipher_decrypt_iv crypto_blkcipher_set_iv crypto_blkcipher_get_iv


@ -201,10 +201,6 @@ the aforementioned cipher types:
- CRYPTO_ALG_TYPE_AEAD Authenticated Encryption with Associated Data
(MAC)
- CRYPTO_ALG_TYPE_BLKCIPHER Synchronous multi-block cipher
- CRYPTO_ALG_TYPE_ABLKCIPHER Asynchronous multi-block cipher
- CRYPTO_ALG_TYPE_KPP Key-agreement Protocol Primitive (KPP) such as
an ECDH or DH implementation


@ -63,8 +63,6 @@ request by using:
When your driver receives a crypto_request, you must to transfer it to
the crypto engine via one of:
* crypto_transfer_ablkcipher_request_to_engine()
* crypto_transfer_aead_request_to_engine()
* crypto_transfer_akcipher_request_to_engine()
@ -75,8 +73,6 @@ the crypto engine via one of:
At the end of the request process, a call to one of the following functions is needed:
* crypto_finalize_ablkcipher_request()
* crypto_finalize_aead_request()
* crypto_finalize_akcipher_request()
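As a rough sketch of where these calls sit in a driver (the engine pointer and
the completion hook are hypothetical driver pieces), the transformation's
entry point queues the request on the engine, and the hardware completion path
finalizes it, here using the AEAD variants listed above::

#include <crypto/engine.h>
#include <crypto/aead.h>

/* Hypothetical per-driver state. */
static struct crypto_engine *my_engine;

static int my_aead_encrypt(struct aead_request *req)
{
        /* Hand the request to the engine; it is processed asynchronously. */
        return crypto_transfer_aead_request_to_engine(my_engine, req);
}

/* Called from the driver's interrupt/completion path once the hardware is done. */
static void my_hw_done(struct aead_request *req, int err)
{
        crypto_finalize_aead_request(my_engine, req, err);
}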


@ -128,25 +128,20 @@ process requests that are unaligned. This implies, however, additional
overhead as the kernel crypto API needs to perform the realignment of
the data which may imply moving of data.
Cipher Definition With struct blkcipher_alg and ablkcipher_alg
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Cipher Definition With struct skcipher_alg
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Struct blkcipher_alg defines a synchronous block cipher whereas struct
ablkcipher_alg defines an asynchronous block cipher.
Struct skcipher_alg defines a multi-block cipher, or more generally, a
length-preserving symmetric cipher algorithm.
Please refer to the single block cipher description for schematics of
the block cipher usage.
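A hedged sketch of driving such a cipher through the skcipher API follows; the
algorithm name, key length and synchronous wait are illustrative choices, not
requirements of the API, and the buffer must be a linear (e.g. kmalloc'ed)
buffer suitable for a scatterlist::

#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>
#include <linux/err.h>

static int demo_encrypt(u8 *key, u8 *iv, u8 *buf, unsigned int len)
{
        struct crypto_skcipher *tfm;
        struct skcipher_request *req;
        struct scatterlist sg;
        DECLARE_CRYPTO_WAIT(wait);
        int err;

        tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        err = crypto_skcipher_setkey(tfm, key, 16);     /* AES-128 key, for example */
        if (err)
                goto out_free_tfm;

        req = skcipher_request_alloc(tfm, GFP_KERNEL);
        if (!req) {
                err = -ENOMEM;
                goto out_free_tfm;
        }

        sg_init_one(&sg, buf, len);
        skcipher_request_set_callback(req,
                                      CRYPTO_TFM_REQ_MAY_BACKLOG |
                                      CRYPTO_TFM_REQ_MAY_SLEEP,
                                      crypto_req_done, &wait);
        skcipher_request_set_crypt(req, &sg, &sg, len, iv);

        /* Wait synchronously for the (possibly asynchronous) implementation. */
        err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

        skcipher_request_free(req);
out_free_tfm:
        crypto_free_skcipher(tfm);
        return err;
}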
Scatterlist handling
~~~~~~~~~~~~~~~~~~~~
Specifics Of Asynchronous Multi-Block Cipher
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are a couple of specifics to the asynchronous interface.
First of all, some of the drivers will want to use the Generic
ScatterWalk in case the hardware needs to be fed separate chunks of the
scatterlist which contains the plaintext and will contain the
ciphertext. Please refer to the ScatterWalk interface offered by the
Linux kernel scatter / gather list implementation.
Some drivers will want to use the Generic ScatterWalk in case the
hardware needs to be fed separate chunks of the scatterlist which
contains the plaintext and will contain the ciphertext. Please refer
to the ScatterWalk interface offered by the Linux kernel scatter /
gather list implementation.
Hashing [HASH]
--------------


@ -24,6 +24,7 @@ whole; patches welcome!
gdb-kernel-debugging
kgdb
kselftest
kunit/index
.. only:: subproject and html


@ -0,0 +1,16 @@
.. SPDX-License-Identifier: GPL-2.0
=============
API Reference
=============
.. toctree::
test
This section documents the KUnit kernel testing API. It is divided into the
following sections:
================================= ==============================================
:doc:`test` documents all of the standard testing API
excluding mocking or mocking related features.
================================= ==============================================


@ -0,0 +1,11 @@
.. SPDX-License-Identifier: GPL-2.0
========
Test API
========
This file documents all of the standard testing API excluding mocking or mocking
related features.
.. kernel-doc:: include/kunit/test.h
:internal:


@ -0,0 +1,62 @@
.. SPDX-License-Identifier: GPL-2.0
==========================
Frequently Asked Questions
==========================
How is this different from Autotest, kselftest, etc?
====================================================
KUnit is a unit testing framework. Autotest, kselftest (and some others) are
not.
A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is supposed to
test a single unit of code in isolation, hence the name. A unit test should be
the finest granularity of testing and as such should allow all possible code
paths to be tested in the code under test; this is only possible if the code
under test is very small and does not have any external dependencies outside of
the test's control like hardware.
There are no testing frameworks currently available for the kernel that do not
require installing the kernel on a test machine or in a VM and all require
tests to be written in userspace and run on the kernel under test; this is true
for Autotest, kselftest, and some others, disqualifying any of them from being
considered unit testing frameworks.
Does KUnit support running on architectures other than UML?
===========================================================
Yes, well, mostly.
For the most part, the KUnit core framework (what you use to write the tests)
can compile to any architecture; it compiles like just another part of the
kernel and runs when the kernel boots. However, there is some infrastructure,
like the KUnit Wrapper (``tools/testing/kunit/kunit.py``) that does not support
other architectures.
In short, this means that, yes, you can run KUnit on other architectures, but
it might require more work than using KUnit on UML.
For more information, see :ref:`kunit-on-non-uml`.
What is the difference between a unit test and these other kinds of tests?
==========================================================================
Most existing tests for the Linux kernel would be categorized as an integration
test, or an end-to-end test.
- A unit test is supposed to test a single unit of code in isolation, hence the
name. A unit test should be the finest granularity of testing and as such
should allow all possible code paths to be tested in the code under test; this
is only possible if the code under test is very small and does not have any
external dependencies outside of the test's control like hardware.
- An integration test tests the interaction between a minimal set of components,
usually just two or three. For example, someone might write an integration
test to test the interaction between a driver and a piece of hardware, or to
test the interaction between the userspace libraries the kernel provides and
the kernel itself; however, one of these tests would probably not test the
entire kernel along with hardware interactions and interactions with the
userspace.
- An end-to-end test usually tests the entire system from the perspective of the
code under test. For example, someone might write an end-to-end test for the
kernel by installing a production configuration of the kernel on production
hardware with a production userspace and then trying to exercise some behavior
that depends on interactions between the hardware, the kernel, and userspace.


@ -0,0 +1,79 @@
.. SPDX-License-Identifier: GPL-2.0
=========================================
KUnit - Unit Testing for the Linux Kernel
=========================================
.. toctree::
:maxdepth: 2
start
usage
api/index
faq
What is KUnit?
==============
KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
These tests are able to be run locally on a developer's workstation without a VM
or special hardware.
KUnit is heavily inspired by JUnit, Python's unittest.mock, and
Googletest/Googlemock for C++. KUnit provides facilities for defining unit test
cases, grouping related test cases into test suites, providing common
infrastructure for running tests, and much more.
Get started now: :doc:`start`
Why KUnit?
==========
A unit test is supposed to test a single unit of code in isolation, hence the
name. A unit test should be the finest granularity of testing and as such should
allow all possible code paths to be tested in the code under test; this is only
possible if the code under test is very small and does not have any external
dependencies outside of the test's control like hardware.
Outside of KUnit, there are no testing frameworks currently
available for the kernel that do not require installing the kernel on a test
machine or in a VM and all require tests to be written in userspace running on
the kernel; this is true for Autotest and kselftest, disqualifying
any of them from being considered unit testing frameworks.
KUnit addresses the problem of being able to run tests without needing a virtual
machine or actual hardware with User Mode Linux. User Mode Linux is a Linux
architecture, like ARM or x86; however, unlike other architectures it compiles
to a standalone program that can be run like any other program directly inside
of a host operating system; to be clear, it does not require any virtualization
support; it is just a regular program.
KUnit is fast. Excluding build time, from invocation to completion KUnit can run
several dozen tests in only 10 to 20 seconds; this might not sound like a big
deal to some people, but having such fast and easy to run tests fundamentally
changes the way you go about testing and even writing code in the first place.
Linus himself said in his `git talk at Google
<https://gist.github.com/lorn/1272686/revisions#diff-53c65572127855f1b003db4064a94573R874>`_:
"... a lot of people seem to think that performance is about doing the
same thing, just doing it faster, and that is not true. That is not what
performance is all about. If you can do something really fast, really
well, people will start using it differently."
In this context Linus was talking about branching and merging,
but this point also applies to testing. If your tests are slow, unreliable, are
difficult to write, and require a special setup or special hardware to run,
then you wait a lot longer to write tests, and you wait a lot longer to run
tests; this means that tests are likely to break, unlikely to test a lot of
things, and are unlikely to be rerun once they pass. If your tests are really
fast, you run them all the time, every time you make a change, and every time
someone sends you some code. Why trust that someone ran all their tests
correctly on every change when you can just run them yourself in less time than
it takes to read their test log?
How do I use it?
================
* :doc:`start` - for new users of KUnit
* :doc:`usage` - for a more detailed explanation of KUnit features
* :doc:`api/index` - for the list of KUnit APIs used for testing


@ -0,0 +1,180 @@
.. SPDX-License-Identifier: GPL-2.0
===============
Getting Started
===============
Installing dependencies
=======================
KUnit has the same dependencies as the Linux kernel. As long as you can build
the kernel, you can run KUnit.
KUnit Wrapper
=============
Included with KUnit is a simple Python wrapper that makes KUnit output easy to
use and read. It handles building and running the kernel, as well as formatting
the output.
The wrapper can be run with:
.. code-block:: bash
./tools/testing/kunit/kunit.py run
Creating a kunitconfig
======================
The Python script is a thin wrapper around Kbuild; as such, it needs to be
configured with a ``kunitconfig`` file. This file essentially contains the
regular kernel config, with the specific test targets added as well.
.. code-block:: bash
git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
cd $PATH_TO_LINUX_REPO
ln -s $PATH_TO_KUNITCONFIG_REPO/kunitconfig kunitconfig
You may want to add kunitconfig to your local gitignore.
Verifying KUnit Works
---------------------
To make sure that everything is set up correctly, simply invoke the Python
wrapper from your kernel repo:
.. code-block:: bash
./tools/testing/kunit/kunit.py run
.. note::
You may want to run ``make mrproper`` first.
If everything worked correctly, you should see the following:
.. code-block:: bash
Generating .config ...
Building KUnit Kernel ...
Starting KUnit Kernel ...
followed by a list of tests that are run. All of them should be passing.
.. note::
Because it is building a lot of sources for the first time, the ``Building
KUnit Kernel`` step may take a while.
Writing your first test
=======================
In your kernel repo let's add some code that we can test. Create a file
``drivers/misc/example.h`` with the contents:
.. code-block:: c
int misc_example_add(int left, int right);
create a file ``drivers/misc/example.c``:
.. code-block:: c
#include <linux/errno.h>
#include "example.h"
int misc_example_add(int left, int right)
{
return left + right;
}
Now add the following lines to ``drivers/misc/Kconfig``:
.. code-block:: kconfig
config MISC_EXAMPLE
bool "My example"
and the following lines to ``drivers/misc/Makefile``:
.. code-block:: make
obj-$(CONFIG_MISC_EXAMPLE) += example.o
Now we are ready to write the test. The test will be in
``drivers/misc/example-test.c``:
.. code-block:: c
#include <kunit/test.h>
#include "example.h"
/* Define the test cases. */
static void misc_example_add_test_basic(struct kunit *test)
{
KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0));
KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1));
KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1));
KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX));
KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN));
}
static void misc_example_test_failure(struct kunit *test)
{
KUNIT_FAIL(test, "This test never passes.");
}
static struct kunit_case misc_example_test_cases[] = {
KUNIT_CASE(misc_example_add_test_basic),
KUNIT_CASE(misc_example_test_failure),
{}
};
static struct kunit_suite misc_example_test_suite = {
.name = "misc-example",
.test_cases = misc_example_test_cases,
};
kunit_test_suite(misc_example_test_suite);
Now add the following to ``drivers/misc/Kconfig``:
.. code-block:: kconfig
config MISC_EXAMPLE_TEST
bool "Test for my example"
depends on MISC_EXAMPLE && KUNIT
and the following to ``drivers/misc/Makefile``:
.. code-block:: make
obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o
Now add it to your ``kunitconfig``:
.. code-block:: none
CONFIG_MISC_EXAMPLE=y
CONFIG_MISC_EXAMPLE_TEST=y
Now you can run the test:
.. code-block:: bash
./tools/testing/kunit/kunit.py run
You should see the following failure:
.. code-block:: none
...
[16:08:57] [PASSED] misc-example:misc_example_add_test_basic
[16:08:57] [FAILED] misc-example:misc_example_test_failure
[16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17
[16:08:57] This test never passes.
...
Congrats! You just wrote your first KUnit test!
Next Steps
==========
* Check out the :doc:`usage` page for a more
in-depth explanation of KUnit.


@ -0,0 +1,576 @@
.. SPDX-License-Identifier: GPL-2.0
===========
Using KUnit
===========
The purpose of this document is to describe what KUnit is, how it works, how it
is intended to be used, and all the concepts and terminology that are needed to
understand it. This guide assumes a working knowledge of the Linux kernel and
some basic knowledge of testing.
For a high level introduction to KUnit, including setting up KUnit for your
project, see :doc:`start`.
Organization of this document
=============================
This document is organized into two main sections: Testing and Isolating
Behavior. The first covers what a unit test is and how to use KUnit to write
them. The second covers how to use KUnit to isolate code and make it possible
to unit test code that was otherwise un-unit-testable.
Testing
=======
What is KUnit?
--------------
"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
Framework." KUnit is intended first and foremost for writing unit tests; it is
general enough that it can be used to write integration tests; however, this is
a secondary goal. KUnit has no ambition of being the only testing framework for
the kernel; for example, it does not intend to be an end-to-end testing
framework.
What is Unit Testing?
---------------------
A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is a test that
tests code at the smallest possible scope, a *unit* of code. In the C
programming language that's a function.
Unit tests should be written for all the publicly exposed functions in a
compilation unit; that is, all the functions that are exported in a
*class* (defined below) as well as all functions which are **not** static.
Writing Tests
-------------
Test Cases
~~~~~~~~~~
The fundamental unit in KUnit is the test case. A test case is a function with
the signature ``void (*)(struct kunit *test)``. It calls a function to be tested
and then sets *expectations* for what should happen. For example:
.. code-block:: c
void example_test_success(struct kunit *test)
{
}
void example_test_failure(struct kunit *test)
{
KUNIT_FAIL(test, "This test never passes.");
}
In the above example ``example_test_success`` always passes because it does
nothing; no expectations are set, so all expectations pass. On the other hand
``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
a special expectation that logs a message and causes the test case to fail.
Expectations
~~~~~~~~~~~~
An *expectation* is a way to specify that you expect a piece of code to do
something in a test. An expectation is called like a function. A test is made
by setting expectations about the behavior of a piece of code under test; when
one or more of the expectations fail, the test case fails and information about
the failure is logged. For example:
.. code-block:: c
void add_test_basic(struct kunit *test)
{
KUNIT_EXPECT_EQ(test, 1, add(1, 0));
KUNIT_EXPECT_EQ(test, 2, add(1, 1));
}
In the above example ``add_test_basic`` makes a number of assertions about the
behavior of a function called ``add``; the first parameter is always of type
``struct kunit *``, which contains information about the current test context;
the second parameter, in this case, is what the value is expected to be; the
last value is what the value actually is. If ``add`` passes all of these
expectations, the test case ``add_test_basic`` will pass; if any one of these
expectations fails, the test case will fail.
It is important to understand that a test case *fails* when any expectation is
violated; however, the test will continue running, potentially trying other
expectations until the test case ends or is otherwise terminated. This is as
opposed to *assertions* which are discussed later.
To learn about more expectations supported by KUnit, see :doc:`api/test`.
.. note::
A single test case should be pretty short, pretty easy to understand,
focused on a single behavior.
For example, if we wanted to properly test the add function above, we would
create additional tests cases which would each test a different property that an
add function should have like this:
.. code-block:: c
void add_test_basic(struct kunit *test)
{
KUNIT_EXPECT_EQ(test, 1, add(1, 0));
KUNIT_EXPECT_EQ(test, 2, add(1, 1));
}
void add_test_negative(struct kunit *test)
{
KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
}
void add_test_max(struct kunit *test)
{
KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
}
void add_test_overflow(struct kunit *test)
{
KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
}
Notice how it is immediately obvious what properties we are testing for.
Assertions
~~~~~~~~~~
KUnit also has the concept of an *assertion*. An assertion is just like an
expectation except the assertion immediately terminates the test case if it is
not satisfied.
For example:
.. code-block:: c
static void mock_test_do_expect_default_return(struct kunit *test)
{
struct mock_test_context *ctx = test->priv;
struct mock *mock = ctx->mock;
int param0 = 5, param1 = -5;
const char *two_param_types[] = {"int", "int"};
const void *two_params[] = {&param0, &param1};
const void *ret;
ret = mock->do_expect(mock,
"test_printk", test_printk,
two_param_types, two_params,
ARRAY_SIZE(two_params));
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
KUNIT_EXPECT_EQ(test, -4, *((int *) ret));
}
In this example, the method under test should return a pointer to a value, so
if the pointer returned by the method is null or an errno, we don't want to
bother continuing the test since the following expectation could crash the test
case. ``KUNIT_ASSERT_NOT_ERR_OR_NULL(...)`` allows us to bail out of the test
case if the appropriate conditions have not been satisfied to complete the test.
Test Suites
~~~~~~~~~~~
Now obviously one unit test isn't very helpful; the power comes from having
many test cases covering all of your behaviors. Consequently it is common to
have many *similar* tests; in order to reduce duplication in these closely
related tests most unit testing frameworks provide the concept of a *test
suite*, and KUnit does too. A *test suite* is just a collection of test cases
for a unit of code with a set up function that gets invoked before every test
case and a tear down function that gets invoked after every test case
completes.
Example:
.. code-block:: c
static struct kunit_case example_test_cases[] = {
KUNIT_CASE(example_test_foo),
KUNIT_CASE(example_test_bar),
KUNIT_CASE(example_test_baz),
{}
};
static struct kunit_suite example_test_suite = {
.name = "example",
.init = example_test_init,
.exit = example_test_exit,
.test_cases = example_test_cases,
};
kunit_test_suite(example_test_suite);
In the above example the test suite, ``example_test_suite``, would run the test
cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``,
each would have ``example_test_init`` called immediately before it and would
have ``example_test_exit`` called immediately after it.
``kunit_test_suite(example_test_suite)`` registers the test suite with the
KUnit test framework.
.. note::
A test case will only be run if it is associated with a test suite.
For more information on these types of things see :doc:`api/test`.
Isolating Behavior
==================
The most important aspect of unit testing that other forms of testing do not
provide is the ability to limit the amount of code under test to a single unit.
In practice, this is only possible by being able to control what code gets run
when the unit under test calls a function and this is usually accomplished
through some sort of indirection where a function is exposed as part of an API
such that the definition of that function can be changed without affecting the
rest of the code base. In the kernel this primarily comes from two constructs,
classes, structs that contain function pointers that are provided by the
implementer, and architecture specific functions which have definitions selected
at compile time.
Classes
-------
Classes are not a construct that is built into the C programming language;
however, it is an easily derived concept. Accordingly, pretty much every project
that does not use a standardized object oriented library (like GNOME's GObject)
has its own slightly different way of doing object oriented programming; the
Linux kernel is no exception.
The central concept in kernel object oriented programming is the class. In the
kernel, a *class* is a struct that contains function pointers. This creates a
contract between *implementers* and *users* since it forces them to use the
same function signature without having to call the function directly. In order
for it to truly be a class, the function pointers must specify that a pointer
to the class, known as a *class handle*, be one of the parameters; this makes
it possible for the member functions (also known as *methods*) to have access
to member variables (more commonly known as *fields*) allowing the same
implementation to have multiple *instances*.
Typically a class can be *overridden* by *child classes* by embedding the
*parent class* in the child class. Then when a method provided by the child
class is called, the child implementation knows that the pointer passed to it is
of a parent contained within the child; because of this, the child can compute
the pointer to itself because the pointer to the parent is always a fixed offset
from the pointer to the child; this offset is the offset of the parent contained
in the child struct. For example:
.. code-block:: c
struct shape {
int (*area)(struct shape *this);
};
struct rectangle {
struct shape parent;
int length;
int width;
};
int rectangle_area(struct shape *this)
{
struct rectangle *self = container_of(this, struct rectangle, parent);
return self->length * self->width;
};
void rectangle_new(struct rectangle *self, int length, int width)
{
self->parent.area = rectangle_area;
self->length = length;
self->width = width;
}
In this example (as in most kernel code) the operation of computing the pointer
to the child from the pointer to the parent is done by ``container_of``.
Faking Classes
~~~~~~~~~~~~~~
In order to unit test a piece of code that calls a method in a class, the
behavior of the method must be controllable, otherwise the test ceases to be a
unit test and becomes an integration test.
A fake just provides an implementation of a piece of code that is different than
what runs in a production instance, but behaves identically from the standpoint
of the callers; this is usually done to replace a dependency that is hard to
deal with, or is slow.
A good example for this might be implementing a fake EEPROM that just stores the
"contents" in an internal buffer. For example, let's assume we have a class that
represents an EEPROM:
.. code-block:: c
struct eeprom {
ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
};
And we want to test some code that buffers writes to the EEPROM:
.. code-block:: c
struct eeprom_buffer {
ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count);
int flush(struct eeprom_buffer *this);
size_t flush_count; /* Flushes when buffer exceeds flush_count. */
};
struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
void destroy_eeprom_buffer(struct eeprom_buffer *eeprom_buffer);
We can easily test this code by *faking out* the underlying EEPROM:
.. code-block:: c
struct fake_eeprom {
struct eeprom parent;
char contents[FAKE_EEPROM_CONTENTS_SIZE];
};
ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count)
{
struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
memcpy(buffer, this->contents + offset, count);
return count;
}
ssize_t fake_eeprom_write(struct eeprom *parent, size_t offset, const char *buffer, size_t count)
{
struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
memcpy(this->contents + offset, buffer, count);
return count;
}
void fake_eeprom_init(struct fake_eeprom *this)
{
this->parent.read = fake_eeprom_read;
this->parent.write = fake_eeprom_write;
memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE);
}
We can now use it to test ``struct eeprom_buffer``:
.. code-block:: c
struct eeprom_buffer_test {
struct fake_eeprom *fake_eeprom;
struct eeprom_buffer *eeprom_buffer;
};
static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test)
{
struct eeprom_buffer_test *ctx = test->priv;
struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
char buffer[] = {0xff};
eeprom_buffer->flush_count = SIZE_MAX;
eeprom_buffer->write(eeprom_buffer, buffer, 1);
KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
eeprom_buffer->write(eeprom_buffer, buffer, 1);
KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0);
eeprom_buffer->flush(eeprom_buffer);
KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
}
static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test)
{
struct eeprom_buffer_test *ctx = test->priv;
struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
char buffer[] = {0xff};
eeprom_buffer->flush_count = 2;
eeprom_buffer->write(eeprom_buffer, buffer, 1);
KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
eeprom_buffer->write(eeprom_buffer, buffer, 1);
KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
}
static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test)
{
struct eeprom_buffer_test *ctx = test->priv;
struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
char buffer[] = {0xff, 0xff};
eeprom_buffer->flush_count = 2;
eeprom_buffer->write(eeprom_buffer, buffer, 1);
KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
eeprom_buffer->write(eeprom_buffer, buffer, 2);
KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
/* Should have only flushed the first two bytes. */
KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0);
}
static int eeprom_buffer_test_init(struct kunit *test)
{
struct eeprom_buffer_test *ctx;
ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom);
fake_eeprom_init(ctx->fake_eeprom);
ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer);
test->priv = ctx;
return 0;
}
static void eeprom_buffer_test_exit(struct kunit *test)
{
struct eeprom_buffer_test *ctx = test->priv;
destroy_eeprom_buffer(ctx->eeprom_buffer);
}
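To actually run these test cases, they still need to be collected into a test
suite and registered with KUnit; a minimal sketch of that remaining boilerplate
(the suite name ``eeprom_buffer`` is arbitrary) might look like:
.. code-block:: c
	static struct kunit_case eeprom_buffer_test_cases[] = {
		KUNIT_CASE(eeprom_buffer_test_does_not_write_until_flush),
		KUNIT_CASE(eeprom_buffer_test_flushes_after_flush_count_met),
		KUNIT_CASE(eeprom_buffer_test_flushes_increments_of_flush_count),
		{}
	};
	static struct kunit_suite eeprom_buffer_test_suite = {
		.name = "eeprom_buffer",
		.init = eeprom_buffer_test_init,
		.exit = eeprom_buffer_test_exit,
		.test_cases = eeprom_buffer_test_cases,
	};
	kunit_test_suite(eeprom_buffer_test_suite);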
.. _kunit-on-non-uml:
KUnit on non-UML architectures
==============================
By default KUnit uses UML as a way to provide dependencies for code under test.
Under most circumstances KUnit's usage of UML should be treated as an
implementation detail of how KUnit works under the hood. Nevertheless, there
are instances where being able to run architecture-specific code, or to test
against real hardware, is desirable. For these reasons KUnit supports running on
other architectures.
Running existing KUnit tests on non-UML architectures
-----------------------------------------------------
There are some special considerations when running existing KUnit tests on
non-UML architectures:
* Hardware may not be deterministic, so a test that always passes or fails
when run under UML may not always do so on real hardware.
* Hardware and VM environments may not be hermetic. KUnit tries its best to
provide a hermetic environment to run tests; however, it cannot manage state
that it doesn't know about outside of the kernel. Consequently, tests that
may be hermetic on UML may not be hermetic on other architectures.
* Some features and tooling may not be supported outside of UML.
* Hardware and VMs are slower than UML.
None of these are reasons not to run your KUnit tests on real hardware; they are
only things to be aware of when doing so.
The biggest impediment will likely be that certain KUnit features and
infrastructure may not support your target environment. For example, at this
time the KUnit Wrapper (``tools/testing/kunit/kunit.py``) does not work outside
of UML. Unfortunately, there is no way around this. Using UML (or even just a
particular architecture) allows us to make a lot of assumptions that make it
possible to do things which might otherwise be impossible.
Nevertheless, all core KUnit framework features are fully supported on all
architectures, and using them is straightforward: all you need to do is to take
your kunitconfig, your Kconfig options for the tests you would like to run, and
merge them into whatever config you are using for your platform. That's it!
For example, let's say you have the following kunitconfig:
.. code-block:: none
CONFIG_KUNIT=y
CONFIG_KUNIT_EXAMPLE_TEST=y
If you wanted to run this test on an x86 VM, you might add the following config
options to your ``.config``:
.. code-block:: none
CONFIG_KUNIT=y
CONFIG_KUNIT_EXAMPLE_TEST=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
All these new options do is enable support for a common serial console needed
for logging.
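If you keep the fragment in a file (assumed here to be ``.kunitconfig`` at the
top of your kernel tree), one way to do the merge is with
``scripts/kconfig/merge_config.sh``:
.. code-block:: bash
	# -m only merges the fragment into .config; the olddefconfig step
	# below resolves any missing dependencies.
	./scripts/kconfig/merge_config.sh -m .config .kunitconfig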
Next, you could build a kernel with these tests as follows:
.. code-block:: bash
make ARCH=x86 olddefconfig
make ARCH=x86
Once you have built a kernel, you could run it on QEMU as follows:
.. code-block:: bash
qemu-system-x86_64 -enable-kvm \
-m 1024 \
-kernel arch/x86_64/boot/bzImage \
-append 'console=ttyS0' \
--nographic
Interspersed in the kernel logs you might see the following:
.. code-block:: none
TAP version 14
# Subtest: example
1..1
# example_simple_test: initializing
ok 1 - example_simple_test
ok 1 - example
Congratulations, you just ran a KUnit test on the x86 architecture!
Writing new tests for other architectures
-----------------------------------------
The first thing you must do is ask yourself whether it is necessary to write a
KUnit test for a specific architecture, and then whether it is necessary to
write that test for a particular piece of hardware. In general, writing a test
that depends on having access to a particular piece of hardware or software (not
included in the Linux source repo) should be avoided at all costs.
Even if you only ever plan on running your KUnit test on your hardware
configuration, other people may want to run your tests and may not have access
to your hardware. If you write your test to run on UML, then anyone can run your
tests without knowing anything about your particular setup, and you can still
run your tests on your hardware setup just by compiling for your architecture.
.. important::
Always prefer tests that run on UML to tests that only run under a particular
architecture, and always prefer tests that run under QEMU or another easily
(and freely) obtainable software environment to tests that require a specific
piece of hardware.
Nevertheless, there are still valid reasons to write an architecture or hardware
specific test: for example, you might want to test some code that really belongs
in ``arch/some-arch/*``. Even so, try your best to write the test so that it
does not depend on physical hardware: if some of your test cases don't need the
hardware, require the hardware only for the test cases that actually do.
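For instance, one way to keep hardware-only cases optional is to compile them
into the suite only when an architecture- or board-specific Kconfig option is
enabled; the ``CONFIG_MY_DRIVER_HW_TEST`` symbol and the test case names below
are hypothetical:
.. code-block:: c
	static struct kunit_case my_driver_test_cases[] = {
		/* Pure software logic: runs anywhere, including UML. */
		KUNIT_CASE(my_driver_parse_register_layout_test),
	#ifdef CONFIG_MY_DRIVER_HW_TEST
		/* Touches real registers: only built when the hardware is available. */
		KUNIT_CASE(my_driver_hw_reset_test),
	#endif
		{}
	};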
Now that you have narrowed down exactly what bits are hardware specific, the
actual procedure for writing and running the tests is pretty much the same as
writing normal KUnit tests. One special caveat is that you have to reset
hardware state in between test cases; if this is not possible, you may only be
able to run one test case per invocation.
.. TODO(brendanhiggins@google.com): Add an actual example of an architecture
dependent KUnit test.

View File

@ -0,0 +1,60 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/crypto/allwinner,sun8i-ss.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Allwinner Security System v2 driver
maintainers:
- Corentin Labbe <corentin.labbe@gmail.com>
properties:
compatible:
enum:
- allwinner,sun8i-a83t-crypto
- allwinner,sun9i-a80-crypto
reg:
maxItems: 1
interrupts:
maxItems: 1
clocks:
items:
- description: Bus clock
- description: Module clock
clock-names:
items:
- const: bus
- const: mod
resets:
maxItems: 1
required:
- compatible
- reg
- interrupts
- clocks
- clock-names
- resets
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/clock/sun8i-a83t-ccu.h>
#include <dt-bindings/reset/sun8i-a83t-ccu.h>
crypto: crypto@1c15000 {
compatible = "allwinner,sun8i-a83t-crypto";
reg = <0x01c15000 0x1000>;
interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>;
resets = <&ccu RST_BUS_SS>;
clocks = <&ccu CLK_BUS_SS>, <&ccu CLK_SS>;
clock-names = "bus", "mod";
};

View File

@ -0,0 +1,52 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/crypto/amlogic,gxl-crypto.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Amlogic GXL Cryptographic Offloader
maintainers:
- Corentin Labbe <clabbe@baylibre.com>
properties:
compatible:
items:
- const: amlogic,gxl-crypto
reg:
maxItems: 1
interrupts:
items:
- description: "Interrupt for flow 0"
- description: "Interrupt for flow 1"
clocks:
maxItems: 1
clock-names:
const: blkmv
required:
- compatible
- reg
- interrupts
- clocks
- clock-names
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/clock/gxbb-clkc.h>
crypto: crypto-engine@c883e000 {
compatible = "amlogic,gxl-crypto";
reg = <0x0 0xc883e000 0x0 0x36>;
interrupts = <GIC_SPI 188 IRQ_TYPE_EDGE_RISING>, <GIC_SPI 189 IRQ_TYPE_EDGE_RISING>;
clocks = <&clkc CLKID_BLKMV>;
clock-names = "blkmv";
};

View File

@ -66,6 +66,9 @@ Sub-nodes:
details of individual regulator device can be found in:
Documentation/devicetree/bindings/regulator/regulator.txt
regulator-initial-mode may be specified for buck regulators using mode values
from include/dt-bindings/regulator/dlg,da9063-regulator.h.
- rtc : This node defines settings required for the Real-Time Clock associated
with the DA9062. There are currently no entries in this binding, however
compatible = "dlg,da9062-rtc" should be added if a node is created.
@ -96,6 +99,7 @@ Example:
regulator-max-microvolt = <1570000>;
regulator-min-microamp = <500000>;
regulator-max-microamp = <2000000>;
regulator-initial-mode = <DA9063_BUCK_MODE_SYNC>;
regulator-boot-on;
};
DA9062_LDO1: ldo1 {

View File

@ -16,3 +16,17 @@ value must be one of the following values:
ralink,mt7620a-soc
ralink,mt7620n-soc
ralink,mt7628a-soc
ralink,mt7688a-soc
2. Boards
GARDENA smart Gateway (MT7688)
This board is based on the MediaTek MT7688 and equipped with 128 MiB
of DDR, 8 MiB of flash (SPI NOR), and an additional 128 MiB of SPI NAND
storage.
------------------------------
Required root node properties:
- compatible = "gardena,smart-gateway-mt7688", "ralink,mt7688a-soc",
"ralink,mt7628a-soc";

View File

@ -0,0 +1,53 @@
* Cadence NAND controller
Required properties:
- compatible : "cdns,hp-nfc"
- reg : Contains two entries, each of which is a tuple consisting of a
physical address and length. The first entry is the address and
length of the controller register set. The second entry is the
address and length of the Slave DMA data port.
- reg-names: should contain "reg" and "sdma"
- #address-cells: should be 1. The cell encodes the chip select connection.
- #size-cells : should be 0.
- interrupts : The interrupt number.
- clocks: phandle of the controller core clock (nf_clk).
Optional properties:
- dmas: shall reference DMA channel associated to the NAND controller
- cdns,board-delay-ps : Estimated Board delay. The value includes the total
round trip delay for the signals and is used for deciding on values
associated with data read capture. The example formula for SDR mode is
the following:
board delay = RE#PAD delay + PCB trace to device + PCB trace from device
+ DQ PAD delay
Child nodes represent the available NAND chips.
Required properties of NAND chips:
- reg: shall contain the native Chip Select ids from 0 to max supported by
the cadence nand flash controller
See Documentation/devicetree/bindings/mtd/nand.txt for more details on
generic bindings.
Example:
nand_controller: nand-controller@60000000 {
compatible = "cdns,hp-nfc";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x60000000 0x10000>, <0x80000000 0x10000>;
reg-names = "reg", "sdma";
clocks = <&nf_clk>;
cdns,board-delay-ps = <4830>;
interrupts = <2 0>;
nand@0 {
reg = <0>;
label = "nand-1";
};
nand@1 {
reg = <1>;
label = "nand-2";
};
};

View File

@ -0,0 +1,22 @@
Flash device on Intel IXP4xx SoC
This is a regular CFI-compatible (Intel or AMD extended) flash chip with a
specific big-endian or mixed-endian memory access pattern.
Required properties:
- compatible : must be "intel,ixp4xx-flash", "cfi-flash";
- reg : memory address for the flash chip
- bank-width : width in bytes of flash interface, should be <2>
For the rest of the properties, see mtd-physmap.txt.
The device tree may optionally contain sub-nodes describing partitions of the
address space. See partition.txt for more detail.
Example:
flash@50000000 {
compatible = "intel,ixp4xx-flash", "cfi-flash";
reg = <0x50000000 0x01000000>;
bank-width = <2>;
};

View File

@ -44,6 +44,12 @@ Optional properties:
Admission Control Block supports reporting the number of packets in-flight in a
switch queue
- resets: a single phandle and reset identifier pair. See
Documentation/devicetree/bindings/reset/reset.txt for details.
- reset-names: If the "reset" property is specified, this property should have
the value "switch" to denote the switch reset line.
Port subnodes:
Optional properties:

View File

@ -2,7 +2,7 @@
Required properties:
- compatible: should contain one of "brcm,genet-v1", "brcm,genet-v2",
"brcm,genet-v3", "brcm,genet-v4", "brcm,genet-v5".
"brcm,genet-v3", "brcm,genet-v4", "brcm,genet-v5", "brcm,bcm2711-genet-v5".
- reg: address and length of the register set for the device
- interrupts and/or interrupts-extended: must be two cells, the first cell
is the general purpose interrupt line, while the second cell is the

View File

@ -14,6 +14,8 @@ Required properties:
* "brcm,bcm4330-bt"
* "brcm,bcm43438-bt"
* "brcm,bcm4345c5"
* "brcm,bcm43540-bt"
* "brcm,bcm4335a0"
Optional properties:

View File

@ -121,6 +121,11 @@ properties:
and is useful for determining certain configuration settings
such as flow control thresholds.
sfp:
$ref: /schemas/types.yaml#definitions/phandle
description:
Specifies a reference to a node representing a SFP cage.
tx-fifo-depth:
$ref: /schemas/types.yaml#definitions/uint32
description:

View File

@ -153,6 +153,11 @@ properties:
Delay after the reset was deasserted in microseconds. If
this property is missing the delay will be skipped.
sfp:
$ref: /schemas/types.yaml#definitions/phandle
description:
Specifies a reference to a node representing a SFP cage.
required:
- reg

View File

@ -9,6 +9,7 @@ Required properties:
- "aspeed,ast2400-mac"
- "aspeed,ast2500-mac"
- "aspeed,ast2600-mac"
- reg: Address and length of the register set for the device
- interrupts: Should contain ethernet controller interrupt
@ -23,6 +24,13 @@ Optional properties:
- no-hw-checksum: Used to disable HW checksum support. Here for backward
compatibility as the driver now should have correct defaults based on
the SoC.
- clocks: In accordance with the generic clock bindings. Must describe the MAC
IP clock, and optionally an RMII RCLK gate for the AST2500/AST2600. The
required MAC clock must be the first cell.
- clock-names:
- "MACCLK": The MAC IP clock
- "RCLK": Clock gate for the RMII RCLK
Example:

View File

@ -10,6 +10,11 @@ Optional properties:
absent, "rmii" is assumed.
- use-iram: Use LPC32xx internal SRAM (IRAM) for DMA buffering
Optional subnodes:
- mdio : specifies the mdio bus, used as a container for phy nodes according to
phy.txt in the same directory
Example:
mac: ethernet@31060000 {

View File

@ -0,0 +1,46 @@
* NXP Semiconductors PN532 NFC Controller
Required properties:
- compatible: Should be
- "nxp,pn532" Place a node with this inside the devicetree node of the bus
to which the NFC chip is connected.
Currently the kernel has phy bindings for uart and i2c.
- "nxp,pn532-i2c" (DEPRECATED) only works for the i2c binding.
- "nxp,pn533-i2c" (DEPRECATED) only works for the i2c binding.
Required properties if connected on i2c:
- clock-frequency: I²C work frequency.
- reg: for the I²C bus address. This is fixed at 0x24 for the PN532.
- interrupts: GPIO interrupt to which the chip is connected
Optional SoC Specific Properties:
- pinctrl-names: Contains only one value - "default".
- pinctrl-0: Specifies the pin control groups used for this controller.
Example (for ARM-based BeagleBone with PN532 on I2C2):
&i2c2 {
pn532: nfc@24 {
compatible = "nxp,pn532";
reg = <0x24>;
clock-frequency = <400000>;
interrupt-parent = <&gpio1>;
interrupts = <17 IRQ_TYPE_EDGE_FALLING>;
};
};
Example (for PN532 connected via uart):
uart4: serial@49042000 {
compatible = "ti,omap3-uart";
pn532: nfc {
compatible = "nxp,pn532";
};
};

View File

@ -1,29 +0,0 @@
* NXP Semiconductors PN532 NFC Controller
Required properties:
- compatible: Should be "nxp,pn532-i2c" or "nxp,pn533-i2c".
- clock-frequency: I²C work frequency.
- reg: address on the bus
- interrupts: GPIO interrupt to which the chip is connected
Optional SoC Specific Properties:
- pinctrl-names: Contains only one value - "default".
- pintctrl-0: Specifies the pin control groups used for this controller.
Example (for ARM-based BeagleBone with PN532 on I2C2):
&i2c2 {
pn532: pn532@24 {
compatible = "nxp,pn532-i2c";
reg = <0x24>;
clock-frequency = <400000>;
interrupt-parent = <&gpio1>;
interrupts = <17 IRQ_TYPE_EDGE_FALLING>;
};
};

View File

@ -0,0 +1,111 @@
# SPDX-License-Identifier: GPL-2.0+
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/qca,ar803x.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm Atheros AR803x PHY
maintainers:
- Andrew Lunn <andrew@lunn.ch>
- Florian Fainelli <f.fainelli@gmail.com>
- Heiner Kallweit <hkallweit1@gmail.com>
description: |
Bindings for Qualcomm Atheros AR803x PHYs
allOf:
- $ref: ethernet-phy.yaml#
properties:
qca,clk-out-frequency:
description: Clock output frequency in Hertz.
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32
- enum: [ 25000000, 50000000, 62500000, 125000000 ]
qca,clk-out-strength:
description: Clock output driver strength.
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32
- enum: [ 0, 1, 2 ]
qca,keep-pll-enabled:
description: |
If set, keep the PLL enabled even if there is no link. Useful if you
want to use the clock output without an ethernet link.
Only supported on the AR8031.
type: boolean
vddio-supply:
description: |
RGMII I/O voltage regulator (see regulator/regulator.yaml).
The PHY supports RGMII I/O voltages of 1.5V, 1.8V and 2.5V. You can
either connect this to the vddio-regulator (1.5V / 1.8V) or the
vddh-regulator (2.5V).
Only supported on the AR8031.
vddio-regulator:
type: object
description:
Initial data for the VDDIO regulator. Set this to 1.5V or 1.8V.
allOf:
- $ref: /schemas/regulator/regulator.yaml
vddh-regulator:
type: object
description:
Dummy subnode to model the external connection of the PHY VDDH
regulator to VDDIO.
allOf:
- $ref: /schemas/regulator/regulator.yaml
examples:
- |
#include <dt-bindings/net/qca-ar803x.h>
ethernet {
#address-cells = <1>;
#size-cells = <0>;
phy-mode = "rgmii-id";
ethernet-phy@0 {
reg = <0>;
qca,clk-out-frequency = <125000000>;
qca,clk-out-strength = <AR803X_STRENGTH_FULL>;
vddio-supply = <&vddio>;
vddio: vddio-regulator {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
};
};
};
- |
#include <dt-bindings/net/qca-ar803x.h>
ethernet {
#address-cells = <1>;
#size-cells = <0>;
phy-mode = "rgmii-id";
ethernet-phy@0 {
reg = <0>;
qca,clk-out-frequency = <50000000>;
qca,keep-pll-enabled;
vddio-supply = <&vddh>;
vddh: vddh-regulator {
};
};
};

View File

@ -0,0 +1,114 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/renesas,ether.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Renesas Electronics SH EtherMAC
allOf:
- $ref: ethernet-controller.yaml#
maintainers:
- Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
properties:
compatible:
oneOf:
- items:
- enum:
- renesas,gether-r8a7740 # device is a part of R8A7740 SoC
- renesas,gether-r8a77980 # device is a part of R8A77980 SoC
- renesas,ether-r7s72100 # device is a part of R7S72100 SoC
- renesas,ether-r7s9210 # device is a part of R7S9210 SoC
- items:
- enum:
- renesas,ether-r8a7778 # device is a part of R8A7778 SoC
- renesas,ether-r8a7779 # device is a part of R8A7779 SoC
- enum:
- renesas,rcar-gen1-ether # a generic R-Car Gen1 device
- items:
- enum:
- renesas,ether-r8a7745 # device is a part of R8A7745 SoC
- renesas,ether-r8a7743 # device is a part of R8A7743 SoC
- renesas,ether-r8a7790 # device is a part of R8A7790 SoC
- renesas,ether-r8a7791 # device is a part of R8A7791 SoC
- renesas,ether-r8a7793 # device is a part of R8A7793 SoC
- renesas,ether-r8a7794 # device is a part of R8A7794 SoC
- enum:
- renesas,rcar-gen2-ether # a generic R-Car Gen2 or RZ/G1 device
reg:
items:
- description: E-DMAC/feLic registers
- description: TSU registers
minItems: 1
interrupts:
maxItems: 1
'#address-cells':
description: number of address cells for the MDIO bus
const: 1
'#size-cells':
description: number of size cells on the MDIO bus
const: 0
clocks:
maxItems: 1
pinctrl-0: true
pinctrl-names: true
renesas,no-ether-link:
type: boolean
description:
specify when a board does not provide a proper Ether LINK signal
renesas,ether-link-active-low:
type: boolean
description:
specify when the Ether LINK signal is active-low instead of normal
active-high
required:
- compatible
- reg
- interrupts
- phy-mode
- phy-handle
- '#address-cells'
- '#size-cells'
- clocks
- pinctrl-0
examples:
# Lager board
- |
#include <dt-bindings/clock/r8a7790-clock.h>
#include <dt-bindings/interrupt-controller/irq.h>
ethernet@ee700000 {
compatible = "renesas,ether-r8a7790", "renesas,rcar-gen2-ether";
reg = <0 0xee700000 0 0x400>;
interrupt-parent = <&gic>;
interrupts = <0 162 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7790_CLK_ETHER>;
phy-mode = "rmii";
phy-handle = <&phy1>;
pinctrl-0 = <&ether_pins>;
pinctrl-names = "default";
renesas,ether-link-active-low;
#address-cells = <1>;
#size-cells = <0>;
phy1: ethernet-phy@1 {
reg = <1>;
interrupt-parent = <&irqc0>;
interrupts = <0 IRQ_TYPE_LEVEL_LOW>;
pinctrl-0 = <&phy1_pins>;
pinctrl-names = "default";
};
};

View File

@ -1,69 +0,0 @@
* Renesas Electronics SH EtherMAC
This file provides information on what the device node for the SH EtherMAC
interface contains.
Required properties:
- compatible: Must contain one or more of the following:
"renesas,gether-r8a7740" if the device is a part of R8A7740 SoC.
"renesas,ether-r8a7743" if the device is a part of R8A7743 SoC.
"renesas,ether-r8a7745" if the device is a part of R8A7745 SoC.
"renesas,ether-r8a7778" if the device is a part of R8A7778 SoC.
"renesas,ether-r8a7779" if the device is a part of R8A7779 SoC.
"renesas,ether-r8a7790" if the device is a part of R8A7790 SoC.
"renesas,ether-r8a7791" if the device is a part of R8A7791 SoC.
"renesas,ether-r8a7793" if the device is a part of R8A7793 SoC.
"renesas,ether-r8a7794" if the device is a part of R8A7794 SoC.
"renesas,gether-r8a77980" if the device is a part of R8A77980 SoC.
"renesas,ether-r7s72100" if the device is a part of R7S72100 SoC.
"renesas,ether-r7s9210" if the device is a part of R7S9210 SoC.
"renesas,rcar-gen1-ether" for a generic R-Car Gen1 device.
"renesas,rcar-gen2-ether" for a generic R-Car Gen2 or RZ/G1
device.
When compatible with the generic version, nodes must list
the SoC-specific version corresponding to the platform
first followed by the generic version.
- reg: offset and length of (1) the E-DMAC/feLic register block (required),
(2) the TSU register block (optional).
- interrupts: interrupt specifier for the sole interrupt.
- phy-mode: see ethernet.txt file in the same directory.
- phy-handle: see ethernet.txt file in the same directory.
- #address-cells: number of address cells for the MDIO bus, must be equal to 1.
- #size-cells: number of size cells on the MDIO bus, must be equal to 0.
- clocks: clock phandle and specifier pair.
- pinctrl-0: phandle, referring to a default pin configuration node.
Optional properties:
- pinctrl-names: pin configuration state name ("default").
- renesas,no-ether-link: boolean, specify when a board does not provide a proper
Ether LINK signal.
- renesas,ether-link-active-low: boolean, specify when the Ether LINK signal is
active-low instead of normal active-high.
Example (Lager board):
ethernet@ee700000 {
compatible = "renesas,ether-r8a7790",
"renesas,rcar-gen2-ether";
reg = <0 0xee700000 0 0x400>;
interrupt-parent = <&gic>;
interrupts = <0 162 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7790_CLK_ETHER>;
phy-mode = "rmii";
phy-handle = <&phy1>;
pinctrl-0 = <&ether_pins>;
pinctrl-names = "default";
renesas,ether-link-active-low;
#address-cells = <1>;
#size-cells = <0>;
phy1: ethernet-phy@1 {
reg = <1>;
interrupt-parent = <&irqc0>;
interrupts = <0 IRQ_TYPE_LEVEL_LOW>;
pinctrl-0 = <&phy1_pins>;
pinctrl-names = "default";
};
};

View File

@ -0,0 +1,240 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/ti,cpsw-switch.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: TI SoC Ethernet Switch Controller (CPSW) Device Tree Bindings
maintainers:
- Grygorii Strashko <grygorii.strashko@ti.com>
- Sekhar Nori <nsekhar@ti.com>
description:
The 3-port switch gigabit ethernet subsystem provides ethernet packet
communication and can be configured as an ethernet switch. It provides the
gigabit media independent interface (GMII), reduced gigabit media
independent interface (RGMII), reduced media independent interface (RMII),
the management data input output (MDIO) for physical layer device (PHY)
management.
properties:
compatible:
oneOf:
- const: ti,cpsw-switch
- items:
- const: ti,am335x-cpsw-switch
- const: ti,cpsw-switch
- items:
- const: ti,am4372-cpsw-switch
- const: ti,cpsw-switch
- items:
- const: ti,dra7-cpsw-switch
- const: ti,cpsw-switch
reg:
maxItems: 1
description:
The physical base address and size of the full CPSW module IO range
ranges: true
clocks:
maxItems: 1
description: CPSW functional clock
clock-names:
maxItems: 1
items:
- const: fck
interrupts:
items:
- description: RX_THRESH interrupt
- description: RX interrupt
- description: TX interrupt
- description: MISC interrupt
interrupt-names:
items:
- const: "rx_thresh"
- const: "rx"
- const: "tx"
- const: "misc"
pinctrl-names: true
syscon:
$ref: /schemas/types.yaml#definitions/phandle
description:
Phandle to the system control device node which provides access to
efuse IO range with MAC addresses
ethernet-ports:
type: object
properties:
'#address-cells':
const: 1
'#size-cells':
const: 0
patternProperties:
"^port@[0-9]+$":
type: object
minItems: 1
maxItems: 2
description: CPSW external ports
allOf:
- $ref: ethernet-controller.yaml#
properties:
reg:
maxItems: 1
enum: [1, 2]
description: CPSW port number
phys:
$ref: /schemas/types.yaml#definitions/phandle-array
maxItems: 1
description: phandle on phy-gmii-sel PHY
label:
$ref: /schemas/types.yaml#/definitions/string-array
maxItems: 1
description: label associated with this port
ti,dual-emac-pvid:
$ref: /schemas/types.yaml#/definitions/uint32
maxItems: 1
minimum: 1
maximum: 1024
description:
Specifies default PORT VID to be used to segregate
ports. Default value - CPSW port number.
required:
- reg
- phys
mdio:
type: object
allOf:
- $ref: "ti,davinci-mdio.yaml#"
description:
CPSW MDIO bus.
cpts:
type: object
description:
The Common Platform Time Sync (CPTS) module
properties:
clocks:
maxItems: 1
description: CPTS reference clock
clock-names:
maxItems: 1
items:
- const: cpts
cpts_clock_mult:
$ref: /schemas/types.yaml#/definitions/uint32
description:
Numerator to convert input clock ticks into ns
cpts_clock_shift:
$ref: /schemas/types.yaml#/definitions/uint32
description:
Denominator to convert input clock ticks into ns.
Mult and shift will be calculated based on the CPTS rftclk frequency if
both cpts_clock_shift and cpts_clock_mult properties are not provided.
required:
- clocks
- clock-names
required:
- compatible
- reg
- ranges
- clocks
- clock-names
- interrupts
- interrupt-names
- '#address-cells'
- '#size-cells'
examples:
- |
#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/clock/dra7.h>
mac_sw: switch@0 {
compatible = "ti,dra7-cpsw-switch","ti,cpsw-switch";
reg = <0x0 0x4000>;
ranges = <0 0 0x4000>;
clocks = <&gmac_main_clk>;
clock-names = "fck";
#address-cells = <1>;
#size-cells = <1>;
syscon = <&scm_conf>;
pinctrl-names = "default", "sleep";
interrupts = <GIC_SPI 334 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 336 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 337 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "rx_thresh", "rx", "tx", "misc";
ethernet-ports {
#address-cells = <1>;
#size-cells = <0>;
cpsw_port1: port@1 {
reg = <1>;
label = "port1";
mac-address = [ 00 00 00 00 00 00 ];
phys = <&phy_gmii_sel 1>;
phy-handle = <&ethphy0_sw>;
phy-mode = "rgmii";
ti,dual-emac-pvid = <1>;
};
cpsw_port2: port@2 {
reg = <2>;
label = "wan";
mac-address = [ 00 00 00 00 00 00 ];
phys = <&phy_gmii_sel 2>;
phy-handle = <&ethphy1_sw>;
phy-mode = "rgmii";
ti,dual-emac-pvid = <2>;
};
};
davinci_mdio_sw: mdio@1000 {
compatible = "ti,cpsw-mdio","ti,davinci_mdio";
reg = <0x1000 0x100>;
clocks = <&gmac_clkctrl DRA7_GMAC_GMAC_CLKCTRL 0>;
clock-names = "fck";
#address-cells = <1>;
#size-cells = <0>;
bus_freq = <1000000>;
ethphy0_sw: ethernet-phy@0 {
reg = <0>;
};
ethphy1_sw: ethernet-phy@1 {
reg = <1>;
};
};
cpts {
clocks = <&gmac_clkctrl DRA7_GMAC_GMAC_CLKCTRL 25>;
clock-names = "cpts";
};
};

View File

@ -0,0 +1,84 @@
# SPDX-License-Identifier: GPL-2.0
# Copyright (C) 2019 Texas Instruments Incorporated
%YAML 1.2
---
$id: "http://devicetree.org/schemas/net/ti,dp83869.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"
title: TI DP83869 ethernet PHY
allOf:
- $ref: "ethernet-controller.yaml#"
maintainers:
- Dan Murphy <dmurphy@ti.com>
description: |
The DP83869HM device is a robust, fully-featured Gigabit (PHY) transceiver
with integrated PMD sublayers that supports 10BASE-Te, 100BASE-TX and
1000BASE-T Ethernet protocols. The DP83869 also supports 1000BASE-X and
100BASE-FX Fiber protocols.
This device interfaces to the MAC layer through Reduced GMII (RGMII) and
SGMII The DP83869HM supports Media Conversion in Managed mode. In this mode,
the DP83869HM can run 1000BASE-X-to-1000BASE-T and 100BASE-FX-to-100BASE-TX
conversions. The DP83869HM can also support Bridge Conversion from RGMII to
SGMII and SGMII to RGMII.
Specifications about the PHY can be found at:
http://www.ti.com/lit/ds/symlink/dp83869hm.pdf
properties:
reg:
maxItems: 1
ti,min-output-impedance:
type: boolean
description: |
MAC Interface Impedance control to set the programmable output impedance
to a minimum value (35 ohms).
ti,max-output-impedance:
type: boolean
description: |
MAC Interface Impedance control to set the programmable output impedance
to a maximum value (70 ohms).
tx-fifo-depth:
$ref: /schemas/types.yaml#definitions/uint32
description: |
Transmit FIFO depth see dt-bindings/net/ti-dp83869.h for values
rx-fifo-depth:
$ref: /schemas/types.yaml#definitions/uint32
description: |
Receive FIFO depth see dt-bindings/net/ti-dp83869.h for values
ti,clk-output-sel:
$ref: /schemas/types.yaml#definitions/uint32
description: |
Muxing option for CLK_OUT pin see dt-bindings/net/ti-dp83869.h for values.
ti,op-mode:
$ref: /schemas/types.yaml#definitions/uint32
description: |
Operational mode for the PHY. If this is not set then the operational
mode is set by the straps. see dt-bindings/net/ti-dp83869.h for values
required:
- reg
examples:
- |
#include <dt-bindings/net/ti-dp83869.h>
mdio0 {
#address-cells = <1>;
#size-cells = <0>;
ethphy0: ethernet-phy@0 {
reg = <0>;
tx-fifo-depth = <DP83869_PHYCR_FIFO_DEPTH_4_B_NIB>;
rx-fifo-depth = <DP83869_PHYCR_FIFO_DEPTH_4_B_NIB>;
ti,op-mode = <DP83869_RGMII_COPPER_ETHERNET>;
ti,max-output-impedance;
ti,clk-output-sel = <DP83869_CLK_O_SEL_CHN_A_RCLK>;
};
};

View File

@ -81,6 +81,12 @@ Optional properties:
Definition: Name of external front end module used. Some valid FEM names
for example: "microsemi-lx5586", "sky85703-11"
and "sky85803" etc.
- qcom,snoc-host-cap-8bit-quirk:
Usage: Optional
Value type: <empty>
Definition: Quirk specifying that the firmware expects the 8bit version
of the host capability QMI request
- qcom,xo-cal-data: xo cal offset to be configured in xo trim register.
Example (to supply PCI based wifi block details):

View File

@ -6,6 +6,7 @@ Required properties:
"arm,ccn-502"
"arm,ccn-504"
"arm,ccn-508"
"arm,ccn-512"
- reg: (standard registers property) physical address and size
(16MB) of the configuration registers block

View File

@ -5,6 +5,7 @@ Required properties:
- compatible: should be one of:
"fsl,imx8-ddr-pmu"
"fsl,imx8m-ddr-pmu"
"fsl,imx8mp-ddr-pmu"
- reg: physical address and size

View File

@ -0,0 +1,69 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/ptp/ptp-idtcm.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: IDT ClockMatrix (TM) PTP Clock Device Tree Bindings
maintainers:
- Vincent Cheng <vincent.cheng.xh@renesas.com>
properties:
compatible:
enum:
# For System Synchronizer
- idt,8a34000
- idt,8a34001
- idt,8a34002
- idt,8a34003
- idt,8a34004
- idt,8a34005
- idt,8a34006
- idt,8a34007
- idt,8a34008
- idt,8a34009
# For Port Synchronizer
- idt,8a34010
- idt,8a34011
- idt,8a34012
- idt,8a34013
- idt,8a34014
- idt,8a34015
- idt,8a34016
- idt,8a34017
- idt,8a34018
- idt,8a34019
# For Universal Frequency Translator (UFT)
- idt,8a34040
- idt,8a34041
- idt,8a34042
- idt,8a34043
- idt,8a34044
- idt,8a34045
- idt,8a34046
- idt,8a34047
- idt,8a34048
- idt,8a34049
reg:
maxItems: 1
description:
I2C slave address of the device.
required:
- compatible
- reg
examples:
- |
i2c@1 {
compatible = "abc,acme-1234";
reg = <0x01 0x400>;
#address-cells = <1>;
#size-cells = <0>;
phc@5b {
compatible = "idt,8a34000";
reg = <0x5b>;
};
};

View File

@ -50,6 +50,10 @@ properties:
description: startup time in microseconds
$ref: /schemas/types.yaml#/definitions/uint32
off-on-delay-us:
description: off delay time in microseconds
$ref: /schemas/types.yaml#/definitions/uint32
enable-active-high:
description:
Polarity of GPIO is Active high. If this property is missing,

View File

@ -28,6 +28,8 @@ Supported regulator node names:
PM8150L: smps1 - smps8, ldo1 - ldo11, bob, flash, rgb
PM8998: smps1 - smps13, ldo1 - ldo28, lvs1 - lvs2
PMI8998: bob
PM6150: smps1 - smps5, ldo1 - ldo19
PM6150L: smps1 - smps8, ldo1 - ldo11, bob
========================
First Level Nodes - PMIC
@ -43,6 +45,8 @@ First Level Nodes - PMIC
"qcom,pm8150l-rpmh-regulators"
"qcom,pm8998-rpmh-regulators"
"qcom,pmi8998-rpmh-regulators"
"qcom,pm6150-rpmh-regulators"
"qcom,pm6150l-rpmh-regulators"
- qcom,pmic-id
Usage: required

View File

@ -22,6 +22,7 @@ Regulator nodes are identified by their compatible:
"qcom,rpm-pm8841-regulators"
"qcom,rpm-pm8916-regulators"
"qcom,rpm-pm8941-regulators"
"qcom,rpm-pm8950-regulators"
"qcom,rpm-pm8994-regulators"
"qcom,rpm-pm8998-regulators"
"qcom,rpm-pma8084-regulators"
@ -54,6 +55,26 @@ Regulator nodes are identified by their compatible:
Definition: reference to regulator supplying the input pin, as
described in the data sheet
- vdd_s1-supply:
- vdd_s2-supply:
- vdd_s3-supply:
- vdd_s4-supply:
- vdd_s5-supply:
- vdd_s6-supply:
- vdd_l1_l19-supply:
- vdd_l2_l23-supply:
- vdd_l3-supply:
- vdd_l4_l5_l6_l7_l16-supply:
- vdd_l8_l11_l12_l17_l22-supply:
- vdd_l9_l10_l13_l14_l15_l18-supply:
- vdd_l20-supply:
- vdd_l21-supply:
Usage: optional (pm8950 only)
Value type: <phandle>
Definition: reference to regulator supplying the input pin, as
described in the data sheet
- vdd_s1-supply:
- vdd_s2-supply:
- vdd_s3-supply:

View File

@ -4,10 +4,12 @@ Qualcomm SPMI Regulators
Usage: required
Value type: <string>
Definition: must be one of:
"qcom,pm8004-regulators"
"qcom,pm8005-regulators"
"qcom,pm8841-regulators"
"qcom,pm8916-regulators"
"qcom,pm8941-regulators"
"qcom,pm8950-regulators"
"qcom,pm8994-regulators"
"qcom,pmi8994-regulators"
"qcom,pms405-regulators"
@ -72,6 +74,26 @@ Qualcomm SPMI Regulators
Definition: Reference to regulator supplying the input pin, as
described in the data sheet.
- vdd_s1-supply:
- vdd_s2-supply:
- vdd_s3-supply:
- vdd_s4-supply:
- vdd_s5-supply:
- vdd_s6-supply:
- vdd_l1_l19-supply:
- vdd_l2_l23-supply:
- vdd_l3-supply:
- vdd_l4_l5_l6_l7_l16-supply:
- vdd_l8_l11_l12_l17_l22-supply:
- vdd_l9_l10_l13_l14_l15_l18-supply:
- vdd_l20-supply:
- vdd_l21-supply:
Usage: optional (pm8950 only)
Value type: <phandle>
Definition: reference to regulator supplying the input pin, as
described in the data sheet
- vdd_s1-supply:
- vdd_s2-supply:
- vdd_s3-supply:
@ -139,6 +161,9 @@ The regulator node houses sub-nodes for each regulator within the device. Each
sub-node is identified using the node's name, with valid values listed for each
of the PMICs below.
pm8004:
s2, s5
pm8005:
s1, s2, s3, s4

View File

@ -38,7 +38,12 @@ properties:
type: boolean
regulator-boot-on:
description: bootloader/firmware enabled regulator
description: bootloader/firmware enabled regulator.
It's expected that this regulator was left on by the bootloader.
If the bootloader didn't leave it on then OS should turn it on
at boot but shouldn't prevent it from being turned off later.
This property is intended to only be used for regulators where
software cannot read the state of the regulator.
type: boolean
regulator-allow-bypass:

View File

@ -1,7 +1,7 @@
Atmel TRNG (True Random Number Generator) block
Required properties:
- compatible : Should be "atmel,at91sam9g45-trng"
- compatible : Should be "atmel,at91sam9g45-trng" or "microchip,sam9x60-trng"
- reg : Offset and length of the register set of this block
- interrupts : the interrupt number for the TRNG block
- clocks: should contain the TRNG clk source

View File

@ -0,0 +1,12 @@
NPCM SoC Random Number Generator
Required properties:
- compatible : "nuvoton,npcm750-rng" for the NPCM7XX BMC.
- reg : Specifies physical base address and size of the registers.
Example:
rng: rng@f000b000 {
compatible = "nuvoton,npcm750-rng";
reg = <0xf000b000 0x8>;
};

View File

@ -0,0 +1,27 @@
OMAP ROM RNG driver binding
Secure SoCs may provide RNG via secure ROM calls like Nokia N900 does. The
implementation can depend on the SoC secure ROM used.
- compatible:
Usage: required
Value type: <string>
Definition: must be "nokia,n900-rom-rng"
- clocks:
Usage: required
Value type: <prop-encoded-array>
Definition: reference to the RNG interface clock
- clock-names:
Usage: required
Value type: <stringlist>
Definition: must be "ick"
Example:
rom_rng: rng {
compatible = "nokia,n900-rom-rng";
clocks = <&rng_ick>;
clock-names = "ick";
};

View File

@ -0,0 +1,17 @@
Exynos True Random Number Generator
Required properties:
- compatible : Should be "samsung,exynos5250-trng".
- reg : Specifies base physical address and size of the registers map.
- clocks : Phandle to clock-controller plus clock-specifier pair.
- clock-names : "secss" as a clock name.
Example:
rng@10830600 {
compatible = "samsung,exynos5250-trng";
reg = <0x10830600 0x100>;
clocks = <&clock CLK_SSS>;
clock-names = "secss";
};

View File

@ -0,0 +1,19 @@
* H1 Secure Microcontroller with Cr50 Firmware on SPI Bus.
H1 Secure Microcontroller running Cr50 firmware provides several
functions, including TPM-like functionality. It communicates over
SPI using the FIFO protocol described in the PTP Spec, section 6.
Required properties:
- compatible: Should be "google,cr50".
- spi-max-frequency: Maximum SPI frequency.
Example:
&spi0 {
tpm@0 {
compatible = "google,cr50";
reg = <0>;
spi-max-frequency = <800000>;
};
};

View File

@ -0,0 +1,57 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/spi/renesas,hspi.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Renesas HSPI
maintainers:
- Geert Uytterhoeven <geert+renesas@glider.be>
allOf:
- $ref: spi-controller.yaml#
properties:
compatible:
items:
- enum:
- renesas,hspi-r8a7778 # R-Car M1A
- renesas,hspi-r8a7779 # R-Car H1
- const: renesas,hspi
reg:
maxItems: 1
interrupts:
maxItems: 1
clocks:
maxItems: 1
power-domains:
maxItems: 1
required:
- compatible
- reg
- interrupts
- clocks
- '#address-cells'
- '#size-cells'
examples:
- |
#include <dt-bindings/clock/r8a7778-clock.h>
#include <dt-bindings/interrupt-controller/irq.h>
hspi0: spi@fffc7000 {
compatible = "renesas,hspi-r8a7778", "renesas,hspi";
reg = <0xfffc7000 0x18>;
interrupts = <0 63 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp0_clks R8A7778_CLK_HSPI>;
power-domains = <&cpg_clocks>;
#address-cells = <1>;
#size-cells = <0>;
};

View File

@ -0,0 +1,11 @@
Renesas RZ/N1 SPI Controller
This controller is based on the Synopsys DW Synchronous Serial Interface and
inherits all properties defined in snps,dw-apb-ssi.txt except for the
compatible property.
Required properties:
- compatible : The device specific string followed by the generic RZ/N1 string.
Therefore it must be one of:
"renesas,r9a06g032-spi", "renesas,rzn1-spi"
"renesas,r9a06g033-spi", "renesas,rzn1-spi"

View File

@ -0,0 +1,159 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/spi/renesas,sh-msiof.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Renesas MSIOF SPI controller
maintainers:
- Geert Uytterhoeven <geert+renesas@glider.be>
allOf:
- $ref: spi-controller.yaml#
properties:
compatible:
oneOf:
- items:
- const: renesas,msiof-sh73a0 # SH-Mobile AG5
- const: renesas,sh-mobile-msiof # generic SH-Mobile compatible
# device
- items:
- enum:
- renesas,msiof-r8a7743 # RZ/G1M
- renesas,msiof-r8a7744 # RZ/G1N
- renesas,msiof-r8a7745 # RZ/G1E
- renesas,msiof-r8a77470 # RZ/G1C
- renesas,msiof-r8a7790 # R-Car H2
- renesas,msiof-r8a7791 # R-Car M2-W
- renesas,msiof-r8a7792 # R-Car V2H
- renesas,msiof-r8a7793 # R-Car M2-N
- renesas,msiof-r8a7794 # R-Car E2
- const: renesas,rcar-gen2-msiof # generic R-Car Gen2 and RZ/G1
# compatible device
- items:
- enum:
- renesas,msiof-r8a774a1 # RZ/G2M
- renesas,msiof-r8a774b1 # RZ/G2N
- renesas,msiof-r8a774c0 # RZ/G2E
- renesas,msiof-r8a7795 # R-Car H3
- renesas,msiof-r8a7796 # R-Car M3-W
- renesas,msiof-r8a77965 # R-Car M3-N
- renesas,msiof-r8a77970 # R-Car V3M
- renesas,msiof-r8a77980 # R-Car V3H
- renesas,msiof-r8a77990 # R-Car E3
- renesas,msiof-r8a77995 # R-Car D3
- const: renesas,rcar-gen3-msiof # generic R-Car Gen3 and RZ/G2
# compatible device
- items:
- const: renesas,sh-msiof # deprecated
reg:
minItems: 1
maxItems: 2
oneOf:
- items:
- description: CPU and DMA engine registers
- items:
- description: CPU registers
- description: DMA engine registers
interrupts:
maxItems: 1
clocks:
maxItems: 1
num-cs:
description: |
Total number of chip selects (default is 1).
Up to 3 native chip selects are supported:
0: MSIOF_SYNC
1: MSIOF_SS1
2: MSIOF_SS2
Hardware limitations related to chip selects:
- Native chip selects are always deasserted in between transfers
that are part of the same message. Use cs-gpios to work around
this.
- All slaves using native chip selects must use the same spi-cs-high
configuration. Use cs-gpios to work around this.
- When using GPIO chip selects, at least one native chip select must
be left unused, as it will be driven anyway.
minimum: 1
maximum: 3
default: 1
dmas:
minItems: 2
maxItems: 4
dma-names:
minItems: 2
maxItems: 4
items:
enum: [ tx, rx ]
renesas,dtdl:
description: delay sync signal (setup) in transmit mode.
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32
- enum:
- 0 # no bit delay
- 50 # 0.5-clock-cycle delay
- 100 # 1-clock-cycle delay
- 150 # 1.5-clock-cycle delay
- 200 # 2-clock-cycle delay
renesas,syncdl:
description: delay sync signal (hold) in transmit mode
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32
- enum:
- 0 # no bit delay
- 50 # 0.5-clock-cycle delay
- 100 # 1-clock-cycle delay
- 150 # 1.5-clock-cycle delay
- 200 # 2-clock-cycle delay
- 300 # 3-clock-cycle delay
renesas,tx-fifo-size:
# deprecated for soctype-specific bindings
description: |
Override the default TX fifo size. Unit is words. Ignored if 0.
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32
- maxItems: 1
default: 64
renesas,rx-fifo-size:
# deprecated for soctype-specific bindings
description: |
Override the default RX fifo size. Unit is words. Ignored if 0.
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32
- maxItems: 1
default: 64
required:
- compatible
- reg
- interrupts
- '#address-cells'
- '#size-cells'
examples:
- |
#include <dt-bindings/clock/r8a7791-clock.h>
#include <dt-bindings/interrupt-controller/irq.h>
msiof0: spi@e6e20000 {
compatible = "renesas,msiof-r8a7791", "renesas,rcar-gen2-msiof";
reg = <0 0xe6e20000 0 0x0064>;
interrupts = <0 156 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp0_clks R8A7791_CLK_MSIOF0>;
dmas = <&dmac0 0x51>, <&dmac0 0x52>;
dma-names = "tx", "rx";
#address-cells = <1>;
#size-cells = <0>;
};

View File

@ -1,26 +0,0 @@
Renesas HSPI.
Required properties:
- compatible : "renesas,hspi-<soctype>", "renesas,hspi" as fallback.
Examples with soctypes are:
- "renesas,hspi-r8a7778" (R-Car M1)
- "renesas,hspi-r8a7779" (R-Car H1)
- reg : Offset and length of the register set for the device
- interrupts : Interrupt specifier
- #address-cells : Must be <1>
- #size-cells : Must be <0>
Pinctrl properties might be needed, too. See
Documentation/devicetree/bindings/pinctrl/renesas,*.
Example:
hspi0: spi@fffc7000 {
compatible = "renesas,hspi-r8a7778", "renesas,hspi";
reg = <0xfffc7000 0x18>;
interrupt-parent = <&gic>;
interrupts = <0 63 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <0>;
};

View File

@ -1,105 +0,0 @@
Renesas MSIOF spi controller
Required properties:
- compatible : "renesas,msiof-r8a7743" (RZ/G1M)
"renesas,msiof-r8a7744" (RZ/G1N)
"renesas,msiof-r8a7745" (RZ/G1E)
"renesas,msiof-r8a77470" (RZ/G1C)
"renesas,msiof-r8a774a1" (RZ/G2M)
"renesas,msiof-r8a774c0" (RZ/G2E)
"renesas,msiof-r8a7790" (R-Car H2)
"renesas,msiof-r8a7791" (R-Car M2-W)
"renesas,msiof-r8a7792" (R-Car V2H)
"renesas,msiof-r8a7793" (R-Car M2-N)
"renesas,msiof-r8a7794" (R-Car E2)
"renesas,msiof-r8a7795" (R-Car H3)
"renesas,msiof-r8a7796" (R-Car M3-W)
"renesas,msiof-r8a77965" (R-Car M3-N)
"renesas,msiof-r8a77970" (R-Car V3M)
"renesas,msiof-r8a77980" (R-Car V3H)
"renesas,msiof-r8a77990" (R-Car E3)
"renesas,msiof-r8a77995" (R-Car D3)
"renesas,msiof-sh73a0" (SH-Mobile AG5)
"renesas,sh-mobile-msiof" (generic SH-Mobile compatibile device)
"renesas,rcar-gen2-msiof" (generic R-Car Gen2 and RZ/G1 compatible device)
"renesas,rcar-gen3-msiof" (generic R-Car Gen3 and RZ/G2 compatible device)
"renesas,sh-msiof" (deprecated)
When compatible with the generic version, nodes
must list the SoC-specific version corresponding
to the platform first followed by the generic
version.
- reg : A list of offsets and lengths of the register sets for
the device.
If only one register set is present, it is to be used
by both the CPU and the DMA engine.
If two register sets are present, the first is to be
used by the CPU, and the second is to be used by the
DMA engine.
- interrupts : Interrupt specifier
- #address-cells : Must be <1>
- #size-cells : Must be <0>
Optional properties:
- clocks : Must contain a reference to the functional clock.
- num-cs : Total number of chip selects (default is 1).
Up to 3 native chip selects are supported:
0: MSIOF_SYNC
1: MSIOF_SS1
2: MSIOF_SS2
Hardware limitations related to chip selects:
- Native chip selects are always deasserted in
between transfers that are part of the same
message. Use cs-gpios to work around this.
- All slaves using native chip selects must use the
same spi-cs-high configuration. Use cs-gpios to
work around this.
- When using GPIO chip selects, at least one native
chip select must be left unused, as it will be
driven anyway.
- dmas : Must contain a list of two references to DMA
specifiers, one for transmission, and one for
reception.
- dma-names : Must contain a list of two DMA names, "tx" and "rx".
- spi-slave : Empty property indicating the SPI controller is used
in slave mode.
- renesas,dtdl : delay sync signal (setup) in transmit mode.
Must contain one of the following values:
0 (no bit delay)
50 (0.5-clock-cycle delay)
100 (1-clock-cycle delay)
150 (1.5-clock-cycle delay)
200 (2-clock-cycle delay)
- renesas,syncdl : delay sync signal (hold) in transmit mode.
Must contain one of the following values:
0 (no bit delay)
50 (0.5-clock-cycle delay)
100 (1-clock-cycle delay)
150 (1.5-clock-cycle delay)
200 (2-clock-cycle delay)
300 (3-clock-cycle delay)
Optional properties, deprecated for soctype-specific bindings:
- renesas,tx-fifo-size : Overrides the default tx fifo size given in words
(default is 64)
- renesas,rx-fifo-size : Overrides the default rx fifo size given in words
(default is 64)
Pinctrl properties might be needed, too. See
Documentation/devicetree/bindings/pinctrl/renesas,*.
Example:
msiof0: spi@e6e20000 {
compatible = "renesas,msiof-r8a7791",
"renesas,rcar-gen2-msiof";
reg = <0 0xe6e20000 0 0x0064>;
interrupts = <0 156 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp0_clks R8A7791_CLK_MSIOF0>;
dmas = <&dmac0 0x51>, <&dmac0 0x52>;
dma-names = "tx", "rx";
#address-cells = <1>;
#size-cells = <0>;
};

View File

@ -16,7 +16,8 @@ Required properties:
Optional properties:
- clock-names : Contains the names of the clocks:
"ssi_clk", for the core clock used to generate the external SPI clock.
"pclk", the interface clock, required for register access.
"pclk", the interface clock, required for register access. If a clock domain
used to enable this clock then it should be named "pclk_clkdomain".
- cs-gpios : Specifies the gpio pins to be used for chipselects.
- num-cs : The number of chipselects. If omitted, this will default to 4.
- reg-io-width : The I/O register width (in bytes) implemented by this

View File

@ -1,37 +0,0 @@
SiFive SPI controller Device Tree Bindings
------------------------------------------
Required properties:
- compatible : Should be "sifive,<chip>-spi" and "sifive,spi<version>".
Supported compatible strings are:
"sifive,fu540-c000-spi" for the SiFive SPI v0 as integrated
onto the SiFive FU540 chip, and "sifive,spi0" for the SiFive
SPI v0 IP block with no chip integration tweaks.
Please refer to sifive-blocks-ip-versioning.txt for details
- reg : Physical base address and size of SPI registers map
A second (optional) range can indicate memory mapped flash
- interrupts : Must contain one entry
- interrupt-parent : Must be core interrupt controller
- clocks : Must reference the frequency given to the controller
- #address-cells : Must be '1', indicating which CS to use
- #size-cells : Must be '0'
Optional properties:
- sifive,fifo-depth : Depth of hardware queues; defaults to 8
- sifive,max-bits-per-word : Maximum bits per word; defaults to 8
SPI RTL that corresponds to the IP block version numbers can be found here:
https://github.com/sifive/sifive-blocks/tree/master/src/main/scala/devices/spi
Example:
spi: spi@10040000 {
compatible = "sifive,fu540-c000-spi", "sifive,spi0";
reg = <0x0 0x10040000 0x0 0x1000 0x0 0x20000000 0x0 0x10000000>;
interrupt-parent = <&plic>;
interrupts = <51>;
clocks = <&tlclk>;
#address-cells = <1>;
#size-cells = <0>;
sifive,fifo-depth = <8>;
sifive,max-bits-per-word = <8>;
};

View File

@ -0,0 +1,86 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/spi/spi-sifive.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: SiFive SPI controller
maintainers:
- Pragnesh Patel <pragnesh.patel@sifive.com>
- Paul Walmsley <paul.walmsley@sifive.com>
- Palmer Dabbelt <palmer@sifive.com>
allOf:
- $ref: "spi-controller.yaml#"
properties:
compatible:
items:
- const: sifive,fu540-c000-spi
- const: sifive,spi0
description:
Should be "sifive,<chip>-spi" and "sifive,spi<version>".
Supported compatible strings are -
"sifive,fu540-c000-spi" for the SiFive SPI v0 as integrated
onto the SiFive FU540 chip, and "sifive,spi0" for the SiFive
SPI v0 IP block with no chip integration tweaks.
Please refer to sifive-blocks-ip-versioning.txt for details
SPI RTL that corresponds to the IP block version numbers can be found here -
https://github.com/sifive/sifive-blocks/tree/master/src/main/scala/devices/spi
reg:
maxItems: 1
description:
Physical base address and size of SPI registers map
A second (optional) range can indicate memory mapped flash
interrupts:
maxItems: 1
clocks:
maxItems: 1
description:
Must reference the frequency given to the controller
sifive,fifo-depth:
description:
Depth of hardware queues; defaults to 8
allOf:
- $ref: "/schemas/types.yaml#/definitions/uint32"
- enum: [ 8 ]
- default: 8
sifive,max-bits-per-word:
description:
Maximum bits per word; defaults to 8
allOf:
- $ref: "/schemas/types.yaml#/definitions/uint32"
- enum: [ 0, 1, 2, 3, 4, 5, 6, 7, 8 ]
- default: 8
required:
- compatible
- reg
- interrupts
- clocks
examples:
- |
spi: spi@10040000 {
compatible = "sifive,fu540-c000-spi", "sifive,spi0";
reg = <0x0 0x10040000 0x0 0x1000 0x0 0x20000000 0x0 0x10000000>;
interrupt-parent = <&plic>;
interrupts = <51>;
clocks = <&tlclk>;
#address-cells = <1>;
#size-cells = <0>;
sifive,fifo-depth = <8>;
sifive,max-bits-per-word = <8>;
};
...

View File

@ -1,47 +0,0 @@
* STMicroelectronics Quad Serial Peripheral Interface(QSPI)
Required properties:
- compatible: should be "st,stm32f469-qspi"
- reg: the first contains the register location and length.
the second contains the memory mapping address and length
- reg-names: should contain the reg names "qspi" "qspi_mm"
- interrupts: should contain the interrupt for the device
- clocks: the phandle of the clock needed by the QSPI controller
- A pinctrl must be defined to set pins in mode of operation for QSPI transfer
Optional properties:
- resets: must contain the phandle to the reset controller.
A spi flash (NOR/NAND) must be a child of spi node and could have some
properties. Also see jedec,spi-nor.txt.
Required properties:
- reg: chip-Select number (QSPI controller may connect 2 flashes)
- spi-max-frequency: max frequency of spi bus
Optional properties:
- spi-rx-bus-width: see ./spi-bus.txt for the description
- dmas: DMA specifiers for tx and rx dma. See the DMA client binding,
Documentation/devicetree/bindings/dma/dma.txt.
- dma-names: DMA request names should include "tx" and "rx" if present.
Example:
qspi: spi@a0001000 {
compatible = "st,stm32f469-qspi";
reg = <0xa0001000 0x1000>, <0x90000000 0x10000000>;
reg-names = "qspi", "qspi_mm";
interrupts = <91>;
resets = <&rcc STM32F4_AHB3_RESET(QSPI)>;
clocks = <&rcc 0 STM32F4_AHB3_CLOCK(QSPI)>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_qspi0>;
flash@0 {
compatible = "jedec,spi-nor";
reg = <0>;
spi-rx-bus-width = <4>;
spi-max-frequency = <108000000>;
...
};
};

View File

@ -8,7 +8,8 @@ Required properties:
number.
Optional properties:
- xlnx,num-ss-bits : Number of chip selects used.
- xlnx,num-ss-bits : Number of chip selects used.
- xlnx,num-transfer-bits : Number of bits per transfer. This will be 8 if not specified
Example:
axi_quad_spi@41e00000 {
@ -17,5 +18,6 @@ Example:
interrupts = <0 31 1>;
reg = <0x41e00000 0x10000>;
xlnx,num-ss-bits = <0x1>;
xlnx,num-transfer-bits = <32>;
};

View File

@ -0,0 +1,83 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/spi/st,stm32-qspi.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: STMicroelectronics STM32 Quad Serial Peripheral Interface (QSPI) bindings
maintainers:
- Christophe Kerello <christophe.kerello@st.com>
- Patrice Chotard <patrice.chotard@st.com>
allOf:
- $ref: "spi-controller.yaml#"
properties:
compatible:
const: st,stm32f469-qspi
reg:
items:
- description: registers
- description: memory mapping
reg-names:
items:
- const: qspi
- const: qspi_mm
clocks:
maxItems: 1
interrupts:
maxItems: 1
resets:
maxItems: 1
dmas:
items:
- description: tx DMA channel
- description: rx DMA channel
dma-names:
items:
- const: tx
- const: rx
required:
- compatible
- reg
- reg-names
- clocks
- interrupts
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/clock/stm32mp1-clks.h>
#include <dt-bindings/reset/stm32mp1-resets.h>
spi@58003000 {
compatible = "st,stm32f469-qspi";
reg = <0x58003000 0x1000>, <0x70000000 0x10000000>;
reg-names = "qspi", "qspi_mm";
interrupts = <GIC_SPI 92 IRQ_TYPE_LEVEL_HIGH>;
dmas = <&mdma1 22 0x10 0x100002 0x0 0x0>,
<&mdma1 22 0x10 0x100008 0x0 0x0>;
dma-names = "tx", "rx";
clocks = <&rcc QSPI_K>;
resets = <&rcc QSPI_R>;
#address-cells = <1>;
#size-cells = <0>;
flash@0 {
compatible = "jedec,spi-nor";
reg = <0>;
spi-rx-bus-width = <4>;
spi-max-frequency = <108000000>;
};
};
...

View File

@ -343,6 +343,8 @@ patternProperties:
description: Freescale Semiconductor
"^fujitsu,.*":
description: Fujitsu Ltd.
"^gardena,.*":
description: GARDENA GmbH
"^gateworks,.*":
description: Gateworks Corporation
"^gcw,.*":

View File

@ -250,23 +250,23 @@ High-level taskfile hooks
::
void (*qc_prep) (struct ata_queued_cmd *qc);
enum ata_completion_errors (*qc_prep) (struct ata_queued_cmd *qc);
int (*qc_issue) (struct ata_queued_cmd *qc);
Higher-level hooks, these two hooks can potentially supercede several of
Higher-level hooks, these two hooks can potentially supersede several of
the above taskfile/DMA engine hooks. ``->qc_prep`` is called after the
buffers have been DMA-mapped, and is typically used to populate the
hardware's DMA scatter-gather table. Most drivers use the standard
:c:func:`ata_qc_prep` helper function, but more advanced drivers roll their
own.
hardware's DMA scatter-gather table. Some drivers use the standard
:c:func:`ata_bmdma_qc_prep` and :c:func:`ata_bmdma_dumb_qc_prep` helper
functions, but more advanced drivers roll their own.
``->qc_issue`` is used to make a command active, once the hardware and S/G
tables have been prepared. IDE BMDMA drivers use the helper function
:c:func:`ata_qc_issue_prot` for taskfile protocol-based dispatch. More
:c:func:`ata_sff_qc_issue` for taskfile protocol-based dispatch. More
advanced drivers implement their own ``->qc_issue``.
:c:func:`ata_qc_issue_prot` calls ``->tf_load()``, ``->bmdma_setup()``, and
:c:func:`ata_sff_qc_issue` calls ``->sff_tf_load()``, ``->bmdma_setup()``, and
``->bmdma_start()`` as necessary to initiate a transfer.
Exception and probe handling (EH)

View File

@ -129,6 +129,8 @@ To facilitate such consumers NVMEM framework provides below apis::
struct nvmem_device *nvmem_device_get(struct device *dev, const char *name);
struct nvmem_device *devm_nvmem_device_get(struct device *dev,
const char *name);
struct nvmem_device *nvmem_device_find(void *data,
int (*match)(struct device *dev, const void *data));
void nvmem_device_put(struct nvmem_device *nvmem);
int nvmem_device_read(struct nvmem_device *nvmem, unsigned int offset,
size_t bytes, void *buf);
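A minimal usage sketch of nvmem_device_find(), based only on the prototype
above (the "example-nvmem" provider name and the match callback are made up
for illustration)::

	static int match_nvmem_name(struct device *dev, const void *data)
	{
		return sysfs_streq(dev_name(dev), data);
	}

	static struct nvmem_device *find_example_nvmem(void)
	{
		return nvmem_device_find((void *)"example-nvmem",
					 match_nvmem_name);
	}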

View File

@ -256,13 +256,8 @@ alternative master keys or to support rotating master keys. Instead,
the master keys may be wrapped in userspace, e.g. as is done by the
`fscrypt <https://github.com/google/fscrypt>`_ tool.
Including the inode number in the IVs was considered. However, it was
rejected as it would have prevented ext4 filesystems from being
resized, and by itself still wouldn't have been sufficient to prevent
the same key from being directly reused for both XTS and CTS-CBC.
DIRECT_KEY and per-mode keys
----------------------------
DIRECT_KEY policies
-------------------
The Adiantum encryption mode (see `Encryption modes and usage`_) is
suitable for both contents and filenames encryption, and it accepts
@ -285,6 +280,21 @@ IV. Moreover:
key derived using the KDF. Users may use the same master key for
other v2 encryption policies.
IV_INO_LBLK_64 policies
-----------------------
When FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64 is set in the fscrypt policy,
the encryption keys are derived from the master key, encryption mode
number, and filesystem UUID. This normally results in all files
protected by the same master key sharing a single contents encryption
key and a single filenames encryption key. To still encrypt different
files' data differently, inode numbers are included in the IVs.
Consequently, shrinking the filesystem may not be allowed.
This format is optimized for use with inline encryption hardware
compliant with the UFS or eMMC standards, which support only 64 IV
bits per I/O request and may have only a small number of keyslots.
Key identifiers
---------------
@ -308,8 +318,9 @@ If unsure, you should use the (AES-256-XTS, AES-256-CTS-CBC) pair.
AES-128-CBC was added only for low-powered embedded devices with
crypto accelerators such as CAAM or CESA that do not support XTS. To
use AES-128-CBC, CONFIG_CRYPTO_SHA256 (or another SHA-256
implementation) must be enabled so that ESSIV can be used.
use AES-128-CBC, CONFIG_CRYPTO_ESSIV and CONFIG_CRYPTO_SHA256 (or
another SHA-256 implementation) must be enabled so that ESSIV can be
used.
Adiantum is a (primarily) stream cipher-based mode that is fast even
on CPUs without dedicated crypto instructions. It's also a true
@ -341,10 +352,16 @@ a little endian number, except that:
is encrypted with AES-256 where the AES-256 key is the SHA-256 hash
of the file's data encryption key.
- In the "direct key" configuration (FSCRYPT_POLICY_FLAG_DIRECT_KEY
set in the fscrypt_policy), the file's nonce is also appended to the
IV. Currently this is only allowed with the Adiantum encryption
mode.
- With `DIRECT_KEY policies`_, the file's nonce is appended to the IV.
Currently this is only allowed with the Adiantum encryption mode.
- With `IV_INO_LBLK_64 policies`_, the logical block number is limited
to 32 bits and is placed in bits 0-31 of the IV. The inode number
(which is also limited to 32 bits) is placed in bits 32-63.
Note that because file logical block numbers are included in the IVs,
filesystems must enforce that blocks are never shifted around within
encrypted files, e.g. via "collapse range" or "insert range".
Filenames encryption
--------------------
@ -354,10 +371,10 @@ the requirements to retain support for efficient directory lookups and
filenames of up to 255 bytes, the same IV is used for every filename
in a directory.
However, each encrypted directory still uses a unique key; or
alternatively (for the "direct key" configuration) has the file's
nonce included in the IVs. Thus, IV reuse is limited to within a
single directory.
However, each encrypted directory still uses a unique key, or
alternatively has the file's nonce (for `DIRECT_KEY policies`_) or
inode number (for `IV_INO_LBLK_64 policies`_) included in the IVs.
Thus, IV reuse is limited to within a single directory.
With CTS-CBC, the IV reuse means that when the plaintext filenames
share a common prefix at least as long as the cipher block size (16
@ -431,12 +448,15 @@ This structure must be initialized as follows:
(1) for ``contents_encryption_mode`` and FSCRYPT_MODE_AES_256_CTS
(4) for ``filenames_encryption_mode``.
- ``flags`` must contain a value from ``<linux/fscrypt.h>`` which
identifies the amount of NUL-padding to use when encrypting
filenames. If unsure, use FSCRYPT_POLICY_FLAGS_PAD_32 (0x3).
Additionally, if the encryption modes are both
FSCRYPT_MODE_ADIANTUM, this can contain
FSCRYPT_POLICY_FLAG_DIRECT_KEY; see `DIRECT_KEY and per-mode keys`_.
- ``flags`` contains optional flags from ``<linux/fscrypt.h>``:
- FSCRYPT_POLICY_FLAGS_PAD_*: The amount of NUL padding to use when
encrypting filenames. If unsure, use FSCRYPT_POLICY_FLAGS_PAD_32
(0x3).
- FSCRYPT_POLICY_FLAG_DIRECT_KEY: See `DIRECT_KEY policies`_.
- FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64: See `IV_INO_LBLK_64
policies`_. This is mutually exclusive with DIRECT_KEY and is not
supported on v1 policies.
- For v2 encryption policies, ``__reserved`` must be zeroed.
@ -1089,7 +1109,7 @@ policy structs (see `Setting an encryption policy`_), except that the
context structs also contain a nonce. The nonce is randomly generated
by the kernel and is used as KDF input or as a tweak to cause
different files to be encrypted differently; see `Per-file keys`_ and
`DIRECT_KEY and per-mode keys`_.
`DIRECT_KEY policies`_.
Data path changes
-----------------

View File

@ -226,6 +226,14 @@ To do so, check for FS_VERITY_FL (0x00100000) in the returned flags.
The verity flag is not settable via FS_IOC_SETFLAGS. You must use
FS_IOC_ENABLE_VERITY instead, since parameters must be provided.
statx
-----
Since Linux v5.5, the statx() system call sets STATX_ATTR_VERITY if
the file has fs-verity enabled. This can perform better than
FS_IOC_GETFLAGS and FS_IOC_MEASURE_VERITY because it doesn't require
opening the file, and opening verity files can be expensive.
Accessing verity files
======================
@ -398,7 +406,7 @@ pages have been read into the pagecache. (See `Verifying data`_.)
ext4
----
ext4 supports fs-verity since Linux TODO and e2fsprogs v1.45.2.
ext4 supports fs-verity since Linux v5.4 and e2fsprogs v1.45.2.
To create verity files on an ext4 filesystem, the filesystem must have
been formatted with ``-O verity`` or had ``tune2fs -O verity`` run on
@ -434,7 +442,7 @@ also only supports extent-based files.
f2fs
----
f2fs supports fs-verity since Linux TODO and f2fs-tools v1.11.0.
f2fs supports fs-verity since Linux v5.4 and f2fs-tools v1.11.0.
To create verity files on an f2fs filesystem, the filesystem must have
been formatted with ``-O verity``.

View File

@ -233,6 +233,7 @@ Code Seq# Include File Comments
'f' 00-0F fs/ext4/ext4.h conflict!
'f' 00-0F linux/fs.h conflict!
'f' 00-0F fs/ocfs2/ocfs2_fs.h conflict!
'f' 13-27 linux/fscrypt.h
'f' 81-8F linux/fsverity.h
'g' 00-0F linux/usb/gadgetfs.h
'g' 20-2F linux/usb/g_printer.h

View File

@ -12,6 +12,7 @@ Kernel Livepatching
cumulative-patches
module-elf-format
shadow-vars
system-state
.. only:: subproject and html

View File

@ -0,0 +1,167 @@
====================
System State Changes
====================
Some users are really reluctant to reboot a system. This brings the need
to provide more livepatches and maintain some compatibility between them.
Maintaining more livepatches is much easier with cumulative livepatches.
Each new livepatch completely replaces any older one. It can keep,
add, and even remove fixes. And it is typically safe to replace any version
of the livepatch with any other one thanks to the atomic replace feature.
The problems might come with shadow variables and callbacks. They might
change the system behavior or state so that it is no longer safe to
go back and use an older livepatch or the original kernel code. Also
any new livepatch must be able to detect what changes have already been
done by the already installed livepatches.
This is where livepatch system state tracking becomes useful. It
makes it possible to:
- store data needed to manipulate and restore the system state
- define compatibility between livepatches using a change id
and version
1. Livepatch system state API
=============================
The state of the system might get modified either by several livepatch callbacks
or by the newly used code. Also it must be possible to find changes done by
already installed livepatches.
Each modified state is described by struct klp_state, see
include/linux/livepatch.h.
Each livepatch defines an array of struct klp_state entries. It lists
all states that the livepatch modifies.
The livepatch author must define the following two fields for each
struct klp_state:
- *id*
- Non-zero number used to identify the affected system state.
- *version*
- Number describing the variant of the system state change that
is supported by the given livepatch.
The state can be manipulated using two functions:
- *klp_get_state(patch, id)*
- Get struct klp_state associated with the given livepatch
and state id.
- *klp_get_prev_state(id)*
- Get struct klp_state associated with the given feature id and
already installed livepatches.
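For illustration, a cumulative livepatch that tracks one system state change
might declare and query it roughly as follows (a minimal sketch; the state id,
version and data handling are made up and the patched functions are omitted)::

	#include <linux/livepatch.h>
	#include <linux/module.h>
	#include <linux/slab.h>

	#define EXAMPLE_STATE_ID	1	/* made-up, non-zero change id */

	static struct klp_state states[] = {
		{
			.id = EXAMPLE_STATE_ID,
			.version = 2,
		},
		{ }
	};

	static struct klp_object objs[] = {
		/* the functions being patched would be listed here */
		{ }
	};

	static struct klp_patch patch = {
		.mod = THIS_MODULE,
		.objs = objs,
		.states = states,
		.replace = true,
	};

	/* Hooked up via klp_callbacks in the relevant klp_object (not shown). */
	static int example_pre_patch(struct klp_object *obj)
	{
		struct klp_state *state = klp_get_state(&patch, EXAMPLE_STATE_ID);
		struct klp_state *prev = klp_get_prev_state(EXAMPLE_STATE_ID);

		if (prev) {
			/* take over the data from the livepatch being replaced */
			state->data = prev->data;
			return 0;
		}

		state->data = kzalloc(16, GFP_KERNEL);
		return state->data ? 0 : -ENOMEM;
	}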
2. Livepatch compatibility
==========================
The system state version is used to prevent loading incompatible livepatches.
The check is done when the livepatch is enabled. The rules are:
- Any completely new system state modification is allowed.
- System state modifications with the same or higher version are allowed
for already modified system states.
- Cumulative livepatches must handle all system state modifications from
already installed livepatches.
- Non-cumulative livepatches are allowed to touch already modified
system states.
3. Supported scenarios
======================
Livepatches have their life-cycle and the same is true for the system
state changes. Every compatible livepatch has to support the following
scenarios:
- Modify the system state when the livepatch gets enabled and the state
has not already been modified by the livepatches that are being
replaced.
- Take over or update the system state modification when it has already
been done by a livepatch that is being replaced.
- Restore the original state when the livepatch is disabled.
- Restore the previous state when the transition is reverted.
It might be the original system state or the state modification
done by livepatches that were being replaced.
- Remove any already made changes when an error occurs and the livepatch
cannot get enabled.
4. Expected usage
=================
System states are usually modified by livepatch callbacks. The expected
role of each callback is as follows:
*pre_patch()*
- Allocate *state->data* when necessary. The allocation might fail
and *pre_patch()* is the only callback that could stop loading
of the livepatch. The allocation is not needed when the data
are already provided by previously installed livepatches.
- Do any other preparatory action that is needed by
the new code even before the transition gets finished.
For example, initialize *state->data*.
The system state itself is typically modified in *post_patch()*
when the entire system is able to handle it.
- Clean up its own mess in case of error. It might be done by custom
code or by calling *post_unpatch()* explicitly.
*post_patch()*
- Copy *state->data* from the previous livepatch when they are
compatible.
- Do the actual system state modification. Eventually allow
the new code to use it.
- Make sure that *state->data* has all necessary information.
- Free *state->data* from replaced livepatches when it is
no longer needed.
*pre_unpatch()*
- Prevent the code added by the livepatch from relying on the system
state change.
- Revert the system state modification.
*post_unpatch()*
- Distinguish transition reverse and livepatch disabling by
checking *klp_get_prev_state()*.
- In case of transition reverse, restore the previous system
state. It might mean doing nothing.
- Remove any settings or data that are no longer needed.
.. note::
*pre_unpatch()* typically does symmetric operations to *post_patch()*.
Except that it is called only when the livepatch is being disabled.
Therefore it does not need to care about any previously installed
livepatch.
*post_unpatch()* typically does symmetric operations to *pre_patch()*.
It might also be called during the transition reverse. Therefore it
has to handle the state of the previously installed livepatches.

View File

@ -40,13 +40,13 @@ allocates memory for this UMEM using whatever means it feels is most
appropriate (malloc, mmap, huge pages, etc). This memory area is then
registered with the kernel using the new setsockopt XDP_UMEM_REG. The
UMEM also has two rings: the FILL ring and the COMPLETION ring. The
fill ring is used by the application to send down addr for the kernel
FILL ring is used by the application to send down addr for the kernel
to fill in with RX packet data. References to these frames will then
appear in the RX ring once each packet has been received. The
completion ring, on the other hand, contains frame addr that the
COMPLETION ring, on the other hand, contains frame addr that the
kernel has transmitted completely and can now be used again by user
space, for either TX or RX. Thus, the frame addrs appearing in the
completion ring are addrs that were previously transmitted using the
COMPLETION ring are addrs that were previously transmitted using the
TX ring. In summary, the RX and FILL rings are used for the RX path
and the TX and COMPLETION rings are used for the TX path.
@ -91,11 +91,16 @@ Concepts
========
In order to use an AF_XDP socket, a number of associated objects need
to be setup.
to be set up. These objects and their options are explained in the
following sections.
Jonathan Corbet has also written an excellent article on LWN,
"Accelerating networking with AF_XDP". It can be found at
https://lwn.net/Articles/750845/.
For an overview on how AF_XDP works, you can also take a look at the
Linux Plumbers paper from 2018 on the subject:
http://vger.kernel.org/lpc_net2018_talks/lpc18_paper_af_xdp_perf-v2.pdf. Do
NOT consult the paper from 2017 on "AF_PACKET v4", the first attempt
at AF_XDP. Nearly everything changed since then. Jonathan Corbet has
also written an excellent article on LWN, "Accelerating networking
with AF_XDP". It can be found at https://lwn.net/Articles/750845/.
UMEM
----
@ -113,22 +118,22 @@ the next socket B can do this by setting the XDP_SHARED_UMEM flag in
struct sockaddr_xdp member sxdp_flags, and passing the file descriptor
of A to struct sockaddr_xdp member sxdp_shared_umem_fd.
The UMEM has two single-producer/single-consumer rings, that are used
The UMEM has two single-producer/single-consumer rings that are used
to transfer ownership of UMEM frames between the kernel and the
user-space application.
Rings
-----
There are a four different kind of rings: Fill, Completion, RX and
There are four different kinds of rings: FILL, COMPLETION, RX and
TX. All rings are single-producer/single-consumer, so the user-space
application needs explicit synchronization if multiple
processes/threads are reading/writing to them.
The UMEM uses two rings: Fill and Completion. Each socket associated
The UMEM uses two rings: FILL and COMPLETION. Each socket associated
with the UMEM must have an RX queue, TX queue or both. Say, that there
is a setup with four sockets (all doing TX and RX). Then there will be
one Fill ring, one Completion ring, four TX rings and four RX rings.
one FILL ring, one COMPLETION ring, four TX rings and four RX rings.
The rings are head(producer)/tail(consumer) based rings. A producer
writes the data ring at the index pointed out by struct xdp_ring
@ -146,7 +151,7 @@ The size of the rings need to be of size power of two.
UMEM Fill Ring
~~~~~~~~~~~~~~
The Fill ring is used to transfer ownership of UMEM frames from
The FILL ring is used to transfer ownership of UMEM frames from
user-space to kernel-space. The UMEM addrs are passed in the ring. As
an example, if the UMEM is 64k and each chunk is 4k, then the UMEM has
16 chunks and can pass addrs between 0 and 64k.
@ -164,8 +169,8 @@ chunks mode, then the incoming addr will be left untouched.
UMEM Completion Ring
~~~~~~~~~~~~~~~~~~~~
The Completion Ring is used transfer ownership of UMEM frames from
kernel-space to user-space. Just like the Fill ring, UMEM indicies are
The COMPLETION ring is used to transfer ownership of UMEM frames from
kernel-space to user-space. Just like the FILL ring, UMEM indices are
used.
Frames passed from the kernel to user-space are frames that have been
@ -181,7 +186,7 @@ The RX ring is the receiving side of a socket. Each entry in the ring
is a struct xdp_desc descriptor. The descriptor contains UMEM offset
(addr) and the length of the data (len).
If no frames have been passed to kernel via the Fill ring, no
If no frames have been passed to kernel via the FILL ring, no
descriptors will (or can) appear on the RX ring.
The user application consumes struct xdp_desc descriptors from this
@ -199,8 +204,24 @@ be relaxed in the future.
The user application produces struct xdp_desc descriptors to this
ring.
Libbpf
======
Libbpf is a helper library for eBPF and XDP that makes using these
technologies a lot simpler. It also contains specific helper functions
in tools/lib/bpf/xsk.h for facilitating the use of AF_XDP. It
contains two types of functions: those that can be used to make the
setup of an AF_XDP socket easier and ones that can be used in the data
plane to access the rings safely and quickly. To see an example of how
to use this API, please take a look at the sample application in
samples/bpf/xdpsock_user.c, which uses libbpf for both setup and data
plane operations.
We recommend that you use this library unless you have become a power
user. It will make your program a lot simpler.
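As a rough sketch of what the setup side looks like with these helpers (not a
complete program; the header path, frame counts and variable names are
illustrative assumptions):

.. code-block:: c

   #include <errno.h>
   #include <stdlib.h>
   #include <unistd.h>
   #include <bpf/xsk.h>	/* tools/lib/bpf/xsk.h; install path may differ */

   #define NUM_FRAMES	4096
   #define FRAME_SIZE	XSK_UMEM__DEFAULT_FRAME_SIZE

   static struct xsk_umem *umem;
   static struct xsk_socket *xsk;
   static struct xsk_ring_prod fq, tx;
   static struct xsk_ring_cons cq, rx;

   static int setup_xsk(const char *ifname, __u32 queue_id)
   {
        void *bufs;
        int err;

        /* Register a UMEM together with its FILL and COMPLETION rings. */
        if (posix_memalign(&bufs, getpagesize(), NUM_FRAMES * FRAME_SIZE))
                return -ENOMEM;

        err = xsk_umem__create(&umem, bufs, NUM_FRAMES * FRAME_SIZE,
                               &fq, &cq, NULL);
        if (err)
                return err;

        /* Create the socket with default ring sizes and bind it. */
        return xsk_socket__create(&xsk, ifname, queue_id, umem,
                                  &rx, &tx, NULL);
   }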
XSKMAP / BPF_MAP_TYPE_XSKMAP
----------------------------
============================
On XDP side there is a BPF map type BPF_MAP_TYPE_XSKMAP (XSKMAP) that
is used in conjunction with bpf_redirect_map() to pass the ingress
@ -216,21 +237,202 @@ queue 17. Only the XDP program executing for eth0 and queue 17 will
successfully pass data to the socket. Please refer to the sample
application (samples/bpf/) for an example.
Configuration Flags and Socket Options
======================================
These are the various configuration flags that can be used to control
and monitor the behavior of AF_XDP sockets.
XDP_COPY and XDP_ZERO_COPY bind flags
-------------------------------------
When you bind to a socket, the kernel will first try to use zero-copy
mode. If zero-copy is not supported, it will fall back on using copy
mode, i.e. copying all packets out to user space. But if you would
like to force a certain mode, you can use the following flags. If you
pass the XDP_COPY flag to the bind call, the kernel will force the
socket into copy mode. If it cannot use copy mode, the bind call will
fail with an error. Conversely, the XDP_ZERO_COPY flag will force the
socket into zero-copy mode or fail.
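For reference, forcing zero-copy mode at bind time could look roughly like
this (a hedged sketch; the function name, interface handling and error
handling are placeholders, not part of any sample):

.. code-block:: c

   #include <errno.h>
   #include <net/if.h>
   #include <string.h>
   #include <sys/socket.h>
   #include <linux/if_xdp.h>

   static int bind_zerocopy(int fd, const char *ifname, __u32 queue_id)
   {
        struct sockaddr_xdp sxdp;

        memset(&sxdp, 0, sizeof(sxdp));
        sxdp.sxdp_family = AF_XDP;
        sxdp.sxdp_ifindex = if_nametoindex(ifname);
        sxdp.sxdp_queue_id = queue_id;
        sxdp.sxdp_flags = XDP_ZERO_COPY;  /* or XDP_COPY to force copy mode */

        if (bind(fd, (struct sockaddr *)&sxdp, sizeof(sxdp)))
                return -errno;
        return 0;
   }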
XDP_SHARED_UMEM bind flag
-------------------------
This flag enables you to bind multiple sockets to the same UMEM, but
only if they share the same queue id. In this mode, each socket has
its own RX and TX rings, but the UMEM (tied to the first socket
created) only has a single FILL ring and a single COMPLETION
ring. To use this mode, create the first socket and bind it in the normal
way. Create a second socket and create an RX and a TX ring, or at
least one of them, but no FILL or COMPLETION rings as the ones from
the first socket will be used. In the bind call, set the
XDP_SHARED_UMEM option and provide the initial socket's fd in the
sxdp_shared_umem_fd field. You can attach an arbitrary number of extra
sockets this way.
Which socket will a packet then arrive on? This is decided by the XDP
program. Put all the sockets in the XSK_MAP and just indicate which
index in the array you would like to send each packet to. A simple
round-robin example of distributing packets is shown below:
.. code-block:: c
   #include <linux/bpf.h>
   #include "bpf_helpers.h"

   #define MAX_SOCKS 16

   struct {
        __uint(type, BPF_MAP_TYPE_XSKMAP);
        __uint(max_entries, MAX_SOCKS);
        __uint(key_size, sizeof(int));
        __uint(value_size, sizeof(int));
   } xsks_map SEC(".maps");

   static unsigned int rr;

   SEC("xdp_sock") int xdp_sock_prog(struct xdp_md *ctx)
   {
        rr = (rr + 1) & (MAX_SOCKS - 1);

        return bpf_redirect_map(&xsks_map, rr, XDP_DROP);
   }
Note, that since there is only a single set of FILL and COMPLETION
rings, and they are single producer, single consumer rings, you need
to make sure that multiple processes or threads do not use these rings
concurrently. There are no synchronization primitives in the
libbpf code that protect multiple users at this point in time.
Libbpf uses this mode if you create more than one socket tied to the
same umem. However, note that you need to supply the
XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD libbpf_flag with the
xsk_socket__create calls and load your own XDP program as there is no
built in one in libbpf that will route the traffic for you.
XDP_USE_NEED_WAKEUP bind flag
-----------------------------
This option adds support for a new flag called need_wakeup that is
present in the FILL ring and the TX ring, the rings for which user
space is a producer. When this option is set in the bind call, the
need_wakeup flag will be set if the kernel needs to be explicitly
woken up by a syscall to continue processing packets. If the flag is
zero, no syscall is needed.
If the flag is set on the FILL ring, the application needs to call
poll() to be able to continue to receive packets on the RX ring. This
can happen, for example, when the kernel has detected that there are no
more buffers on the FILL ring and no buffers left on the RX HW ring of
the NIC. In this case, interrupts are turned off as the NIC cannot
receive any packets (as there are no buffers to put them in), and the
need_wakeup flag is set so that user space can put buffers on the
FILL ring and then call poll() so that the kernel driver can put these
buffers on the HW ring and start to receive packets.
If the flag is set for the TX ring, it means that the application
needs to explicitly notify the kernel to send any packets put on the
TX ring. This can be accomplished either by a poll() call, as in the
RX path, or by calling sendto().
An example of how to use this flag can be found in
samples/bpf/xdpsock_user.c. An example with the use of libbpf helpers
would look like this for the TX path:
.. code-block:: c
   if (xsk_ring_prod__needs_wakeup(&my_tx_ring))
        sendto(xsk_socket__fd(xsk_handle), NULL, 0, MSG_DONTWAIT, NULL, 0);
I.e., only use the syscall if the flag is set.
We recommend that you always enable this mode as it usually leads to
better performance especially if you run the application and the
driver on the same core, but also if you use different cores for the
application and the kernel driver, as it reduces the number of
syscalls needed for the TX path.
XDP_{RX|TX|UMEM_FILL|UMEM_COMPLETION}_RING setsockopts
------------------------------------------------------
These setsockopts set the number of descriptors that the RX, TX,
FILL, and COMPLETION rings respectively should have. It is mandatory
to set the size of at least one of the RX and TX rings. If you set
both, you will be able to both receive and send traffic from your
application, but if you only want to do one of them, you can save
resources by only setting up one of them. Both the FILL ring and the
COMPLETION ring are mandatory as you need to have a UMEM tied to your
socket. But if the XDP_SHARED_UMEM flag is used, any socket after the
first one does not have a UMEM and should in that case not have any
FILL or COMPLETION rings created as the ones from the shared umem will
be used. Note, that the rings are single-producer single-consumer, so
do not try to access them from multiple processes at the same
time. See the XDP_SHARED_UMEM section.
In libbpf, you can create Rx-only and Tx-only sockets by supplying
NULL to the rx and tx arguments, respectively, to the
xsk_socket__create function.
If you create a Tx-only socket, we recommend that you do not put any
packets on the FILL ring. If you do this, drivers might think you are
going to receive something when you in fact will not, and this can
negatively impact performance.
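As an illustration of the raw setsockopt interface (a minimal sketch; the
ring size of 2048 is an arbitrary power of two and fd is assumed to be an
AF_XDP socket):

.. code-block:: c

   #include <errno.h>
   #include <sys/socket.h>
   #include <linux/if_xdp.h>

   static int config_rings(int fd)
   {
        int ring_size = 2048;

        if (setsockopt(fd, SOL_XDP, XDP_RX_RING,
                       &ring_size, sizeof(ring_size)) ||
            setsockopt(fd, SOL_XDP, XDP_TX_RING,
                       &ring_size, sizeof(ring_size)) ||
            setsockopt(fd, SOL_XDP, XDP_UMEM_FILL_RING,
                       &ring_size, sizeof(ring_size)) ||
            setsockopt(fd, SOL_XDP, XDP_UMEM_COMPLETION_RING,
                       &ring_size, sizeof(ring_size)))
                return -errno;
        return 0;
   }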
XDP_UMEM_REG setsockopt
-----------------------
This setsockopt registers a UMEM to a socket. This is the area that
contains all the buffers that packets can reside in. The call takes a
pointer to the beginning of this area and the size of it. Moreover, it
also has a parameter called chunk_size, which is the size that the UMEM is
divided into. It can only be 2K or 4K at the moment. If you have a
UMEM area that is 128K and a chunk size of 2K, this means that you
will be able to hold a maximum of 128K / 2K = 64 packets in your UMEM
area and that your largest packet size can be 2K.
There is also an option to set the headroom of each single buffer in
the UMEM. If you set this to N bytes, it means that the packet will
start N bytes into the buffer leaving the first N bytes for the
application to use. The final option is the flags field, but it will
be dealt with in separate sections for each UMEM flag.
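A rough sketch of the registration itself (the chunk size, headroom value and
the way the area is passed in are illustrative assumptions only):

.. code-block:: c

   #include <errno.h>
   #include <stdint.h>
   #include <string.h>
   #include <sys/socket.h>
   #include <linux/if_xdp.h>

   static int register_umem(int fd, void *area, __u64 size)
   {
        struct xdp_umem_reg mr;

        memset(&mr, 0, sizeof(mr));
        mr.addr = (__u64)(uintptr_t)area; /* start of the UMEM area */
        mr.len = size;                    /* total size of the area */
        mr.chunk_size = 2048;             /* 2K or 4K */
        mr.headroom = 0;                  /* no extra per-buffer headroom */

        if (setsockopt(fd, SOL_XDP, XDP_UMEM_REG, &mr, sizeof(mr)))
                return -errno;
        return 0;
   }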
XDP_STATISTICS getsockopt
-------------------------
Gets drop statistics of a socket that can be useful for debug
purposes. The supported statistics are shown below:
.. code-block:: c
   struct xdp_statistics {
        __u64 rx_dropped; /* Dropped for reasons other than invalid desc */
        __u64 rx_invalid_descs; /* Dropped due to invalid descriptor */
        __u64 tx_invalid_descs; /* Dropped due to invalid descriptor */
   };
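A hedged usage sketch (fd is assumed to be a bound AF_XDP socket; the helper
name is made up):

.. code-block:: c

   #include <stdio.h>
   #include <sys/socket.h>
   #include <linux/if_xdp.h>

   static void dump_stats(int fd)
   {
        struct xdp_statistics stats;
        socklen_t optlen = sizeof(stats);

        if (!getsockopt(fd, SOL_XDP, XDP_STATISTICS, &stats, &optlen))
                printf("rx_dropped: %llu\n",
                       (unsigned long long)stats.rx_dropped);
   }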
XDP_OPTIONS getsockopt
----------------------
Gets options from an XDP socket. The only one supported so far is
XDP_OPTIONS_ZEROCOPY which tells you if zero-copy is on or not.
Usage
=====
In order to use AF_XDP sockets there are two parts needed. The
In order to use AF_XDP sockets two parts are needed. The
user-space application and the XDP program. For a complete setup and
usage example, please refer to the sample application. The user-space
side is xdpsock_user.c and the XDP side is part of libbpf.
The XDP code sample included in tools/lib/bpf/xsk.c is the following::
The XDP code sample included in tools/lib/bpf/xsk.c is the following:
.. code-block:: c
SEC("xdp_sock") int xdp_sock_prog(struct xdp_md *ctx)
{
int index = ctx->rx_queue_index;
// A set entry here means that the correspnding queue_id
// A set entry here means that the corresponding queue_id
// has an active AF_XDP socket bound to it.
if (bpf_map_lookup_elem(&xsks_map, &index))
return bpf_redirect_map(&xsks_map, index, 0);
@ -238,7 +440,10 @@ The XDP code sample included in tools/lib/bpf/xsk.c is the following::
return XDP_PASS;
}
Naive ring dequeue and enqueue could look like this::
A simple but not so performant ring dequeue and enqueue could look
like this:
.. code-block:: c
// struct xdp_rxtx_ring {
// __u32 *producer;
@ -287,17 +492,16 @@ Naive ring dequeue and enqueue could look like this::
return 0;
}
For a more optimized version, please refer to the sample application.
But please use the libbpf functions as they are optimized and ready to
use. They will make your life easier.
Sample application
==================
There is a xdpsock benchmarking/test application included that
demonstrates how to use AF_XDP sockets with both private and shared
UMEMs. Say that you would like your UDP traffic from port 4242 to end
up in queue 16, that we will enable AF_XDP on. Here, we use ethtool
for this::
demonstrates how to use AF_XDP sockets with private UMEMs. Say that
you would like your UDP traffic from port 4242 to end up in queue 16,
that we will enable AF_XDP on. Here, we use ethtool for this::
ethtool -N p3p2 rx-flow-hash udp4 fn
ethtool -N p3p2 flow-type udp4 src-port 4242 dst-port 4242 \
@ -311,13 +515,18 @@ using::
For XDP_SKB mode, use the switch "-S" instead of "-N" and all options
can be displayed with "-h", as usual.
This sample application uses libbpf to make the setup and usage of
AF_XDP simpler. If you want to know how the raw uapi of AF_XDP is
really used to make something more advanced, take a look at the libbpf
code in tools/lib/bpf/xsk.[ch].
FAQ
=======
Q: I am not seeing any traffic on the socket. What am I doing wrong?
A: When a netdev of a physical NIC is initialized, Linux usually
allocates one Rx and Tx queue pair per core. So on a 8 core system,
allocates one RX and TX queue pair per core. So on a 8 core system,
queue ids 0 to 7 will be allocated, one per core. In the AF_XDP
bind call or the xsk_socket__create libbpf function call, you
specify a specific queue id to bind to and it is only the traffic
@ -343,9 +552,21 @@ A: When a netdev of a physical NIC is initialized, Linux usually
sudo ethtool -N <interface> flow-type udp4 src-port 4242 dst-port \
4242 action 2
A number of other ways are possible all up to the capabilitites of
A number of other ways are possible all up to the capabilities of
the NIC you have.
Q: Can I use the XSKMAP to implement a switch between different UMEMs
in copy mode?
A: The short answer is no, that is not supported at the moment. The
XSKMAP can only be used to switch traffic coming in on queue id X
to sockets bound to the same queue id X. The XSKMAP can contain
sockets bound to different queue ids, for example X and Y, but only
traffic coming in from queue id Y can be directed to sockets bound
to the same queue id Y. In zero-copy mode, you should use the
switch, or other distribution mechanism, in your NIC to direct
traffic to the correct queue id and socket.
Credits
=======

View File

@ -1,5 +1,5 @@
aQuantia AQtion Driver for the aQuantia Multi-Gigabit PCI Express Family of
Ethernet Adapters
Marvell(Aquantia) AQtion Driver for the aQuantia Multi-Gigabit PCI Express
Family of Ethernet Adapters
=============================================================================
Contents
@ -325,6 +325,46 @@ Supported ethtool options
Example:
ethtool -N eth0 flow-type udp4 action 0 loc 32
UDP GSO hardware offload
---------------------------------
UDP GSO allows boosting UDP Tx rates by offloading UDP header allocation
into hardware. A special userspace socket option is required for this;
it can be validated with /kernel/tools/testing/selftests/net/
udpgso_bench_tx -u -4 -D 10.0.1.1 -s 6300 -S 100
This will cause 100 byte sized UDP packets, formed from a single
6300 byte user buffer, to be sent out.
UDP GSO is configured by:
ethtool -K eth0 tx-udp-segmentation on
Private flags (testing)
---------------------------------
Atlantic driver supports private flags for hardware custom features:
$ ethtool --show-priv-flags ethX
Private flags for ethX:
DMASystemLoopback : off
PKTSystemLoopback : off
DMANetworkLoopback : off
PHYInternalLoopback: off
PHYExternalLoopback: off
Example:
$ ethtool --set-priv-flags ethX DMASystemLoopback on
DMASystemLoopback: DMA Host loopback.
PKTSystemLoopback: Packet buffer host loopback.
DMANetworkLoopback: Network side loopback on DMA block.
PHYInternalLoopback: Internal loopback on Phy.
PHYExternalLoopback: External loopback on Phy (with loopback ethernet cable).
Command Line Parameters
=======================
The following command line parameters are available on atlantic driver:
@ -426,7 +466,7 @@ Support
If an issue is identified with the released source code on the supported
kernel with a supported adapter, email the specific information related
to the issue to support@aquantia.com
to the issue to aqn_support@marvell.com
License
=======

View File

@ -129,9 +129,9 @@ CONFIG_AQUANTIA_PHY=y
DPAA Ethernet Frame Processing
==============================
On Rx, buffers for the incoming frames are retrieved from one of the three
existing buffers pools. The driver initializes and seeds these, each with
buffers of different sizes: 1KB, 2KB and 4KB.
On Rx, buffers for the incoming frames are retrieved from the dedicated
interface buffer pool. The driver initializes and seeds this pool
with one-page buffers.
On Tx, all transmitted frames are returned to the driver through Tx
confirmation frame queues. The driver is then responsible for freeing the
@ -254,7 +254,7 @@ The following statistics are exported for each interface through ethtool:
The driver also exports the following information in sysfs:
- the FQ IDs for each FQ type
/sys/devices/platform/dpaa-ethernet.0/net/<int>/fqids
/sys/devices/platform/soc/<addr>.fman/<addr>.ethernet/dpaa-ethernet.<id>/net/fm<nr>-mac<nr>/fqids
- the IDs of the buffer pools in use
/sys/devices/platform/dpaa-ethernet.0/net/<int>/bpids
- the ID of the buffer pool in use
/sys/devices/platform/soc/<addr>.fman/<addr>.ethernet/dpaa-ethernet.<id>/net/fm<nr>-mac<nr>/bpids

View File

@ -8,3 +8,4 @@ DPAA2 Documentation
overview
dpio-driver
ethernet-driver
mac-phy-support

View File

@ -0,0 +1,191 @@
.. SPDX-License-Identifier: GPL-2.0
.. include:: <isonum.txt>
=======================
DPAA2 MAC / PHY support
=======================
:Copyright: |copy| 2019 NXP
Overview
--------
The DPAA2 MAC / PHY support consists of a set of APIs that help DPAA2 network
drivers (dpaa2-eth, dpaa2-ethsw) interact with the PHY library.
DPAA2 Software Architecture
---------------------------
Among other DPAA2 objects, the fsl-mc bus exports DPNI objects (abstracting a
network interface) and DPMAC objects (abstracting a MAC). The dpaa2-eth driver
probes on the DPNI object and connects to and configures a DPMAC object with
the help of phylink.
Data connections may be established between a DPNI and a DPMAC, or between two
DPNIs. Depending on the connection type, the netif_carrier_[on/off] is handled
directly by the dpaa2-eth driver or by phylink.
.. code-block:: none
Sources of abstracted link state information presented by the MC firmware
+--------------------------------------+
+------------+ +---------+ | xgmac_mdio |
| net_device | | phylink |--| +-----+ +-----+ +-----+ +-----+ |
+------------+ +---------+ | | PHY | | PHY | | PHY | | PHY | |
| | | +-----+ +-----+ +-----+ +-----+ |
+------------------------------------+ | External MDIO bus |
| dpaa2-eth | +--------------------------------------+
+------------------------------------+
| | Linux
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| | MC firmware
| /| V
+----------+ / | +----------+
| | / | | |
| | | | | |
| DPNI |<------| |<------| DPMAC |
| | | | | |
| | \ |<---+ | |
+----------+ \ | | +----------+
\| |
|
+--------------------------------------+
| MC firmware polling MAC PCS for link |
| +-----+ +-----+ +-----+ +-----+ |
| | PCS | | PCS | | PCS | | PCS | |
| +-----+ +-----+ +-----+ +-----+ |
| Internal MDIO bus |
+--------------------------------------+
Depending on an MC firmware configuration setting, each MAC may be in one of two modes:
- DPMAC_LINK_TYPE_FIXED: the link state management is handled exclusively by
the MC firmware by polling the MAC PCS. Since there is no need to register a
phylink instance, the dpaa2-eth driver will not bind to the connected dpmac
object at all.
- DPMAC_LINK_TYPE_PHY: The MC firmware is left waiting for link state update
events, but those are in fact passed strictly between the dpaa2-mac (based on
phylink) and its attached net_device driver (dpaa2-eth, dpaa2-ethsw),
effectively bypassing the firmware.
Implementation
--------------
At probe time or when a DPNI's endpoint is dynamically changed, the dpaa2-eth
driver is responsible for finding out if the peer object is a DPMAC and, if this
is the case, for integrating it with PHYLINK using the dpaa2_mac_connect() API, which
will do the following:
- look up the device tree for a PHYLINK-compatible binding (phy-handle)
- create a PHYLINK instance associated with the received net_device
- connect to the PHY using phylink_of_phy_connect()
The following phylink_mac_ops callbacks are implemented:
- .validate() will populate the supported linkmodes with the MAC capabilities
only when the phy_interface_t is RGMII_* (at the moment, this is the only
link type supported by the driver).
- .mac_config() will configure the MAC in the new configuration using the
dpmac_set_link_state() MC firmware API.
- .mac_link_up() / .mac_link_down() will update the MAC link using the same
API described above.
At driver unbind() or when the DPNI object is disconnected from the DPMAC, the
dpaa2-eth driver calls dpaa2_mac_disconnect() which will, in turn, disconnect
from the PHY and destroy the PHYLINK instance.
In case of a DPNI-DPMAC connection, an 'ip link set dev eth0 up' would start
the following sequence of operations:
(1) phylink_start() called from .dev_open().
(2) The .mac_config() and .mac_link_up() callbacks are called by PHYLINK.
(3) In order to configure the HW MAC, the MC Firmware API
dpmac_set_link_state() is called.
(4) The firmware will eventually setup the HW MAC in the new configuration.
(5) A netif_carrier_on() call is made directly from PHYLINK on the associated
net_device.
(6) The dpaa2-eth driver handles the LINK_STATE_CHANGE irq in order to
enable/disable Rx taildrop based on the pause frame settings.
.. code-block:: none
+---------+ +---------+
| PHYLINK |-------------->| eth0 |
+---------+ (5) +---------+
(1) ^ |
| |
| v (2)
+-----------------------------------+
| dpaa2-eth |
+-----------------------------------+
| ^ (6)
| |
v (3) |
+---------+---------------+---------+
| DPMAC | | DPNI |
+---------+ +---------+
| MC Firmware |
+-----------------------------------+
|
|
v (4)
+-----------------------------------+
| HW MAC |
+-----------------------------------+
In case of a DPNI-DPNI connection, a usual sequence of operations looks like
the following:
(1) ip link set dev eth0 up
(2) The dpni_enable() MC API called on the associated fsl_mc_device.
(3) ip link set dev eth1 up
(4) The dpni_enable() MC API called on the associated fsl_mc_device.
(5) The LINK_STATE_CHANGED irq is received by both instances of the dpaa2-eth
driver because now the operational link state is up.
(6) The netif_carrier_on() is called on the exported net_device from
link_state_update().
.. code-block:: none
+---------+ +---------+
| eth0 | | eth1 |
+---------+ +---------+
| ^ ^ |
| | | |
(1) v | (6) (6) | v (3)
+---------+ +---------+
|dpaa2-eth| |dpaa2-eth|
+---------+ +---------+
| ^ ^ |
| | | |
(2) v | (5) (5) | v (4)
+---------+---------------+---------+
| DPNI | | DPNI |
+---------+ +---------+
| MC Firmware |
+-----------------------------------+
Exported API
------------
Any DPAA2 driver that drives endpoints of DPMAC objects should service its
_EVENT_ENDPOINT_CHANGED irq and connect/disconnect from the associated DPMAC
when necessary using the below listed API::
- int dpaa2_mac_connect(struct dpaa2_mac *mac);
- void dpaa2_mac_disconnect(struct dpaa2_mac *mac);
A phylink integration is necessary only when the partner DPMAC is not of TYPE_FIXED.
One can check for this condition using the below API::
- bool dpaa2_mac_is_type_fixed(struct fsl_mc_device *dpmac_dev, struct fsl_mc_io *mc_io);
Before connection to a MAC, the caller must allocate and populate the
dpaa2_mac structure with the associated net_device, a pointer to the MC portal
to be used and the actual fsl_mc_device structure of the DPMAC.
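For illustration only, the connect path described above might be wired up
roughly as follows; the dpaa2_mac field names (mc_dev, mc_io, net_dev) and the
helper name are assumptions based on the description above, not a verbatim
copy of the driver::

	/* Needs <linux/slab.h> plus the dpaa2-mac and fsl-mc headers. */
	static int example_connect_mac(struct fsl_mc_device *dpmac_dev,
				       struct fsl_mc_io *mc_io,
				       struct net_device *net_dev)
	{
		struct dpaa2_mac *mac;
		int err;

		/* Link fully managed by the MC firmware: nothing to do here. */
		if (dpaa2_mac_is_type_fixed(dpmac_dev, mc_io))
			return 0;

		mac = kzalloc(sizeof(*mac), GFP_KERNEL);
		if (!mac)
			return -ENOMEM;

		mac->mc_dev = dpmac_dev;
		mac->mc_io = mc_io;
		mac->net_dev = net_dev;

		err = dpaa2_mac_connect(mac);
		if (err)
			kfree(mac);
		return err;
	}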

View File

@ -154,6 +154,27 @@ User command examples:
values:
cmode runtime value smfs
enable_roce: RoCE enablement state
----------------------------------
RoCE enablement state controls driver support for RoCE traffic.
When RoCE is disabled, there is no gid table, only raw ethernet QPs are supported and traffic on the well known UDP RoCE port is handled as raw ethernet traffic.
To change RoCE enablement state a user must change the driverinit cmode value and run devlink reload.
User command examples:
- Disable RoCE::
$ devlink dev param set pci/0000:06:00.0 name enable_roce value false cmode driverinit
$ devlink dev reload pci/0000:06:00.0
- Read RoCE enablement state::
$ devlink dev param show pci/0000:06:00.0 name enable_roce
pci/0000:06:00.0:
name enable_roce type generic
values:
cmode driverinit value true
Devlink health reporters
========================

View File

@ -0,0 +1,209 @@
* Texas Instruments CPSW switchdev based ethernet driver 2.0
- Port renaming
On older udev versions renaming of ethX to swXpY will not be automatically
supported
In order to rename via udev:
ip -d link show dev sw0p1 | grep switchid
SUBSYSTEM=="net", ACTION=="add", ATTR{phys_switch_id}==<switchid>, \
ATTR{phys_port_name}!="", NAME="sw0$attr{phys_port_name}"
====================
# Dual mac mode
====================
- The new (cpsw_new.c) driver is operating in dual-emac mode by default, thus
working as 2 individual network interfaces. Main differences from legacy CPSW
driver are:
- optimized promiscuous mode: The P0_UNI_FLOOD (both ports) is enabled in
addition to ALLMULTI (current port) instead of ALE_BYPASS.
So, ports in promiscuous mode will keep the possibility of mcast and vlan filtering,
which provides significant benefits when ports are joined to the same bridge
but without enabling "switch" mode, or to different bridges.
- learning is disabled on ports as it does not make much sense for
segregated ports - there is no forwarding in HW.
- enabled basic support for devlink.
devlink dev show
platform/48484000.switch
devlink dev param show
platform/48484000.switch:
name switch_mode type driver-specific
values:
cmode runtime value false
name ale_bypass type driver-specific
values:
cmode runtime value false
Devlink configuration parameters
====================
See Documentation/networking/devlink-params-ti-cpsw-switch.txt
====================
# Bridging in dual mac mode
====================
The dual_mac mode requires two vids to be reserved for internal purposes,
which, by default, equal CPSW Port numbers. As result, bridge has to be
configured in vlan unaware mode or default_pvid has to be adjusted.
ip link add name br0 type bridge
ip link set dev br0 type bridge vlan_filtering 0
echo 0 > /sys/class/net/br0/bridge/default_pvid
ip link set dev sw0p1 master br0
ip link set dev sw0p2 master br0
- or -
ip link add name br0 type bridge
ip link set dev br0 type bridge vlan_filtering 0
echo 100 > /sys/class/net/br0/bridge/default_pvid
ip link set dev br0 type bridge vlan_filtering 1
ip link set dev sw0p1 master br0
ip link set dev sw0p2 master br0
====================
# Enabling "switch"
====================
The Switch mode can be enabled by configuring devlink driver parameter
"switch_mode" to 1/true:
devlink dev param set platform/48484000.switch \
name switch_mode value 1 cmode runtime
This can be done regardless of the state of the Port's netdev devices - UP/DOWN - but
the Port's netdev devices have to be UP before joining the bridge to avoid
overwriting the bridge configuration, as the CPSW switch driver completely reloads its
configuration when the first Port changes its state to UP.
When both interfaces have joined the bridge, the CPSW switch driver will enable
marking packets with the offload_fwd_mark flag unless "ale_bypass=0".
All configuration is implemented via switchdev API.
====================
# Bridge setup
====================
devlink dev param set platform/48484000.switch \
name switch_mode value 1 cmode runtime
ip link add name br0 type bridge
ip link set dev br0 type bridge ageing_time 1000
ip link set dev sw0p1 up
ip link set dev sw0p2 up
ip link set dev sw0p1 master br0
ip link set dev sw0p2 master br0
[*] bridge vlan add dev br0 vid 1 pvid untagged self
[*] if vlan_filtering=1. where default_pvid=1
=================
# On/off STP
=================
ip link set dev BRDEV type bridge stp_state 1/0
Note. Steps [*] are mandatory.
====================
# VLAN configuration
====================
bridge vlan add dev br0 vid 1 pvid untagged self <---- add cpu port to VLAN 1
Note. This step is mandatory for bridge/default_pvid.
=================
# Add extra VLANs
=================
1. untagged:
bridge vlan add dev sw0p1 vid 100 pvid untagged master
bridge vlan add dev sw0p2 vid 100 pvid untagged master
bridge vlan add dev br0 vid 100 pvid untagged self <---- Add cpu port to VLAN100
2. tagged:
bridge vlan add dev sw0p1 vid 100 master
bridge vlan add dev sw0p2 vid 100 master
bridge vlan add dev br0 vid 100 pvid tagged self <---- Add cpu port to VLAN100
====
FDBs
====
FDBs are automatically added on the appropriate switch port upon detection
Manually adding FDBs:
bridge fdb add aa:bb:cc:dd:ee:ff dev sw0p1 master vlan 100
bridge fdb add aa:bb:cc:dd:ee:fe dev sw0p2 master <---- Add on all VLANs
====
MDBs
====
MDBs are automatically added on the appropriate switch port upon detection
Manually adding MDBs:
bridge mdb add dev br0 port sw0p1 grp 239.1.1.1 permanent vid 100
bridge mdb add dev br0 port sw0p1 grp 239.1.1.1 permanent <---- Add on all VLANs
==================
Multicast flooding
==================
CPU port mcast_flooding is always on
Turning flooding on/off on switch ports:
bridge link set dev sw0p1 mcast_flood on/off
==================
Access and Trunk port
==================
bridge vlan add dev sw0p1 vid 100 pvid untagged master
bridge vlan add dev sw0p2 vid 100 master
bridge vlan add dev br0 vid 100 self
ip link add link br0 name br0.100 type vlan id 100
Note. Setting PVID on the bridge device itself works only for the
default VLAN (default_pvid).
=====================
NFS
=====================
The only way for NFS to work is by chrooting to a minimal environment when
switch configuration that will affect connectivity is needed.
Assuming you are booting NFS with the eth1 interface (the script is hacky and
it's just there to prove NFS is doable).
setup.sh:
#!/bin/sh
mkdir proc
mount -t proc none /proc
ifconfig br0 > /dev/null
if [ $? -ne 0 ]; then
echo "Setting up bridge"
ip link add name br0 type bridge
ip link set dev br0 type bridge ageing_time 1000
ip link set dev br0 type bridge vlan_filtering 1
ip link set eth1 down
ip link set eth1 name sw0p1
ip link set dev sw0p1 up
ip link set dev sw0p2 up
ip link set dev sw0p2 master br0
ip link set dev sw0p1 master br0
bridge vlan add dev br0 vid 1 pvid untagged self
ifconfig sw0p1 0.0.0.0
udhcpc -i br0
fi
umount /proc
run_nfs.sh:
#!/bin/sh
mkdir /tmp/root/bin -p
mkdir /tmp/root/lib -p
cp -r /lib/ /tmp/root/
cp -r /bin/ /tmp/root/
cp /sbin/ip /tmp/root/bin
cp /sbin/bridge /tmp/root/bin
cp /sbin/ifconfig /tmp/root/bin
cp /sbin/udhcpc /tmp/root/bin
cp /path/to/setup.sh /tmp/root/bin
chroot /tmp/root/ busybox sh /bin/setup.sh
run ./run_nfs.sh

View File

@ -0,0 +1,17 @@
flow_steering_mode [DEVICE, DRIVER-SPECIFIC]
Controls the flow steering mode of the driver.
Two modes are supported:
1. 'dmfs' - Device managed flow steering.
2. 'smfs' - Software/Driver managed flow steering.
In DMFS mode, the HW steering entities are created and
managed through the Firmware.
In SMFS mode, the HW steering entities are created and
managed by the driver directly in hardware
without firmware intervention.
Type: String
Configuration mode: runtime
enable_roce [DEVICE, GENERIC]
Enable handling of RoCE traffic in the device.
Enabled by default.
Configuration mode: driverinit

View File

@ -0,0 +1,7 @@
ATU_hash [DEVICE, DRIVER-SPECIFIC]
Select one of four possible hashing algorithms for
MAC addresses in the Address Translation Unit.
A value of 3 seems to work better than the default of
1 when many MAC addresses have the same OUI.
Configuration mode: runtime
Type: u8. 0-3 valid.

View File

@ -0,0 +1,10 @@
ale_bypass [DEVICE, DRIVER-SPECIFIC]
Allows enabling ALE_CONTROL(4).BYPASS mode for debug purposes.
All packets will be sent to the Host port only if enabled.
Type: bool
Configuration mode: runtime
switch_mode [DEVICE, DRIVER-SPECIFIC]
Enable switch mode
Type: bool
Configuration mode: runtime

View File

@ -65,3 +65,7 @@ reset_dev_on_drv_probe [DEVICE, GENERIC]
Reset only if device firmware can be found in the
filesystem.
Type: u8
enable_roce [DEVICE, GENERIC]
Enable handling of RoCE traffic in the device.
Type: Boolean

View File

@ -162,6 +162,67 @@ be added to the following table:
- ``drop``
- Traps packets that the device decided to drop because they could not be
enqueued to a transmission queue which is full
* - ``non_ip``
- ``drop``
- Traps packets that the device decided to drop because they need to
undergo a layer 3 lookup, but are not IP or MPLS packets
* - ``uc_dip_over_mc_dmac``
- ``drop``
- Traps packets that the device decided to drop because they need to be
routed and they have a unicast destination IP and a multicast destination
MAC
* - ``dip_is_loopback_address``
- ``drop``
- Traps packets that the device decided to drop because they need to be
routed and their destination IP is the loopback address (i.e., 127.0.0.0/8
and ::1/128)
* - ``sip_is_mc``
- ``drop``
- Traps packets that the device decided to drop because they need to be
routed and their source IP is multicast (i.e., 224.0.0.0/8 and ff::/8)
* - ``sip_is_loopback_address``
- ``drop``
- Traps packets that the device decided to drop because they need to be
routed and their source IP is the loopback address (i.e., 127.0.0.0/8 and ::1/128)
* - ``ip_header_corrupted``
- ``drop``
- Traps packets that the device decided to drop because they need to be
routed and their IP header is corrupted: wrong checksum, wrong IP version
or too short Internet Header Length (IHL)
* - ``ipv4_sip_is_limited_bc``
- ``drop``
- Traps packets that the device decided to drop because they need to be
routed and their source IP is limited broadcast (i.e., 255.255.255.255/32)
* - ``ipv6_mc_dip_reserved_scope``
- ``drop``
- Traps IPv6 packets that the device decided to drop because they need to
be routed and their IPv6 multicast destination IP has a reserved scope
(i.e., ffx0::/16)
* - ``ipv6_mc_dip_interface_local_scope``
- ``drop``
- Traps IPv6 packets that the device decided to drop because they need to
be routed and their IPv6 multicast destination IP has an interface-local scope
(i.e., ffx1::/16)
* - ``mtu_value_is_too_small``
- ``exception``
- Traps packets that should have been routed by the device, but were bigger
than the MTU of the egress interface
* - ``unresolved_neigh``
- ``exception``
- Traps packets that did not have a matching IP neighbour after routing
* - ``mc_reverse_path_forwarding``
- ``exception``
- Traps multicast IP packets that failed reverse-path forwarding (RPF)
check during multicast routing
* - ``reject_route``
- ``exception``
- Traps packets that hit reject routes (i.e., "unreachable", "prohibit")
* - ``ipv4_lpm_miss``
- ``exception``
- Traps unicast IPv4 packets that did not match any route
* - ``ipv6_lpm_miss``
- ``exception``
- Traps unicast IPv6 packets that did not match any route
Driver-specific Packet Traps
============================

View File

@ -770,10 +770,10 @@ Some core changes of the new internal format:
callq foo
mov %rax,%r13
mov %rbx,%rdi
mov $0x2,%esi
mov $0x3,%edx
mov $0x4,%ecx
mov $0x5,%r8d
mov $0x6,%esi
mov $0x7,%edx
mov $0x8,%ecx
mov $0x9,%r8d
callq bar
add %r13,%rax
mov -0x228(%rbp),%rbx

View File

@ -33,6 +33,7 @@ Contents:
scaling
tls
tls-offload
nfc
.. only:: subproject and html

Some files were not shown because too many files have changed in this diff.