.. SPDX-License-Identifier: GPL-2.0

====================================
HOWTO for the linux packet generator
====================================

Enable CONFIG_NET_PKTGEN to compile and build pktgen either in-kernel
or as a module. A module is preferred; modprobe pktgen if needed. Once
running, pktgen creates a thread for each CPU with affinity to that CPU.
Monitoring and controlling is done via /proc. It is easiest to select a
suitable sample script and configure that.

On a dual CPU::

    ps aux | grep pkt
    root       129  0.3  0.0    0    0 ?        SW    2003 523:20 [kpktgend_0]
    root       130  0.3  0.0    0    0 ?        SW    2003 509:50 [kpktgend_1]

For monitoring and control pktgen creates::

    /proc/net/pktgen/pgctrl
    /proc/net/pktgen/kpktgend_X
    /proc/net/pktgen/ethX

Tuning NIC for max performance
==============================

The default NIC settings are (likely) not tuned for pktgen's artificial
overload type of benchmarking, as this could hurt the normal use-case.

Specifically increasing the TX ring buffer in the NIC::

    # ethtool -G ethX tx 1024

A larger TX ring can improve pktgen's performance, while it can hurt
in the general case, 1) because the TX ring buffer might get larger
than the CPU's L1/L2 cache, 2) because it allows more queueing in the
NIC HW layer (which is bad for bufferbloat).

One should hesitate to conclude that packets/descriptors in the HW
TX ring cause delay. Drivers usually delay cleaning up the
ring-buffers for various performance reasons, and packets stalling
the TX ring might just be waiting for cleanup.

This cleanup issue is specifically the case for the driver ixgbe
(Intel 82599 chip). This driver (ixgbe) combines TX+RX ring cleanups,
and the cleanup interval is affected by the ethtool --coalesce setting
of parameter "rx-usecs".

For ixgbe use e.g. "30" resulting in approx 33K interrupts/sec (1/30*10^6)::

    # ethtool -C ethX rx-usecs 30

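To see whether the TX ring is still the bottleneck, the driver's queue-stall counters can be watched during a run. A minimal sketch (eth4 is a placeholder interface; the counter names tx_restart_queue/tx_busy are what ixgbe exposes, other drivers may use different names):

```shell
# Watch for the driver stopping/restarting the TX queue during a pktgen run.
# Counters that keep growing mean the TX ring is pushing back on pktgen.
ethtool -S eth4 | grep -E 'tx_restart_queue|tx_busy'
```

If these counters increase while pktgen runs, revisit the rx-usecs coalescing and TX ring size settings.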
Kernel threads
==============

Pktgen creates a thread for each CPU with affinity to that CPU.
Each thread is controlled through the procfile /proc/net/pktgen/kpktgend_X.

Example: /proc/net/pktgen/kpktgend_0::

    Running:
    Stopped: eth4@0
    Result: OK: add_device=eth4@0

Most important are the devices assigned to the thread.

The two basic thread commands are:

* add_device DEVICE@NAME -- adds a single device
* rem_device_all -- remove all associated devices

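Put together, attaching a device to the first thread might look like this (a sketch; eth4 is a placeholder interface name, and the pktgen module must be loaded):

```shell
# Control file of the pktgen thread bound to CPU 0
THREAD0=/proc/net/pktgen/kpktgend_0

# Start from a clean state, then assign eth4 to this thread
echo "rem_device_all" > $THREAD0
echo "add_device eth4@0" > $THREAD0

# The thread should now report: Stopped: eth4@0
cat $THREAD0
```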
When adding a device to a thread, a corresponding procfile is created
which is used for configuring this device. Thus, device names need to
be unique.

To support adding the same device to multiple threads, which is useful
with multi queue NICs, the device naming scheme is extended with "@":
device@something

The part after "@" can be anything, but it is custom to use the thread
number.

Viewing devices
===============

The Params section holds configured information. The Current section
holds running statistics. The Result is printed after a run or after
interruption. Example::

    /proc/net/pktgen/eth4@0

    Params: count 100000  min_pkt_size: 60  max_pkt_size: 60
        frags: 0  delay: 0  clone_skb: 64  ifname: eth4@0
        flows: 0 flowlen: 0
        queue_map_min: 0  queue_map_max: 0
        dst_min: 192.168.81.2  dst_max:
        src_min:   src_max:
        src_mac: 90:e2:ba:0a:56:b4 dst_mac: 00:1b:21:3c:9d:f8
        udp_src_min: 9  udp_src_max: 109  udp_dst_min: 9  udp_dst_max: 9
        src_mac_count: 0  dst_mac_count: 0
        Flags: UDPSRC_RND  NO_TIMESTAMP  QUEUE_MAP_CPU
    Current:
        pkts-sofar: 100000  errors: 0
        started: 623913381008us  stopped: 623913396439us idle: 25us
        seq_num: 100001  cur_dst_mac_offset: 0  cur_src_mac_offset: 0
        cur_saddr: 192.168.8.3  cur_daddr: 192.168.81.2
        cur_udp_dst: 9  cur_udp_src: 42
        cur_queue_map: 0
        flows: 0
    Result: OK: 15430(c15405+d25) usec, 100000 (60byte,0frags)
        6480562pps 3110Mb/sec (3110669760bps) errors: 0

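The numbers in the Result line hang together: pps is the packet count divided by the elapsed time, and the bit rate is pps times the 60-byte packet size in bits. Checking the example above with shell arithmetic:

```shell
# bit rate: 6480562 pps * 60 bytes * 8 bits = 3110669760 bps, as reported
echo $(( 6480562 * 60 * 8 ))

# pps estimate: 100000 packets / 15430 usec (scaled for integer math);
# slightly above the reported 6480562 pps, which uses a finer-grained timer
echo $(( 100000 * 1000000 / 15430 ))
```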
Configuring devices
===================

This is done via the /proc interface, and most easily done via pgset
as defined in the sample scripts.
You need to set the PGDEV environment variable to use the functions from
the sample scripts, i.e.::

    export PGDEV=/proc/net/pktgen/eth4@0
    source samples/pktgen/functions.sh

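If you would rather not depend on the sample helpers, a self-contained pgset is only a few lines (a sketch modeled on the sample scripts: it writes one command to $PGDEV and prints the kernel's error line unless the answer was OK):

```shell
pgset() {
    local result
    # write the command to the currently selected /proc file
    echo "$1" > "$PGDEV"
    # on success the kernel reports back "Result: OK: ..."
    result=$(grep "Result: OK:" "$PGDEV")
    if [ -z "$result" ]; then
        grep "Result:" "$PGDEV"    # show the error instead
    fi
}
```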
Examples::

    pg_ctrl start           starts injection.
    pg_ctrl stop            aborts injection. Also, ^C aborts generator.

    pgset "clone_skb 1"     sets the number of copies of the same packet
    pgset "clone_skb 0"     use single SKB for all transmits
    pgset "burst 8"         uses xmit_more API to queue 8 copies of the same
                            packet and update HW tx queue tail pointer once.
                            "burst 1" is the default
    pgset "pkt_size 9014"   sets packet size to 9014
    pgset "frags 5"         packet will consist of 5 fragments
    pgset "count 200000"    sets number of packets to send, set to zero
                            for continuous sends until explicitly stopped.

    pgset "delay 5000"      adds delay to hard_start_xmit(). nanoseconds

    pgset "dst 10.0.0.1"    sets IP destination address
                            (BEWARE! This generator is very aggressive!)

    pgset "dst_min 10.0.0.1"            Same as dst
    pgset "dst_max 10.0.0.254"          Set the maximum destination IP.
    pgset "src_min 10.0.0.1"            Set the minimum (or only) source IP.
    pgset "src_max 10.0.0.254"          Set the maximum source IP.
    pgset "dst6 fec0::1"     IPV6 destination address
    pgset "src6 fec0::2"     IPV6 source address
    pgset "dstmac 00:00:00:00:00:00"    sets MAC destination address
    pgset "srcmac 00:00:00:00:00:00"    sets MAC source address

    pgset "queue_map_min 0" Sets the min value of tx queue interval
    pgset "queue_map_max 7" Sets the max value of tx queue interval, for
                            multiqueue devices. To select queue 1 of a given
                            device, use queue_map_min=1 and queue_map_max=1

    pgset "src_mac_count 1" Sets the number of MACs we'll range through.
                            The 'minimum' MAC is what you set with srcmac.

    pgset "dst_mac_count 1" Sets the number of MACs we'll range through.
                            The 'minimum' MAC is what you set with dstmac.

    pgset "flag [name]"     Set a flag to determine behaviour. Current flags
                            are: IPSRC_RND # IP source is random (between min/max)
                                 IPDST_RND # IP destination is random
                                 UDPSRC_RND, UDPDST_RND,
                                 MACSRC_RND, MACDST_RND
                                 TXSIZE_RND, IPV6,
                                 MPLS_RND, VID_RND, SVID_RND
                                 FLOW_SEQ,
                                 QUEUE_MAP_RND # queue map random
                                 QUEUE_MAP_CPU # queue map mirrors smp_processor_id()
                                 UDPCSUM,
                                 IPSEC # IPsec encapsulation (needs CONFIG_XFRM)
                                 NODE_ALLOC # node specific memory allocation
                                 NO_TIMESTAMP # disable timestamping
    pgset 'flag ![name]'    Clear a flag to determine behaviour.
                            Note that you might need to use single quote in
                            interactive mode, so that your shell wouldn't expand
                            the specified flag as a history command.

    pgset "spi [SPI_VALUE]" Set specific SA used to transform packet.

    pgset "udp_src_min 9"   set UDP source port min, If < udp_src_max, then
                            cycle through the port range.

    pgset "udp_src_max 9"   set UDP source port max.
    pgset "udp_dst_min 9"   set UDP destination port min, If < udp_dst_max, then
                            cycle through the port range.
    pgset "udp_dst_max 9"   set UDP destination port max.

    pgset "mpls 0001000a,0002000a,0000000a" set MPLS labels (in this example
                                            outer label=16,middle label=32,
                                            inner label=0 (IPv4 NULL)) Note that
                                            there must be no spaces between the
                                            arguments. Leading zeros are required.
                                            Do not set the bottom of stack bit,
                                            that's done automatically. If you do
                                            set the bottom of stack bit, that
                                            indicates that you want to randomly
                                            generate that address and the flag
                                            MPLS_RND will be turned on. You
                                            can have any mix of random and fixed
                                            labels in the label stack.

    pgset "mpls 0"          turn off mpls (or any invalid argument works too!)

    pgset "vlan_id 77"       set VLAN ID 0-4095
    pgset "vlan_p 3"         set priority bit 0-7 (default 0)
    pgset "vlan_cfi 0"       set canonical format identifier 0-1 (default 0)

    pgset "svlan_id 22"      set SVLAN ID 0-4095
    pgset "svlan_p 3"        set priority bit 0-7 (default 0)
    pgset "svlan_cfi 0"      set canonical format identifier 0-1 (default 0)

    pgset "vlan_id 9999"     > 4095 remove vlan and svlan tags
    pgset "svlan 9999"       > 4095 remove svlan tag

    pgset "tos XX"           set former IPv4 TOS field (e.g. "tos 28" for AF11 no ECN, default 00)
    pgset "traffic_class XX" set former IPv6 TRAFFIC CLASS (e.g. "traffic_class B8" for EF no ECN, default 00)

    pgset "rate 300M"        set rate to 300 Mb/s
    pgset "ratep 1000000"    set rate to 1Mpps

    pgset "xmit_mode netif_receive"  RX inject into stack netif_receive_skb()
                                     Works with "burst" but not with "clone_skb".
                                     Default xmit_mode is "start_xmit".

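Tying the commands together, a minimal single-interface run might look like this (a sketch that mirrors the values of the Viewing devices example; eth4, the destination IP and MAC are placeholders, and root plus a loaded pktgen module are required):

```shell
# Thread 0: start clean and attach eth4
echo "rem_device_all"    > /proc/net/pktgen/kpktgend_0
echo "add_device eth4@0" > /proc/net/pktgen/kpktgend_0

# Device configuration
DEV=/proc/net/pktgen/eth4@0
echo "count 100000"              > $DEV
echo "clone_skb 64"              > $DEV
echo "pkt_size 60"               > $DEV
echo "delay 0"                   > $DEV
echo "dst 192.168.81.2"          > $DEV
echo "dst_mac 00:1b:21:3c:9d:f8" > $DEV

# Start injection (returns when the count is reached), then inspect results
echo "start" > /proc/net/pktgen/pgctrl
cat $DEV
```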
Sample scripts
==============

A collection of tutorial scripts and helpers for pktgen is in the
samples/pktgen directory. The helper parameters.sh file supports easy
and consistent parameter parsing across the sample scripts.

Usage example and help::

    ./pktgen_sample01_simple.sh -i eth4 -m 00:1B:21:3C:9D:F8 -d 192.168.8.2

Usage::

    ./pktgen_sample01_simple.sh [-vx] -i ethX

    -i : ($DEV)       output interface/device (required)
    -s : ($PKT_SIZE)  packet size
    -d : ($DEST_IP)   destination IP
    -m : ($DST_MAC)   destination MAC-addr
    -t : ($THREADS)   threads to start
    -c : ($SKB_CLONE) SKB clones send before alloc new SKB
    -b : ($BURST)     HW level bursting of SKBs
    -v : ($VERBOSE)   verbose
    -x : ($DEBUG)     debug

The global variables being set are also listed. E.g. the required
interface/device parameter "-i" sets variable $DEV. Copy the
pktgen_sampleXX scripts and modify them to fit your own needs.

The old scripts::

    pktgen.conf-1-2                  # 1 CPU 2 dev
    pktgen.conf-1-1-rdos             # 1 CPU 1 dev w. route DoS
    pktgen.conf-1-1-ip6              # 1 CPU 1 dev ipv6
    pktgen.conf-1-1-ip6-rdos         # 1 CPU 1 dev ipv6 w. route DoS
    pktgen.conf-1-1-flows            # 1 CPU 1 dev multiple flows.

Interrupt affinity
==================

Note that when adding devices to a specific CPU it is a good idea to
also assign /proc/irq/XX/smp_affinity so that the TX interrupts are bound
to the same CPU. This reduces cache bouncing when freeing skbs.

Plus using the device flag QUEUE_MAP_CPU, which maps the SKBs TX queue
to the running threads CPU (directly from smp_processor_id()).

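For example (the IRQ number 41 below is hypothetical; look up your NIC's actual IRQs in /proc/interrupts first):

```shell
# List the IRQs used by the NIC's queues
grep eth4 /proc/interrupts

# Suppose TX queue 0 uses IRQ 41: restrict it to CPU 0 (bitmask 0x1),
# matching the pktgen thread kpktgend_0
echo 1 > /proc/irq/41/smp_affinity
```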
Enable IPsec
============

Default IPsec transformation with ESP encapsulation plus transport mode
can be enabled by simply setting::

    pgset "flag IPSEC"
    pgset "flows 1"

To avoid breaking existing testbed scripts for using AH type and tunnel mode,
you can use "pgset spi SPI_VALUE" to specify which transformation mode
to employ.

Current commands and configuration options
==========================================

**Pgcontrol commands**::

    start
    stop
    reset

**Thread commands**::

    add_device
    rem_device_all

**Device commands**::

    count
    clone_skb
    burst
    debug

    frags
    delay

    src_mac_count
    dst_mac_count

    pkt_size
    min_pkt_size
    max_pkt_size

    queue_map_min
    queue_map_max
    skb_priority

    tos           (ipv4)
    traffic_class (ipv6)

    mpls

    udp_src_min
    udp_src_max

    udp_dst_min
    udp_dst_max

    node

    flag
      IPSRC_RND
      IPDST_RND
      UDPSRC_RND
      UDPDST_RND
      MACSRC_RND
      MACDST_RND
      TXSIZE_RND
      IPV6
      MPLS_RND
      VID_RND
      SVID_RND
      FLOW_SEQ
      QUEUE_MAP_RND
      QUEUE_MAP_CPU
      UDPCSUM
      IPSEC
      NODE_ALLOC
      NO_TIMESTAMP

    spi (ipsec)

    dst_min
    dst_max

    src_min
    src_max

    dst_mac
    src_mac

    clear_counters

    src6
    dst6
    dst6_max
    dst6_min

    flows
    flowlen

    rate
    ratep

    xmit_mode <start_xmit|netif_receive>

    vlan_cfi
    vlan_id
    vlan_p

    svlan_cfi
    svlan_id
    svlan_p

References:

- ftp://robur.slu.se/pub/Linux/net-development/pktgen-testing/
- ftp://robur.slu.se/pub/Linux/net-development/pktgen-testing/examples/

Paper from Linux-Kongress in Erlangen 2004.

- ftp://robur.slu.se/pub/Linux/net-development/pktgen-testing/pktgen_paper.pdf

Thanks to:

Grant Grundler for testing on IA-64 and parisc, Harald Welte, Lennert Buytenhek
Stephen Hemminger, Andi Kleen, Dave Miller and many others.

Good luck with the linux net-development.