commit 9e641bdcfa
tun_do_read always adds the current thread to the wait queue, even if a
packet is ready to read. This is inefficient because both sleeper and
waker want to acquire the wait queue spin lock when the packet rate is
high.

We restructure the read function and use common kernel networking
routines to handle receive, sleep and wakeup. With the change, available
packets are checked first before the reading thread is added to the wait
queue.

Ran performance tests with the following configuration:

 - my packet generator -> tap1 -> br0 -> tap0 -> my packet consumer
 - sender pinned to one core and receiver pinned to another core
 - sender sends small UDP packets (64 bytes total) as fast as it can
 - Sandy Bridge cores
 - throughput numbers are receiver-side goodput

The results are

 baseline: 731k pkts/sec, cpu utilization at 1.50 cpus
  changed: 783k pkts/sec, cpu utilization at 1.53 cpus

The performance difference is largely determined by packet rate and
inter-cpu communication cost. For example, if the sender and receiver
are pinned to different cpu sockets, the results are

 baseline: 558k pkts/sec, cpu utilization at 1.71 cpus
  changed: 690k pkts/sec, cpu utilization at 1.67 cpus

Co-authored-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Xi Wang <xii@google.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
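The heart of the change is the order of operations in the reader: inspect the
receive queue first, and only touch the wait queue (and its lock) when no
packet is available. The sketch below is not the kernel patch itself; it is a
minimal userspace C analogy, and every name in it (pkt_queue, pkt_enqueue,
pkt_try_dequeue, pkt_dequeue, the 64-byte payload) is made up for
illustration. It shows a single reader with a lock-free fast path that never
contends on the wait-queue lock when a packet is already queued, and a slow
path that re-checks the queue under the lock before sleeping so no wakeup is
lost.

/*
 * Minimal userspace sketch of the idea in this commit -- NOT the kernel
 * patch itself.  All names are illustrative.  The point it demonstrates:
 * the reader checks the packet queue before touching the wait-queue lock,
 * so the fast path never contends with the waker; only the truly idle
 * path sleeps.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

struct pkt {
	struct pkt *next;
	size_t len;
	unsigned char data[64];
};

struct pkt_queue {
	_Atomic(struct pkt *) head;	/* lock-free stack of ready packets */
	pthread_mutex_t lock;		/* protects the sleeper only */
	pthread_cond_t wait;		/* the "wait queue" for the reader */
};

static void pkt_queue_init(struct pkt_queue *q)
{
	atomic_init(&q->head, NULL);
	pthread_mutex_init(&q->lock, NULL);
	pthread_cond_init(&q->wait, NULL);
}

/* Producer: publish a packet lock-free, then wake a sleeping reader. */
static void pkt_enqueue(struct pkt_queue *q, struct pkt *p)
{
	p->next = atomic_load(&q->head);
	while (!atomic_compare_exchange_weak(&q->head, &p->next, p))
		;
	pthread_mutex_lock(&q->lock);
	pthread_cond_signal(&q->wait);
	pthread_mutex_unlock(&q->lock);
}

/* Pop one packet without taking the wait-queue lock (single reader). */
static struct pkt *pkt_try_dequeue(struct pkt_queue *q)
{
	struct pkt *p = atomic_load(&q->head);

	while (p && !atomic_compare_exchange_weak(&q->head, &p, p->next))
		;
	return p;
}

/*
 * Reader: fast path first.  Only when the queue is empty does the reader
 * take the wait-queue lock, re-check, and sleep -- analogous to letting
 * the kernel's datagram receive helpers look at the socket receive queue
 * before the thread adds itself to the socket wait queue.
 */
static struct pkt *pkt_dequeue(struct pkt_queue *q)
{
	struct pkt *p = pkt_try_dequeue(q);

	if (p)
		return p;	/* packet was ready: no lock taken, no sleep */

	pthread_mutex_lock(&q->lock);
	while (!(p = pkt_try_dequeue(q)))	/* re-check under the lock */
		pthread_cond_wait(&q->wait, &q->lock);
	pthread_mutex_unlock(&q->lock);
	return p;
}

With one reader per queue, this single-consumer sketch is enough to show the
shape of the change: the waker and the sleeper only meet on the lock when the
queue is genuinely empty, while the high-packet-rate case measured above is
exactly the one where a packet is usually already queued and the wait-queue
lock is no longer touched at all.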
drivers/net/
appletalk
arcnet
bonding
caif
can
cris
dsa
ethernet
fddi
hamradio
hippi
hyperv
ieee802154
irda
phy
plip
ppp
slip
team
usb
vmxnet3
wan
wimax
wireless
xen-netback
dummy.c
eql.c
ifb.c
Kconfig
LICENSE.SRC
loopback.c
macvlan.c
macvtap.c
Makefile
mdio.c
mii.c
netconsole.c
nlmon.c
ntb_netdev.c
rionet.c
sb1000.c
Space.c
sungem_phy.c
tun.c
veth.c
virtio_net.c
vxlan.c
xen-netfront.c