/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __NET_IP_TUNNELS_H
#define __NET_IP_TUNNELS_H 1

#include <linux/if_tunnel.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/socket.h>
#include <linux/types.h>
#include <linux/u64_stats_sync.h>
#include <linux/bitops.h>

#include <net/dsfield.h>
#include <net/gro_cells.h>
#include <net/inet_ecn.h>
#include <net/netns/generic.h>
#include <net/rtnetlink.h>
#include <net/lwtunnel.h>
#include <net/dst_cache.h>

#if IS_ENABLED(CONFIG_IPV6)
#include <net/ipv6.h>
#include <net/ip6_fib.h>
#include <net/ip6_route.h>
#endif

/* Keep error state on tunnel for 30 sec */
#define IPTUNNEL_ERR_TIMEO	(30*HZ)

/* Used to memset ip_tunnel padding. */
#define IP_TUNNEL_KEY_SIZE	offsetofend(struct ip_tunnel_key, tp_dst)

/* Used to memset ipv4 address padding. */
#define IP_TUNNEL_KEY_IPV4_PAD	offsetofend(struct ip_tunnel_key, u.ipv4.dst)
#define IP_TUNNEL_KEY_IPV4_PAD_LEN				\
	(sizeof_field(struct ip_tunnel_key, u) -		\
	 sizeof_field(struct ip_tunnel_key, u.ipv4))

struct ip_tunnel_key {
	__be64			tun_id;
	union {
		struct {
			__be32	src;
			__be32	dst;
		} ipv4;
		struct {
			struct in6_addr src;
			struct in6_addr dst;
		} ipv6;
	} u;
	__be16			tun_flags;
	u8			tos;		/* TOS for IPv4, TC for IPv6 */
	u8			ttl;		/* TTL for IPv4, HL for IPv6 */
	__be32			label;		/* Flow Label for IPv6 */
	__be16			tp_src;
	__be16			tp_dst;
	__u8			flow_flags;
};

/* Flags for ip_tunnel_info mode. */
#define IP_TUNNEL_INFO_TX	0x01	/* represents tx tunnel parameters */
#define IP_TUNNEL_INFO_IPV6	0x02	/* key contains IPv6 addresses */
#define IP_TUNNEL_INFO_BRIDGE	0x04	/* represents a bridged tunnel id */

/* Maximum tunnel options length. */
#define IP_TUNNEL_OPTS_MAX					\
	GENMASK((sizeof_field(struct ip_tunnel_info,		\
			      options_len) * BITS_PER_BYTE) - 1, 0)

struct ip_tunnel_info {
	struct ip_tunnel_key	key;
#ifdef CONFIG_DST_CACHE
	struct dst_cache	dst_cache;
#endif
	u8			options_len;
	u8			mode;
};

/* 6rd prefix/relay information */
#ifdef CONFIG_IPV6_SIT_6RD
struct ip_tunnel_6rd_parm {
	struct in6_addr		prefix;
	__be32			relay_prefix;
	u16			prefixlen;
	u16			relay_prefixlen;
};
#endif

struct ip_tunnel_encap {
	u16			type;
	u16			flags;
	__be16			sport;
	__be16			dport;
};

struct ip_tunnel_prl_entry {
	struct ip_tunnel_prl_entry __rcu *next;
	__be32				addr;
	u16				flags;
	struct rcu_head			rcu_head;
};

struct metadata_dst;

struct ip_tunnel {
	struct ip_tunnel __rcu	*next;
	struct hlist_node	hash_node;

	struct net_device	*dev;
	netdevice_tracker	dev_tracker;

	struct net		*net;	/* netns for packet i/o */

	unsigned long	err_time;	/* Time when the last ICMP error
					 * arrived */
	int		err_count;	/* Number of arrived ICMP errors */

	/* These fields are used only by GRE */
	u32		i_seqno;	/* The last seen seqno */
	atomic_t	o_seqno;	/* The last output seqno (atomic:
					 * TX may be lockless) */
	int		tun_hlen;	/* Precalculated header length */

	/* These four fields are used only by ERSPAN */
	u32		index;		/* ERSPAN type II index */
	u8		erspan_ver;	/* ERSPAN version */
	u8		dir;		/* ERSPAN direction */
	u16		hwid;		/* ERSPAN hardware ID */

	struct dst_cache dst_cache;

	struct ip_tunnel_parm parms;

	int		mlink;
	int		encap_hlen;	/* Encap header length (FOU,GUE) */
	int		hlen;		/* tun_hlen + encap_hlen */
	struct ip_tunnel_encap encap;

	/* for SIT */
#ifdef CONFIG_IPV6_SIT_6RD
	struct ip_tunnel_6rd_parm ip6rd;
#endif
	struct ip_tunnel_prl_entry __rcu *prl;	/* potential router list */
	unsigned int		prl_count;	/* # of entries in PRL */
	unsigned int		ip_tnl_net_id;
	struct gro_cells	gro_cells;
	__u32			fwmark;
	bool			collect_md;
	bool			ignore_df;
};

struct tnl_ptk_info {
	__be16 flags;
	__be16 proto;
	__be32 key;
	__be32 seq;
	int hdr_len;
};

#define PACKET_RCVD	0
#define PACKET_REJECT	1
#define PACKET_NEXT	2

#define IP_TNL_HASH_BITS   7
#define IP_TNL_HASH_SIZE   (1 << IP_TNL_HASH_BITS)

/* Per-netns tunnel state; the fallback device may be absent in non-init
 * namespaces when the fb_tunnels_only_for_init_net sysctl is set.
 */
struct ip_tunnel_net {
	struct net_device *fb_tunnel_dev;
	struct rtnl_link_ops *rtnl_link_ops;
	struct hlist_head tunnels[IP_TNL_HASH_SIZE];
	struct ip_tunnel __rcu *collect_md_tun;
	int type;
};

static inline void ip_tunnel_key_init(struct ip_tunnel_key *key,
				      __be32 saddr, __be32 daddr,
				      u8 tos, u8 ttl, __be32 label,
				      __be16 tp_src, __be16 tp_dst,
				      __be64 tun_id, __be16 tun_flags)
{
	key->tun_id = tun_id;
	key->u.ipv4.src = saddr;
	key->u.ipv4.dst = daddr;
	memset((unsigned char *)key + IP_TUNNEL_KEY_IPV4_PAD,
	       0, IP_TUNNEL_KEY_IPV4_PAD_LEN);
	key->tos = tos;
	key->ttl = ttl;
	key->label = label;
	key->tun_flags = tun_flags;

	/* For the tunnel types on top of IPsec, the tp_src and tp_dst of
	 * the upper tunnel are used.
	 * E.g.: GRE over IPsec, the tp_src and tp_dst are zero.
	 */
	key->tp_src = tp_src;
	key->tp_dst = tp_dst;

	/* Clear struct padding. */
	if (sizeof(*key) != IP_TUNNEL_KEY_SIZE)
		memset((unsigned char *)key + IP_TUNNEL_KEY_SIZE,
		       0, sizeof(*key) - IP_TUNNEL_KEY_SIZE);
}
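
/*
 * Illustrative sketch only (the variables here are hypothetical): a caller
 * that wants a metadata TX key for an IPv4 encapsulation would fill it as
 *
 *	ip_tunnel_key_init(&info->key, saddr, daddr, tos, ttl,
 *			   0,		   (* label: IPv6 only *)
 *			   0, dport,	   (* tp_src, tp_dst *)
 *			   tun_id, TUNNEL_KEY);
 *
 * Note that the two memset() calls above are what make keys safely
 * comparable with memcmp() despite the ipv4/ipv6 union and trailing padding.
 */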

/* The dst cache cannot be used when the skb carries a mark or when the
 * ip_tunnel_info was filled per packet (e.g. by eBPF) and flagged with
 * TUNNEL_NOCACHE, since the cached route would not match that packet.
 */
static inline bool
ip_tunnel_dst_cache_usable(const struct sk_buff *skb,
			   const struct ip_tunnel_info *info)
{
	if (skb->mark)
		return false;
	if (!info)
		return true;
	if (info->key.tun_flags & TUNNEL_NOCACHE)
		return false;

	return true;
}

static inline unsigned short ip_tunnel_info_af(const struct ip_tunnel_info
					       *tun_info)
{
	return tun_info->mode & IP_TUNNEL_INFO_IPV6 ? AF_INET6 : AF_INET;
}

static inline __be64 key32_to_tunnel_id(__be32 key)
{
#ifdef __BIG_ENDIAN
	return (__force __be64)key;
#else
	return (__force __be64)((__force u64)key << 32);
#endif
}

/* Returns the least-significant 32 bits of a __be64. */
static inline __be32 tunnel_id_to_key32(__be64 tun_id)
{
#ifdef __BIG_ENDIAN
	return (__force __be32)tun_id;
#else
	return (__force __be32)((__force u64)tun_id >> 32);
#endif
}
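
/*
 * Worked example (illustrative): a 32-bit key arrives in network byte order
 * as bytes b0 b1 b2 b3. key32_to_tunnel_id() places those four bytes in the
 * low half of the __be64, so tun_id has the memory layout
 * 00 00 00 00 b0 b1 b2 b3 on both big- and little-endian hosts (hence the
 * endian-specific shift above), and tunnel_id_to_key32() is its exact
 * inverse.
 */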

#ifdef CONFIG_INET

static inline void ip_tunnel_init_flow(struct flowi4 *fl4,
				       int proto,
				       __be32 daddr, __be32 saddr,
				       __be32 key, __u8 tos,
				       struct net *net, int oif,
				       __u32 mark, __u32 tun_inner_hash,
				       __u8 flow_flags)
{
	memset(fl4, 0, sizeof(*fl4));

	if (oif) {
		fl4->flowi4_l3mdev = l3mdev_master_upper_ifindex_by_index_rcu(net, oif);
		/* Legacy VRF/l3mdev use case */
		fl4->flowi4_oif = fl4->flowi4_l3mdev ? 0 : oif;
	}

	fl4->daddr = daddr;
	fl4->saddr = saddr;
	fl4->flowi4_tos = tos;
	fl4->flowi4_proto = proto;
	fl4->fl4_gre_key = key;
	fl4->flowi4_mark = mark;
	fl4->flowi4_multipath_hash = tun_inner_hash;
	fl4->flowi4_flags = flow_flags;
}
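
/*
 * Usage sketch (illustrative; the surrounding variables are hypothetical):
 * a GRE-style xmit path fills the flow key and then routes the outer packet:
 *
 *	struct flowi4 fl4;
 *	struct rtable *rt;
 *
 *	ip_tunnel_init_flow(&fl4, IPPROTO_GRE, daddr, saddr, tunnel_key,
 *			    tos, dev_net(dev), link_oif, skb->mark, 0, 0);
 *	rt = ip_route_output_key(dev_net(dev), &fl4);
 */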

int ip_tunnel_init(struct net_device *dev);
void ip_tunnel_uninit(struct net_device *dev);
void ip_tunnel_dellink(struct net_device *dev, struct list_head *head);
struct net *ip_tunnel_get_link_net(const struct net_device *dev);
int ip_tunnel_get_iflink(const struct net_device *dev);
int ip_tunnel_init_net(struct net *net, unsigned int ip_tnl_net_id,
		       struct rtnl_link_ops *ops, char *devname);

void ip_tunnel_delete_nets(struct list_head *list_net, unsigned int id,
			   struct rtnl_link_ops *ops);

void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
		    const struct iphdr *tnl_params, const u8 protocol);
void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
		       const u8 proto, int tunnel_hlen);
int ip_tunnel_ctl(struct net_device *dev, struct ip_tunnel_parm *p, int cmd);
int ip_tunnel_siocdevprivate(struct net_device *dev, struct ifreq *ifr,
			     void __user *data, int cmd);
int __ip_tunnel_change_mtu(struct net_device *dev, int new_mtu, bool strict);
int ip_tunnel_change_mtu(struct net_device *dev, int new_mtu);

struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn,
				   int link, __be16 flags,
				   __be32 remote, __be32 local,
				   __be32 key);

int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb,
		  const struct tnl_ptk_info *tpi, struct metadata_dst *tun_dst,
		  bool log_ecn_error);
int ip_tunnel_changelink(struct net_device *dev, struct nlattr *tb[],
			 struct ip_tunnel_parm *p, __u32 fwmark);
int ip_tunnel_newlink(struct net_device *dev, struct nlattr *tb[],
		      struct ip_tunnel_parm *p, __u32 fwmark);
void ip_tunnel_setup(struct net_device *dev, unsigned int net_id);

bool ip_tunnel_netlink_encap_parms(struct nlattr *data[],
				   struct ip_tunnel_encap *encap);

void ip_tunnel_netlink_parms(struct nlattr *data[],
			     struct ip_tunnel_parm *parms);

extern const struct header_ops ip_tunnel_header_ops;
__be16 ip_tunnel_parse_protocol(const struct sk_buff *skb);

struct ip_tunnel_encap_ops {
	size_t (*encap_hlen)(struct ip_tunnel_encap *e);
	int (*build_header)(struct sk_buff *skb, struct ip_tunnel_encap *e,
			    u8 *protocol, struct flowi4 *fl4);
	int (*err_handler)(struct sk_buff *skb, u32 info);
};

#define MAX_IPTUN_ENCAP_OPS 8

extern const struct ip_tunnel_encap_ops __rcu *
		iptun_encaps[MAX_IPTUN_ENCAP_OPS];

int ip_tunnel_encap_add_ops(const struct ip_tunnel_encap_ops *op,
			    unsigned int num);
int ip_tunnel_encap_del_ops(const struct ip_tunnel_encap_ops *op,
			    unsigned int num);

int ip_tunnel_encap_setup(struct ip_tunnel *t,
			  struct ip_tunnel_encap *ipencap);
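
/*
 * Registration sketch (illustrative; "my_encap_ops" and its callbacks are
 * hypothetical): an encapsulation such as FOU publishes its ops under a
 * TUNNEL_ENCAP_* type, and ip_encap_hlen()/ip_tunnel_encap() below dispatch
 * to that slot under RCU:
 *
 *	static const struct ip_tunnel_encap_ops my_encap_ops = {
 *		.encap_hlen	= my_encap_hlen,
 *		.build_header	= my_build_header,
 *		.err_handler	= my_err_handler,
 *	};
 *
 *	err = ip_tunnel_encap_add_ops(&my_encap_ops, TUNNEL_ENCAP_FOU);
 */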

static inline bool pskb_inet_may_pull(struct sk_buff *skb)
{
	int nhlen;

	switch (skb->protocol) {
#if IS_ENABLED(CONFIG_IPV6)
	case htons(ETH_P_IPV6):
		nhlen = sizeof(struct ipv6hdr);
		break;
#endif
	case htons(ETH_P_IP):
		nhlen = sizeof(struct iphdr);
		break;
	default:
		nhlen = 0;
	}

	return pskb_network_may_pull(skb, nhlen);
}

static inline int ip_encap_hlen(struct ip_tunnel_encap *e)
{
	const struct ip_tunnel_encap_ops *ops;
	int hlen = -EINVAL;

	if (e->type == TUNNEL_ENCAP_NONE)
		return 0;

	if (e->type >= MAX_IPTUN_ENCAP_OPS)
		return -EINVAL;

	rcu_read_lock();
	ops = rcu_dereference(iptun_encaps[e->type]);
	if (likely(ops && ops->encap_hlen))
		hlen = ops->encap_hlen(e);
	rcu_read_unlock();

	return hlen;
}

static inline int ip_tunnel_encap(struct sk_buff *skb, struct ip_tunnel *t,
				  u8 *protocol, struct flowi4 *fl4)
{
	const struct ip_tunnel_encap_ops *ops;
	int ret = -EINVAL;

	if (t->encap.type == TUNNEL_ENCAP_NONE)
		return 0;

	if (t->encap.type >= MAX_IPTUN_ENCAP_OPS)
		return -EINVAL;

	rcu_read_lock();
	ops = rcu_dereference(iptun_encaps[t->encap.type]);
	if (likely(ops && ops->build_header))
		ret = ops->build_header(skb, &t->encap, protocol, fl4);
	rcu_read_unlock();

	return ret;
}
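
/*
 * Caller sketch (illustrative, not the authoritative xmit code): a tunnel
 * xmit path builds any FOU/GUE header before pushing the outer IP header,
 * treating a negative return as a TX error:
 *
 *	if (ip_tunnel_encap(skb, tunnel, &protocol, &fl4) < 0)
 *		goto tx_error;
 */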

/* Extract dsfield from inner protocol */
static inline u8 ip_tunnel_get_dsfield(const struct iphdr *iph,
				       const struct sk_buff *skb)
{
	__be16 payload_protocol = skb_protocol(skb, true);

	if (payload_protocol == htons(ETH_P_IP))
		return iph->tos;
	else if (payload_protocol == htons(ETH_P_IPV6))
		return ipv6_get_dsfield((const struct ipv6hdr *)iph);
	else
		return 0;
}

static inline u8 ip_tunnel_get_ttl(const struct iphdr *iph,
				   const struct sk_buff *skb)
{
	__be16 payload_protocol = skb_protocol(skb, true);

	if (payload_protocol == htons(ETH_P_IP))
		return iph->ttl;
	else if (payload_protocol == htons(ETH_P_IPV6))
		return ((const struct ipv6hdr *)iph)->hop_limit;
	else
		return 0;
}

/* Propagate ECN bits out */
static inline u8 ip_tunnel_ecn_encap(u8 tos, const struct iphdr *iph,
				     const struct sk_buff *skb)
{
	u8 inner = ip_tunnel_get_dsfield(iph, skb);

	return INET_ECN_encapsulate(tos, inner);
}
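
/*
 * Example (illustrative): with an outer tos of 0x00 and an inner packet
 * marked ECT(0), INET_ECN_encapsulate() copies the inner ECN codepoint into
 * the outer header, so that congestion marks applied to the outer header on
 * path can be propagated back to the inner packet on decapsulation.
 */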

int __iptunnel_pull_header(struct sk_buff *skb, int hdr_len,
			   __be16 inner_proto, bool raw_proto, bool xnet);

static inline int iptunnel_pull_header(struct sk_buff *skb, int hdr_len,
				       __be16 inner_proto, bool xnet)
{
	return __iptunnel_pull_header(skb, hdr_len, inner_proto, false, xnet);
}

void iptunnel_xmit(struct sock *sk, struct rtable *rt, struct sk_buff *skb,
		   __be32 src, __be32 dst, u8 proto,
		   u8 tos, u8 ttl, __be16 df, bool xnet);
struct metadata_dst *iptunnel_metadata_reply(struct metadata_dst *md,
					     gfp_t flags);
int skb_tunnel_check_pmtu(struct sk_buff *skb, struct dst_entry *encap_dst,
			  int headroom, bool reply);

int iptunnel_handle_offloads(struct sk_buff *skb, int gso_type_mask);

static inline int iptunnel_pull_offloads(struct sk_buff *skb)
{
	if (skb_is_gso(skb)) {
		int err;

		err = skb_unclone(skb, GFP_ATOMIC);
		if (unlikely(err))
			return err;
		skb_shinfo(skb)->gso_type &= ~(NETIF_F_GSO_ENCAP_ALL >>
					       NETIF_F_GSO_SHIFT);
	}

	skb->encapsulation = 0;
	return 0;
}

/* Account an xmit result: a positive pkt_len counts as transmitted bytes,
 * zero as a drop and a negative value as a TX error.
 */
static inline void iptunnel_xmit_stats(struct net_device *dev, int pkt_len)
{
	if (pkt_len > 0) {
		/* get_cpu_ptr() disables preemption around the per-cpu
		 * stats update, so this is safe from preemptible context.
		 */
		struct pcpu_sw_netstats *tstats = get_cpu_ptr(dev->tstats);

		u64_stats_update_begin(&tstats->syncp);
		u64_stats_add(&tstats->tx_bytes, pkt_len);
		u64_stats_inc(&tstats->tx_packets);
		u64_stats_update_end(&tstats->syncp);
		put_cpu_ptr(tstats);
	} else {
		struct net_device_stats *err_stats = &dev->stats;

		if (pkt_len < 0) {
			err_stats->tx_errors++;
			err_stats->tx_aborted_errors++;
		} else {
			err_stats->tx_dropped++;
		}
	}
}

static inline void *ip_tunnel_info_opts(struct ip_tunnel_info *info)
{
	return info + 1;
}

static inline void ip_tunnel_info_opts_get(void *to,
					   const struct ip_tunnel_info *info)
{
	memcpy(to, info + 1, info->options_len);
}

static inline void ip_tunnel_info_opts_set(struct ip_tunnel_info *info,
					   const void *from, int len,
					   __be16 flags)
{
	info->options_len = len;
	if (len > 0) {
		memcpy(ip_tunnel_info_opts(info), from, len);
		info->key.tun_flags |= flags;
	}
}
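
/*
 * Example (illustrative; "opts" and "opts_len" are hypothetical): options
 * live directly behind the ip_tunnel_info, so a geneve-style user appends
 * its TLVs and flags their format for consumers:
 *
 *	ip_tunnel_info_opts_set(info, opts, opts_len, TUNNEL_GENEVE_OPT);
 */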

static inline struct ip_tunnel_info *lwt_tun_info(struct lwtunnel_state *lwtstate)
{
	return (struct ip_tunnel_info *)lwtstate->data;
}

DECLARE_STATIC_KEY_FALSE(ip_tunnel_metadata_cnt);

/* Returns > 0 if metadata should be collected */
static inline int ip_tunnel_collect_metadata(void)
{
	return static_branch_unlikely(&ip_tunnel_metadata_cnt);
}

void __init ip_tunnel_core_init(void);

void ip_tunnel_need_metadata(void);
void ip_tunnel_unneed_metadata(void);

#else /* CONFIG_INET */

static inline struct ip_tunnel_info *lwt_tun_info(struct lwtunnel_state *lwtstate)
{
	return NULL;
}

static inline void ip_tunnel_need_metadata(void)
{
}

static inline void ip_tunnel_unneed_metadata(void)
{
}

static inline void ip_tunnel_info_opts_get(void *to,
					   const struct ip_tunnel_info *info)
{
}

static inline void ip_tunnel_info_opts_set(struct ip_tunnel_info *info,
					   const void *from, int len,
					   __be16 flags)
{
	info->options_len = 0;
}

#endif /* CONFIG_INET */

#endif /* __NET_IP_TUNNELS_H */