Commit Graph

Martin KaFai Lau
8a3feed90e bpf, selftest: Fix flaky tcp_hdr_options test when adding addr to lo
The tcp_hdr_options test adds a "::eB9F" addr to the lo dev.
However, this non-loopback address will have a race on IPv6 DAD
(duplicate address detection), which may lead to an EADDRNOTAVAIL
error from time to time.

Even though nodad is used in the iproute2 command, there is still a
race in when the route will be added.  This can then lead to
ENETUNREACH from time to time.

To avoid the above, this patch uses the default loopback address "::1"
to do the test.

Fixes: ad2f8eb009 ("bpf: selftests: Tcp header options")
Reported-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20201012234940.1707941-1-kafai@fb.com
2020-10-15 20:53:15 +02:00
Lorenz Bauer
f58423aeab bpf, sockmap: Add locking annotations to iterator
The sparse checker currently outputs the following warnings:

    include/linux/rcupdate.h:632:9: sparse: sparse: context imbalance in 'sock_hash_seq_start' - wrong count at exit
    include/linux/rcupdate.h:632:9: sparse: sparse: context imbalance in 'sock_map_seq_start' - wrong count at exit

Add the necessary __acquires and __releases annotations to make the
iterator locking schema palatable to sparse. Also add __must_hold
for good measure.
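
For reference, the annotation pattern on an RCU-protected seq_file
iterator looks roughly like this (a sketch with made-up helper names,
not the sockmap code itself):

    #include <linux/rcupdate.h>
    #include <linux/seq_file.h>

    /* illustrative iterator only; the lookup helpers are hypothetical */
    static void *example_seq_start(struct seq_file *seq, loff_t *pos)
        __acquires(rcu)
    {
        rcu_read_lock();
        return example_lookup(seq, *pos);        /* hypothetical helper */
    }

    static void *example_seq_next(struct seq_file *seq, void *v, loff_t *pos)
        __must_hold(rcu)
    {
        return example_lookup_next(seq, v, pos); /* hypothetical helper */
    }

    static void example_seq_stop(struct seq_file *seq, void *v)
        __releases(rcu)
    {
        rcu_read_unlock();
    }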

The kernel codebase uses both __acquires(rcu) and __acquires(RCU).
I couldn't find any guidance on which one is preferred, so I used
the one that is easier to type.

Fixes: 0365351524 ("net: Allow iterating sockmap and sockhash")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20201012091850.67452-1-lmb@cloudflare.com
2020-10-15 20:49:56 +02:00
Alexei Starovoitov
e688c3db7c bpf: Fix register equivalence tracking.
The 64-bit JEQ/JNE handling in reg_set_min_max() was clearing reg->id in either
the true or the false branch. When the 'if (reg->id)' check was done on the other
branch, the counterpart register would have reg->id == 0 by the time it was passed
into find_equal_scalars(). In that case the helper would incorrectly identify other
registers with id == 0 as equivalent and propagate the state incorrectly.
Fix it by preserving the ID across reg_set_min_max().

In other words any kind of comparison operator on the scalar register
should preserve its ID to recognize:

r1 = r2
if (r1 == 20) {
  #1 here both r1 and r2 == 20
} else if (r2 < 20) {
  #2 here both r1 and r2 < 20
}

This patch addresses case #1. Case #2 was already working correctly.
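
As a toy illustration of why the id has to survive the comparison
(plain C, not verifier code; names chosen to mirror the description
above):

    #include <stdint.h>
    #include <stdio.h>

    struct scalar {
        uint32_t id;
        int64_t min, max;
    };

    /* propagate known bounds to every register that shares a non-zero id */
    static void find_equal_scalars(struct scalar *regs, int n,
                                   const struct scalar *known)
    {
        for (int i = 0; i < n; i++)
            if (known->id && regs[i].id == known->id) {
                regs[i].min = known->min;
                regs[i].max = known->max;
            }
    }

    int main(void)
    {
        struct scalar regs[2] = { { .id = 1 }, { .id = 1 } };     /* r1 = r2 */
        struct scalar known = { .id = 1, .min = 20, .max = 20 };  /* if (r1 == 20) */

        /* the bug: clearing the id before this point would leave id == 0,
         * so the bounds learned on r1 would never reach r2 */
        find_equal_scalars(regs, 2, &known);
        printf("r1 = [%lld, %lld], r2 = [%lld, %lld]\n",
               (long long)regs[0].min, (long long)regs[0].max,
               (long long)regs[1].min, (long long)regs[1].max);
        return 0;
    }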

Fixes: 75748837b7 ("bpf: Propagate scalar ranges through register assignments.")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Tested-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20201014175608.1416-1-alexei.starovoitov@gmail.com
2020-10-15 16:05:31 +02:00
Jakub Kicinski
ccdf7fae3a Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Alexei Starovoitov says:

====================
pull-request: bpf-next 2020-10-12

The main changes are:

1) The BPF verifier improvements to track register allocation pattern, from Alexei and Yonghong.

2) libbpf relocation support for different size load/store, from Andrii.

3) bpf_redirect_peer() helper and support for inner map array with different max_entries, from Daniel.

4) BPF support for per-cpu variables, from Hao.

5) sockmap improvements, from John.
====================

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-12 16:16:50 -07:00
Jakub Kicinski
a308283fdb Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next
Pablo Neira Ayuso says:

====================
Netfilter/IPVS updates for net-next

The following patchset contains Netfilter/IPVS updates for net-next:

1) Inspect the reply packets coming from DR/TUN and refresh connection
   state and timeout, from longguang yue and Julian Anastasov.

2) Series to add support for the inet ingress chain type in nf_tables.
====================

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-12 15:00:36 -07:00
Jakub Kicinski
547848af58 Merge branch 'bnxt_en-Updates-for-net-next'
Michael Chan says:

====================
bnxt_en: Updates for net-next.

This series contains these main changes:

1. Change of default message level to enable more logging.
2. Some cleanups related to processing async events from firmware.
3. Allow online ethtool selftest on multi-function PFs.
4. Return stored firmware version information to devlink.

v2:
Patch 3: Change bnxt_reset_task() to silent mode.
Patch 8 & 9: Ensure we copy NULL-terminated fw strings to devlink.
Patch 8 & 9: Return directly after the last bnxt_dl_info_put() call.
Patch 9: If FW call to get stored dev info fails, return success to
         devlink without the stored versions.
====================

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-12 14:42:55 -07:00
Vasundhara Volam
1388875b39 bnxt_en: Add stored FW version info to devlink info_get cb.
This patch adds the FW versions stored in flash to the devlink info_get
callback.  Return the correct fw.psid running version using the
newly added bp->nvm_cfg_ver.

v2:
Ensure the stored pkg_name string is NULL-terminated when copied to
devlink.

Return directly from the last call to bnxt_dl_info_put().

If the FW call to get stored version fails for any reason, return
success immediately to devlink without the stored versions.
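
For reference, reporting a stored version through the devlink info_get
callback follows roughly this pattern (a sketch against the generic
devlink API, not the bnxt code; the firmware query helper is
hypothetical):

    static int example_dl_info_get(struct devlink *dl,
                                   struct devlink_info_req *req,
                                   struct netlink_ext_ack *extack)
    {
        char buf[32];

        /* query the stored PSID version from firmware (hypothetical helper) */
        example_get_stored_psid(buf, sizeof(buf));

        /* DEVLINK_INFO_VERSION_GENERIC_FW_PSID is the generic "fw.psid" name */
        return devlink_info_version_stored_put(req,
                        DEVLINK_INFO_VERSION_GENERIC_FW_PSID, buf);
    }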

Reviewed-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Link: https://lore.kernel.org/r/1602493854-29283-10-git-send-email-michael.chan@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-12 14:27:03 -07:00
Vasundhara Volam
7154917a12 bnxt_en: Refactor bnxt_dl_info_get().
Add a new function bnxt_dl_info_put() to simplify the code, as there
are more stored firmware version fields to be added in the next patch.

Also, rename the fw_ver variable to ncsi_ver for a better name when
copying to the devlink info_get cb.

v2:
Ensure the active_pkg_name string is NULL-terminated when copied to
devlink.

Return directly from the last call to bnxt_dl_info_put().

Reviewed-by: Pavan Chebbi <pavan.chebbi@broadcom.com>
Reviewed-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Link: https://lore.kernel.org/r/1602493854-29283-9-git-send-email-michael.chan@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-12 14:27:03 -07:00
Vasundhara Volam
4933f6753b bnxt_en: Add bnxt_hwrm_nvm_get_dev_info() to query NVM info.
Add a new bnxt_hwrm_nvm_get_dev_info() function to query firmware version
information via the NVM_GET_DEV_INFO firmware command.  Use it to
get the running version of the NVM configuration information.

This new function will also be used in subsequent patches to get the
stored firmware versions.

Reviewed-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Link: https://lore.kernel.org/r/1602493854-29283-8-git-send-email-michael.chan@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-12 14:27:03 -07:00
Michael Chan
8eddb3e7ce bnxt_en: Log unknown link speed appropriately.
If the VF virtual link is set to always enabled, the speed may be
unknown when the physical link is down.  The driver currently logs
the link speed as 4294967295 Mbps, which is SPEED_UNKNOWN.  Modify
the link-up log message to say "speed unknown", which makes more sense.
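
A minimal sketch of the resulting message (not the driver's exact code;
SPEED_UNKNOWN is the ethtool constant mentioned above):

    if (speed == SPEED_UNKNOWN)
        netdev_info(dev, "NIC Link is Up, speed unknown\n");
    else
        netdev_info(dev, "NIC Link is Up, %u Mbps\n", speed);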

Reviewed-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Reviewed-by: Edwin Peer <edwin.peer@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Link: https://lore.kernel.org/r/1602493854-29283-7-git-send-email-michael.chan@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-12 14:27:03 -07:00
Michael Chan
c966c67c09 bnxt_en: Log event_data1 and event_data2 when handling RESET_NOTIFY event.
Log these values that contain useful firmware state information.

Reviewed-by: Edwin Peer <edwin.peer@broadcom.com>
Reviewed-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Link: https://lore.kernel.org/r/1602493854-29283-6-git-send-email-michael.chan@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-12 14:27:03 -07:00
Michael Chan
03ab8ca1e9 bnxt_en: Simplify bnxt_async_event_process().
event_data1 and event_data2 are used when processing most events.
Store these in local variables at the beginning of the function to
simplify many of the case statements.

Reviewed-by: Edwin Peer <edwin.peer@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Link: https://lore.kernel.org/r/1602493854-29283-5-git-send-email-michael.chan@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-12 14:27:03 -07:00
Michael Chan
8fb35cd302 bnxt_en: Set driver default message level.
Currently, bp->msg_enable has a default value of 0.  It is more useful
to have the commonly used NETIF_MSG_DRV and NETIF_MSG_HW enabled by
default.

v2: Change the fallback bnxt_reset_task() inside bnxt_rx_ring_reset()
to silent mode.  With older fw, we would take the fallback path and
it would be very noisy.
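
A minimal sketch of such a default (the macro name and the
netif_msg_init() wiring are illustrative, not necessarily the driver's
exact code):

    /* enable driver and hardware messages unless overridden by the usual
     * "debug" module parameter */
    #define EXAMPLE_DEF_MSG_ENABLE  (NETIF_MSG_DRV | NETIF_MSG_HW)

    bp->msg_enable = netif_msg_init(debug, EXAMPLE_DEF_MSG_ENABLE);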

Reviewed-by: Edwin Peer <edwin.peer@broadcom.com>
Reviewed-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Link: https://lore.kernel.org/r/1602493854-29283-4-git-send-email-michael.chan@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-12 14:27:03 -07:00
Vasundhara Volam
6896cb35ee bnxt_en: Enable online self tests for multi-host/NPAR mode.
Online self-tests are not disruptive and can be run in NPAR mode
and on multi-host NICs as well.

Reviewed-by: Edwin Peer <edwin.peer@broadcom.com>
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Link: https://lore.kernel.org/r/1602493854-29283-3-git-send-email-michael.chan@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-12 14:27:02 -07:00
Vasundhara Volam
cf223bfaf7 bnxt_en: Return -EROFS to user space, if NVM writes are not permitted.
If NVRAM resources are locked, NVM writes are not permitted. In such
scenarios, the firmware returns the HWRM_ERR_CODE_RESOURCE_LOCKED error
for firmware commands; translate this to -EROFS when returning to user space.

Reviewed-by: Edwin Peer <edwin.peer@broadcom.com>
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Link: https://lore.kernel.org/r/1602493854-29283-2-git-send-email-michael.chan@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-12 14:27:02 -07:00
Jakub Kicinski
2ad119d998 linux-can-next-for-5.10-20201012

Merge tag 'linux-can-next-for-5.10-20201012' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next

Marc Kleine-Budde says:

====================
linux-can-next-for-5.10-20201012

Both patches are by Oliver Hartkopp. The first one addresses Jakub's review
comments on the ISOTP protocol; the other one removes version strings from
various CAN protocols.
====================

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-12 09:53:08 -07:00
Ondrej Zary
15f5e48f93 cx82310_eth: use netdev_err instead of dev_err
Use netdev_err for better device identification in syslog.

Signed-off-by: Ondrej Zary <linux@zary.sk>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-12 09:46:40 -07:00
Ondrej Zary
ca139d76b0 cx82310_eth: re-enable ethernet mode after router reboot
When the router is rebooted without a power cycle, the USB device
remains connected but its configuration is reset. This results in
a non-working ethernet connection with messages like this in syslog:
	usb 2-2: RX packet too long: 65535 B

Re-enable ethernet mode when receiving a packet with an invalid size
of 0xffff.

Signed-off-by: Ondrej Zary <linux@zary.sk>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-12 09:46:40 -07:00
Oliver Hartkopp
f726f3d371 can: remove obsolete version strings
As pointed out by Jakub Kicinski here:
http://lore.kernel.org/r/20201009175751.5c54097f@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com
this patch removes the obsolete version information of the different
CAN protocols and the AF_CAN core module.

Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Link: https://lore.kernel.org/r/20201012074354.25839-2-socketcan@hartkopp.net
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-10-12 10:06:39 +02:00
Oliver Hartkopp
ac911bfeb3 can: isotp: implement cleanups / improvements from review
As pointed out by Jakub Kicinski here:
http://lore.kernel.org/r/20201009175751.5c54097f@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com
this patch addresses the remarked issues:

- remove empty line in comment
- remove default=y for CAN_ISOTP in Kconfig
- make use of pr_notice_once()
- use GFP_ATOMIC instead of gfp_any() in soft hrtimer context

The version strings in the CAN subsystem are removed by a separate patch.

Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Link: https://lore.kernel.org/r/20201012074354.25839-1-socketcan@hartkopp.net
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-10-12 10:06:08 +02:00
Alexei Starovoitov
376dcfe3a4 Merge branch 'bpf, sockmap: allow verdict only sk_skb progs'
John Fastabend says:

====================

This allows sockmap sk_skb verdict programs to run without a parser. For
some use cases, such as a verdict program that supports streaming data or an
L3/L4 proxy that does not look at the packet data, loading the nop parser
'return skb->len' is unnecessary extra complexity. With this series we
simply call the verdict program directly from data_ready instead of
bouncing through the strparser logic.

Patches 1,2 do the lifting on the sockmap side then patches 3,4 add the
selftests.

This applies on top of the series here,

  sockmap/sk_skb program memory acct fixes
  https://patchwork.ozlabs.org/project/netdev/list/?series=206975

it will apply cleanly without the above series, but memory accounting will
then be incorrect, causing a failure in ./test_sockmap. I could have left
it so this series passed without the above one, but it seemed odd to have
it out there and then require yet another patch to fix it up here.

Thanks.
---
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-10-11 18:09:45 -07:00
John Fastabend
a24fb420a5 bpf, selftests: Add three new sockmap tests for verdict only programs
Here we add three new tests for sockmap to test having a verdict program
without setting the parser program.

The first test covers the simplest case,

   sender         proxy_recv proxy_send      recv
     |                |                       |
     |              verdict -----+            |
     |                |          |            |
     +----------------+          +------------+

We load the verdict program on the proxy_recv socket without a
parser program. It then does a redirect into the send path of the
proxy_send socket using sendpage_locked().

Next we test the drop case to ensure that if we kfree_skb as a result of
the verdict program, everything behaves as expected.

Next we test the same configuration as above, but with ktls and a
redirect into the socket ingress queue, shown here:

   tls                                       tls
   sender         proxy_recv proxy_send      recv
     |                |                       |
     |              verdict ------------------+
     |                |      redirect_ingress
     +----------------+

We also set up a ping/pong test.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/160239302638.8495.17125996694402793471.stgit@john-Precision-5820-Tower
2020-10-11 18:09:44 -07:00
John Fastabend
cdf43c4bfa bpf, selftests: Add option to test_sockmap to omit adding parser program
Add an option to allow running without a parser program in place. To test
with the ping/pong program use:

 # test_sockmap -t ping --txmsg_omit_skb_parser

This will send packets between two sockets, bouncing through a proxy
socket that does not use a parser program.

   (ping)                                    (pong)
   sender         proxy_recv proxy_send      recv
     |                |                       |
     |              verdict -----+            |
     |                |          |            |
     +----------------+          +------------+

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/160239300387.8495.11908295143121563076.stgit@john-Precision-5820-Tower
2020-10-11 18:09:44 -07:00
John Fastabend
ef5659280e bpf, sockmap: Allow skipping sk_skb parser program
Currently, we often run with a nop parser, namely one that just does
'return skb->len'. This happens when either our verdict program
can handle streaming data or it is only looking at socket data such
as IP addresses and other metadata associated with the flow. The second
case is common for an L3/L4 proxy, for instance.

So let's allow loading programs without the parser; then we can skip
the stream parser logic and avoid having to add a BPF program that
is effectively a nop.
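
For illustration, the kind of nop parser that becomes unnecessary looks
roughly like this (a sketch using libbpf's sk_skb section naming, not a
specific selftest):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* "nop" stream parser: accept the whole skb so the verdict program runs */
    SEC("sk_skb/stream_parser")
    int parser_nop(struct __sk_buff *skb)
    {
        return skb->len;
    }

    char _license[] SEC("license") = "GPL";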

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/160239297866.8495.13345662302749219672.stgit@john-Precision-5820-Tower
2020-10-11 18:09:44 -07:00
John Fastabend
743df8b774 bpf, sockmap: Check skb_verdict and skb_parser programs explicitly
We are about to allow skb_verdict programs to run without skb_parser
programs. As a first step, change the code to check each program type
explicitly. This should be a mechanical change without any impact on the
actual result.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/160239294756.8495.5796595770890272219.stgit@john-Precision-5820-Tower
2020-10-11 18:09:44 -07:00
Alexei Starovoitov
20a6d91518 Merge branch 'sockmap/sk_skb program memory acct fixes'
John Fastabend says:

====================

Users of sockmap and skmsg trying to build proxies and other tools
have pointed out to me that the error handling can be problematic. If
the proxy is under-provisioned and/or the BPF admin does not have
the ability to update/modify memory provisions on the sockets,
it's possible data may be dropped. For some things we have retries
so everything works out OK, but for most things this is likely
not great. And things go bad.

The original design dropped memory accounting on the receive
socket as early as possible. We did this early in sk_skb
handling and then charged it to the redirect socket immediately
after running the BPF program.

But, this design caused a fundamental problem. Namely, what should we do
if we redirect to a socket that has already reached its socket memory
limits? For proxy use cases the network admin can tune memory limits.
But, in general we punted on this problem and told folks to simply make
your memory limits high enough to handle your workload. This is not a
really good answer. When deploying into environments where we expect this
to be transparent, it's no longer the case, because we need to tune params.
In fact it's really only viable in cases where we have fine-grained
control over the application, for example a proxy redirecting from an
ingress socket to an egress socket. The result is I get bug
reports, because it's surprising for one, but more importantly it also
breaks some use cases. So let's fix it.

This series cleans up the different cases so that in many common
modes, such as passing packet up to receive socket, we can simply
use the underlying assumption that the TCP stack already has done
memory accounting.

Next, instead of trying to do memory accounting against the socket
we plan to redirect into, we keep memory accounting on the receive
socket until the skb can be put on the redirect socket. This means
if we do an egress redirect to a socket and sock_writeable() returns
EAGAIN, we can requeue the skb on the workqueue and try again. The
same scenario plays out for ingress. If the skb can not be put on
the receive queue of the redirect socket, then we simply requeue and
retry. In both cases memory is still accounted for against the
receiving socket.

This also handles head-of-line blocking. With the above scheme the
skb is on a queue associated with the socket it will be sent/recv'd
on, but the memory accounting is against the receiving socket. This
means the receive socket can advance to the next skb and avoid
head-of-line blocking, at least until the receive memory on the socket
runs out. This puts a maximum size on the amount of data any
socket can enqueue, giving us bounds on the skb lists so they can't grow
indefinitely.

Overall I think this is a win. Tested with test_sockmap.

These are fixes, but I tagged it for bpf-next considering we are
at -rc8.

v1->v2: Fix uninitialized/unused variables (kernel test robot)
v2->v3: fix typo in patch 2: err = 0 needs to be < 0, so use err = -EIO
---
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-10-11 18:00:57 -07:00
John Fastabend
0b17ad25d8 bpf, sockmap: Add memory accounting so skbs on ingress lists are visible
Move the skb->sk assignment out of sk_psock_bpf_run() and into the individual
callers. Then we can use a proper skb_set_owner_r() call to assign a
sk to an skb. This improves things by also charging the truesize against
the socket's sk_rmem_alloc counter. With this done we get some accounting
in place to ensure the memory associated with skbs on the workqueue is
still being accounted for somewhere. Finally, by using skb_set_owner_r()
the destructor is set up, so we can just let the normal kfree_skb() logic
recover the memory. Combined with the previous patch dropping skb_orphan(),
we can now recover from memory pressure and maintain accounting.

Note, we will charge the skbs against their originating socket even
if they are being redirected into another socket. Once the skb completes the
redirect op, the kfree_skb() will give the memory back. This is important
because if we charged the socket we are redirecting to (like it was
done before this series) the sock_writeable() test could fail because
the skb waiting to be sent is already charged against that socket.

Also, the TLS case is special. Here we wait until we have decided not to
simply PASS the packet up the stack. In the case where we PASS the
packet up the stack, we already have an skb which is accounted for on
the TLS socket context.

For the parser case we continue to just set/clear skb->sk. This is
because the skb being used here may be combined with other skbs or
turned into multiple skbs depending on the parser logic. For example
the parser could request a payload length greater than skb->len so
that the strparser needs to collect multiple skbs. At any rate,
the final result will be handled in the strparser recv callback.

Fixes: 604326b41a ("bpf, sockmap: convert to generic sk_msg interface")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/160226867513.5692.10579573214635925960.stgit@john-Precision-5820-Tower
2020-10-11 18:00:57 -07:00
John Fastabend
10d58d0063 bpf, sockmap: Remove skb_orphan and let normal skb_kfree do cleanup
Calling skb_orphan() is unnecessary in the strp rcv handler because the skb
comes from a skb_clone() in __strp_recv. So it never has a destructor or an
sk assigned. Plus, it's confusing to read because it might hint to the reader
that the skb could have an sk assigned, which is not true. Even if we did
have an sk assigned, it would be cleaner to simply wait for the upcoming
kfree_skb().

Additionally, move the comment about the strparser clone up so it's closer to
the logic it is describing, and add to it so that it is more complete.

Fixes: 604326b41a ("bpf, sockmap: convert to generic sk_msg interface")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/160226865548.5692.9098315689984599579.stgit@john-Precision-5820-Tower
2020-10-11 18:00:57 -07:00
John Fastabend
9047f19e7c bpf, sockmap: Remove dropped data on errors in redirect case
In the sk_skb redirect case we didn't handle the case where we overrun
the sk_rmem_alloc entry on ingress redirect or sk_wmem_alloc on egress.
Because we didn't have anything implemented, we simply dropped the skb.
This meant data could be dropped if socket memory accounting was in
place.

This fixes the above dropped-data case by moving the memory checks
later in the code, where we actually do the send or recv. This pushes
those checks into the workqueue and allows us to return an EAGAIN error,
which in turn allows us to try again later from the workqueue.

Fixes: 51199405f9 ("bpf: skb_verdict, support SK_PASS on RX BPF path")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/160226863689.5692.13861422742592309285.stgit@john-Precision-5820-Tower
2020-10-11 18:00:57 -07:00
John Fastabend
29545f4977 bpf, sockmap: Remove skb_set_owner_w wmem will be taken later from sendpage
The skb_set_owner_w() call is unnecessary here. The sendpage call will create a
fresh skb and set the owner correctly from the workqueue. It's also not entirely
harmless because it consumes cycles and impacts resource accounting
by increasing sk_wmem_alloc. This is charging the socket we are going to
send to for the skb, but we will put it on the workqueue for some time
before this happens, so we are artificially inflating sk_wmem_alloc for
this period. Further, we don't know how many skbs will be used to send the
packet or how it will be broken up when sent over the new socket, so
charging it with one big sum is also not correct when the workqueue may
break it up if facing memory pressure. Seeing we don't know how/when
this is going to be sent, drop the early accounting.

A later patch will do proper accounting charged on receive socket for
the case where skbs get enqueued on the workqueue.

Fixes: 604326b41a ("bpf, sockmap: convert to generic sk_msg interface")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/160226861708.5692.17964237936462425136.stgit@john-Precision-5820-Tower
2020-10-11 18:00:57 -07:00
John Fastabend
9ecbfb06a0 bpf, sockmap: On receive programs try to fast track SK_PASS ingress
When we receive an skb and the ingress skb verdict program returns
SK_PASS, we currently set the ingress flag and put it on the workqueue
so it can be turned into a sk_msg and put on the sk_msg ingress queue,
finally telling userspace with the data_ready hook.

Here we observe that if the workqueue is empty then we can try to
convert into a sk_msg type and call data_ready directly without
bouncing through a workqueue. It's a common pattern to have a recv
verdict program for visibility that always returns SK_PASS. In this
case, unless there is an ENOMEM error or we overrun the socket, we
can avoid the workqueue completely, only using it when we fall back
to error cases caused by memory pressure.

By doing this we eliminate another case where data may be dropped
if errors occur on memory limits in the workqueue.

Fixes: 51199405f9 ("bpf: skb_verdict, support SK_PASS on RX BPF path")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/160226859704.5692.12929678876744977669.stgit@john-Precision-5820-Tower
2020-10-11 18:00:57 -07:00
John Fastabend
cfea28f890 bpf, sockmap: Skb verdict SK_PASS to self already checked rmem limits
For the sk_skb case, where the skb_verdict program returns SK_PASS to continue
to pass the packet up the stack, the memory limits were already checked before
enqueuing in skb_queue_tail from the TCP side. So let's remove the extra checks
here. The theory is that if the TCP stack believes we have memory to receive
the packet, then let's trust the stack and not double-check the limits.

In fact, the accounting here can cause a drop if sk_rmem_alloc has increased
after the stack accepted this packet but before the duplicate check here.
And worse, if this happens there is no retransmit, because the TCP stack
already believes the data has been received.

Fixes: 51199405f9 ("bpf: skb_verdict, support SK_PASS on RX BPF path")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/160226857664.5692.668205469388498375.stgit@john-Precision-5820-Tower
2020-10-11 18:00:57 -07:00
Pablo Neira Ayuso
793d5d6124 netfilter: flowtable: reduce calls to pskb_may_pull()
Make two unfront calls to pskb_may_pull() to linearize the network and
transport header.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2020-10-12 01:58:10 +02:00
Pablo Neira Ayuso
d3519cb89f netfilter: nf_tables: add inet ingress support
This patch adds a new ingress hook for the inet family. The inet ingress
hook emulates the IP receive path code; therefore, unclean packets are
dropped before walking over the ruleset in this basechain.

This patch also introduces the nft_base_chain_netdev() helper function
to check if this hook is bound to one or more devices (through the hook
list infrastructure). This check allows the inet ingress chain to be
handled the same way as a netdev ingress chain from the control
plane.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2020-10-12 01:57:34 +02:00
Pablo Neira Ayuso
60a3815da7 netfilter: add inet ingress support
This patch adds the NF_INET_INGRESS pseudohook for the NFPROTO_INET
family. This maps the new hook to the existing NFPROTO_NETDEV family
and its NF_NETDEV_INGRESS hook. The hook does not guarantee that packets
are inet only; users must filter out non-IP traffic explicitly.

This infrastructure makes it easier to support this new hook in nf_tables.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2020-10-12 01:57:34 +02:00
Pablo Neira Ayuso
ddcfa710d4 netfilter: add nf_ingress_hook() helper function
Add a helper function to check if this is an ingress hook.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2020-10-12 01:57:34 +02:00
Pablo Neira Ayuso
afd9024cd1 netfilter: add nf_static_key_{inc,dec}
Add helper functions to increment and decrement the hook static keys.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2020-10-12 01:57:34 +02:00
longguang.yue
073b04e76b ipvs: inspect reply packets from DR/TUN real servers
Just like for MASQ, inspect the reply packets coming from DR/TUN
real servers and alter the connection's state and timeout
according to the protocol.

It's IPVS's duty to update traffic statistics when packets are matched,
no matter what mode is in use.

Signed-off-by: longguang.yue <bigclouds@163.com>
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2020-10-12 01:57:34 +02:00
Alexei Starovoitov
ebb034b15b bpf: Migrate from patchwork.ozlabs.org to patchwork.kernel.org.
Move the bpf/bpf-next patch processing queue to patchwork.kernel.org.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20201011200149.66537-1-alexei.starovoitov@gmail.com
2020-10-11 22:05:47 +02:00
Toke Høiland-Jørgensen
d1c362e1dd bpf: Always return target ifindex in bpf_fib_lookup
The bpf_fib_lookup() helper performs a neighbour lookup for the destination
IP and returns BPF_FIB_LKUP_RET_NO_NEIGH if this fails, with the expectation
that the BPF program will pass the packet up the stack in this case.
However, with the addition of bpf_redirect_neigh(), that helper can be used
instead to perform the neighbour lookup, at the cost of a bit of duplicated
work.

For that we still need the target ifindex, and since bpf_fib_lookup()
already has that at the time it performs the neighbour lookup, there is
really no reason why it can't just return it in any case. So let's just
always return the ifindex if the FIB lookup itself succeeds.
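
A hedged sketch of the pattern this enables, written as a TC program
(not from the kernel tree; lookup key setup and error handling are
omitted, and the hand-off to bpf_redirect_neigh() is only indicated in
a comment):

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>

    SEC("classifier")
    int fib_fwd_sketch(struct __sk_buff *skb)
    {
        struct bpf_fib_lookup fib = {}; /* family/addresses filled from the packet in real code */
        int rc = bpf_fib_lookup(skb, &fib, sizeof(fib), 0);

        if (rc == BPF_FIB_LKUP_RET_SUCCESS)
            return bpf_redirect(fib.ifindex, 0); /* neighbour known, MACs in fib are valid */

        if (rc == BPF_FIB_LKUP_RET_NO_NEIGH)
            /* with this change fib.ifindex is populated here too, so the
             * program could hand off to bpf_redirect_neigh() instead of
             * punting the packet up the stack */
            bpf_printk("no neighbour, target ifindex %d", fib.ifindex);

        return TC_ACT_OK;
    }

    char _license[] SEC("license") = "GPL";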

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Link: https://lore.kernel.org/bpf/20201009184234.134214-1-toke@redhat.com
2020-10-11 21:59:20 +02:00
Alexei Starovoitov
52b07e56af Merge branch 'samples: bpf: Refactor XDP programs with libbpf'
"Daniel T. Lee" says:

====================
To avoid confusion caused by the increasing fragmentation of the BPF
loader programs, this series converts the samples from the previous
bpf_load loader to the libbpf loader.

Thanks to libbpf's bpf_link interface, managing a tracepoint BPF
program is much easier. bpf_program__attach_tracepoint() manages enabling
the tracepoint event and attaching the BPF program to it through a
single bpf_link interface, so there is no need to manage event_fd and
prog_fd separately.

And due to addition of generic bpf_program__attach() to libbpf, it is
now possible to attach BPF programs with __attach() instead of
explicitly calling __attach_<type>().

This patchset refactors xdp_monitor with using this libbpf API, and the
bpf_load is removed and migrated to libbpf. Also, attach_tracepoint()
is replaced with the generic __attach() method in xdp_redirect_cpu.
Moreover, maps in kern program have been converted to BTF-defined map.
---
Changes in v2:
 - added cleanup logic for bpf_link and bpf_object in xdp_monitor
 - program section match with bpf_program__is_<type> instead of strncmp
 - revert BTF key/val type to default of BPF_MAP_TYPE_PERF_EVENT_ARRAY
 - split increment into a separate statement
 - refactor pointer array initialization
 - error code cleanup
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-10-11 12:14:43 -07:00
Daniel T. Lee
321f632450 samples: bpf: Refactor XDP kern program maps with BTF-defined map
Most of the samples were converted to use the new BTF-defined maps as
they moved to libbpf, but some of the samples were missed.

Instead of using the previous BPF map definition, this commit refactors
the xdp_monitor and xdp_sample_pkts_kern map definitions with the new
BTF-defined map format.

Also, this commit removes the max_entries attribute for the PERF_EVENT_ARRAY
map type. The libbpf's bpf_object__create_map() will automatically
set max_entries to the maximum configured number of CPUs on the host.
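
For reference, a BTF-defined PERF_EVENT_ARRAY map declaration looks
roughly like this (a sketch following libbpf conventions; the map name
is made up):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
        __uint(key_size, sizeof(int));
        __uint(value_size, sizeof(unsigned int));
        /* no max_entries: libbpf sizes the map to the number of CPUs */
    } sample_events SEC(".maps");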

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20201010181734.1109-4-danieltimlee@gmail.com
2020-10-11 12:14:36 -07:00
Daniel T. Lee
151936bf51 samples: bpf: Replace attach_tracepoint() to attach() in xdp_redirect_cpu
From commit d7a18ea7e8 ("libbpf: Add generic bpf_program__attach()"),
for some BPF programs, it is now possible to attach BPF programs
with __attach() instead of explicitly calling __attach_<type>().

This commit replaces __attach_tracepoint() with libbpf's generic
__attach() method. In addition, it refactors the logic of setting
the map FD to simplify the code. Also, the missing removal of
bpf_load.o in the Makefile has been fixed.
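
Roughly, the change amounts to the following (the tracepoint names are
illustrative and error handling is abbreviated):

    #include <bpf/libbpf.h>

    /* before the change this would have been:
     *   link = bpf_program__attach_tracepoint(prog, "xdp", "xdp_redirect");
     * with the generic API the SEC() name of the program determines the target */
    static struct bpf_link *attach_one(struct bpf_program *prog)
    {
        struct bpf_link *link = bpf_program__attach(prog);

        if (libbpf_get_error(link))
            return NULL;
        return link;
    }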

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20201010181734.1109-3-danieltimlee@gmail.com
2020-10-11 12:14:36 -07:00
Daniel T. Lee
8ac91df6de samples: bpf: Refactor xdp_monitor with libbpf
To avoid confusion caused by the increasing fragmentation of the BPF
loader programs, this commit changes xdp_monitor to the libbpf loader
instead of using bpf_load.

Thanks to libbpf's bpf_link interface, managing a tracepoint BPF
program is much easier. bpf_program__attach_tracepoint() manages enabling
the tracepoint event and attaching the BPF program to it through a
single bpf_link interface, so there is no need to manage event_fd and
prog_fd separately.

This commit refactors xdp_monitor using this libbpf API; the
bpf_load usage is removed and migrated to libbpf.
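
The resulting loader flow looks roughly like this (object file name,
function name, and error handling are illustrative rather than the
exact sample code):

    #include <bpf/libbpf.h>

    static int load_and_attach(void)
    {
        struct bpf_object *obj;
        struct bpf_program *prog;
        struct bpf_link *link;

        obj = bpf_object__open_file("xdp_monitor_kern.o", NULL);
        if (libbpf_get_error(obj))
            return -1;
        if (bpf_object__load(obj))
            goto err;

        bpf_object__for_each_program(prog, obj) {
            /* one bpf_link per tracepoint program; the links are what gets
             * destroyed on cleanup instead of raw event/prog fds */
            link = bpf_program__attach(prog);
            if (libbpf_get_error(link))
                goto err;
        }
        return 0;
    err:
        bpf_object__close(obj);
        return -1;
    }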

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20201010181734.1109-2-danieltimlee@gmail.com
2020-10-11 12:14:36 -07:00
Jakub Kicinski
bc081a693a Merge branch 'Offload-tc-vlan-mangle-to-mscc_ocelot-switch'
Vladimir Oltean says:

====================
Offload tc-vlan mangle to mscc_ocelot switch

This series offloads one more action to the VCAP IS1 ingress TCAM, which
is to change the classified VLAN for packets, according to the VCAP IS1
keys (VLAN, source MAC, source IP, EtherType, etc).
====================

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-11 11:19:25 -07:00
Vladimir Oltean
82c200be7c selftests: net: mscc: ocelot: add test for VLAN modify action
Create a test that changes a VLAN ID from 200 to 300.

We also need to modify the preferences of the filters installed for the
other rules so that they are unique, because we now install the "tc-vlan
modify" filter in VCAP IS1 only temporarily, and we need to perform the
deletion by filter preference number.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-11 11:19:04 -07:00
Vladimir Oltean
ea440cd2d9 net: dsa: tag_ocelot: use VLAN information from tagging header when available
When the Extraction Frame Header contains a valid classified VLAN, use
that instead of the VLAN header present in the packet.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-11 11:19:04 -07:00
Vladimir Oltean
70edfae15a net: mscc: ocelot: offload VLAN mangle action to VCAP IS1
The VCAP_IS1_ACT_VID_REPLACE_ENA action, from the VCAP IS1 ingress TCAM,
changes the classified VLAN.

We are only exposing this ability for switch ports that are under VLAN
aware bridges. This is because in standalone ports mode and under a
bridge with vlan_filtering=0, the ocelot driver configures the switch to
operate as VLAN-unaware, so the classified VLAN is not derived from the
802.1Q header from the packet, but instead is always equal to the
port-based VLAN ID of the ingress port. We _can_ still change the
classified VLAN for packets when operating in this mode, but the end
result will most likely be a drop, since both the ingress and the egress
port need to be members of the modified VLAN. And even if we install the
new classified VLAN into the VLAN table of the switch, the result would
still not be as expected: we wouldn't see, on the output port, the
modified VLAN tag, but the original one, even though the classified VLAN
was indeed modified. This is because of how the hardware works: on
egress, what is pushed to the frame is a "port tag", which gives us the
following options:

- Tag all frames with port tag (derived from the classified VLAN)
- Tag all frames with port tag, except if the classified VLAN is 0 or
  equal to the native VLAN of the egress port
- No port tag

Needless to say, in VLAN-unaware mode we are disabling the port tag.
Otherwise, the existing VLAN tag would be ignored, and a second VLAN
tag (the port tag), holding the classified VLAN, would be pushed
(instead of replacing the existing 802.1Q tag). This is definitely not
what the user wanted when installing a "vlan modify" action.

So it is simply not worth bothering with VLAN modify rules under other
configurations except when the ports are fully VLAN-aware.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-11 11:19:04 -07:00
Jakub Kicinski
bea4b3095b Merge branch 'enetc-Migrate-to-PHYLINK-and-PCS_LYNX'
Claudiu Manoil says:

====================
enetc: Migrate to PHYLINK and PCS_LYNX

Transitioning the enetc driver from phylib to phylink.
Offloading the serdes configuration to the PCS_LYNX
module is a mandatory part of this transition. Aiming
for a cleaner, more maintainable design, and better
code reuse.
The first 2 patches are cleanup prerequisites.

Tested on a p1028rdb board.

v2: validate() now explicitly rejects all interface modes not
supported by the driver instead of relying on the device tree
to provide only supported interfaces, and the redundant
activation of pcs_poll was dropped (addressing Ioana's findings)
====================

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-11 11:04:56 -07:00
Claudiu Manoil
71b77a7a27 enetc: Migrate to PHYLINK and PCS_LYNX
This is a methodical transition of the driver from phylib
to phylink, following the guidelines from sfp-phylink.rst.
The MAC register configurations based on interface mode
were moved from the probing path to the mac_config() hook.
MAC enable and disable commands (enabling Rx and Tx paths
at MAC level) were also extracted and assigned to their
corresponding phylink hooks.
As part of the migration to phylink, the serdes configuration
from the driver was offloaded to the PCS_LYNX module,
introduced in commit 0da4c3d393 ("net: phy: add Lynx PCS module"),
the PCS_LYNX module being a mandatory component required to
make the enetc driver work with phylink.

Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com>
Reviewed-by: Ioana Ciornei <ioana.cionei@nxp.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-11 11:04:42 -07:00