Merge tag 'net-6.11-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from bluetooth, wireless and netfilter.

  No known outstanding regressions.

  Current release - regressions:

   - wifi: iwlwifi: fix hibernation

   - eth: ionic: prevent tx_timeout due to frequent doorbell ringing

  Previous releases - regressions:

   - sched: fix sch_fq incorrect behavior for small weights

   - wifi:
      - iwlwifi: take the mutex before running link selection
      - wfx: repair open network AP mode

   - netfilter: restore IP sanity checks for netdev/egress

   - tcp: fix forever orphan socket caused by tcp_abort

   - mptcp: close subflow when receiving TCP+FIN

   - bluetooth: fix random crash seen while removing btnxpuart driver

  Previous releases - always broken:

   - mptcp: more fixes for the in-kernel PM

   - eth: bonding: change ipsec_lock from spin lock to mutex

   - eth: mana: fix race of mana_hwc_post_rx_wqe and new hwc response

  Misc:

   - documentation: drop special comment style for net code"

* tag 'net-6.11-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (57 commits)
  nfc: pn533: Add poll mod list filling check
  mailmap: update entry for Sriram Yagnaraman
  selftests: mptcp: join: check re-re-adding ID 0 signal
  mptcp: pm: ADD_ADDR 0 is not a new address
  selftests: mptcp: join: validate event numbers
  mptcp: avoid duplicated SUB_CLOSED events
  selftests: mptcp: join: check re-re-adding ID 0 endp
  mptcp: pm: fix ID 0 endp usage after multiple re-creations
  mptcp: pm: do not remove already closed subflows
  selftests: mptcp: join: no extra msg if no counter
  selftests: mptcp: join: check re-adding init endp with != id
  mptcp: pm: reset MPC endp ID when re-added
  mptcp: pm: skip connecting to already established sf
  mptcp: pm: send ACK on an active subflow
  selftests: mptcp: join: check removing ID 0 endpoint
  mptcp: pm: fix RM_ADDR ID for the initial subflow
  mptcp: pm: reuse ID 0 after delete and re-add
  net: busy-poll: use ktime_get_ns() instead of local_clock()
  sctp: fix association labeling in the duplicate COOKIE-ECHO case
  mptcp: pr_debug: add missing \n at the end
  ...
Linus Torvalds 2024-08-30 06:14:39 +12:00
commit 0dd5dd63ba
52 changed files with 867 additions and 368 deletions

View File

@ -614,6 +614,7 @@ Simon Kelley <simon@thekelleys.org.uk>
Sricharan Ramabadhran <quic_srichara@quicinc.com> <sricharan@codeaurora.org>
Srinivas Ramana <quic_sramana@quicinc.com> <sramana@codeaurora.org>
Sriram R <quic_srirrama@quicinc.com> <srirrama@codeaurora.org>
Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> <sriram.yagnaraman@est.tech>
Stanislav Fomichev <sdf@fomichev.me> <sdf@google.com>
Stefan Wahren <wahrenst@gmx.net> <stefan.wahren@i2se.com>
Stéphane Witzmann <stephane.witzmann@ubpmes.univ-bpclermont.fr>

View File

@ -629,18 +629,6 @@ The preferred style for long (multi-line) comments is:
* with beginning and ending almost-blank lines.
*/
For files in net/ and drivers/net/ the preferred style for long (multi-line)
comments is a little different.
.. code-block:: c
/* The preferred comment style for files in net/ and drivers/net
* looks like this.
*
* It is nearly the same as the generally preferred comment style,
* but there is no initial almost-blank line.
*/
It's also important to comment data, whether they are basic types or derived
types. To this end, use just one data declaration per line (no commas for
multiple data declarations). This leaves you room for a small comment on each

View File

@ -355,23 +355,6 @@ just do it. As a result, a sequence of smaller series gets merged quicker and
with better review coverage. Re-posting large series also increases the mailing
list traffic.
Multi-line comments
~~~~~~~~~~~~~~~~~~~
Comment style convention is slightly different for networking and most of
the tree. Instead of this::
/*
* foobar blah blah blah
* another line of text
*/
it is requested that you make it look like this::
/* foobar blah blah blah
* another line of text
*/
Local variable ordering ("reverse xmas tree", "RCS")
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -12,6 +12,7 @@
#include <linux/acpi.h>
#include <acpi/acpi_bus.h>
#include <asm/unaligned.h>
#include <linux/efi.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
@ -26,6 +27,8 @@
#define ECDSA_OFFSET 644
#define ECDSA_HEADER_LEN 320
#define BTINTEL_EFI_DSBR L"UefiCnvCommonDSBR"
enum {
DSM_SET_WDISABLE2_DELAY = 1,
DSM_SET_RESET_METHOD = 3,
@ -2616,6 +2619,120 @@ static u8 btintel_classify_pkt_type(struct hci_dev *hdev, struct sk_buff *skb)
return hci_skb_pkt_type(skb);
}
/*
* UefiCnvCommonDSBR UEFI variable provides information from the OEM platforms
* if they have replaced the BRI (Bluetooth Radio Interface) resistor to
* overcome the potential STEP errors on their designs. Based on the
* configuration, bluetooth firmware shall adjust the BRI response line drive
* strength. The below structure represents DSBR data.
* struct {
* u8 header;
* u32 dsbr;
* } __packed;
*
* header - defines revision number of the structure
* dsbr - defines drive strength BRI response
* bit0
* 0 - instructs bluetooth firmware to use default values
* 1 - instructs bluetooth firmware to override default values
* bit3:1
* Reserved
* bit7:4
* DSBR override values (only if bit0 is set). Default value is 0xF.
* bit31:7
* Reserved
* Expected values for dsbr field:
* 1. 0xF1 - indicates that the resistor on board is 33 Ohm
* 2. 0x00 or 0xB1 - indicates that the resistor on board is 10 Ohm
* 3. Non existing UEFI variable or invalid (none of the above) - indicates
* that the resistor on board is 10 Ohm
* Even if the UEFI variable is not present, the driver shall send the 0xfc0a
* command so that firmware uses the default values.
*
*/
static int btintel_uefi_get_dsbr(u32 *dsbr_var)
{
struct btintel_dsbr {
u8 header;
u32 dsbr;
} __packed data;
efi_status_t status;
unsigned long data_size = 0;
efi_guid_t guid = EFI_GUID(0xe65d8884, 0xd4af, 0x4b20, 0x8d, 0x03,
0x77, 0x2e, 0xcc, 0x3d, 0xa5, 0x31);
if (!IS_ENABLED(CONFIG_EFI))
return -EOPNOTSUPP;
if (!efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE))
return -EOPNOTSUPP;
status = efi.get_variable(BTINTEL_EFI_DSBR, &guid, NULL, &data_size,
NULL);
if (status != EFI_BUFFER_TOO_SMALL || !data_size)
return -EIO;
status = efi.get_variable(BTINTEL_EFI_DSBR, &guid, NULL, &data_size,
&data);
if (status != EFI_SUCCESS)
return -ENXIO;
*dsbr_var = data.dsbr;
return 0;
}
static int btintel_set_dsbr(struct hci_dev *hdev, struct intel_version_tlv *ver)
{
struct btintel_dsbr_cmd {
u8 enable;
u8 dsbr;
} __packed;
struct btintel_dsbr_cmd cmd;
struct sk_buff *skb;
u8 status;
u32 dsbr;
bool apply_dsbr;
int err;
/* DSBR command needs to be sent for BlazarI + B0 step product after
* downloading IML image.
*/
apply_dsbr = (ver->img_type == BTINTEL_IMG_IML &&
((ver->cnvi_top & 0xfff) == BTINTEL_CNVI_BLAZARI) &&
INTEL_CNVX_TOP_STEP(ver->cnvi_top) == 0x01);
if (!apply_dsbr)
return 0;
dsbr = 0;
err = btintel_uefi_get_dsbr(&dsbr);
if (err < 0)
bt_dev_dbg(hdev, "Error reading efi: %ls (%d)",
BTINTEL_EFI_DSBR, err);
cmd.enable = dsbr & BIT(0);
cmd.dsbr = dsbr >> 4 & 0xF;
bt_dev_info(hdev, "dsbr: enable: 0x%2.2x value: 0x%2.2x", cmd.enable,
cmd.dsbr);
skb = __hci_cmd_sync(hdev, 0xfc0a, sizeof(cmd), &cmd, HCI_CMD_TIMEOUT);
if (IS_ERR(skb))
return -bt_to_errno(PTR_ERR(skb));
status = skb->data[0];
kfree_skb(skb);
if (status)
return -bt_to_errno(status);
return 0;
}
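
For illustration, the bit layout documented above can be exercised outside the driver. The following is a stand-alone sketch (not kernel code; the 0xF1 input is one of the expected dsbr values listed in the comment) showing how the enable bit and override field map onto the 0xfc0a command payload:

#include <stdint.h>
#include <stdio.h>

/* Mirrors the decode in btintel_set_dsbr(): bit0 enables the override,
 * bits 7:4 carry the drive-strength value sent to firmware.
 */
int main(void)
{
	uint32_t dsbr = 0xF1;	/* 33 Ohm BRI resistor, per the comment above */
	uint8_t enable = dsbr & 0x1;
	uint8_t value = (dsbr >> 4) & 0xF;

	printf("dsbr: enable: 0x%2.2x value: 0x%2.2x\n", enable, value);
	return 0;
}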
int btintel_bootloader_setup_tlv(struct hci_dev *hdev,
struct intel_version_tlv *ver)
{
@ -2650,6 +2767,13 @@ int btintel_bootloader_setup_tlv(struct hci_dev *hdev,
if (err)
return err;
/* set drive strength of BRI response */
err = btintel_set_dsbr(hdev, ver);
if (err) {
bt_dev_err(hdev, "Failed to send dsbr command (%d)", err);
return err;
}
/* If image type returned is BTINTEL_IMG_IML, then controller supports
* intermediate loader image
*/

View File

@ -449,6 +449,23 @@ static bool ps_wakeup(struct btnxpuart_dev *nxpdev)
return false;
}
static void ps_cleanup(struct btnxpuart_dev *nxpdev)
{
struct ps_data *psdata = &nxpdev->psdata;
u8 ps_state;
mutex_lock(&psdata->ps_lock);
ps_state = psdata->ps_state;
mutex_unlock(&psdata->ps_lock);
if (ps_state != PS_STATE_AWAKE)
ps_control(psdata->hdev, PS_STATE_AWAKE);
ps_cancel_timer(nxpdev);
cancel_work_sync(&psdata->work);
mutex_destroy(&psdata->ps_lock);
}
static int send_ps_cmd(struct hci_dev *hdev, void *data)
{
struct btnxpuart_dev *nxpdev = hci_get_drvdata(hdev);
@ -1363,7 +1380,6 @@ static int btnxpuart_close(struct hci_dev *hdev)
{
struct btnxpuart_dev *nxpdev = hci_get_drvdata(hdev);
ps_wakeup(nxpdev);
serdev_device_close(nxpdev->serdev);
skb_queue_purge(&nxpdev->txq);
if (!IS_ERR_OR_NULL(nxpdev->rx_skb)) {
@ -1516,8 +1532,8 @@ static void nxp_serdev_remove(struct serdev_device *serdev)
nxpdev->new_baudrate = nxpdev->fw_init_baudrate;
nxp_set_baudrate_cmd(hdev, NULL);
}
ps_cancel_timer(nxpdev);
}
ps_cleanup(nxpdev);
hci_unregister_dev(hdev);
hci_free_dev(hdev);
}

View File

@ -427,6 +427,8 @@ static int bond_ipsec_add_sa(struct xfrm_state *xs,
struct netlink_ext_ack *extack)
{
struct net_device *bond_dev = xs->xso.dev;
struct net_device *real_dev;
netdevice_tracker tracker;
struct bond_ipsec *ipsec;
struct bonding *bond;
struct slave *slave;
@ -438,74 +440,80 @@ static int bond_ipsec_add_sa(struct xfrm_state *xs,
rcu_read_lock();
bond = netdev_priv(bond_dev);
slave = rcu_dereference(bond->curr_active_slave);
if (!slave) {
real_dev = slave ? slave->dev : NULL;
netdev_hold(real_dev, &tracker, GFP_ATOMIC);
rcu_read_unlock();
return -ENODEV;
if (!real_dev) {
err = -ENODEV;
goto out;
}
if (!slave->dev->xfrmdev_ops ||
!slave->dev->xfrmdev_ops->xdo_dev_state_add ||
netif_is_bond_master(slave->dev)) {
if (!real_dev->xfrmdev_ops ||
!real_dev->xfrmdev_ops->xdo_dev_state_add ||
netif_is_bond_master(real_dev)) {
NL_SET_ERR_MSG_MOD(extack, "Slave does not support ipsec offload");
rcu_read_unlock();
return -EINVAL;
err = -EINVAL;
goto out;
}
ipsec = kmalloc(sizeof(*ipsec), GFP_ATOMIC);
ipsec = kmalloc(sizeof(*ipsec), GFP_KERNEL);
if (!ipsec) {
rcu_read_unlock();
return -ENOMEM;
err = -ENOMEM;
goto out;
}
xs->xso.real_dev = slave->dev;
err = slave->dev->xfrmdev_ops->xdo_dev_state_add(xs, extack);
xs->xso.real_dev = real_dev;
err = real_dev->xfrmdev_ops->xdo_dev_state_add(xs, extack);
if (!err) {
ipsec->xs = xs;
INIT_LIST_HEAD(&ipsec->list);
spin_lock_bh(&bond->ipsec_lock);
mutex_lock(&bond->ipsec_lock);
list_add(&ipsec->list, &bond->ipsec_list);
spin_unlock_bh(&bond->ipsec_lock);
mutex_unlock(&bond->ipsec_lock);
} else {
kfree(ipsec);
}
rcu_read_unlock();
out:
netdev_put(real_dev, &tracker);
return err;
}
static void bond_ipsec_add_sa_all(struct bonding *bond)
{
struct net_device *bond_dev = bond->dev;
struct net_device *real_dev;
struct bond_ipsec *ipsec;
struct slave *slave;
rcu_read_lock();
slave = rcu_dereference(bond->curr_active_slave);
if (!slave)
goto out;
slave = rtnl_dereference(bond->curr_active_slave);
real_dev = slave ? slave->dev : NULL;
if (!real_dev)
return;
if (!slave->dev->xfrmdev_ops ||
!slave->dev->xfrmdev_ops->xdo_dev_state_add ||
netif_is_bond_master(slave->dev)) {
spin_lock_bh(&bond->ipsec_lock);
mutex_lock(&bond->ipsec_lock);
if (!real_dev->xfrmdev_ops ||
!real_dev->xfrmdev_ops->xdo_dev_state_add ||
netif_is_bond_master(real_dev)) {
if (!list_empty(&bond->ipsec_list))
slave_warn(bond_dev, slave->dev,
slave_warn(bond_dev, real_dev,
"%s: no slave xdo_dev_state_add\n",
__func__);
spin_unlock_bh(&bond->ipsec_lock);
goto out;
}
spin_lock_bh(&bond->ipsec_lock);
list_for_each_entry(ipsec, &bond->ipsec_list, list) {
ipsec->xs->xso.real_dev = slave->dev;
if (slave->dev->xfrmdev_ops->xdo_dev_state_add(ipsec->xs, NULL)) {
slave_warn(bond_dev, slave->dev, "%s: failed to add SA\n", __func__);
/* If new state is added before ipsec_lock acquired */
if (ipsec->xs->xso.real_dev == real_dev)
continue;
ipsec->xs->xso.real_dev = real_dev;
if (real_dev->xfrmdev_ops->xdo_dev_state_add(ipsec->xs, NULL)) {
slave_warn(bond_dev, real_dev, "%s: failed to add SA\n", __func__);
ipsec->xs->xso.real_dev = NULL;
}
}
spin_unlock_bh(&bond->ipsec_lock);
out:
rcu_read_unlock();
mutex_unlock(&bond->ipsec_lock);
}
/**
@ -515,6 +523,8 @@ out:
static void bond_ipsec_del_sa(struct xfrm_state *xs)
{
struct net_device *bond_dev = xs->xso.dev;
struct net_device *real_dev;
netdevice_tracker tracker;
struct bond_ipsec *ipsec;
struct bonding *bond;
struct slave *slave;
@ -525,6 +535,9 @@ static void bond_ipsec_del_sa(struct xfrm_state *xs)
rcu_read_lock();
bond = netdev_priv(bond_dev);
slave = rcu_dereference(bond->curr_active_slave);
real_dev = slave ? slave->dev : NULL;
netdev_hold(real_dev, &tracker, GFP_ATOMIC);
rcu_read_unlock();
if (!slave)
goto out;
@ -532,18 +545,19 @@ static void bond_ipsec_del_sa(struct xfrm_state *xs)
if (!xs->xso.real_dev)
goto out;
WARN_ON(xs->xso.real_dev != slave->dev);
WARN_ON(xs->xso.real_dev != real_dev);
if (!slave->dev->xfrmdev_ops ||
!slave->dev->xfrmdev_ops->xdo_dev_state_delete ||
netif_is_bond_master(slave->dev)) {
slave_warn(bond_dev, slave->dev, "%s: no slave xdo_dev_state_delete\n", __func__);
if (!real_dev->xfrmdev_ops ||
!real_dev->xfrmdev_ops->xdo_dev_state_delete ||
netif_is_bond_master(real_dev)) {
slave_warn(bond_dev, real_dev, "%s: no slave xdo_dev_state_delete\n", __func__);
goto out;
}
slave->dev->xfrmdev_ops->xdo_dev_state_delete(xs);
real_dev->xfrmdev_ops->xdo_dev_state_delete(xs);
out:
spin_lock_bh(&bond->ipsec_lock);
netdev_put(real_dev, &tracker);
mutex_lock(&bond->ipsec_lock);
list_for_each_entry(ipsec, &bond->ipsec_list, list) {
if (ipsec->xs == xs) {
list_del(&ipsec->list);
@ -551,40 +565,72 @@ out:
break;
}
}
spin_unlock_bh(&bond->ipsec_lock);
rcu_read_unlock();
mutex_unlock(&bond->ipsec_lock);
}
static void bond_ipsec_del_sa_all(struct bonding *bond)
{
struct net_device *bond_dev = bond->dev;
struct net_device *real_dev;
struct bond_ipsec *ipsec;
struct slave *slave;
rcu_read_lock();
slave = rcu_dereference(bond->curr_active_slave);
if (!slave) {
rcu_read_unlock();
slave = rtnl_dereference(bond->curr_active_slave);
real_dev = slave ? slave->dev : NULL;
if (!real_dev)
return;
}
spin_lock_bh(&bond->ipsec_lock);
mutex_lock(&bond->ipsec_lock);
list_for_each_entry(ipsec, &bond->ipsec_list, list) {
if (!ipsec->xs->xso.real_dev)
continue;
if (!slave->dev->xfrmdev_ops ||
!slave->dev->xfrmdev_ops->xdo_dev_state_delete ||
netif_is_bond_master(slave->dev)) {
slave_warn(bond_dev, slave->dev,
if (!real_dev->xfrmdev_ops ||
!real_dev->xfrmdev_ops->xdo_dev_state_delete ||
netif_is_bond_master(real_dev)) {
slave_warn(bond_dev, real_dev,
"%s: no slave xdo_dev_state_delete\n",
__func__);
} else {
slave->dev->xfrmdev_ops->xdo_dev_state_delete(ipsec->xs);
real_dev->xfrmdev_ops->xdo_dev_state_delete(ipsec->xs);
if (real_dev->xfrmdev_ops->xdo_dev_state_free)
real_dev->xfrmdev_ops->xdo_dev_state_free(ipsec->xs);
}
}
spin_unlock_bh(&bond->ipsec_lock);
mutex_unlock(&bond->ipsec_lock);
}
static void bond_ipsec_free_sa(struct xfrm_state *xs)
{
struct net_device *bond_dev = xs->xso.dev;
struct net_device *real_dev;
netdevice_tracker tracker;
struct bonding *bond;
struct slave *slave;
if (!bond_dev)
return;
rcu_read_lock();
bond = netdev_priv(bond_dev);
slave = rcu_dereference(bond->curr_active_slave);
real_dev = slave ? slave->dev : NULL;
netdev_hold(real_dev, &tracker, GFP_ATOMIC);
rcu_read_unlock();
if (!slave)
goto out;
if (!xs->xso.real_dev)
goto out;
WARN_ON(xs->xso.real_dev != real_dev);
if (real_dev && real_dev->xfrmdev_ops &&
real_dev->xfrmdev_ops->xdo_dev_state_free)
real_dev->xfrmdev_ops->xdo_dev_state_free(xs);
out:
netdev_put(real_dev, &tracker);
}
/**
@ -627,6 +673,7 @@ out:
static const struct xfrmdev_ops bond_xfrmdev_ops = {
.xdo_dev_state_add = bond_ipsec_add_sa,
.xdo_dev_state_delete = bond_ipsec_del_sa,
.xdo_dev_state_free = bond_ipsec_free_sa,
.xdo_dev_offload_ok = bond_ipsec_offload_ok,
};
#endif /* CONFIG_XFRM_OFFLOAD */
@ -5877,7 +5924,7 @@ void bond_setup(struct net_device *bond_dev)
/* set up xfrm device ops (only supported in active-backup right now) */
bond_dev->xfrmdev_ops = &bond_xfrmdev_ops;
INIT_LIST_HEAD(&bond->ipsec_list);
spin_lock_init(&bond->ipsec_lock);
mutex_init(&bond->ipsec_lock);
#endif /* CONFIG_XFRM_OFFLOAD */
/* don't acquire bond device's netif_tx_lock when transmitting */
@ -5926,6 +5973,10 @@ static void bond_uninit(struct net_device *bond_dev)
__bond_release_one(bond_dev, slave->dev, true, true);
netdev_info(bond_dev, "Released all slaves\n");
#ifdef CONFIG_XFRM_OFFLOAD
mutex_destroy(&bond->ipsec_lock);
#endif /* CONFIG_XFRM_OFFLOAD */
bond_set_slave_arr(bond, NULL, NULL);
list_del_rcu(&bond->bond_list);
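
The spinlock-to-mutex conversion above exists because the xdo_dev_state_add() driver callback may sleep, which is illegal under spin_lock_bh(); with a mutex the ipsec entry can also be allocated with GFP_KERNEL. A schematic user-space analogue of the reworked bond_ipsec_add_sa() flow (hypothetical names; pthreads standing in for kernel locking):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct entry { struct entry *next; int id; };

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct entry *ipsec_list;

/* stands in for a sleeping xdo_dev_state_add() driver call */
static int dev_state_add(int id)
{
	(void)id;
	usleep(1000);
	return 0;
}

static int add_sa(int id)
{
	struct entry *e = malloc(sizeof(*e));	/* GFP_KERNEL analogue */

	if (!e)
		return -1;
	if (dev_state_add(id)) {	/* may sleep: no spinning lock held */
		free(e);
		return -1;
	}
	e->id = id;
	pthread_mutex_lock(&list_lock);	/* sleeping lock, short critical section */
	e->next = ipsec_list;
	ipsec_list = e;
	pthread_mutex_unlock(&list_lock);
	return 0;
}

int main(void)
{
	if (!add_sa(1))
		printf("head id=%d\n", ipsec_list->id);
	return 0;
}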

View File

@ -572,7 +572,7 @@ static bool ftgmac100_rx_packet(struct ftgmac100 *priv, int *processed)
(*processed)++;
return true;
drop:
drop:
/* Clean rxdes0 (which resets own bit) */
rxdes->rxdes0 = cpu_to_le32(status & priv->rxdes0_edorr_mask);
priv->rx_pointer = ftgmac100_next_rx_pointer(priv, pointer);
@ -656,6 +656,11 @@ static bool ftgmac100_tx_complete_packet(struct ftgmac100 *priv)
ftgmac100_free_tx_packet(priv, pointer, skb, txdes, ctl_stat);
txdes->txdes0 = cpu_to_le32(ctl_stat & priv->txdes0_edotr_mask);
/* Ensure the descriptor config is visible before setting the tx
* pointer.
*/
smp_wmb();
priv->tx_clean_pointer = ftgmac100_next_tx_pointer(priv, pointer);
return true;
@ -809,6 +814,11 @@ static netdev_tx_t ftgmac100_hard_start_xmit(struct sk_buff *skb,
dma_wmb();
first->txdes0 = cpu_to_le32(f_ctl_stat);
/* Ensure the descriptor config is visible before setting the tx
* pointer.
*/
smp_wmb();
/* Update next TX pointer */
priv->tx_pointer = pointer;
@ -829,7 +839,7 @@ static netdev_tx_t ftgmac100_hard_start_xmit(struct sk_buff *skb,
return NETDEV_TX_OK;
dma_err:
dma_err:
if (net_ratelimit())
netdev_err(netdev, "map tx fragment failed\n");
@ -851,7 +861,7 @@ static netdev_tx_t ftgmac100_hard_start_xmit(struct sk_buff *skb,
* last fragment, so we know ftgmac100_free_tx_packet()
* hasn't freed the skb yet.
*/
drop:
drop:
/* Drop the packet */
dev_kfree_skb_any(skb);
netdev->stats.tx_dropped++;
@ -1344,7 +1354,7 @@ static void ftgmac100_reset(struct ftgmac100 *priv)
ftgmac100_init_all(priv, true);
netdev_dbg(netdev, "Reset done !\n");
bail:
bail:
if (priv->mii_bus)
mutex_unlock(&priv->mii_bus->mdio_lock);
if (netdev->phydev)
@ -1543,15 +1553,15 @@ static int ftgmac100_open(struct net_device *netdev)
return 0;
err_ncsi:
err_ncsi:
napi_disable(&priv->napi);
netif_stop_queue(netdev);
err_alloc:
err_alloc:
ftgmac100_free_buffers(priv);
free_irq(netdev->irq, netdev);
err_irq:
err_irq:
netif_napi_del(&priv->napi);
err_hw:
err_hw:
iowrite32(0, priv->base + FTGMAC100_OFFSET_IER);
ftgmac100_free_rings(priv);
return err;
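
The smp_wmb() calls added above enforce "write the descriptor fields, then publish the ring pointer", so a CPU that observes the new pointer also observes a fully written descriptor. An analogous user-space sketch, with C11 release/acquire standing in for the kernel barrier pairing:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int descriptor;		/* stands in for the txdes fields */
static _Atomic int tx_pointer;	/* stands in for the published ring pointer */

static void *producer(void *arg)
{
	(void)arg;
	descriptor = 42;	/* plain store: descriptor config */
	/* release: the descriptor store is visible before the pointer update */
	atomic_store_explicit(&tx_pointer, 1, memory_order_release);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, producer, NULL);
	while (!atomic_load_explicit(&tx_pointer, memory_order_acquire))
		;		/* spin until the pointer is published */
	printf("descriptor=%d\n", descriptor);	/* guaranteed to be 42 */
	pthread_join(t, NULL);
	return 0;
}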

View File

@ -52,32 +52,6 @@ static int mana_hwc_verify_resp_msg(const struct hwc_caller_ctx *caller_ctx,
return 0;
}
static void mana_hwc_handle_resp(struct hw_channel_context *hwc, u32 resp_len,
const struct gdma_resp_hdr *resp_msg)
{
struct hwc_caller_ctx *ctx;
int err;
if (!test_bit(resp_msg->response.hwc_msg_id,
hwc->inflight_msg_res.map)) {
dev_err(hwc->dev, "hwc_rx: invalid msg_id = %u\n",
resp_msg->response.hwc_msg_id);
return;
}
ctx = hwc->caller_ctx + resp_msg->response.hwc_msg_id;
err = mana_hwc_verify_resp_msg(ctx, resp_msg, resp_len);
if (err)
goto out;
ctx->status_code = resp_msg->status;
memcpy(ctx->output_buf, resp_msg, resp_len);
out:
ctx->error = err;
complete(&ctx->comp_event);
}
static int mana_hwc_post_rx_wqe(const struct hwc_wq *hwc_rxq,
struct hwc_work_request *req)
{
@ -101,6 +75,40 @@ static int mana_hwc_post_rx_wqe(const struct hwc_wq *hwc_rxq,
return err;
}
static void mana_hwc_handle_resp(struct hw_channel_context *hwc, u32 resp_len,
struct hwc_work_request *rx_req)
{
const struct gdma_resp_hdr *resp_msg = rx_req->buf_va;
struct hwc_caller_ctx *ctx;
int err;
if (!test_bit(resp_msg->response.hwc_msg_id,
hwc->inflight_msg_res.map)) {
dev_err(hwc->dev, "hwc_rx: invalid msg_id = %u\n",
resp_msg->response.hwc_msg_id);
mana_hwc_post_rx_wqe(hwc->rxq, rx_req);
return;
}
ctx = hwc->caller_ctx + resp_msg->response.hwc_msg_id;
err = mana_hwc_verify_resp_msg(ctx, resp_msg, resp_len);
if (err)
goto out;
ctx->status_code = resp_msg->status;
memcpy(ctx->output_buf, resp_msg, resp_len);
out:
ctx->error = err;
/* Must post rx wqe before complete(), otherwise the next rx may
* hit no_wqe error.
*/
mana_hwc_post_rx_wqe(hwc->rxq, rx_req);
complete(&ctx->comp_event);
}
static void mana_hwc_init_event_handler(void *ctx, struct gdma_queue *q_self,
struct gdma_event *event)
{
@ -235,14 +243,12 @@ static void mana_hwc_rx_event_handler(void *ctx, u32 gdma_rxq_id,
return;
}
mana_hwc_handle_resp(hwc, rx_oob->tx_oob_data_size, resp);
mana_hwc_handle_resp(hwc, rx_oob->tx_oob_data_size, rx_req);
/* Do no longer use 'resp', because the buffer is posted to the HW
* in the below mana_hwc_post_rx_wqe().
/* Can no longer use 'resp', because the buffer is posted to the HW
* in mana_hwc_handle_resp() above.
*/
resp = NULL;
mana_hwc_post_rx_wqe(hwc_rxq, rx_req);
}
static void mana_hwc_tx_event_handler(void *ctx, u32 gdma_txq_id,
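
The reordering above (repost the RX WQE, then complete()) matters because the woken requester may immediately send the next message; if the receive buffer has not been returned to the hardware by then, the next response has nowhere to land. A minimal sketch of the required ordering (hypothetical names; a semaphore stands in for the completion):

#include <semaphore.h>
#include <stdio.h>

static sem_t comp_event;
static int rx_wqe_posted;

static void post_rx_wqe(void)
{
	rx_wqe_posted = 1;	/* buffer handed back to the "hardware" */
}

static void handle_resp(void)
{
	/* ... copy the response into the caller's buffer ... */
	post_rx_wqe();		/* must happen first */
	sem_post(&comp_event);	/* only then wake the requester */
}

int main(void)
{
	sem_init(&comp_event, 0, 0);
	rx_wqe_posted = 0;	/* consumed by the response just received */
	handle_resp();
	sem_wait(&comp_event);	/* requester wakes with a buffer in place */
	printf("next request ok: wqe posted=%d\n", rx_wqe_posted);
	return 0;
}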

View File

@ -32,7 +32,7 @@
#define IONIC_ADMIN_DOORBELL_DEADLINE (HZ / 2) /* 500ms */
#define IONIC_TX_DOORBELL_DEADLINE (HZ / 100) /* 10ms */
#define IONIC_RX_MIN_DOORBELL_DEADLINE (HZ / 100) /* 10ms */
#define IONIC_RX_MAX_DOORBELL_DEADLINE (HZ * 5) /* 5s */
#define IONIC_RX_MAX_DOORBELL_DEADLINE (HZ * 4) /* 4s */
struct ionic_dev_bar {
void __iomem *vaddr;

View File

@ -3220,7 +3220,7 @@ int ionic_lif_alloc(struct ionic *ionic)
netdev->netdev_ops = &ionic_netdev_ops;
ionic_ethtool_set_ops(netdev);
netdev->watchdog_timeo = 2 * HZ;
netdev->watchdog_timeo = 5 * HZ;
netif_carrier_off(netdev);
lif->identity = lid;

View File

@ -1452,6 +1452,7 @@ static const struct prueth_pdata am654_icssg_pdata = {
static const struct prueth_pdata am64x_icssg_pdata = {
.fdqring_mode = K3_RINGACC_RING_MODE_RING,
.quirk_10m_link_issue = 1,
.switch_mode = 1,
};

View File

@ -1653,7 +1653,7 @@ static struct sock *gtp_encap_enable_socket(int fd, int type,
sock = sockfd_lookup(fd, &err);
if (!sock) {
pr_debug("gtp socket fd=%d not found\n", fd);
return NULL;
return ERR_PTR(err);
}
sk = sock->sk;

View File

@ -725,22 +725,25 @@ int iwl_acpi_get_wgds_table(struct iwl_fw_runtime *fwrt)
entry = &wifi_pkg->package.elements[entry_idx];
entry_idx++;
if (entry->type != ACPI_TYPE_INTEGER ||
entry->integer.value > num_profiles) {
entry->integer.value > num_profiles ||
entry->integer.value <
rev_data[idx].min_profiles) {
ret = -EINVAL;
goto out_free;
}
num_profiles = entry->integer.value;
/*
* this also validates >= min_profiles since we
* otherwise wouldn't have gotten the data when
* looking up in ACPI
* Check to see if we received package count
* same as max # of profiles
*/
if (wifi_pkg->package.count !=
hdr_size + profile_size * num_profiles) {
ret = -EINVAL;
goto out_free;
}
/* Number of valid profiles */
num_profiles = entry->integer.value;
}
goto read_table;
}

View File

@ -3348,7 +3348,7 @@ void iwl_fw_dbg_stop_restart_recording(struct iwl_fw_runtime *fwrt,
{
int ret __maybe_unused = 0;
if (test_bit(STATUS_FW_ERROR, &fwrt->trans->status))
if (!iwl_trans_fw_running(fwrt->trans))
return;
if (fw_has_capa(&fwrt->fw->ucode_capa,

View File

@ -85,6 +85,10 @@ struct iwl_cfg;
* May sleep
* @wimax_active: invoked when WiMax becomes active. May sleep
* @time_point: called when transport layer wants to collect debug data
* @device_powered_off: called upon resume from hibernation but not only.
* Op_mode needs to reset its internal state because the device did not
* survive the system state transition. The firmware is no longer running,
* etc...
*/
struct iwl_op_mode_ops {
struct iwl_op_mode *(*start)(struct iwl_trans *trans,
@ -107,6 +111,7 @@ struct iwl_op_mode_ops {
void (*time_point)(struct iwl_op_mode *op_mode,
enum iwl_fw_ini_time_point tp_id,
union iwl_dbg_tlv_tp_data *tp_data);
void (*device_powered_off)(struct iwl_op_mode *op_mode);
};
int iwl_opmode_register(const char *name, const struct iwl_op_mode_ops *ops);
@ -204,4 +209,11 @@ static inline void iwl_op_mode_time_point(struct iwl_op_mode *op_mode,
op_mode->ops->time_point(op_mode, tp_id, tp_data);
}
static inline void iwl_op_mode_device_powered_off(struct iwl_op_mode *op_mode)
{
if (!op_mode || !op_mode->ops || !op_mode->ops->device_powered_off)
return;
op_mode->ops->device_powered_off(op_mode);
}
#endif /* __iwl_op_mode_h__ */

View File

@ -1128,8 +1128,8 @@ static inline void iwl_trans_fw_error(struct iwl_trans *trans, bool sync)
/* prevent double restarts due to the same erroneous FW */
if (!test_and_set_bit(STATUS_FW_ERROR, &trans->status)) {
iwl_op_mode_nic_error(trans->op_mode, sync);
trans->state = IWL_TRANS_NO_FW;
iwl_op_mode_nic_error(trans->op_mode, sync);
}
}

View File

@ -3439,6 +3439,16 @@ static int __iwl_mvm_resume(struct iwl_mvm *mvm, bool test)
mutex_lock(&mvm->mutex);
/* Apparently, the device went away and device_powered_off() was called,
* don't even try to read the rt_status, the device is currently
* inaccessible.
*/
if (!test_bit(IWL_MVM_STATUS_IN_D3, &mvm->status)) {
IWL_INFO(mvm,
"Can't resume, device_powered_off() was called during wowlan\n");
goto err;
}
mvm->last_reset_or_resume_time_jiffies = jiffies;
/* get the BSS vif pointer again */

View File

@ -5818,6 +5818,10 @@ static void iwl_mvm_flush_no_vif(struct iwl_mvm *mvm, u32 queues, bool drop)
int i;
if (!iwl_mvm_has_new_tx_api(mvm)) {
/* we can't ask the firmware anything if it is dead */
if (test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
&mvm->status))
return;
if (drop) {
guard(mvm)(mvm);
iwl_mvm_flush_tx_path(mvm,
@ -5911,8 +5915,11 @@ void iwl_mvm_mac_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
/* this can take a while, and we may need/want other operations
* to succeed while doing this, so do it without the mutex held
* If the firmware is dead, this can't work...
*/
if (!drop && !iwl_mvm_has_new_tx_api(mvm))
if (!drop && !iwl_mvm_has_new_tx_api(mvm) &&
!test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
&mvm->status))
iwl_trans_wait_tx_queues_empty(mvm->trans, msk);
}

View File

@ -1198,10 +1198,12 @@ static void iwl_mvm_trig_link_selection(struct wiphy *wiphy,
struct iwl_mvm *mvm =
container_of(wk, struct iwl_mvm, trig_link_selection_wk);
mutex_lock(&mvm->mutex);
ieee80211_iterate_active_interfaces(mvm->hw,
IEEE80211_IFACE_ITER_NORMAL,
iwl_mvm_find_link_selection_vif,
NULL);
mutex_unlock(&mvm->mutex);
}
static struct iwl_op_mode *
@ -1511,6 +1513,8 @@ void iwl_mvm_stop_device(struct iwl_mvm *mvm)
clear_bit(IWL_MVM_STATUS_FIRMWARE_RUNNING, &mvm->status);
iwl_mvm_pause_tcm(mvm, false);
iwl_fw_dbg_stop_sync(&mvm->fwrt);
iwl_trans_stop_device(mvm->trans);
iwl_free_fw_paging(&mvm->fwrt);
@ -2090,6 +2094,20 @@ static void iwl_op_mode_mvm_time_point(struct iwl_op_mode *op_mode,
iwl_dbg_tlv_time_point(&mvm->fwrt, tp_id, tp_data);
}
static void iwl_op_mode_mvm_device_powered_off(struct iwl_op_mode *op_mode)
{
struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode);
mutex_lock(&mvm->mutex);
clear_bit(IWL_MVM_STATUS_IN_D3, &mvm->status);
mvm->trans->system_pm_mode = IWL_PLAT_PM_MODE_DISABLED;
iwl_mvm_stop_device(mvm);
#ifdef CONFIG_PM
mvm->fast_resume = false;
#endif
mutex_unlock(&mvm->mutex);
}
#define IWL_MVM_COMMON_OPS \
/* these could be differentiated */ \
.queue_full = iwl_mvm_stop_sw_queue, \
@ -2102,7 +2120,8 @@ static void iwl_op_mode_mvm_time_point(struct iwl_op_mode *op_mode,
/* as we only register one, these MUST be common! */ \
.start = iwl_op_mode_mvm_start, \
.stop = iwl_op_mode_mvm_stop, \
.time_point = iwl_op_mode_mvm_time_point
.time_point = iwl_op_mode_mvm_time_point, \
.device_powered_off = iwl_op_mode_mvm_device_powered_off
static const struct iwl_op_mode_ops iwl_mvm_ops = {
IWL_MVM_COMMON_OPS,

View File

@ -48,6 +48,8 @@
/* Number of iterations on the channel for mei filtered scan */
#define IWL_MEI_SCAN_NUM_ITER 5U
#define WFA_TPC_IE_LEN 9
struct iwl_mvm_scan_timing_params {
u32 suspend_time;
u32 max_out_time;
@ -303,8 +305,8 @@ static int iwl_mvm_max_scan_ie_fw_cmd_room(struct iwl_mvm *mvm)
max_probe_len = SCAN_OFFLOAD_PROBE_REQ_SIZE;
/* we create the 802.11 header and SSID element */
max_probe_len -= 24 + 2;
/* we create the 802.11 header SSID element and WFA TPC element */
max_probe_len -= 24 + 2 + WFA_TPC_IE_LEN;
/* DS parameter set element is added on 2.4GHZ band if required */
if (iwl_mvm_rrm_scan_needed(mvm))
@ -731,8 +733,6 @@ static u8 *iwl_mvm_copy_and_insert_ds_elem(struct iwl_mvm *mvm, const u8 *ies,
return newpos;
}
#define WFA_TPC_IE_LEN 9
static void iwl_mvm_add_tpc_report_ie(u8 *pos)
{
pos[0] = WLAN_EID_VENDOR_SPECIFIC;
@ -837,8 +837,8 @@ static inline bool iwl_mvm_scan_fits(struct iwl_mvm *mvm, int n_ssids,
return ((n_ssids <= PROBE_OPTION_MAX) &&
(n_channels <= mvm->fw->ucode_capa.n_scan_channels) &
(ies->common_ie_len +
ies->len[NL80211_BAND_2GHZ] +
ies->len[NL80211_BAND_5GHZ] <=
ies->len[NL80211_BAND_2GHZ] + ies->len[NL80211_BAND_5GHZ] +
ies->len[NL80211_BAND_6GHZ] <=
iwl_mvm_max_scan_ie_fw_cmd_room(mvm)));
}
@ -1659,6 +1659,17 @@ iwl_mvm_umac_scan_cfg_channels_v7(struct iwl_mvm *mvm,
cfg->v2.channel_num = channels[i]->hw_value;
if (cfg80211_channel_is_psc(channels[i]))
cfg->flags = 0;
if (band == NL80211_BAND_6GHZ) {
/* 6 GHz channels should only appear in a scan request
* that has scan_6ghz set. The only exception is MLO
* scan, which has to be passive.
*/
WARN_ON_ONCE(cfg->flags != 0);
cfg->flags =
cpu_to_le32(IWL_UHB_CHAN_CFG_FLAG_FORCE_PASSIVE);
}
cfg->v2.iter_count = 1;
cfg->v2.iter_interval = 0;
if (version < 17)
@ -3168,18 +3179,16 @@ int iwl_mvm_sched_scan_start(struct iwl_mvm *mvm,
params.n_channels = j;
}
if (non_psc_included &&
!iwl_mvm_scan_fits(mvm, req->n_ssids, ies, params.n_channels)) {
kfree(params.channels);
return -ENOBUFS;
if (!iwl_mvm_scan_fits(mvm, req->n_ssids, ies, params.n_channels)) {
ret = -ENOBUFS;
goto out;
}
uid = iwl_mvm_build_scan_cmd(mvm, vif, &hcmd, &params, type);
if (non_psc_included)
kfree(params.channels);
if (uid < 0)
return uid;
if (uid < 0) {
ret = uid;
goto out;
}
ret = iwl_mvm_send_cmd(mvm, &hcmd);
if (!ret) {
@ -3197,6 +3206,9 @@ int iwl_mvm_sched_scan_start(struct iwl_mvm *mvm,
mvm->sched_scan_pass_all = SCHED_SCAN_PASS_ALL_DISABLED;
}
out:
if (non_psc_included)
kfree(params.channels);
return ret;
}

View File

@ -89,7 +89,8 @@ iwl_pcie_ctxt_info_dbg_enable(struct iwl_trans *trans,
}
break;
default:
IWL_ERR(trans, "WRT: Invalid buffer destination\n");
IWL_DEBUG_FW(trans, "WRT: Invalid buffer destination (%d)\n",
le32_to_cpu(fw_mon_cfg->buf_location));
}
out:
if (dbg_flags)

View File

@ -1577,11 +1577,12 @@ static int iwl_pci_suspend(struct device *device)
return 0;
}
static int iwl_pci_resume(struct device *device)
static int _iwl_pci_resume(struct device *device, bool restore)
{
struct pci_dev *pdev = to_pci_dev(device);
struct iwl_trans *trans = pci_get_drvdata(pdev);
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
bool device_was_powered_off = false;
/* Before you put code here, think about WoWLAN. You cannot check here
* whether WoWLAN is enabled or not, and your code will run even if
@ -1597,6 +1598,26 @@ static int iwl_pci_resume(struct device *device)
if (!trans->op_mode)
return 0;
/*
* Scratch value was altered, this means the device was powered off, we
* need to reset it completely.
* Note: MAC (bits 0:7) will be cleared upon suspend even with wowlan,
* so assume that any bits there mean that the device is usable.
*/
if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ &&
!iwl_read32(trans, CSR_FUNC_SCRATCH))
device_was_powered_off = true;
if (restore || device_was_powered_off) {
trans->state = IWL_TRANS_NO_FW;
/* Hope for the best here ... If one of those steps fails we
* won't really know how to recover.
*/
iwl_pcie_prepare_card_hw(trans);
iwl_finish_nic_init(trans);
iwl_op_mode_device_powered_off(trans->op_mode);
}
/* In WOWLAN, let iwl_trans_pcie_d3_resume do the rest of the work */
if (test_bit(STATUS_DEVICE_ENABLED, &trans->status))
return 0;
@ -1617,9 +1638,23 @@ static int iwl_pci_resume(struct device *device)
return 0;
}
static int iwl_pci_restore(struct device *device)
{
return _iwl_pci_resume(device, true);
}
static int iwl_pci_resume(struct device *device)
{
return _iwl_pci_resume(device, false);
}
static const struct dev_pm_ops iwl_dev_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(iwl_pci_suspend,
iwl_pci_resume)
.suspend = pm_sleep_ptr(iwl_pci_suspend),
.resume = pm_sleep_ptr(iwl_pci_resume),
.freeze = pm_sleep_ptr(iwl_pci_suspend),
.thaw = pm_sleep_ptr(iwl_pci_resume),
.poweroff = pm_sleep_ptr(iwl_pci_suspend),
.restore = pm_sleep_ptr(iwl_pci_restore),
};
#define IWL_PM_OPS (&iwl_dev_pm_ops)

View File

@ -4363,11 +4363,27 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
if (ISSUPP_ADHOC_ENABLED(adapter->fw_cap_info))
wiphy->interface_modes |= BIT(NL80211_IFTYPE_ADHOC);
wiphy->bands[NL80211_BAND_2GHZ] = &mwifiex_band_2ghz;
if (adapter->config_bands & BAND_A)
wiphy->bands[NL80211_BAND_5GHZ] = &mwifiex_band_5ghz;
else
wiphy->bands[NL80211_BAND_2GHZ] = devm_kmemdup(adapter->dev,
&mwifiex_band_2ghz,
sizeof(mwifiex_band_2ghz),
GFP_KERNEL);
if (!wiphy->bands[NL80211_BAND_2GHZ]) {
ret = -ENOMEM;
goto err;
}
if (adapter->config_bands & BAND_A) {
wiphy->bands[NL80211_BAND_5GHZ] = devm_kmemdup(adapter->dev,
&mwifiex_band_5ghz,
sizeof(mwifiex_band_5ghz),
GFP_KERNEL);
if (!wiphy->bands[NL80211_BAND_5GHZ]) {
ret = -ENOMEM;
goto err;
}
} else {
wiphy->bands[NL80211_BAND_5GHZ] = NULL;
}
if (adapter->drcs_enabled && ISSUPP_DRCS_ENABLED(adapter->fw_cap_info))
wiphy->iface_combinations = &mwifiex_iface_comb_ap_sta_drcs;
@ -4461,8 +4477,7 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
if (ret < 0) {
mwifiex_dbg(adapter, ERROR,
"%s: wiphy_register failed: %d\n", __func__, ret);
wiphy_free(wiphy);
return ret;
goto err;
}
if (!adapter->regd) {
@ -4504,4 +4519,9 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
adapter->wiphy = wiphy;
return ret;
err:
wiphy_free(wiphy);
return ret;
}

View File

@ -352,8 +352,11 @@ static int wfx_set_mfp_ap(struct wfx_vif *wvif)
ptr = (u16 *)cfg80211_find_ie(WLAN_EID_RSN, skb->data + ieoffset,
skb->len - ieoffset);
if (unlikely(!ptr))
if (!ptr) {
/* No RSN IE is fine in open networks */
ret = 0;
goto free_skb;
}
ptr += pairwise_cipher_suite_count_offset;
if (WARN_ON(ptr > (u16 *)skb_tail_pointer(skb)))

View File

@ -1723,6 +1723,11 @@ static int pn533_start_poll(struct nfc_dev *nfc_dev,
}
pn533_poll_create_mod_list(dev, im_protocols, tm_protocols);
if (!dev->poll_mod_count) {
nfc_err(dev->dev,
"Poll mod list is empty\n");
return -EINVAL;
}
/* Do not always start polling from the same modulation */
get_random_bytes(&rand_mod, sizeof(rand_mod));

View File

@ -260,7 +260,7 @@ struct bonding {
#ifdef CONFIG_XFRM_OFFLOAD
struct list_head ipsec_list;
/* protecting ipsec_list */
spinlock_t ipsec_lock;
struct mutex ipsec_lock;
#endif /* CONFIG_XFRM_OFFLOAD */
struct bpf_prog *xdp_prog;
};

View File

@ -68,7 +68,7 @@ static inline bool sk_can_busy_loop(struct sock *sk)
static inline unsigned long busy_loop_current_time(void)
{
#ifdef CONFIG_NET_RX_BUSY_POLL
return (unsigned long)(local_clock() >> 10);
return (unsigned long)(ktime_get_ns() >> 10);
#else
return 0;
#endif
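
local_clock() is a per-CPU scheduler clock that may drift between CPUs, so a busy-poll deadline computed on one CPU could misfire when evaluated on another; ktime_get_ns() reads the single system-wide monotonic clock. The >> 10 keeps roughly microsecond granularity (1024 ns units) without a division. A user-space analogue of the helper, assuming CLOCK_MONOTONIC as the stand-in:

#include <stdio.h>
#include <time.h>

static unsigned long busy_loop_current_time(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	/* ns >> 10: ~1 us resolution, cheap to compute */
	return (unsigned long)((((unsigned long long)ts.tv_sec * 1000000000ULL) +
				ts.tv_nsec) >> 10);
}

int main(void)
{
	/* deadline comparisons now behave the same on every CPU */
	printf("now: %lu (units of 1024 ns)\n", busy_loop_current_time());
	return 0;
}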

View File

@ -19,7 +19,7 @@ static inline void nft_set_pktinfo_ipv4(struct nft_pktinfo *pkt)
static inline int __nft_set_pktinfo_ipv4_validate(struct nft_pktinfo *pkt)
{
struct iphdr *iph, _iph;
u32 len, thoff;
u32 len, thoff, skb_len;
iph = skb_header_pointer(pkt->skb, skb_network_offset(pkt->skb),
sizeof(*iph), &_iph);
@ -30,8 +30,10 @@ static inline int __nft_set_pktinfo_ipv4_validate(struct nft_pktinfo *pkt)
return -1;
len = iph_totlen(pkt->skb, iph);
thoff = skb_network_offset(pkt->skb) + (iph->ihl * 4);
if (pkt->skb->len < len)
thoff = iph->ihl * 4;
skb_len = pkt->skb->len - skb_network_offset(pkt->skb);
if (skb_len < len)
return -1;
else if (len < thoff)
return -1;
@ -40,7 +42,7 @@ static inline int __nft_set_pktinfo_ipv4_validate(struct nft_pktinfo *pkt)
pkt->flags = NFT_PKTINFO_L4PROTO;
pkt->tprot = iph->protocol;
pkt->thoff = thoff;
pkt->thoff = skb_network_offset(pkt->skb) + thoff;
pkt->fragoff = ntohs(iph->frag_off) & IP_OFFSET;
return 0;
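
The corrected check computes both the transport offset and the available length relative to the network header, because on the netdev-family hooks (including egress) skb->data may still cover the link-layer header; comparing iph->tot_len against the whole skb->len made the sanity check pass or fail for the wrong reasons. The IPv6 file below applies the same offset correction. A stand-alone re-implementation of the IPv4 check over a plain buffer (illustrative only):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* All lengths are taken relative to the start of the IPv4 header,
 * mirroring the fixed __nft_set_pktinfo_ipv4_validate() logic.
 */
static int validate_ipv4(const uint8_t *frame, size_t frame_len, size_t net_off)
{
	size_t skb_len = frame_len - net_off;	/* bytes from the IP header on */
	const uint8_t *iph = frame + net_off;
	uint16_t tot_len;
	size_t thoff;

	if (skb_len < 20 || (iph[0] >> 4) != 4)
		return -1;
	thoff = (size_t)(iph[0] & 0x0f) * 4;	/* ihl * 4 */
	memcpy(&tot_len, iph + 2, sizeof(tot_len));
	tot_len = ntohs(tot_len);
	if (skb_len < tot_len)			/* truncated packet */
		return -1;
	if (tot_len < thoff)			/* header exceeds the datagram */
		return -1;
	return 0;
}

int main(void)
{
	/* 14-byte Ethernet header (zeroed) + minimal 20-byte IPv4 header */
	uint8_t frame[34] = {0};

	frame[14] = 0x45;			/* version 4, ihl 5 */
	frame[17] = 20;				/* tot_len = 20 */
	printf("valid: %d\n", validate_ipv4(frame, sizeof(frame), 14));
	return 0;
}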

View File

@ -31,8 +31,8 @@ static inline int __nft_set_pktinfo_ipv6_validate(struct nft_pktinfo *pkt)
struct ipv6hdr *ip6h, _ip6h;
unsigned int thoff = 0;
unsigned short frag_off;
u32 pkt_len, skb_len;
int protohdr;
u32 pkt_len;
ip6h = skb_header_pointer(pkt->skb, skb_network_offset(pkt->skb),
sizeof(*ip6h), &_ip6h);
@ -43,7 +43,8 @@ static inline int __nft_set_pktinfo_ipv6_validate(struct nft_pktinfo *pkt)
return -1;
pkt_len = ntohs(ip6h->payload_len);
if (pkt_len + sizeof(*ip6h) > pkt->skb->len)
skb_len = pkt->skb->len - skb_network_offset(pkt->skb);
if (pkt_len + sizeof(*ip6h) > skb_len)
return -1;
protohdr = ipv6_find_hdr(pkt->skb, &thoff, -1, &frag_off, &flags);

View File

@ -2406,10 +2406,16 @@ static int hci_suspend_notifier(struct notifier_block *nb, unsigned long action,
/* To avoid a potential race with hci_unregister_dev. */
hci_dev_hold(hdev);
if (action == PM_SUSPEND_PREPARE)
switch (action) {
case PM_HIBERNATION_PREPARE:
case PM_SUSPEND_PREPARE:
ret = hci_suspend_dev(hdev);
else if (action == PM_POST_SUSPEND)
break;
case PM_POST_HIBERNATION:
case PM_POST_SUSPEND:
ret = hci_resume_dev(hdev);
break;
}
if (ret)
bt_dev_err(hdev, "Suspend notifier action (%lu) failed: %d",

View File

@ -235,7 +235,7 @@ static ssize_t speed_show(struct device *dev,
if (!rtnl_trylock())
return restart_syscall();
if (netif_running(netdev) && netif_device_present(netdev)) {
if (netif_running(netdev)) {
struct ethtool_link_ksettings cmd;
if (!__ethtool_get_link_ksettings(netdev, &cmd))

View File

@ -3654,7 +3654,7 @@ static int pktgen_thread_worker(void *arg)
struct pktgen_dev *pkt_dev = NULL;
int cpu = t->cpu;
WARN_ON(smp_processor_id() != cpu);
WARN_ON_ONCE(smp_processor_id() != cpu);
init_waitqueue_head(&t->queue);
complete(&t->start_done);
@ -3989,6 +3989,7 @@ static int __net_init pg_net_init(struct net *net)
goto remove;
}
cpus_read_lock();
for_each_online_cpu(cpu) {
int err;
@ -3997,6 +3998,7 @@ static int __net_init pg_net_init(struct net *net)
pr_warn("Cannot create thread for cpu %d (%d)\n",
cpu, err);
}
cpus_read_unlock();
if (list_empty(&pn->pktgen_threads)) {
pr_err("Initialization failed for all threads\n");

View File

@ -442,6 +442,9 @@ int __ethtool_get_link_ksettings(struct net_device *dev,
if (!dev->ethtool_ops->get_link_ksettings)
return -EOPNOTSUPP;
if (!netif_device_present(dev))
return -ENODEV;
memset(link_ksettings, 0, sizeof(*link_ksettings));
return dev->ethtool_ops->get_link_ksettings(dev, link_ksettings);
}

View File

@ -4637,6 +4637,13 @@ int tcp_abort(struct sock *sk, int err)
/* Don't race with userspace socket closes such as tcp_close. */
lock_sock(sk);
/* Avoid closing the same socket twice. */
if (sk->sk_state == TCP_CLOSE) {
if (!has_current_bpf_ctx())
release_sock(sk);
return -ENOENT;
}
if (sk->sk_state == TCP_LISTEN) {
tcp_set_state(sk, TCP_CLOSE);
inet_csk_listen_stop(sk);
@ -4646,16 +4653,13 @@ int tcp_abort(struct sock *sk, int err)
local_bh_disable();
bh_lock_sock(sk);
if (!sock_flag(sk, SOCK_DEAD)) {
if (tcp_need_reset(sk->sk_state))
tcp_send_active_reset(sk, GFP_ATOMIC,
SK_RST_REASON_NOT_SPECIFIED);
tcp_done_with_error(sk, err);
}
bh_unlock_sock(sk);
local_bh_enable();
tcp_write_queue_purge(sk);
if (!has_current_bpf_ctx())
release_sock(sk);
return 0;

View File

@ -6664,7 +6664,7 @@ static bool ieee80211_mgd_ssid_mismatch(struct ieee80211_sub_if_data *sdata,
return true;
/* hidden SSID: zeroed out */
if (memcmp(elems->ssid, zero_ssid, elems->ssid_len))
if (!memcmp(elems->ssid, zero_ssid, elems->ssid_len))
return false;
return memcmp(elems->ssid, cfg->ssid, cfg->ssid_len);

View File

@ -5348,8 +5348,10 @@ ieee80211_beacon_get_ap(struct ieee80211_hw *hw,
if (beacon->tail)
skb_put_data(skb, beacon->tail, beacon->tail_len);
if (ieee80211_beacon_protect(skb, local, sdata, link) < 0)
if (ieee80211_beacon_protect(skb, local, sdata, link) < 0) {
dev_kfree_skb(skb);
return NULL;
}
ieee80211_beacon_get_finish(hw, vif, link, offs, beacon, skb,
chanctx_conf, csa_off_base);

View File

@ -68,12 +68,12 @@ void __mptcp_fastopen_gen_msk_ackseq(struct mptcp_sock *msk, struct mptcp_subflo
skb = skb_peek_tail(&sk->sk_receive_queue);
if (skb) {
WARN_ON_ONCE(MPTCP_SKB_CB(skb)->end_seq);
pr_debug("msk %p moving seq %llx -> %llx end_seq %llx -> %llx", sk,
pr_debug("msk %p moving seq %llx -> %llx end_seq %llx -> %llx\n", sk,
MPTCP_SKB_CB(skb)->map_seq, MPTCP_SKB_CB(skb)->map_seq + msk->ack_seq,
MPTCP_SKB_CB(skb)->end_seq, MPTCP_SKB_CB(skb)->end_seq + msk->ack_seq);
MPTCP_SKB_CB(skb)->map_seq += msk->ack_seq;
MPTCP_SKB_CB(skb)->end_seq += msk->ack_seq;
}
pr_debug("msk=%p ack_seq=%llx", msk, msk->ack_seq);
pr_debug("msk=%p ack_seq=%llx\n", msk, msk->ack_seq);
}

View File

@ -117,7 +117,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
mp_opt->suboptions |= OPTION_MPTCP_CSUMREQD;
ptr += 2;
}
pr_debug("MP_CAPABLE version=%x, flags=%x, optlen=%d sndr=%llu, rcvr=%llu len=%d csum=%u",
pr_debug("MP_CAPABLE version=%x, flags=%x, optlen=%d sndr=%llu, rcvr=%llu len=%d csum=%u\n",
version, flags, opsize, mp_opt->sndr_key,
mp_opt->rcvr_key, mp_opt->data_len, mp_opt->csum);
break;
@ -131,7 +131,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
ptr += 4;
mp_opt->nonce = get_unaligned_be32(ptr);
ptr += 4;
pr_debug("MP_JOIN bkup=%u, id=%u, token=%u, nonce=%u",
pr_debug("MP_JOIN bkup=%u, id=%u, token=%u, nonce=%u\n",
mp_opt->backup, mp_opt->join_id,
mp_opt->token, mp_opt->nonce);
} else if (opsize == TCPOLEN_MPTCP_MPJ_SYNACK) {
@ -142,19 +142,19 @@ static void mptcp_parse_option(const struct sk_buff *skb,
ptr += 8;
mp_opt->nonce = get_unaligned_be32(ptr);
ptr += 4;
pr_debug("MP_JOIN bkup=%u, id=%u, thmac=%llu, nonce=%u",
pr_debug("MP_JOIN bkup=%u, id=%u, thmac=%llu, nonce=%u\n",
mp_opt->backup, mp_opt->join_id,
mp_opt->thmac, mp_opt->nonce);
} else if (opsize == TCPOLEN_MPTCP_MPJ_ACK) {
mp_opt->suboptions |= OPTION_MPTCP_MPJ_ACK;
ptr += 2;
memcpy(mp_opt->hmac, ptr, MPTCPOPT_HMAC_LEN);
pr_debug("MP_JOIN hmac");
pr_debug("MP_JOIN hmac\n");
}
break;
case MPTCPOPT_DSS:
pr_debug("DSS");
pr_debug("DSS\n");
ptr++;
/* we must clear 'mpc_map' be able to detect MP_CAPABLE
@ -169,7 +169,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
mp_opt->ack64 = (flags & MPTCP_DSS_ACK64) != 0;
mp_opt->use_ack = (flags & MPTCP_DSS_HAS_ACK);
pr_debug("data_fin=%d dsn64=%d use_map=%d ack64=%d use_ack=%d",
pr_debug("data_fin=%d dsn64=%d use_map=%d ack64=%d use_ack=%d\n",
mp_opt->data_fin, mp_opt->dsn64,
mp_opt->use_map, mp_opt->ack64,
mp_opt->use_ack);
@ -207,7 +207,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
ptr += 4;
}
pr_debug("data_ack=%llu", mp_opt->data_ack);
pr_debug("data_ack=%llu\n", mp_opt->data_ack);
}
if (mp_opt->use_map) {
@ -231,7 +231,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
ptr += 2;
}
pr_debug("data_seq=%llu subflow_seq=%u data_len=%u csum=%d:%u",
pr_debug("data_seq=%llu subflow_seq=%u data_len=%u csum=%d:%u\n",
mp_opt->data_seq, mp_opt->subflow_seq,
mp_opt->data_len, !!(mp_opt->suboptions & OPTION_MPTCP_CSUMREQD),
mp_opt->csum);
@ -293,7 +293,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
mp_opt->ahmac = get_unaligned_be64(ptr);
ptr += 8;
}
pr_debug("ADD_ADDR%s: id=%d, ahmac=%llu, echo=%d, port=%d",
pr_debug("ADD_ADDR%s: id=%d, ahmac=%llu, echo=%d, port=%d\n",
(mp_opt->addr.family == AF_INET6) ? "6" : "",
mp_opt->addr.id, mp_opt->ahmac, mp_opt->echo, ntohs(mp_opt->addr.port));
break;
@ -309,7 +309,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
mp_opt->rm_list.nr = opsize - TCPOLEN_MPTCP_RM_ADDR_BASE;
for (i = 0; i < mp_opt->rm_list.nr; i++)
mp_opt->rm_list.ids[i] = *ptr++;
pr_debug("RM_ADDR: rm_list_nr=%d", mp_opt->rm_list.nr);
pr_debug("RM_ADDR: rm_list_nr=%d\n", mp_opt->rm_list.nr);
break;
case MPTCPOPT_MP_PRIO:
@ -318,7 +318,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
mp_opt->suboptions |= OPTION_MPTCP_PRIO;
mp_opt->backup = *ptr++ & MPTCP_PRIO_BKUP;
pr_debug("MP_PRIO: prio=%d", mp_opt->backup);
pr_debug("MP_PRIO: prio=%d\n", mp_opt->backup);
break;
case MPTCPOPT_MP_FASTCLOSE:
@ -329,7 +329,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
mp_opt->rcvr_key = get_unaligned_be64(ptr);
ptr += 8;
mp_opt->suboptions |= OPTION_MPTCP_FASTCLOSE;
pr_debug("MP_FASTCLOSE: recv_key=%llu", mp_opt->rcvr_key);
pr_debug("MP_FASTCLOSE: recv_key=%llu\n", mp_opt->rcvr_key);
break;
case MPTCPOPT_RST:
@ -343,7 +343,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
flags = *ptr++;
mp_opt->reset_transient = flags & MPTCP_RST_TRANSIENT;
mp_opt->reset_reason = *ptr;
pr_debug("MP_RST: transient=%u reason=%u",
pr_debug("MP_RST: transient=%u reason=%u\n",
mp_opt->reset_transient, mp_opt->reset_reason);
break;
@ -354,7 +354,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
ptr += 2;
mp_opt->suboptions |= OPTION_MPTCP_FAIL;
mp_opt->fail_seq = get_unaligned_be64(ptr);
pr_debug("MP_FAIL: data_seq=%llu", mp_opt->fail_seq);
pr_debug("MP_FAIL: data_seq=%llu\n", mp_opt->fail_seq);
break;
default:
@ -417,7 +417,7 @@ bool mptcp_syn_options(struct sock *sk, const struct sk_buff *skb,
*size = TCPOLEN_MPTCP_MPC_SYN;
return true;
} else if (subflow->request_join) {
pr_debug("remote_token=%u, nonce=%u", subflow->remote_token,
pr_debug("remote_token=%u, nonce=%u\n", subflow->remote_token,
subflow->local_nonce);
opts->suboptions = OPTION_MPTCP_MPJ_SYN;
opts->join_id = subflow->local_id;
@ -500,7 +500,7 @@ static bool mptcp_established_options_mp(struct sock *sk, struct sk_buff *skb,
*size = TCPOLEN_MPTCP_MPC_ACK;
}
pr_debug("subflow=%p, local_key=%llu, remote_key=%llu map_len=%d",
pr_debug("subflow=%p, local_key=%llu, remote_key=%llu map_len=%d\n",
subflow, subflow->local_key, subflow->remote_key,
data_len);
@ -509,7 +509,7 @@ static bool mptcp_established_options_mp(struct sock *sk, struct sk_buff *skb,
opts->suboptions = OPTION_MPTCP_MPJ_ACK;
memcpy(opts->hmac, subflow->hmac, MPTCPOPT_HMAC_LEN);
*size = TCPOLEN_MPTCP_MPJ_ACK;
pr_debug("subflow=%p", subflow);
pr_debug("subflow=%p\n", subflow);
/* we can use the full delegate action helper only from BH context
* If we are in process context - sk is flushing the backlog at
@ -675,7 +675,7 @@ static bool mptcp_established_options_add_addr(struct sock *sk, struct sk_buff *
*size = len;
if (drop_other_suboptions) {
pr_debug("drop other suboptions");
pr_debug("drop other suboptions\n");
opts->suboptions = 0;
/* note that e.g. DSS could have written into the memory
@ -695,7 +695,7 @@ static bool mptcp_established_options_add_addr(struct sock *sk, struct sk_buff *
} else {
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_ECHOADDTX);
}
pr_debug("addr_id=%d, ahmac=%llu, echo=%d, port=%d",
pr_debug("addr_id=%d, ahmac=%llu, echo=%d, port=%d\n",
opts->addr.id, opts->ahmac, echo, ntohs(opts->addr.port));
return true;
@ -726,7 +726,7 @@ static bool mptcp_established_options_rm_addr(struct sock *sk,
opts->rm_list = rm_list;
for (i = 0; i < opts->rm_list.nr; i++)
pr_debug("rm_list_ids[%d]=%d", i, opts->rm_list.ids[i]);
pr_debug("rm_list_ids[%d]=%d\n", i, opts->rm_list.ids[i]);
MPTCP_ADD_STATS(sock_net(sk), MPTCP_MIB_RMADDRTX, opts->rm_list.nr);
return true;
}
@ -752,7 +752,7 @@ static bool mptcp_established_options_mp_prio(struct sock *sk,
opts->suboptions |= OPTION_MPTCP_PRIO;
opts->backup = subflow->request_bkup;
pr_debug("prio=%d", opts->backup);
pr_debug("prio=%d\n", opts->backup);
return true;
}
@ -794,7 +794,7 @@ static bool mptcp_established_options_fastclose(struct sock *sk,
opts->suboptions |= OPTION_MPTCP_FASTCLOSE;
opts->rcvr_key = READ_ONCE(msk->remote_key);
pr_debug("FASTCLOSE key=%llu", opts->rcvr_key);
pr_debug("FASTCLOSE key=%llu\n", opts->rcvr_key);
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPFASTCLOSETX);
return true;
}
@ -816,7 +816,7 @@ static bool mptcp_established_options_mp_fail(struct sock *sk,
opts->suboptions |= OPTION_MPTCP_FAIL;
opts->fail_seq = subflow->map_seq;
pr_debug("MP_FAIL fail_seq=%llu", opts->fail_seq);
pr_debug("MP_FAIL fail_seq=%llu\n", opts->fail_seq);
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPFAILTX);
return true;
@ -904,7 +904,7 @@ bool mptcp_synack_options(const struct request_sock *req, unsigned int *size,
opts->csum_reqd = subflow_req->csum_reqd;
opts->allow_join_id0 = subflow_req->allow_join_id0;
*size = TCPOLEN_MPTCP_MPC_SYNACK;
pr_debug("subflow_req=%p, local_key=%llu",
pr_debug("subflow_req=%p, local_key=%llu\n",
subflow_req, subflow_req->local_key);
return true;
} else if (subflow_req->mp_join) {
@ -913,7 +913,7 @@ bool mptcp_synack_options(const struct request_sock *req, unsigned int *size,
opts->join_id = subflow_req->local_id;
opts->thmac = subflow_req->thmac;
opts->nonce = subflow_req->local_nonce;
pr_debug("req=%p, bkup=%u, id=%u, thmac=%llu, nonce=%u",
pr_debug("req=%p, bkup=%u, id=%u, thmac=%llu, nonce=%u\n",
subflow_req, opts->backup, opts->join_id,
opts->thmac, opts->nonce);
*size = TCPOLEN_MPTCP_MPJ_SYNACK;

View File

@ -19,7 +19,7 @@ int mptcp_pm_announce_addr(struct mptcp_sock *msk,
{
u8 add_addr = READ_ONCE(msk->pm.addr_signal);
pr_debug("msk=%p, local_id=%d, echo=%d", msk, addr->id, echo);
pr_debug("msk=%p, local_id=%d, echo=%d\n", msk, addr->id, echo);
lockdep_assert_held(&msk->pm.lock);
@ -45,7 +45,7 @@ int mptcp_pm_remove_addr(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_
{
u8 rm_addr = READ_ONCE(msk->pm.addr_signal);
pr_debug("msk=%p, rm_list_nr=%d", msk, rm_list->nr);
pr_debug("msk=%p, rm_list_nr=%d\n", msk, rm_list->nr);
if (rm_addr) {
MPTCP_ADD_STATS(sock_net((struct sock *)msk),
@ -66,7 +66,7 @@ void mptcp_pm_new_connection(struct mptcp_sock *msk, const struct sock *ssk, int
{
struct mptcp_pm_data *pm = &msk->pm;
pr_debug("msk=%p, token=%u side=%d", msk, READ_ONCE(msk->token), server_side);
pr_debug("msk=%p, token=%u side=%d\n", msk, READ_ONCE(msk->token), server_side);
WRITE_ONCE(pm->server_side, server_side);
mptcp_event(MPTCP_EVENT_CREATED, msk, ssk, GFP_ATOMIC);
@ -90,7 +90,7 @@ bool mptcp_pm_allow_new_subflow(struct mptcp_sock *msk)
subflows_max = mptcp_pm_get_subflows_max(msk);
pr_debug("msk=%p subflows=%d max=%d allow=%d", msk, pm->subflows,
pr_debug("msk=%p subflows=%d max=%d allow=%d\n", msk, pm->subflows,
subflows_max, READ_ONCE(pm->accept_subflow));
/* try to avoid acquiring the lock below */
@ -114,7 +114,7 @@ bool mptcp_pm_allow_new_subflow(struct mptcp_sock *msk)
static bool mptcp_pm_schedule_work(struct mptcp_sock *msk,
enum mptcp_pm_status new_status)
{
pr_debug("msk=%p status=%x new=%lx", msk, msk->pm.status,
pr_debug("msk=%p status=%x new=%lx\n", msk, msk->pm.status,
BIT(new_status));
if (msk->pm.status & BIT(new_status))
return false;
@ -129,7 +129,7 @@ void mptcp_pm_fully_established(struct mptcp_sock *msk, const struct sock *ssk)
struct mptcp_pm_data *pm = &msk->pm;
bool announce = false;
pr_debug("msk=%p", msk);
pr_debug("msk=%p\n", msk);
spin_lock_bh(&pm->lock);
@ -153,14 +153,14 @@ void mptcp_pm_fully_established(struct mptcp_sock *msk, const struct sock *ssk)
void mptcp_pm_connection_closed(struct mptcp_sock *msk)
{
pr_debug("msk=%p", msk);
pr_debug("msk=%p\n", msk);
}
void mptcp_pm_subflow_established(struct mptcp_sock *msk)
{
struct mptcp_pm_data *pm = &msk->pm;
pr_debug("msk=%p", msk);
pr_debug("msk=%p\n", msk);
if (!READ_ONCE(pm->work_pending))
return;
@ -212,7 +212,7 @@ void mptcp_pm_add_addr_received(const struct sock *ssk,
struct mptcp_sock *msk = mptcp_sk(subflow->conn);
struct mptcp_pm_data *pm = &msk->pm;
pr_debug("msk=%p remote_id=%d accept=%d", msk, addr->id,
pr_debug("msk=%p remote_id=%d accept=%d\n", msk, addr->id,
READ_ONCE(pm->accept_addr));
mptcp_event_addr_announced(ssk, addr);
@ -226,7 +226,9 @@ void mptcp_pm_add_addr_received(const struct sock *ssk,
} else {
__MPTCP_INC_STATS(sock_net((struct sock *)msk), MPTCP_MIB_ADDADDRDROP);
}
} else if (!READ_ONCE(pm->accept_addr)) {
/* id0 should not have a different address */
} else if ((addr->id == 0 && !mptcp_pm_nl_is_init_remote_addr(msk, addr)) ||
(addr->id > 0 && !READ_ONCE(pm->accept_addr))) {
mptcp_pm_announce_addr(msk, addr, true);
mptcp_pm_add_addr_send_ack(msk);
} else if (mptcp_pm_schedule_work(msk, MPTCP_PM_ADD_ADDR_RECEIVED)) {
@ -243,7 +245,7 @@ void mptcp_pm_add_addr_echoed(struct mptcp_sock *msk,
{
struct mptcp_pm_data *pm = &msk->pm;
pr_debug("msk=%p", msk);
pr_debug("msk=%p\n", msk);
spin_lock_bh(&pm->lock);
@ -267,7 +269,7 @@ void mptcp_pm_rm_addr_received(struct mptcp_sock *msk,
struct mptcp_pm_data *pm = &msk->pm;
u8 i;
pr_debug("msk=%p remote_ids_nr=%d", msk, rm_list->nr);
pr_debug("msk=%p remote_ids_nr=%d\n", msk, rm_list->nr);
for (i = 0; i < rm_list->nr; i++)
mptcp_event_addr_removed(msk, rm_list->ids[i]);
@ -299,19 +301,19 @@ void mptcp_pm_mp_fail_received(struct sock *sk, u64 fail_seq)
struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
struct mptcp_sock *msk = mptcp_sk(subflow->conn);
pr_debug("fail_seq=%llu", fail_seq);
pr_debug("fail_seq=%llu\n", fail_seq);
if (!READ_ONCE(msk->allow_infinite_fallback))
return;
if (!subflow->fail_tout) {
pr_debug("send MP_FAIL response and infinite map");
pr_debug("send MP_FAIL response and infinite map\n");
subflow->send_mp_fail = 1;
subflow->send_infinite_map = 1;
tcp_send_ack(sk);
} else {
pr_debug("MP_FAIL response received");
pr_debug("MP_FAIL response received\n");
WRITE_ONCE(subflow->fail_tout, 0);
}
}

View File

@ -130,12 +130,15 @@ static bool lookup_subflow_by_daddr(const struct list_head *list,
{
struct mptcp_subflow_context *subflow;
struct mptcp_addr_info cur;
struct sock_common *skc;
list_for_each_entry(subflow, list, node) {
skc = (struct sock_common *)mptcp_subflow_tcp_sock(subflow);
struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
remote_address(skc, &cur);
if (!((1 << inet_sk_state_load(ssk)) &
(TCPF_ESTABLISHED | TCPF_SYN_SENT | TCPF_SYN_RECV)))
continue;
remote_address((struct sock_common *)ssk, &cur);
if (mptcp_addresses_equal(&cur, daddr, daddr->port))
return true;
}
@ -287,7 +290,7 @@ static void mptcp_pm_add_timer(struct timer_list *timer)
struct mptcp_sock *msk = entry->sock;
struct sock *sk = (struct sock *)msk;
pr_debug("msk=%p", msk);
pr_debug("msk=%p\n", msk);
if (!msk)
return;
@ -306,7 +309,7 @@ static void mptcp_pm_add_timer(struct timer_list *timer)
spin_lock_bh(&msk->pm.lock);
if (!mptcp_pm_should_add_signal_addr(msk)) {
pr_debug("retransmit ADD_ADDR id=%d", entry->addr.id);
pr_debug("retransmit ADD_ADDR id=%d\n", entry->addr.id);
mptcp_pm_announce_addr(msk, &entry->addr, false);
mptcp_pm_add_addr_send_ack(msk);
entry->retrans_times++;
@ -387,7 +390,7 @@ void mptcp_pm_free_anno_list(struct mptcp_sock *msk)
struct sock *sk = (struct sock *)msk;
LIST_HEAD(free_list);
pr_debug("msk=%p", msk);
pr_debug("msk=%p\n", msk);
spin_lock_bh(&msk->pm.lock);
list_splice_init(&msk->pm.anno_list, &free_list);
@ -473,7 +476,7 @@ static void __mptcp_pm_send_ack(struct mptcp_sock *msk, struct mptcp_subflow_con
struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
bool slow;
pr_debug("send ack for %s",
pr_debug("send ack for %s\n",
prio ? "mp_prio" : (mptcp_pm_should_add_signal(msk) ? "add_addr" : "rm_addr"));
slow = lock_sock_fast(ssk);
@@ -585,6 +588,11 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
__clear_bit(local.addr.id, msk->pm.id_avail_bitmap);
msk->pm.add_addr_signaled++;
/* Special case for ID0: set the correct ID */
if (local.addr.id == msk->mpc_endpoint_id)
local.addr.id = 0;
mptcp_pm_announce_addr(msk, &local.addr, false);
mptcp_pm_nl_addr_send_ack(msk);
@@ -607,8 +615,14 @@ subflow:
fullmesh = !!(local.flags & MPTCP_PM_ADDR_FLAG_FULLMESH);
msk->pm.local_addr_used++;
__clear_bit(local.addr.id, msk->pm.id_avail_bitmap);
/* Special case for ID0: set the correct ID */
if (local.addr.id == msk->mpc_endpoint_id)
local.addr.id = 0;
else /* local_addr_used is not decr for ID 0 */
msk->pm.local_addr_used++;
nr = fill_remote_addresses_vec(msk, &local.addr, fullmesh, addrs);
if (nr == 0)
continue;
@@ -708,7 +722,7 @@ static void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
add_addr_accept_max = mptcp_pm_get_add_addr_accept_max(msk);
subflows_max = mptcp_pm_get_subflows_max(msk);
pr_debug("accepted %d:%d remote family %d",
pr_debug("accepted %d:%d remote family %d\n",
msk->pm.add_addr_accepted, add_addr_accept_max,
msk->pm.remote.family);
@@ -737,6 +751,8 @@ static void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
spin_lock_bh(&msk->pm.lock);
if (sf_created) {
/* add_addr_accepted is not decr for ID 0 */
if (remote.id)
msk->pm.add_addr_accepted++;
if (msk->pm.add_addr_accepted >= add_addr_accept_max ||
msk->pm.subflows >= subflows_max)
@@ -744,6 +760,15 @@ static void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
}
}
bool mptcp_pm_nl_is_init_remote_addr(struct mptcp_sock *msk,
const struct mptcp_addr_info *remote)
{
struct mptcp_addr_info mpc_remote;
remote_address((struct sock_common *)msk, &mpc_remote);
return mptcp_addresses_equal(&mpc_remote, remote, remote->port);
}
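The new helper only compares the announced address with the peer address of the initial (MPC) subflow, read straight off the msk. A rough userspace model, assuming a simplified address structure in place of mptcp_addr_info and that the port is compared only when the announcement carries one:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for mptcp_addr_info. */
struct addr {
    unsigned char ip[4];
    unsigned short port;
};

/* Rough model of mptcp_pm_nl_is_init_remote_addr(): does the announced
 * address equal the remote address of the initial subflow?
 */
static bool is_init_remote_addr(const struct addr *mpc_remote,
                                const struct addr *announced)
{
    if (memcmp(mpc_remote->ip, announced->ip, sizeof(mpc_remote->ip)))
        return false;
    return !announced->port || mpc_remote->port == announced->port;
}

int main(void)
{
    struct addr mpc = { { 10, 0, 1, 1 }, 0 };
    struct addr ann = { { 10, 0, 2, 1 }, 0 };

    printf("matches init remote: %d\n", is_init_remote_addr(&mpc, &ann));
    return 0;
}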
void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
{
struct mptcp_subflow_context *subflow;
@@ -755,9 +780,12 @@ void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
!mptcp_pm_should_rm_signal(msk))
return;
subflow = list_first_entry_or_null(&msk->conn_list, typeof(*subflow), node);
if (subflow)
mptcp_for_each_subflow(msk, subflow) {
if (__mptcp_subflow_active(subflow)) {
mptcp_pm_send_ack(msk, subflow, false, false);
break;
}
}
}
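Instead of unconditionally acking on the list head, which may already be closed, the loop now picks the first subflow that __mptcp_subflow_active() accepts. A toy model of that selection:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Toy subflow entry; 'active' plays the role of __mptcp_subflow_active(). */
struct subflow {
    bool active;
    const char *name;
};

/* Pick the first active subflow to carry the ack, not the list head. */
static const struct subflow *pick_ack_subflow(const struct subflow *sf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (sf[i].active)
            return &sf[i];
    return NULL;
}

int main(void)
{
    struct subflow list[] = {
        { false, "closed initial subflow" },
        { true,  "established join" },
    };
    const struct subflow *s = pick_ack_subflow(list, 2);

    printf("ack sent on: %s\n", s ? s->name : "nothing");
    return 0;
}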
int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
@@ -767,7 +795,7 @@ int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
{
struct mptcp_subflow_context *subflow;
pr_debug("bkup=%d", bkup);
pr_debug("bkup=%d\n", bkup);
mptcp_for_each_subflow(msk, subflow) {
struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
@@ -790,11 +818,6 @@ int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
return -EINVAL;
}
static bool mptcp_local_id_match(const struct mptcp_sock *msk, u8 local_id, u8 id)
{
return local_id == id || (!local_id && msk->mpc_endpoint_id == id);
}
static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
const struct mptcp_rm_list *rm_list,
enum linux_mptcp_mib_field rm_type)
@@ -803,7 +826,7 @@ static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
struct sock *sk = (struct sock *)msk;
u8 i;
pr_debug("%s rm_list_nr %d",
pr_debug("%s rm_list_nr %d\n",
rm_type == MPTCP_MIB_RMADDR ? "address" : "subflow", rm_list->nr);
msk_owned_by_me(msk);
@@ -827,12 +850,14 @@ static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
int how = RCV_SHUTDOWN | SEND_SHUTDOWN;
u8 id = subflow_get_local_id(subflow);
if (inet_sk_state_load(ssk) == TCP_CLOSE)
continue;
if (rm_type == MPTCP_MIB_RMADDR && remote_id != rm_id)
continue;
if (rm_type == MPTCP_MIB_RMSUBFLOW && !mptcp_local_id_match(msk, id, rm_id))
if (rm_type == MPTCP_MIB_RMSUBFLOW && id != rm_id)
continue;
pr_debug(" -> %s rm_list_ids[%d]=%u local_id=%u remote_id=%u mpc_id=%u",
pr_debug(" -> %s rm_list_ids[%d]=%u local_id=%u remote_id=%u mpc_id=%u\n",
rm_type == MPTCP_MIB_RMADDR ? "address" : "subflow",
i, rm_id, id, remote_id, msk->mpc_endpoint_id);
spin_unlock_bh(&msk->pm.lock);
@@ -889,7 +914,7 @@ void mptcp_pm_nl_work(struct mptcp_sock *msk)
spin_lock_bh(&msk->pm.lock);
pr_debug("msk=%p status=%x", msk, pm->status);
pr_debug("msk=%p status=%x\n", msk, pm->status);
if (pm->status & BIT(MPTCP_PM_ADD_ADDR_RECEIVED)) {
pm->status &= ~BIT(MPTCP_PM_ADD_ADDR_RECEIVED);
mptcp_pm_nl_add_addr_received(msk);
@@ -1307,20 +1332,27 @@ static struct pm_nl_pernet *genl_info_pm_nl(struct genl_info *info)
return pm_nl_get_pernet(genl_info_net(info));
}
static int mptcp_nl_add_subflow_or_signal_addr(struct net *net)
static int mptcp_nl_add_subflow_or_signal_addr(struct net *net,
struct mptcp_addr_info *addr)
{
struct mptcp_sock *msk;
long s_slot = 0, s_num = 0;
while ((msk = mptcp_token_iter_next(net, &s_slot, &s_num)) != NULL) {
struct sock *sk = (struct sock *)msk;
struct mptcp_addr_info mpc_addr;
if (!READ_ONCE(msk->fully_established) ||
mptcp_pm_is_userspace(msk))
goto next;
/* if the endp linked to the init sf is re-added with a != ID */
mptcp_local_address((struct sock_common *)msk, &mpc_addr);
lock_sock(sk);
spin_lock_bh(&msk->pm.lock);
if (mptcp_addresses_equal(addr, &mpc_addr, addr->port))
msk->mpc_endpoint_id = addr->id;
mptcp_pm_create_subflow_or_signal_addr(msk);
spin_unlock_bh(&msk->pm.lock);
release_sock(sk);
@@ -1393,7 +1425,7 @@ int mptcp_pm_nl_add_addr_doit(struct sk_buff *skb, struct genl_info *info)
goto out_free;
}
mptcp_nl_add_subflow_or_signal_addr(sock_net(skb->sk));
mptcp_nl_add_subflow_or_signal_addr(sock_net(skb->sk), &entry->addr);
return 0;
out_free:
@@ -1438,6 +1470,12 @@ static bool remove_anno_list_by_saddr(struct mptcp_sock *msk,
return false;
}
static u8 mptcp_endp_get_local_id(struct mptcp_sock *msk,
const struct mptcp_addr_info *addr)
{
return msk->mpc_endpoint_id == addr->id ? 0 : addr->id;
}
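This small mapping is the heart of several hunks below: an endpoint that still backs the initial subflow must be advertised and removed using the on-wire ID 0, whatever ID it was (re-)created with. A compilable sketch of the rule:

#include <stdio.h>

/* Model of mptcp_endp_get_local_id(): the endpoint currently linked to
 * the initial subflow is addressed on the wire as ID 0.
 */
static unsigned char endp_get_local_id(unsigned char mpc_endpoint_id,
                                       unsigned char endp_id)
{
    return mpc_endpoint_id == endp_id ? 0 : endp_id;
}

int main(void)
{
    /* endpoint 1 backs the initial subflow: RM_ADDR must carry ID 0 */
    printf("endpoint 1 -> wire ID %u\n", endp_get_local_id(1, 1));
    /* any other endpoint keeps its own ID */
    printf("endpoint 2 -> wire ID %u\n", endp_get_local_id(1, 2));
    return 0;
}

The RM_ADDR, fullmesh and bulk-removal hunks that follow all switch to this mapping, so an initial endpoint that was deleted and re-created under a new ID is torn down using the ID the peer actually saw.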
static bool mptcp_pm_remove_anno_addr(struct mptcp_sock *msk,
const struct mptcp_addr_info *addr,
bool force)
@@ -1445,7 +1483,7 @@ static bool mptcp_pm_remove_anno_addr(struct mptcp_sock *msk,
struct mptcp_rm_list list = { .nr = 0 };
bool ret;
list.ids[list.nr++] = addr->id;
list.ids[list.nr++] = mptcp_endp_get_local_id(msk, addr);
ret = remove_anno_list_by_saddr(msk, addr);
if (ret || force) {
@@ -1472,13 +1510,11 @@ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
const struct mptcp_pm_addr_entry *entry)
{
const struct mptcp_addr_info *addr = &entry->addr;
struct mptcp_rm_list list = { .nr = 0 };
struct mptcp_rm_list list = { .nr = 1 };
long s_slot = 0, s_num = 0;
struct mptcp_sock *msk;
pr_debug("remove_id=%d", addr->id);
list.ids[list.nr++] = addr->id;
pr_debug("remove_id=%d\n", addr->id);
while ((msk = mptcp_token_iter_next(net, &s_slot, &s_num)) != NULL) {
struct sock *sk = (struct sock *)msk;
@@ -1497,6 +1533,7 @@ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
mptcp_pm_remove_anno_addr(msk, addr, remove_subflow &&
!(entry->flags & MPTCP_PM_ADDR_FLAG_IMPLICIT));
list.ids[0] = mptcp_endp_get_local_id(msk, addr);
if (remove_subflow) {
spin_lock_bh(&msk->pm.lock);
mptcp_pm_nl_rm_subflow_received(msk, &list);
@@ -1509,6 +1546,8 @@ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
spin_unlock_bh(&msk->pm.lock);
}
if (msk->mpc_endpoint_id == entry->addr.id)
msk->mpc_endpoint_id = 0;
release_sock(sk);
next:
@@ -1603,6 +1642,7 @@ int mptcp_pm_nl_del_addr_doit(struct sk_buff *skb, struct genl_info *info)
return ret;
}
/* Called from the userspace PM only */
void mptcp_pm_remove_addrs(struct mptcp_sock *msk, struct list_head *rm_list)
{
struct mptcp_rm_list alist = { .nr = 0 };
@@ -1631,6 +1671,7 @@ void mptcp_pm_remove_addrs(struct mptcp_sock *msk, struct list_head *rm_list)
}
}
/* Called from the in-kernel PM only */
static void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk,
struct list_head *rm_list)
{
@@ -1640,11 +1681,11 @@ static void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk,
list_for_each_entry(entry, rm_list, list) {
if (slist.nr < MPTCP_RM_IDS_MAX &&
lookup_subflow_by_saddr(&msk->conn_list, &entry->addr))
slist.ids[slist.nr++] = entry->addr.id;
slist.ids[slist.nr++] = mptcp_endp_get_local_id(msk, &entry->addr);
if (alist.nr < MPTCP_RM_IDS_MAX &&
remove_anno_list_by_saddr(msk, &entry->addr))
alist.ids[alist.nr++] = entry->addr.id;
alist.ids[alist.nr++] = mptcp_endp_get_local_id(msk, &entry->addr);
}
spin_lock_bh(&msk->pm.lock);
@@ -1941,7 +1982,7 @@ static void mptcp_pm_nl_fullmesh(struct mptcp_sock *msk,
{
struct mptcp_rm_list list = { .nr = 0 };
list.ids[list.nr++] = addr->id;
list.ids[list.nr++] = mptcp_endp_get_local_id(msk, addr);
spin_lock_bh(&msk->pm.lock);
mptcp_pm_nl_rm_subflow_received(msk, &list);

View File

@@ -139,7 +139,7 @@ static bool mptcp_try_coalesce(struct sock *sk, struct sk_buff *to,
!skb_try_coalesce(to, from, &fragstolen, &delta))
return false;
pr_debug("colesced seq %llx into %llx new len %d new end seq %llx",
pr_debug("colesced seq %llx into %llx new len %d new end seq %llx\n",
MPTCP_SKB_CB(from)->map_seq, MPTCP_SKB_CB(to)->map_seq,
to->len, MPTCP_SKB_CB(from)->end_seq);
MPTCP_SKB_CB(to)->end_seq = MPTCP_SKB_CB(from)->end_seq;
@@ -217,7 +217,7 @@ static void mptcp_data_queue_ofo(struct mptcp_sock *msk, struct sk_buff *skb)
end_seq = MPTCP_SKB_CB(skb)->end_seq;
max_seq = atomic64_read(&msk->rcv_wnd_sent);
pr_debug("msk=%p seq=%llx limit=%llx empty=%d", msk, seq, max_seq,
pr_debug("msk=%p seq=%llx limit=%llx empty=%d\n", msk, seq, max_seq,
RB_EMPTY_ROOT(&msk->out_of_order_queue));
if (after64(end_seq, max_seq)) {
/* out of window */
@@ -643,7 +643,7 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
}
}
pr_debug("msk=%p ssk=%p", msk, ssk);
pr_debug("msk=%p ssk=%p\n", msk, ssk);
tp = tcp_sk(ssk);
do {
u32 map_remaining, offset;
@@ -724,7 +724,7 @@ static bool __mptcp_ofo_queue(struct mptcp_sock *msk)
u64 end_seq;
p = rb_first(&msk->out_of_order_queue);
pr_debug("msk=%p empty=%d", msk, RB_EMPTY_ROOT(&msk->out_of_order_queue));
pr_debug("msk=%p empty=%d\n", msk, RB_EMPTY_ROOT(&msk->out_of_order_queue));
while (p) {
skb = rb_to_skb(p);
if (after64(MPTCP_SKB_CB(skb)->map_seq, msk->ack_seq))
@@ -746,7 +746,7 @@ static bool __mptcp_ofo_queue(struct mptcp_sock *msk)
int delta = msk->ack_seq - MPTCP_SKB_CB(skb)->map_seq;
/* skip overlapping data, if any */
pr_debug("uncoalesced seq=%llx ack seq=%llx delta=%d",
pr_debug("uncoalesced seq=%llx ack seq=%llx delta=%d\n",
MPTCP_SKB_CB(skb)->map_seq, msk->ack_seq,
delta);
MPTCP_SKB_CB(skb)->offset += delta;
@@ -1240,7 +1240,7 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
size_t copy;
int i;
pr_debug("msk=%p ssk=%p sending dfrag at seq=%llu len=%u already sent=%u",
pr_debug("msk=%p ssk=%p sending dfrag at seq=%llu len=%u already sent=%u\n",
msk, ssk, dfrag->data_seq, dfrag->data_len, info->sent);
if (WARN_ON_ONCE(info->sent > info->limit ||
@@ -1341,7 +1341,7 @@ alloc_skb:
mpext->use_map = 1;
mpext->dsn64 = 1;
pr_debug("data_seq=%llu subflow_seq=%u data_len=%u dsn64=%d",
pr_debug("data_seq=%llu subflow_seq=%u data_len=%u dsn64=%d\n",
mpext->data_seq, mpext->subflow_seq, mpext->data_len,
mpext->dsn64);
@@ -1892,7 +1892,7 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
if (!msk->first_pending)
WRITE_ONCE(msk->first_pending, dfrag);
}
pr_debug("msk=%p dfrag at seq=%llu len=%u sent=%u new=%d", msk,
pr_debug("msk=%p dfrag at seq=%llu len=%u sent=%u new=%d\n", msk,
dfrag->data_seq, dfrag->data_len, dfrag->already_sent,
!dfrag_collapsed);
@@ -2248,7 +2248,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
}
}
pr_debug("block timeout %ld", timeo);
pr_debug("block timeout %ld\n", timeo);
sk_wait_data(sk, &timeo, NULL);
}
@@ -2264,7 +2264,7 @@ out_err:
}
}
pr_debug("msk=%p rx queue empty=%d:%d copied=%d",
pr_debug("msk=%p rx queue empty=%d:%d copied=%d\n",
msk, skb_queue_empty_lockless(&sk->sk_receive_queue),
skb_queue_empty(&msk->receive_queue), copied);
if (!(flags & MSG_PEEK))
@@ -2326,7 +2326,7 @@ struct sock *mptcp_subflow_get_retrans(struct mptcp_sock *msk)
continue;
}
if (subflow->backup) {
if (subflow->backup || subflow->request_bkup) {
if (!backup)
backup = ssk;
continue;
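The retransmission picker previously honoured only the peer-announced backup flag; it now also treats subflows that were locally asked to become backup (request_bkup) as last-resort choices. A toy model of the selection order (plain C, no kernel types):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct sf {
    bool backup;       /* peer flagged the subflow as backup */
    bool request_bkup; /* we asked for it to become backup */
    const char *name;
};

/* First non-backup subflow wins; otherwise fall back to a backup one. */
static const char *pick_retrans(const struct sf *sf, size_t n)
{
    const struct sf *backup = NULL;

    for (size_t i = 0; i < n; i++) {
        if (sf[i].backup || sf[i].request_bkup) {
            if (!backup)
                backup = &sf[i];
            continue;
        }
        return sf[i].name;
    }
    return backup ? backup->name : "none";
}

int main(void)
{
    struct sf list[] = {
        { false, true,  "locally requested backup" },
        { false, false, "regular subflow" },
    };

    printf("retransmit on: %s\n", pick_retrans(list, 2));
    return 0;
}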
@@ -2508,6 +2508,12 @@ out:
void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
struct mptcp_subflow_context *subflow)
{
/* The first subflow can already be closed and still in the list */
if (subflow->close_event_done)
return;
subflow->close_event_done = true;
if (sk->sk_state == TCP_ESTABLISHED)
mptcp_event(MPTCP_EVENT_SUB_CLOSED, mptcp_sk(sk), ssk, GFP_KERNEL);
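Because the first subflow can be closed yet remain linked in conn_list, mptcp_close_ssk() can run twice for it; the new close_event_done bit (added to the subflow context in a later hunk) makes the SUB_CLOSED event fire at most once. A minimal sketch of the guard:

#include <stdbool.h>
#include <stdio.h>

struct subflow_ctx {
    bool close_event_done;
};

/* The event emission must stay idempotent even if the close path is
 * reached twice for the same subflow.
 */
static void close_ssk(struct subflow_ctx *sf)
{
    if (sf->close_event_done)
        return;
    sf->close_event_done = true;

    printf("emit MPTCP_EVENT_SUB_CLOSED\n");
}

int main(void)
{
    struct subflow_ctx sf = { false };

    close_ssk(&sf);
    close_ssk(&sf); /* second call: silently skipped */
    return 0;
}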
@@ -2533,8 +2539,11 @@ static void __mptcp_close_subflow(struct sock *sk)
mptcp_for_each_subflow_safe(msk, subflow, tmp) {
struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
int ssk_state = inet_sk_state_load(ssk);
if (inet_sk_state_load(ssk) != TCP_CLOSE)
if (ssk_state != TCP_CLOSE &&
(ssk_state != TCP_CLOSE_WAIT ||
inet_sk_state_load(sk) != TCP_ESTABLISHED))
continue;
/* 'subflow_data_ready' will re-sched once rx queue is empty */
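The reworked filter also reaps a subflow that a peer FIN parked in TCP_CLOSE_WAIT while the MPTCP connection itself is still ESTABLISHED; the same predicate reappears in subflow_sched_work_if_closed() further down. Expressed standalone (state values as in the kernel):

#include <stdbool.h>
#include <stdio.h>

/* Ad-hoc copies of the relevant kernel TCP states. */
enum { TCP_ESTABLISHED = 1, TCP_CLOSE = 7, TCP_CLOSE_WAIT = 8 };

/* Reap a subflow once it is fully closed, or once a peer FIN left it
 * in CLOSE_WAIT while the MPTCP connection is still up.
 */
static bool should_reap_subflow(int ssk_state, int msk_state)
{
    return ssk_state == TCP_CLOSE ||
           (ssk_state == TCP_CLOSE_WAIT && msk_state == TCP_ESTABLISHED);
}

int main(void)
{
    printf("CLOSE_WAIT under a live msk: %d\n",
           should_reap_subflow(TCP_CLOSE_WAIT, TCP_ESTABLISHED));
    return 0;
}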
@@ -2714,7 +2723,7 @@ static void mptcp_mp_fail_no_response(struct mptcp_sock *msk)
if (!ssk)
return;
pr_debug("MP_FAIL doesn't respond, reset the subflow");
pr_debug("MP_FAIL doesn't respond, reset the subflow\n");
slow = lock_sock_fast(ssk);
mptcp_subflow_reset(ssk);
@@ -2888,7 +2897,7 @@ void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how)
break;
default:
if (__mptcp_check_fallback(mptcp_sk(sk))) {
pr_debug("Fallback");
pr_debug("Fallback\n");
ssk->sk_shutdown |= how;
tcp_shutdown(ssk, how);
@@ -2898,7 +2907,7 @@ void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how)
WRITE_ONCE(mptcp_sk(sk)->snd_una, mptcp_sk(sk)->snd_nxt);
mptcp_schedule_work(sk);
} else {
pr_debug("Sending DATA_FIN on subflow %p", ssk);
pr_debug("Sending DATA_FIN on subflow %p\n", ssk);
tcp_send_ack(ssk);
if (!mptcp_rtx_timer_pending(sk))
mptcp_reset_rtx_timer(sk);
@@ -2964,7 +2973,7 @@ static void mptcp_check_send_data_fin(struct sock *sk)
struct mptcp_subflow_context *subflow;
struct mptcp_sock *msk = mptcp_sk(sk);
pr_debug("msk=%p snd_data_fin_enable=%d pending=%d snd_nxt=%llu write_seq=%llu",
pr_debug("msk=%p snd_data_fin_enable=%d pending=%d snd_nxt=%llu write_seq=%llu\n",
msk, msk->snd_data_fin_enable, !!mptcp_send_head(sk),
msk->snd_nxt, msk->write_seq);
@@ -2988,7 +2997,7 @@ static void __mptcp_wr_shutdown(struct sock *sk)
{
struct mptcp_sock *msk = mptcp_sk(sk);
pr_debug("msk=%p snd_data_fin_enable=%d shutdown=%x state=%d pending=%d",
pr_debug("msk=%p snd_data_fin_enable=%d shutdown=%x state=%d pending=%d\n",
msk, msk->snd_data_fin_enable, sk->sk_shutdown, sk->sk_state,
!!mptcp_send_head(sk));
@@ -3003,7 +3012,7 @@ static void __mptcp_destroy_sock(struct sock *sk)
{
struct mptcp_sock *msk = mptcp_sk(sk);
pr_debug("msk=%p", msk);
pr_debug("msk=%p\n", msk);
might_sleep();
@@ -3111,7 +3120,7 @@ cleanup:
mptcp_set_state(sk, TCP_CLOSE);
sock_hold(sk);
pr_debug("msk=%p state=%d", sk, sk->sk_state);
pr_debug("msk=%p state=%d\n", sk, sk->sk_state);
if (msk->token)
mptcp_event(MPTCP_EVENT_CLOSED, msk, NULL, GFP_KERNEL);
@@ -3543,7 +3552,7 @@ static int mptcp_get_port(struct sock *sk, unsigned short snum)
{
struct mptcp_sock *msk = mptcp_sk(sk);
pr_debug("msk=%p, ssk=%p", msk, msk->first);
pr_debug("msk=%p, ssk=%p\n", msk, msk->first);
if (WARN_ON_ONCE(!msk->first))
return -EINVAL;
@@ -3560,7 +3569,7 @@ void mptcp_finish_connect(struct sock *ssk)
sk = subflow->conn;
msk = mptcp_sk(sk);
pr_debug("msk=%p, token=%u", sk, subflow->token);
pr_debug("msk=%p, token=%u\n", sk, subflow->token);
subflow->map_seq = subflow->iasn;
subflow->map_subflow_seq = 1;
@@ -3589,7 +3598,7 @@ bool mptcp_finish_join(struct sock *ssk)
struct sock *parent = (void *)msk;
bool ret = true;
pr_debug("msk=%p, subflow=%p", msk, subflow);
pr_debug("msk=%p, subflow=%p\n", msk, subflow);
/* mptcp socket already closing? */
if (!mptcp_is_fully_established(parent)) {
@@ -3635,7 +3644,7 @@ err_prohibited:
static void mptcp_shutdown(struct sock *sk, int how)
{
pr_debug("sk=%p, how=%d", sk, how);
pr_debug("sk=%p, how=%d\n", sk, how);
if ((how & SEND_SHUTDOWN) && mptcp_close_state(sk))
__mptcp_wr_shutdown(sk);
@@ -3856,7 +3865,7 @@ static int mptcp_listen(struct socket *sock, int backlog)
struct sock *ssk;
int err;
pr_debug("msk=%p", msk);
pr_debug("msk=%p\n", msk);
lock_sock(sk);
@@ -3895,7 +3904,7 @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock,
struct mptcp_sock *msk = mptcp_sk(sock->sk);
struct sock *ssk, *newsk;
pr_debug("msk=%p", msk);
pr_debug("msk=%p\n", msk);
/* Buggy applications can call accept on socket states other than LISTEN
* but no need to allocate the first subflow just to error out.
@@ -3904,12 +3913,12 @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock,
if (!ssk)
return -EINVAL;
pr_debug("ssk=%p, listener=%p", ssk, mptcp_subflow_ctx(ssk));
pr_debug("ssk=%p, listener=%p\n", ssk, mptcp_subflow_ctx(ssk));
newsk = inet_csk_accept(ssk, arg);
if (!newsk)
return arg->err;
pr_debug("newsk=%p, subflow is mptcp=%d", newsk, sk_is_mptcp(newsk));
pr_debug("newsk=%p, subflow is mptcp=%d\n", newsk, sk_is_mptcp(newsk));
if (sk_is_mptcp(newsk)) {
struct mptcp_subflow_context *subflow;
struct sock *new_mptcp_sock;
@@ -4002,7 +4011,7 @@ static __poll_t mptcp_poll(struct file *file, struct socket *sock,
sock_poll_wait(file, sock, wait);
state = inet_sk_state_load(sk);
pr_debug("msk=%p state=%d flags=%lx", msk, state, msk->flags);
pr_debug("msk=%p state=%d flags=%lx\n", msk, state, msk->flags);
if (state == TCP_LISTEN) {
struct sock *ssk = READ_ONCE(msk->first);

View File

@@ -524,7 +524,8 @@ struct mptcp_subflow_context {
stale : 1, /* unable to snd/rcv data, do not use for xmit */
valid_csum_seen : 1, /* at least one csum validated */
is_mptfo : 1, /* subflow is doing TFO */
__unused : 10;
close_event_done : 1, /* has done the post-closed part */
__unused : 9;
bool data_avail;
bool scheduled;
u32 remote_nonce;
@@ -992,6 +993,8 @@ void mptcp_pm_add_addr_received(const struct sock *ssk,
void mptcp_pm_add_addr_echoed(struct mptcp_sock *msk,
const struct mptcp_addr_info *addr);
void mptcp_pm_add_addr_send_ack(struct mptcp_sock *msk);
bool mptcp_pm_nl_is_init_remote_addr(struct mptcp_sock *msk,
const struct mptcp_addr_info *remote);
void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk);
void mptcp_pm_rm_addr_received(struct mptcp_sock *msk,
const struct mptcp_rm_list *rm_list);
@@ -1177,7 +1180,7 @@ static inline bool mptcp_check_fallback(const struct sock *sk)
static inline void __mptcp_do_fallback(struct mptcp_sock *msk)
{
if (__mptcp_check_fallback(msk)) {
pr_debug("TCP fallback already done (msk=%p)", msk);
pr_debug("TCP fallback already done (msk=%p)\n", msk);
return;
}
set_bit(MPTCP_FALLBACK_DONE, &msk->flags);
@@ -1213,7 +1216,7 @@ static inline void mptcp_do_fallback(struct sock *ssk)
}
}
#define pr_fallback(a) pr_debug("%s:fallback to TCP (msk=%p)", __func__, a)
#define pr_fallback(a) pr_debug("%s:fallback to TCP (msk=%p)\n", __func__, a)
static inline bool mptcp_check_infinite_map(struct sk_buff *skb)
{

View File

@@ -86,7 +86,7 @@ int mptcp_register_scheduler(struct mptcp_sched_ops *sched)
list_add_tail_rcu(&sched->list, &mptcp_sched_list);
spin_unlock(&mptcp_sched_list_lock);
pr_debug("%s registered", sched->name);
pr_debug("%s registered\n", sched->name);
return 0;
}
@@ -118,7 +118,7 @@ int mptcp_init_sched(struct mptcp_sock *msk,
if (msk->sched->init)
msk->sched->init(msk);
pr_debug("sched=%s", msk->sched->name);
pr_debug("sched=%s\n", msk->sched->name);
return 0;
}

View File

@@ -873,7 +873,7 @@ int mptcp_setsockopt(struct sock *sk, int level, int optname,
struct mptcp_sock *msk = mptcp_sk(sk);
struct sock *ssk;
pr_debug("msk=%p", msk);
pr_debug("msk=%p\n", msk);
if (level == SOL_SOCKET)
return mptcp_setsockopt_sol_socket(msk, optname, optval, optlen);
@@ -1453,7 +1453,7 @@ int mptcp_getsockopt(struct sock *sk, int level, int optname,
struct mptcp_sock *msk = mptcp_sk(sk);
struct sock *ssk;
pr_debug("msk=%p", msk);
pr_debug("msk=%p\n", msk);
/* @@ the meaning of setsockopt() when the socket is connected and
* there are multiple subflows is not yet defined. It is up to the

View File

@@ -39,7 +39,7 @@ static void subflow_req_destructor(struct request_sock *req)
{
struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req);
pr_debug("subflow_req=%p", subflow_req);
pr_debug("subflow_req=%p\n", subflow_req);
if (subflow_req->msk)
sock_put((struct sock *)subflow_req->msk);
@@ -146,7 +146,7 @@ static int subflow_check_req(struct request_sock *req,
struct mptcp_options_received mp_opt;
bool opt_mp_capable, opt_mp_join;
pr_debug("subflow_req=%p, listener=%p", subflow_req, listener);
pr_debug("subflow_req=%p, listener=%p\n", subflow_req, listener);
#ifdef CONFIG_TCP_MD5SIG
/* no MPTCP if MD5SIG is enabled on this socket or we may run out of
@@ -221,7 +221,7 @@ again:
}
if (subflow_use_different_sport(subflow_req->msk, sk_listener)) {
pr_debug("syn inet_sport=%d %d",
pr_debug("syn inet_sport=%d %d\n",
ntohs(inet_sk(sk_listener)->inet_sport),
ntohs(inet_sk((struct sock *)subflow_req->msk)->inet_sport));
if (!mptcp_pm_sport_in_anno_list(subflow_req->msk, sk_listener)) {
@@ -243,7 +243,7 @@ again:
subflow_init_req_cookie_join_save(subflow_req, skb);
}
pr_debug("token=%u, remote_nonce=%u msk=%p", subflow_req->token,
pr_debug("token=%u, remote_nonce=%u msk=%p\n", subflow_req->token,
subflow_req->remote_nonce, subflow_req->msk);
}
@@ -527,7 +527,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
subflow->rel_write_seq = 1;
subflow->conn_finished = 1;
subflow->ssn_offset = TCP_SKB_CB(skb)->seq;
pr_debug("subflow=%p synack seq=%x", subflow, subflow->ssn_offset);
pr_debug("subflow=%p synack seq=%x\n", subflow, subflow->ssn_offset);
mptcp_get_options(skb, &mp_opt);
if (subflow->request_mptcp) {
@@ -559,7 +559,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
subflow->thmac = mp_opt.thmac;
subflow->remote_nonce = mp_opt.nonce;
WRITE_ONCE(subflow->remote_id, mp_opt.join_id);
pr_debug("subflow=%p, thmac=%llu, remote_nonce=%u backup=%d",
pr_debug("subflow=%p, thmac=%llu, remote_nonce=%u backup=%d\n",
subflow, subflow->thmac, subflow->remote_nonce,
subflow->backup);
@@ -585,7 +585,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_JOINSYNACKBACKUPRX);
if (subflow_use_different_dport(msk, sk)) {
pr_debug("synack inet_dport=%d %d",
pr_debug("synack inet_dport=%d %d\n",
ntohs(inet_sk(sk)->inet_dport),
ntohs(inet_sk(parent)->inet_dport));
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_JOINPORTSYNACKRX);
@@ -655,7 +655,7 @@ static int subflow_v4_conn_request(struct sock *sk, struct sk_buff *skb)
{
struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
pr_debug("subflow=%p", subflow);
pr_debug("subflow=%p\n", subflow);
/* Never answer to SYNs sent to broadcast or multicast */
if (skb_rtable(skb)->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST))
@@ -686,7 +686,7 @@ static int subflow_v6_conn_request(struct sock *sk, struct sk_buff *skb)
{
struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
pr_debug("subflow=%p", subflow);
pr_debug("subflow=%p\n", subflow);
if (skb->protocol == htons(ETH_P_IP))
return subflow_v4_conn_request(sk, skb);
@@ -807,7 +807,7 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
struct mptcp_sock *owner;
struct sock *child;
pr_debug("listener=%p, req=%p, conn=%p", listener, req, listener->conn);
pr_debug("listener=%p, req=%p, conn=%p\n", listener, req, listener->conn);
/* After child creation we must look for MPC even when options
* are not parsed
@@ -898,7 +898,7 @@ create_child:
ctx->conn = (struct sock *)owner;
if (subflow_use_different_sport(owner, sk)) {
pr_debug("ack inet_sport=%d %d",
pr_debug("ack inet_sport=%d %d\n",
ntohs(inet_sk(sk)->inet_sport),
ntohs(inet_sk((struct sock *)owner)->inet_sport));
if (!mptcp_pm_sport_in_anno_list(owner, sk)) {
@@ -961,7 +961,7 @@ enum mapping_status {
static void dbg_bad_map(struct mptcp_subflow_context *subflow, u32 ssn)
{
pr_debug("Bad mapping: ssn=%d map_seq=%d map_data_len=%d",
pr_debug("Bad mapping: ssn=%d map_seq=%d map_data_len=%d\n",
ssn, subflow->map_subflow_seq, subflow->map_data_len);
}
@@ -1121,7 +1121,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
data_len = mpext->data_len;
if (data_len == 0) {
pr_debug("infinite mapping received");
pr_debug("infinite mapping received\n");
MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPRX);
subflow->map_data_len = 0;
return MAPPING_INVALID;
@@ -1133,7 +1133,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
if (data_len == 1) {
bool updated = mptcp_update_rcv_data_fin(msk, mpext->data_seq,
mpext->dsn64);
pr_debug("DATA_FIN with no payload seq=%llu", mpext->data_seq);
pr_debug("DATA_FIN with no payload seq=%llu\n", mpext->data_seq);
if (subflow->map_valid) {
/* A DATA_FIN might arrive in a DSS
* option before the previous mapping
@@ -1159,7 +1159,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
data_fin_seq &= GENMASK_ULL(31, 0);
mptcp_update_rcv_data_fin(msk, data_fin_seq, mpext->dsn64);
pr_debug("DATA_FIN with mapping seq=%llu dsn64=%d",
pr_debug("DATA_FIN with mapping seq=%llu dsn64=%d\n",
data_fin_seq, mpext->dsn64);
/* Adjust for DATA_FIN using 1 byte of sequence space */
@@ -1205,7 +1205,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
if (unlikely(subflow->map_csum_reqd != csum_reqd))
return MAPPING_INVALID;
pr_debug("new map seq=%llu subflow_seq=%u data_len=%u csum=%d:%u",
pr_debug("new map seq=%llu subflow_seq=%u data_len=%u csum=%d:%u\n",
subflow->map_seq, subflow->map_subflow_seq,
subflow->map_data_len, subflow->map_csum_reqd,
subflow->map_data_csum);
@@ -1240,7 +1240,7 @@ static void mptcp_subflow_discard_data(struct sock *ssk, struct sk_buff *skb,
avail_len = skb->len - offset;
incr = limit >= avail_len ? avail_len + fin : limit;
pr_debug("discarding=%d len=%d offset=%d seq=%d", incr, skb->len,
pr_debug("discarding=%d len=%d offset=%d seq=%d\n", incr, skb->len,
offset, subflow->map_subflow_seq);
MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DUPDATA);
tcp_sk(ssk)->copied_seq += incr;
@@ -1255,12 +1255,16 @@ out:
/* sched mptcp worker to remove the subflow if no more data is pending */
static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ssk)
{
if (likely(ssk->sk_state != TCP_CLOSE))
struct sock *sk = (struct sock *)msk;
if (likely(ssk->sk_state != TCP_CLOSE &&
(ssk->sk_state != TCP_CLOSE_WAIT ||
inet_sk_state_load(sk) != TCP_ESTABLISHED)))
return;
if (skb_queue_empty(&ssk->sk_receive_queue) &&
!test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags))
mptcp_schedule_work((struct sock *)msk);
mptcp_schedule_work(sk);
}
static bool subflow_can_fallback(struct mptcp_subflow_context *subflow)
@@ -1337,7 +1341,7 @@ static bool subflow_check_data_avail(struct sock *ssk)
old_ack = READ_ONCE(msk->ack_seq);
ack_seq = mptcp_subflow_get_mapped_dsn(subflow);
pr_debug("msk ack_seq=%llx subflow ack_seq=%llx", old_ack,
pr_debug("msk ack_seq=%llx subflow ack_seq=%llx\n", old_ack,
ack_seq);
if (unlikely(before64(ack_seq, old_ack))) {
mptcp_subflow_discard_data(ssk, skb, old_ack - ack_seq);
@@ -1409,7 +1413,7 @@ bool mptcp_subflow_data_available(struct sock *sk)
subflow->map_valid = 0;
WRITE_ONCE(subflow->data_avail, false);
pr_debug("Done with mapping: seq=%u data_len=%u",
pr_debug("Done with mapping: seq=%u data_len=%u\n",
subflow->map_subflow_seq,
subflow->map_data_len);
}
@@ -1519,7 +1523,7 @@ void mptcpv6_handle_mapped(struct sock *sk, bool mapped)
target = mapped ? &subflow_v6m_specific : subflow_default_af_ops(sk);
pr_debug("subflow=%p family=%d ops=%p target=%p mapped=%d",
pr_debug("subflow=%p family=%d ops=%p target=%p mapped=%d\n",
subflow, sk->sk_family, icsk->icsk_af_ops, target, mapped);
if (likely(icsk->icsk_af_ops == target))
@@ -1612,7 +1616,7 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
goto failed;
mptcp_crypto_key_sha(subflow->remote_key, &remote_token, NULL);
pr_debug("msk=%p remote_token=%u local_id=%d remote_id=%d", msk,
pr_debug("msk=%p remote_token=%u local_id=%d remote_id=%d\n", msk,
remote_token, local_id, remote_id);
subflow->remote_token = remote_token;
WRITE_ONCE(subflow->remote_id, remote_id);
@@ -1747,7 +1751,7 @@ int mptcp_subflow_create_socket(struct sock *sk, unsigned short family,
SOCK_INODE(sf)->i_gid = SOCK_INODE(sk->sk_socket)->i_gid;
subflow = mptcp_subflow_ctx(sf->sk);
pr_debug("subflow=%p", subflow);
pr_debug("subflow=%p\n", subflow);
*new_sock = sf;
sock_hold(sk);
@@ -1776,7 +1780,7 @@ static struct mptcp_subflow_context *subflow_create_ctx(struct sock *sk,
INIT_LIST_HEAD(&ctx->node);
INIT_LIST_HEAD(&ctx->delegated_node);
pr_debug("subflow=%p", ctx);
pr_debug("subflow=%p\n", ctx);
ctx->tcp_sock = sk;
WRITE_ONCE(ctx->local_id, -1);
@@ -1927,7 +1931,7 @@ static int subflow_ulp_init(struct sock *sk)
goto out;
}
pr_debug("subflow=%p, family=%d", ctx, sk->sk_family);
pr_debug("subflow=%p, family=%d\n", ctx, sk->sk_family);
tp->is_mptcp = 1;
ctx->icsk_af_ops = icsk->icsk_af_ops;

View File

@@ -663,7 +663,9 @@ begin:
pband = &q->band_flows[q->band_nr];
pband->credit = min(pband->credit + pband->quantum,
pband->quantum);
if (pband->credit > 0)
goto begin;
retry = 0;
}
if (q->time_next_delayed_flow != ~0ULL)
qdisc_watchdog_schedule_range_ns(&q->watchdog,
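The refill path above used to jump back unconditionally; with very small weights the band credit can stay non-positive after adding one quantum, so the retry is now gated on the credit actually turning positive. A self-contained model of the refill arithmetic (illustrative values only, not the qdisc's internals):

#include <stdio.h>

/* After topping the band credit up (capped at one quantum), only
 * signal a retry when the credit actually became positive; a band deep
 * in deficit may need several refills first.
 */
static int refill_and_check(int *credit, int quantum)
{
    int c = *credit + quantum;

    *credit = c < quantum ? c : quantum; /* min(credit + quantum, quantum) */
    return *credit > 0;
}

int main(void)
{
    int credit = -5;

    while (!refill_and_check(&credit, 2))
        printf("credit=%d, no retry yet\n", credit);
    printf("credit=%d, retry dequeue\n", credit);
    return 0;
}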

View File

@@ -2260,12 +2260,6 @@ enum sctp_disposition sctp_sf_do_5_2_4_dupcook(
}
}
/* Update socket peer label if first association. */
if (security_sctp_assoc_request(new_asoc, chunk->head_skb ?: chunk->skb)) {
sctp_association_free(new_asoc);
return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
}
/* Set temp so that it won't be added into hashtable */
new_asoc->temp = 1;
@@ -2274,6 +2268,22 @@ enum sctp_disposition sctp_sf_do_5_2_4_dupcook(
*/
action = sctp_tietags_compare(new_asoc, asoc);
/* In cases C and E the association doesn't enter the ESTABLISHED
* state, so there is no need to call security_sctp_assoc_request().
*/
switch (action) {
case 'A': /* Association restart. */
case 'B': /* Collision case B. */
case 'D': /* Collision case D. */
/* Update socket peer label if first association. */
if (security_sctp_assoc_request((struct sctp_association *)asoc,
chunk->head_skb ?: chunk->skb)) {
sctp_association_free(new_asoc);
return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
}
break;
}
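As the new comment notes, only tie-tag actions 'A', 'B' and 'D' can drive the association to ESTABLISHED, so the peer-label check now runs for those cases alone instead of unconditionally before the comparison. A compressed model of the reordered flow, with a stubbed LSM hook:

#include <stdbool.h>
#include <stdio.h>

/* Stub for security_sctp_assoc_request(); nonzero means "deny". */
static int lsm_check(char action)
{
    printf("LSM consulted for action %c\n", action);
    return 0;
}

/* Only the actions that can reach ESTABLISHED consult the LSM;
 * 'C' and 'E' skip it entirely.
 */
static bool handle_dupcook(char action)
{
    switch (action) {
    case 'A':
    case 'B':
    case 'D':
        if (lsm_check(action))
            return false; /* discard the chunk in the real code */
        break;
    }
    /* ...per-action duplicate-cookie processing follows... */
    return true;
}

int main(void)
{
    handle_dupcook('A');
    handle_dupcook('C'); /* no LSM call */
    return 0;
}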
switch (action) {
case 'A': /* Association restart. */
retval = sctp_sf_do_dupcook_a(net, ep, asoc, chunk, commands,

View File

@@ -4015,16 +4015,6 @@ sub process {
}
}
# Block comment styles
# Networking with an initial /*
if ($realfile =~ m@^(drivers/net/|net/)@ &&
$prevrawline =~ /^\+[ \t]*\/\*[ \t]*$/ &&
$rawline =~ /^\+[ \t]*\*/ &&
$realline > 3) { # Do not warn about the initial copyright comment block after SPDX-License-Identifier
WARN("NETWORKING_BLOCK_COMMENT_STYLE",
"networking block comments don't use an empty /* line, use /* Comment...\n" . $hereprev);
}
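With this check removed, code under net/ and drivers/net/ may use the kernel's default block-comment style as well; both variants below now pass checkpatch (an illustrative C fragment, not taken from any particular file):

/* The style checkpatch used to require under net/ and drivers/net/:
 * the text begins on the opening line of the comment.
 */
static int net_style_example;

/*
 * The default kernel style with an empty opening line, formerly
 * flagged as NETWORKING_BLOCK_COMMENT_STYLE, is now accepted too.
 */
static int kernel_style_example;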
# Block comments use * on subsequent lines
if ($prevline =~ /$;[ \t]*$/ && #ends in comment
$prevrawline =~ /^\+.*?\/\*/ && #starting /*

View File

@@ -571,6 +571,10 @@ vlan_over_vlan_aware_bridge()
cleanup()
{
pre_cleanup
ip link set $h2 down
ip link set $h1 down
vrf_cleanup
}

View File

@@ -233,6 +233,9 @@ cleanup()
{
pre_cleanup
ip link set dev $swp2 down
ip link set dev $swp1 down
h2_destroy
h1_destroy

View File

@@ -420,12 +420,17 @@ reset_with_fail()
fi
}
start_events()
{
mptcp_lib_events "${ns1}" "${evts_ns1}" evts_ns1_pid
mptcp_lib_events "${ns2}" "${evts_ns2}" evts_ns2_pid
}
reset_with_events()
{
reset "${1}" || return 1
mptcp_lib_events "${ns1}" "${evts_ns1}" evts_ns1_pid
mptcp_lib_events "${ns2}" "${evts_ns2}" evts_ns2_pid
start_events
}
reset_with_tcp_filter()
@@ -1112,7 +1117,7 @@ chk_csum_nr()
print_check "sum"
count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtDataCsumErr")
if [ "$count" != "$csum_ns1" ]; then
if [ -n "$count" ] && [ "$count" != "$csum_ns1" ]; then
extra_msg+=" ns1=$count"
fi
if [ -z "$count" ]; then
@@ -1125,7 +1130,7 @@ chk_csum_nr()
fi
print_check "csum"
count=$(mptcp_lib_get_counter ${ns2} "MPTcpExtDataCsumErr")
if [ "$count" != "$csum_ns2" ]; then
if [ -n "$count" ] && [ "$count" != "$csum_ns2" ]; then
extra_msg+=" ns2=$count"
fi
if [ -z "$count" ]; then
@@ -1169,7 +1174,7 @@ chk_fail_nr()
print_check "ftx"
count=$(mptcp_lib_get_counter ${ns_tx} "MPTcpExtMPFailTx")
if [ "$count" != "$fail_tx" ]; then
if [ -n "$count" ] && [ "$count" != "$fail_tx" ]; then
extra_msg+=",tx=$count"
fi
if [ -z "$count" ]; then
@@ -1183,7 +1188,7 @@ chk_fail_nr()
print_check "failrx"
count=$(mptcp_lib_get_counter ${ns_rx} "MPTcpExtMPFailRx")
if [ "$count" != "$fail_rx" ]; then
if [ -n "$count" ] && [ "$count" != "$fail_rx" ]; then
extra_msg+=",rx=$count"
fi
if [ -z "$count" ]; then
@@ -3333,6 +3338,36 @@ userspace_pm_chk_get_addr()
fi
}
# $1: ns ; $2: event type ; $3: count
chk_evt_nr()
{
local ns=${1}
local evt_name="${2}"
local exp="${3}"
local evts="${evts_ns1}"
local evt="${!evt_name}"
local count
evt_name="${evt_name:16}" # without MPTCP_LIB_EVENT_
[ "${ns}" == "ns2" ] && evts="${evts_ns2}"
print_check "event ${ns} ${evt_name} (${exp})"
if [[ "${evt_name}" = "LISTENER_"* ]] &&
! mptcp_lib_kallsyms_has "mptcp_event_pm_listener$"; then
print_skip "event not supported"
return
fi
count=$(grep -cw "type:${evt}" "${evts}")
if [ "${count}" != "${exp}" ]; then
fail_test "got ${count} events, expected ${exp}"
else
print_ok
fi
}
userspace_tests()
{
# userspace pm type prevents add_addr
@@ -3429,14 +3464,12 @@ userspace_tests()
"signal"
userspace_pm_chk_get_addr "${ns1}" "10" "id 10 flags signal 10.0.2.1"
userspace_pm_chk_get_addr "${ns1}" "20" "id 20 flags signal 10.0.3.1"
userspace_pm_rm_addr $ns1 10
userspace_pm_rm_sf $ns1 "::ffff:10.0.2.1" $MPTCP_LIB_EVENT_SUB_ESTABLISHED
userspace_pm_chk_dump_addr "${ns1}" \
"id 20 flags signal 10.0.3.1" "after rm_addr 10"
"id 20 flags signal 10.0.3.1" "after rm_sf 10"
userspace_pm_rm_addr $ns1 20
userspace_pm_rm_sf $ns1 10.0.3.1 $MPTCP_LIB_EVENT_SUB_ESTABLISHED
userspace_pm_chk_dump_addr "${ns1}" "" "after rm_addr 20"
chk_rm_nr 2 2 invert
chk_rm_nr 1 1 invert
chk_mptcp_info subflows 0 subflows 0
chk_subflows_total 1 1
kill_events_pids
@@ -3460,12 +3493,11 @@ userspace_tests()
"id 20 flags subflow 10.0.3.2" \
"subflow"
userspace_pm_chk_get_addr "${ns2}" "20" "id 20 flags subflow 10.0.3.2"
userspace_pm_rm_addr $ns2 20
userspace_pm_rm_sf $ns2 10.0.3.2 $MPTCP_LIB_EVENT_SUB_ESTABLISHED
userspace_pm_chk_dump_addr "${ns2}" \
"" \
"after rm_addr 20"
chk_rm_nr 1 1
"after rm_sf 20"
chk_rm_nr 0 1
chk_mptcp_info subflows 0 subflows 0
chk_subflows_total 1 1
kill_events_pids
@@ -3575,27 +3607,29 @@ endpoint_tests()
if reset_with_tcp_filter "delete and re-add" ns2 10.0.3.2 REJECT OUTPUT &&
mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
pm_nl_set_limits $ns1 0 2
pm_nl_set_limits $ns2 0 2
start_events
pm_nl_set_limits $ns1 0 3
pm_nl_set_limits $ns2 0 3
pm_nl_add_endpoint $ns2 10.0.1.2 id 1 dev ns2eth1 flags subflow
pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow
test_linkfail=4 speed=20 \
test_linkfail=4 speed=5 \
run_tests $ns1 $ns2 10.0.1.1 &
local tests_pid=$!
wait_mpj $ns2
pm_nl_check_endpoint "creation" \
$ns2 10.0.2.2 id 2 flags subflow dev ns2eth2
chk_subflow_nr "before delete" 2
chk_subflow_nr "before delete id 2" 2
chk_mptcp_info subflows 1 subflows 1
pm_nl_del_endpoint $ns2 2 10.0.2.2
sleep 0.5
chk_subflow_nr "after delete" 1
chk_subflow_nr "after delete id 2" 1
chk_mptcp_info subflows 0 subflows 0
pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow
wait_mpj $ns2
chk_subflow_nr "after re-add" 2
chk_subflow_nr "after re-add id 2" 2
chk_mptcp_info subflows 1 subflows 1
pm_nl_add_endpoint $ns2 10.0.3.2 id 3 flags subflow
@@ -3610,21 +3644,51 @@ endpoint_tests()
chk_subflow_nr "after no reject" 3
chk_mptcp_info subflows 2 subflows 2
local i
for i in $(seq 3); do
pm_nl_del_endpoint $ns2 1 10.0.1.2
sleep 0.5
chk_subflow_nr "after delete id 0 ($i)" 2
chk_mptcp_info subflows 2 subflows 2 # only decr for additional sf
pm_nl_add_endpoint $ns2 10.0.1.2 id 1 dev ns2eth1 flags subflow
wait_mpj $ns2
chk_subflow_nr "after re-add id 0 ($i)" 3
chk_mptcp_info subflows 3 subflows 3
done
mptcp_lib_kill_wait $tests_pid
chk_join_nr 3 3 3
chk_rm_nr 1 1
kill_events_pids
chk_evt_nr ns1 MPTCP_LIB_EVENT_LISTENER_CREATED 1
chk_evt_nr ns1 MPTCP_LIB_EVENT_CREATED 1
chk_evt_nr ns1 MPTCP_LIB_EVENT_ESTABLISHED 1
chk_evt_nr ns1 MPTCP_LIB_EVENT_ANNOUNCED 0
chk_evt_nr ns1 MPTCP_LIB_EVENT_REMOVED 4
chk_evt_nr ns1 MPTCP_LIB_EVENT_SUB_ESTABLISHED 6
chk_evt_nr ns1 MPTCP_LIB_EVENT_SUB_CLOSED 4
chk_evt_nr ns2 MPTCP_LIB_EVENT_CREATED 1
chk_evt_nr ns2 MPTCP_LIB_EVENT_ESTABLISHED 1
chk_evt_nr ns2 MPTCP_LIB_EVENT_ANNOUNCED 0
chk_evt_nr ns2 MPTCP_LIB_EVENT_REMOVED 0
chk_evt_nr ns2 MPTCP_LIB_EVENT_SUB_ESTABLISHED 6
chk_evt_nr ns2 MPTCP_LIB_EVENT_SUB_CLOSED 5 # one has been closed before estab
chk_join_nr 6 6 6
chk_rm_nr 4 4
fi
# remove and re-add
if reset "delete re-add signal" &&
if reset_with_events "delete re-add signal" &&
mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
pm_nl_set_limits $ns1 0 2
pm_nl_set_limits $ns2 2 2
pm_nl_set_limits $ns1 0 3
pm_nl_set_limits $ns2 3 3
pm_nl_add_endpoint $ns1 10.0.2.1 id 1 flags signal
# broadcast IP: no packet for this address will be received on ns1
pm_nl_add_endpoint $ns1 224.0.0.1 id 2 flags signal
test_linkfail=4 speed=20 \
pm_nl_add_endpoint $ns1 10.0.1.1 id 42 flags signal
test_linkfail=4 speed=5 \
run_tests $ns1 $ns2 10.0.1.1 &
local tests_pid=$!
@@ -3645,11 +3709,47 @@ endpoint_tests()
wait_mpj $ns2
chk_subflow_nr "after re-add" 3
chk_mptcp_info subflows 2 subflows 2
pm_nl_del_endpoint $ns1 42 10.0.1.1
sleep 0.5
chk_subflow_nr "after delete ID 0" 2
chk_mptcp_info subflows 2 subflows 2
pm_nl_add_endpoint $ns1 10.0.1.1 id 99 flags signal
wait_mpj $ns2
chk_subflow_nr "after re-add ID 0" 3
chk_mptcp_info subflows 3 subflows 3
pm_nl_del_endpoint $ns1 99 10.0.1.1
sleep 0.5
chk_subflow_nr "after re-delete ID 0" 2
chk_mptcp_info subflows 2 subflows 2
pm_nl_add_endpoint $ns1 10.0.1.1 id 88 flags signal
wait_mpj $ns2
chk_subflow_nr "after re-re-add ID 0" 3
chk_mptcp_info subflows 3 subflows 3
mptcp_lib_kill_wait $tests_pid
chk_join_nr 3 3 3
chk_add_nr 4 4
chk_rm_nr 2 1 invert
kill_events_pids
chk_evt_nr ns1 MPTCP_LIB_EVENT_LISTENER_CREATED 1
chk_evt_nr ns1 MPTCP_LIB_EVENT_CREATED 1
chk_evt_nr ns1 MPTCP_LIB_EVENT_ESTABLISHED 1
chk_evt_nr ns1 MPTCP_LIB_EVENT_ANNOUNCED 0
chk_evt_nr ns1 MPTCP_LIB_EVENT_REMOVED 0
chk_evt_nr ns1 MPTCP_LIB_EVENT_SUB_ESTABLISHED 5
chk_evt_nr ns1 MPTCP_LIB_EVENT_SUB_CLOSED 3
chk_evt_nr ns2 MPTCP_LIB_EVENT_CREATED 1
chk_evt_nr ns2 MPTCP_LIB_EVENT_ESTABLISHED 1
chk_evt_nr ns2 MPTCP_LIB_EVENT_ANNOUNCED 6
chk_evt_nr ns2 MPTCP_LIB_EVENT_REMOVED 4
chk_evt_nr ns2 MPTCP_LIB_EVENT_SUB_ESTABLISHED 5
chk_evt_nr ns2 MPTCP_LIB_EVENT_SUB_CLOSED 3
chk_join_nr 5 5 5
chk_add_nr 6 6
chk_rm_nr 4 3 invert
fi
# flush and re-add

View File

@@ -12,10 +12,14 @@ readonly KSFT_SKIP=4
readonly KSFT_TEST="${MPTCP_LIB_KSFT_TEST:-$(basename "${0}" .sh)}"
# These variables are used in some selftests, read-only
declare -rx MPTCP_LIB_EVENT_CREATED=1 # MPTCP_EVENT_CREATED
declare -rx MPTCP_LIB_EVENT_ESTABLISHED=2 # MPTCP_EVENT_ESTABLISHED
declare -rx MPTCP_LIB_EVENT_CLOSED=3 # MPTCP_EVENT_CLOSED
declare -rx MPTCP_LIB_EVENT_ANNOUNCED=6 # MPTCP_EVENT_ANNOUNCED
declare -rx MPTCP_LIB_EVENT_REMOVED=7 # MPTCP_EVENT_REMOVED
declare -rx MPTCP_LIB_EVENT_SUB_ESTABLISHED=10 # MPTCP_EVENT_SUB_ESTABLISHED
declare -rx MPTCP_LIB_EVENT_SUB_CLOSED=11 # MPTCP_EVENT_SUB_CLOSED
declare -rx MPTCP_LIB_EVENT_SUB_PRIORITY=13 # MPTCP_EVENT_SUB_PRIORITY
declare -rx MPTCP_LIB_EVENT_LISTENER_CREATED=15 # MPTCP_EVENT_LISTENER_CREATED
declare -rx MPTCP_LIB_EVENT_LISTENER_CLOSED=16 # MPTCP_EVENT_LISTENER_CLOSED
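
The values hard-coded above track enum mptcp_event_type from the kernel's MPTCP uAPI headers, as the trailing comments note. A small C cross-check of that correspondence (a local re-declaration for illustration, not an include):

#include <assert.h>

/* Local copy of the values used by the shell library; the names follow
 * enum mptcp_event_type in the MPTCP uAPI.
 */
enum mptcp_event_type {
    MPTCP_EVENT_CREATED = 1,
    MPTCP_EVENT_ESTABLISHED = 2,
    MPTCP_EVENT_CLOSED = 3,
    MPTCP_EVENT_ANNOUNCED = 6,
    MPTCP_EVENT_REMOVED = 7,
    MPTCP_EVENT_SUB_ESTABLISHED = 10,
    MPTCP_EVENT_SUB_CLOSED = 11,
    MPTCP_EVENT_SUB_PRIORITY = 13,
    MPTCP_EVENT_LISTENER_CREATED = 15,
    MPTCP_EVENT_LISTENER_CLOSED = 16,
};

int main(void)
{
    /* chk_evt_nr() greps for "type:<value>", so these numbers must stay
     * in sync with the kernel definitions.
     */
    assert(MPTCP_EVENT_SUB_CLOSED == 11);
    return 0;
}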