Merge tag 'net-5.17-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from can, xfrm, wifi, bluetooth, and netfilter.

  Lots of various size fixes, the length of the tag speaks for itself.
  Most of the 5.17-relevant stuff comes from xfrm, wifi and bt trees
  which had been lagging as you pointed out previously. But there's also
  a larger than we'd like portion of fixes for bugs from previous
  releases.

  Three more fixes are still under discussion, including an xfrm revert
  for a uAPI error.

  Current release - regressions:

   - iwlwifi: don't advertise TWT support, prevent FW crash

   - xfrm: fix the if_id check in changelink

   - xen/netfront: destroy queues before real_num_tx_queues is zeroed

   - bluetooth: fix not checking MGMT cmd pending queue, make scanning
     work again

  Current release - new code bugs:

   - mptcp: make SIOCOUTQ accurate for fallback socket

   - bluetooth: access skb->len after null check

   - bluetooth: hci_sync: fix not using conn_timeout

   - smc: fix cleanup when register ULP fails

   - dsa: restore error path of dsa_tree_change_tag_proto

   - iwlwifi: fix build error for IWLMEI

   - iwlwifi: mvm: propagate error from request_ownership to the user

  Previous releases - regressions:

   - xfrm: fix pMTU regression when reported pMTU is too small

   - xfrm: fix TCP MSS calculation when pMTU is close to 1280

   - bluetooth: fix bt_skb_sendmmsg not allocating partial chunks

   - ipv6: ensure we call ipv6_mc_down() at most once, prevent leaks

   - ipv6: prevent leaks in igmp6 when input queues get full

   - fix up skbs delta_truesize in UDP GRO frag_list

   - eth: e1000e: fix possible HW unit hang after an s0ix exit

   - eth: e1000e: correct NVM checksum verification flow

   - ptp: ocp: fix large time adjustments

  Previous releases - always broken:

   - tcp: make tcp_read_sock() more robust in presence of urgent data

   - xfrm: distinguishing SAs and SPs by if_id in xfrm_migrate

   - xfrm: fix xfrm_migrate issues when address family changes

   - dcb: flush lingering app table entries for unregistered devices

   - smc: fix unexpected SMC_CLC_DECL_ERR_REGRMB error

   - mac80211: fix EAPoL rekey fail in 802.3 rx path

   - mac80211: fix forwarded mesh frames AC & queue selection

   - netfilter: nf_queue: fix socket access races and bugs

   - batman-adv: fix ToCToU iflink problems and check the result belongs
     to the expected net namespace

   - can: gs_usb, etas_es58x: fix opened_channel_cnt's accounting

   - can: rcar_canfd: register the CAN device when fully ready

   - eth: igb, igc: phy: drop premature return leaking HW semaphore

   - eth: ixgbe: xsk: change !netif_carrier_ok() handling in
     ixgbe_xmit_zc(), prevent live lock when link goes down

   - eth: stmmac: only enable DMA interrupts when ready

   - eth: sparx5: move vlan checks before any changes are made

   - eth: iavf: fix races around init, removal, resets and vlan ops

   - ibmvnic: more reset flow fixes

  Misc:

   - eth: fix return value of __setup handlers"

* tag 'net-5.17-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (92 commits)
  ipv6: fix skb drops in igmp6_event_query() and igmp6_event_report()
  net: dsa: make dsa_tree_change_tag_proto actually unwind the tag proto change
  ixgbe: xsk: change !netif_carrier_ok() handling in ixgbe_xmit_zc()
  selftests: mlxsw: resource_scale: Fix return value
  selftests: mlxsw: tc_police_scale: Make test more robust
  net: dcb: disable softirqs in dcbnl_flush_dev()
  bnx2: Fix an error message
  sfc: extend the locking on mcdi->seqno
  net/smc: fix unexpected SMC_CLC_DECL_ERR_REGRMB error cause by server
  net/smc: fix unexpected SMC_CLC_DECL_ERR_REGRMB error generated by client
  net: arcnet: com20020: Fix null-ptr-deref in com20020pci_probe()
  tcp: make tcp_read_sock() more robust
  bpf, sockmap: Do not ignore orig_len parameter
  net: ipa: add an interconnect dependency
  net: fix up skbs delta_truesize in UDP GRO frag_list
  iwlwifi: mvm: return value for request_ownership
  nl80211: Update bss channel on channel switch for P2P_CLIENT
  iwlwifi: fix build error for IWLMEI
  ptp: ocp: Add ptp_ocp_adjtime_coarse for large adjustments
  batman-adv: Don't expect inter-netns unique iflink indices
  ...
commit b949c21fc2
Linus Torvalds, 2022-03-03 11:10:56 -08:00
84 changed files with 988 additions and 343 deletions

@ -1676,6 +1676,8 @@ static int fs_init(struct fs_dev *dev)
dev->hw_base = pci_resource_start(pci_dev, 0);
dev->base = ioremap(dev->hw_base, 0x1000);
if (!dev->base)
return 1;
reset_chip (dev);


@ -138,6 +138,9 @@ static int com20020pci_probe(struct pci_dev *pdev,
return -ENOMEM;
ci = (struct com20020_pci_card_info *)id->driver_data;
if (!ci)
return -EINVAL;
priv->ci = ci;
mm = &ci->misc_map;


@ -1715,15 +1715,15 @@ static int rcar_canfd_channel_probe(struct rcar_canfd_global *gpriv, u32 ch,
netif_napi_add(ndev, &priv->napi, rcar_canfd_rx_poll,
RCANFD_NAPI_WEIGHT);
spin_lock_init(&priv->tx_lock);
devm_can_led_init(ndev);
gpriv->ch[priv->channel] = priv;
err = register_candev(ndev);
if (err) {
dev_err(&pdev->dev,
"register_candev() failed, error %d\n", err);
goto fail_candev;
}
spin_lock_init(&priv->tx_lock);
devm_can_led_init(ndev);
gpriv->ch[priv->channel] = priv;
dev_info(&pdev->dev, "device registered (channel %u)\n", priv->channel);
return 0;


@ -1787,7 +1787,7 @@ static int es58x_open(struct net_device *netdev)
struct es58x_device *es58x_dev = es58x_priv(netdev)->es58x_dev;
int ret;
if (atomic_inc_return(&es58x_dev->opened_channel_cnt) == 1) {
if (!es58x_dev->opened_channel_cnt) {
ret = es58x_alloc_rx_urbs(es58x_dev);
if (ret)
return ret;
@ -1805,12 +1805,13 @@ static int es58x_open(struct net_device *netdev)
if (ret)
goto free_urbs;
es58x_dev->opened_channel_cnt++;
netif_start_queue(netdev);
return ret;
free_urbs:
if (atomic_dec_and_test(&es58x_dev->opened_channel_cnt))
if (!es58x_dev->opened_channel_cnt)
es58x_free_urbs(es58x_dev);
netdev_err(netdev, "%s: Could not open the network device: %pe\n",
__func__, ERR_PTR(ret));
@ -1845,7 +1846,8 @@ static int es58x_stop(struct net_device *netdev)
es58x_flush_pending_tx_msg(netdev);
if (atomic_dec_and_test(&es58x_dev->opened_channel_cnt))
es58x_dev->opened_channel_cnt--;
if (!es58x_dev->opened_channel_cnt)
es58x_free_urbs(es58x_dev);
return 0;
@ -2215,7 +2217,6 @@ static struct es58x_device *es58x_init_es58x_dev(struct usb_interface *intf,
init_usb_anchor(&es58x_dev->tx_urbs_idle);
init_usb_anchor(&es58x_dev->tx_urbs_busy);
atomic_set(&es58x_dev->tx_urbs_idle_cnt, 0);
atomic_set(&es58x_dev->opened_channel_cnt, 0);
usb_set_intfdata(intf, es58x_dev);
es58x_dev->rx_pipe = usb_rcvbulkpipe(es58x_dev->udev,


@ -373,8 +373,6 @@ struct es58x_operators {
* queue wake/stop logic should prevent this URB from getting
* empty. Please refer to es58x_get_tx_urb() for more details.
* @tx_urbs_idle_cnt: number of urbs in @tx_urbs_idle.
* @opened_channel_cnt: number of channels opened (c.f. es58x_open()
* and es58x_stop()).
* @ktime_req_ns: kernel timestamp when es58x_set_realtime_diff_ns()
* was called.
* @realtime_diff_ns: difference in nanoseconds between the clocks of
@ -384,6 +382,10 @@ struct es58x_operators {
* in RX branches.
* @rx_max_packet_size: Maximum length of bulk-in URB.
* @num_can_ch: Number of CAN channels (i.e. number of elements of @netdev).
* @opened_channel_cnt: number of channels opened. Free of race
* conditions because its two users (net_device_ops:ndo_open()
* and net_device_ops:ndo_close()) guarantee that the network
* stack big kernel lock (a.k.a. rtnl_mutex) is held.
* @rx_cmd_buf_len: Length of @rx_cmd_buf.
* @rx_cmd_buf: The device might split the URB commands in an
* arbitrary amount of pieces. This buffer is used to concatenate
@ -406,7 +408,6 @@ struct es58x_device {
struct usb_anchor tx_urbs_busy;
struct usb_anchor tx_urbs_idle;
atomic_t tx_urbs_idle_cnt;
atomic_t opened_channel_cnt;
u64 ktime_req_ns;
s64 realtime_diff_ns;
@ -415,6 +416,7 @@ struct es58x_device {
u16 rx_max_packet_size;
u8 num_can_ch;
u8 opened_channel_cnt;
u16 rx_cmd_buf_len;
union es58x_urb_cmd rx_cmd_buf;


@ -191,8 +191,8 @@ struct gs_can {
struct gs_usb {
struct gs_can *canch[GS_MAX_INTF];
struct usb_anchor rx_submitted;
atomic_t active_channels;
struct usb_device *udev;
u8 active_channels;
};
/* 'allocate' a tx context.
@ -589,7 +589,7 @@ static int gs_can_open(struct net_device *netdev)
if (rc)
return rc;
if (atomic_add_return(1, &parent->active_channels) == 1) {
if (!parent->active_channels) {
for (i = 0; i < GS_MAX_RX_URBS; i++) {
struct urb *urb;
u8 *buf;
@ -690,6 +690,7 @@ static int gs_can_open(struct net_device *netdev)
dev->can.state = CAN_STATE_ERROR_ACTIVE;
parent->active_channels++;
if (!(dev->can.ctrlmode & CAN_CTRLMODE_LISTENONLY))
netif_start_queue(netdev);
@ -705,7 +706,8 @@ static int gs_can_close(struct net_device *netdev)
netif_stop_queue(netdev);
/* Stop polling */
if (atomic_dec_and_test(&parent->active_channels))
parent->active_channels--;
if (!parent->active_channels)
usb_kill_anchored_urbs(&parent->rx_submitted);
/* Stop sending URBs */
@ -984,8 +986,6 @@ static int gs_usb_probe(struct usb_interface *intf,
init_usb_anchor(&dev->rx_submitted);
atomic_set(&dev->active_channels, 0);
usb_set_intfdata(intf, dev);
dev->udev = interface_to_usbdev(intf);

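Both CAN fixes above (etas_es58x and gs_usb) replace an atomic counter with a plain integer, because ndo_open() and ndo_close() are already serialized by the RTNL lock. A minimal sketch of the resulting pattern, with hypothetical demo_* names standing in for the driver specifics:

/* Hypothetical sketch: ndo_open()/ndo_close() both run under
 * rtnl_lock(), so a plain counter cannot race; the atomic_t (and the
 * subtle inc-before-setup ordering it forced) can go away.
 */
static int demo_can_open(struct net_device *netdev)
{
    struct demo_usb *parent = demo_priv(netdev)->parent;
    int ret;

    if (!parent->opened_channel_cnt) {
        /* first channel: allocate the shared RX URBs */
        ret = demo_alloc_rx_urbs(parent);
        if (ret)
            return ret;
    }

    /* only count the channel once setup has succeeded */
    parent->opened_channel_cnt++;
    netif_start_queue(netdev);
    return 0;
}

static int demo_can_close(struct net_device *netdev)
{
    struct demo_usb *parent = demo_priv(netdev)->parent;

    netif_stop_queue(netdev);
    parent->opened_channel_cnt--;
    if (!parent->opened_channel_cnt)
        demo_free_urbs(parent);    /* last channel: tear down */
    return 0;
}

Counting the channel only after setup succeeds also removes the error-path decrement the old atomic version needed.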

@ -8216,7 +8216,7 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
rc = dma_set_coherent_mask(&pdev->dev, persist_dma_mask);
if (rc) {
dev_err(&pdev->dev,
"pci_set_consistent_dma_mask failed, aborting\n");
"dma_set_coherent_mask failed, aborting\n");
goto err_out_unmap;
}
} else if ((rc = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) != 0) {


@ -3613,6 +3613,8 @@ int t3_prep_adapter(struct adapter *adapter, const struct adapter_info *ai,
MAC_STATS_ACCUM_SECS : (MAC_STATS_ACCUM_SECS * 10);
adapter->params.pci.vpd_cap_addr =
pci_find_capability(adapter->pdev, PCI_CAP_ID_VPD);
if (!adapter->params.pci.vpd_cap_addr)
return -ENODEV;
ret = get_vpd_params(adapter, &adapter->params.vpd);
if (ret < 0)
return ret;


@ -2212,6 +2212,19 @@ static const char *reset_reason_to_string(enum ibmvnic_reset_reason reason)
return "UNKNOWN";
}
/*
* Initialize the init_done completion and return code values. We
* can get a transport event just after registering the CRQ and the
* tasklet will use this to communicate the transport event. To ensure
* we don't miss the notification/error, initialize these _before_
* registering the CRQ.
*/
static inline void reinit_init_done(struct ibmvnic_adapter *adapter)
{
reinit_completion(&adapter->init_done);
adapter->init_done_rc = 0;
}
/*
* do_reset returns zero if we are able to keep processing reset events, or
* non-zero if we hit a fatal error and must halt.
@ -2318,6 +2331,8 @@ static int do_reset(struct ibmvnic_adapter *adapter,
*/
adapter->state = VNIC_PROBED;
reinit_init_done(adapter);
if (adapter->reset_reason == VNIC_RESET_CHANGE_PARAM) {
rc = init_crq_queue(adapter);
} else if (adapter->reset_reason == VNIC_RESET_MOBILITY) {
@ -2461,7 +2476,8 @@ static int do_hard_reset(struct ibmvnic_adapter *adapter,
*/
adapter->state = VNIC_PROBED;
reinit_completion(&adapter->init_done);
reinit_init_done(adapter);
rc = init_crq_queue(adapter);
if (rc) {
netdev_err(adapter->netdev,
@ -2602,23 +2618,82 @@ out:
static void __ibmvnic_reset(struct work_struct *work)
{
struct ibmvnic_adapter *adapter;
bool saved_state = false;
unsigned int timeout = 5000;
struct ibmvnic_rwi *tmprwi;
bool saved_state = false;
struct ibmvnic_rwi *rwi;
unsigned long flags;
u32 reset_state;
struct device *dev;
bool need_reset;
int num_fails = 0;
u32 reset_state;
int rc = 0;
adapter = container_of(work, struct ibmvnic_adapter, ibmvnic_reset);
dev = &adapter->vdev->dev;
if (test_and_set_bit_lock(0, &adapter->resetting)) {
/* Wait for ibmvnic_probe() to complete. If probe is taking too long
* or if another reset is in progress, defer work for now. If probe
* eventually fails it will flush and terminate our work.
*
* Three possibilities here:
* 1. Adapter being removed - just return
* 2. Timed out on probe or another reset in progress - delay the work
* 3. Completed probe - perform any resets in queue
*/
if (adapter->state == VNIC_PROBING &&
!wait_for_completion_timeout(&adapter->probe_done, timeout)) {
dev_err(dev, "Reset thread timed out on probe");
queue_delayed_work(system_long_wq,
&adapter->ibmvnic_delayed_reset,
IBMVNIC_RESET_DELAY);
return;
}
/* adapter is done with probe (i.e state is never VNIC_PROBING now) */
if (adapter->state == VNIC_REMOVING)
return;
/* ->rwi_list is stable now (no one else is removing entries) */
/* ibmvnic_probe() may have purged the reset queue after we were
* scheduled to process a reset, so there may be no resets to process.
* Before setting the ->resetting bit though, we have to make sure
* that there is in fact a reset to process. Otherwise we may race
* with ibmvnic_open() and end up leaving the vnic down:
*
* __ibmvnic_reset() ibmvnic_open()
* ----------------- --------------
*
* set ->resetting bit
* find ->resetting bit is set
* set ->state to IBMVNIC_OPEN (i.e
* assume reset will open device)
* return
* find reset queue empty
* return
*
* Neither performed vnic login/open and vnic stays down
*
* If we hold the lock and conditionally set the bit, either we
* or ibmvnic_open() will complete the open.
*/
need_reset = false;
spin_lock(&adapter->rwi_lock);
if (!list_empty(&adapter->rwi_list)) {
if (test_and_set_bit_lock(0, &adapter->resetting)) {
queue_delayed_work(system_long_wq,
&adapter->ibmvnic_delayed_reset,
IBMVNIC_RESET_DELAY);
} else {
need_reset = true;
}
}
spin_unlock(&adapter->rwi_lock);
if (!need_reset)
return;
rwi = get_next_rwi(adapter);
while (rwi) {
spin_lock_irqsave(&adapter->state_lock, flags);
@ -2735,12 +2810,23 @@ static void __ibmvnic_delayed_reset(struct work_struct *work)
__ibmvnic_reset(&adapter->ibmvnic_reset);
}
static void flush_reset_queue(struct ibmvnic_adapter *adapter)
{
struct list_head *entry, *tmp_entry;
if (!list_empty(&adapter->rwi_list)) {
list_for_each_safe(entry, tmp_entry, &adapter->rwi_list) {
list_del(entry);
kfree(list_entry(entry, struct ibmvnic_rwi, list));
}
}
}
static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
enum ibmvnic_reset_reason reason)
{
struct list_head *entry, *tmp_entry;
struct ibmvnic_rwi *rwi, *tmp;
struct net_device *netdev = adapter->netdev;
struct ibmvnic_rwi *rwi, *tmp;
unsigned long flags;
int ret;
@ -2759,13 +2845,6 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
goto err;
}
if (adapter->state == VNIC_PROBING) {
netdev_warn(netdev, "Adapter reset during probe\n");
adapter->init_done_rc = -EAGAIN;
ret = EAGAIN;
goto err;
}
list_for_each_entry(tmp, &adapter->rwi_list, list) {
if (tmp->reset_reason == reason) {
netdev_dbg(netdev, "Skipping matching reset, reason=%s\n",
@ -2783,10 +2862,9 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
/* if we just received a transport event,
* flush reset queue and process this reset
*/
if (adapter->force_reset_recovery && !list_empty(&adapter->rwi_list)) {
list_for_each_safe(entry, tmp_entry, &adapter->rwi_list)
list_del(entry);
}
if (adapter->force_reset_recovery)
flush_reset_queue(adapter);
rwi->reset_reason = reason;
list_add_tail(&rwi->list, &adapter->rwi_list);
netdev_dbg(adapter->netdev, "Scheduling reset (reason %s)\n",
@ -5321,9 +5399,9 @@ static void ibmvnic_handle_crq(union ibmvnic_crq *crq,
}
if (!completion_done(&adapter->init_done)) {
complete(&adapter->init_done);
if (!adapter->init_done_rc)
adapter->init_done_rc = -EAGAIN;
complete(&adapter->init_done);
}
break;
@ -5346,6 +5424,13 @@ static void ibmvnic_handle_crq(union ibmvnic_crq *crq,
adapter->fw_done_rc = -EIO;
complete(&adapter->fw_done);
}
/* if we got here during crq-init, retry crq-init */
if (!completion_done(&adapter->init_done)) {
adapter->init_done_rc = -EAGAIN;
complete(&adapter->init_done);
}
if (!completion_done(&adapter->stats_done))
complete(&adapter->stats_done);
if (test_bit(0, &adapter->resetting))
@ -5662,10 +5747,6 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter, bool reset)
adapter->from_passive_init = false;
if (reset)
reinit_completion(&adapter->init_done);
adapter->init_done_rc = 0;
rc = ibmvnic_send_crq_init(adapter);
if (rc) {
dev_err(dev, "Send crq init failed with error %d\n", rc);
@ -5679,12 +5760,14 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter, bool reset)
if (adapter->init_done_rc) {
release_crq_queue(adapter);
dev_err(dev, "CRQ-init failed, %d\n", adapter->init_done_rc);
return adapter->init_done_rc;
}
if (adapter->from_passive_init) {
adapter->state = VNIC_OPEN;
adapter->from_passive_init = false;
dev_err(dev, "CRQ-init failed, passive-init\n");
return -EINVAL;
}
@ -5724,6 +5807,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
struct ibmvnic_adapter *adapter;
struct net_device *netdev;
unsigned char *mac_addr_p;
unsigned long flags;
bool init_success;
int rc;
@ -5768,6 +5852,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
spin_lock_init(&adapter->rwi_lock);
spin_lock_init(&adapter->state_lock);
mutex_init(&adapter->fw_lock);
init_completion(&adapter->probe_done);
init_completion(&adapter->init_done);
init_completion(&adapter->fw_done);
init_completion(&adapter->reset_done);
@ -5778,6 +5863,33 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
init_success = false;
do {
reinit_init_done(adapter);
/* clear any failovers we got in the previous pass
* since we are reinitializing the CRQ
*/
adapter->failover_pending = false;
/* If we had already initialized CRQ, we may have one or
* more resets queued already. Discard those and release
* the CRQ before initializing the CRQ again.
*/
release_crq_queue(adapter);
/* Since we are still in PROBING state, __ibmvnic_reset()
* will not access the ->rwi_list and since we released CRQ,
* we won't get _new_ transport events. But there maybe an
* ongoing ibmvnic_reset() call. So serialize access to
* rwi_list. If we win the race, ibmvnic_reset() could add
* a reset after we purged, but that's ok - we just may end
* up with an extra reset (i.e. similar to having two or more
* resets in the queue at once).
*/
spin_lock_irqsave(&adapter->rwi_lock, flags);
flush_reset_queue(adapter);
spin_unlock_irqrestore(&adapter->rwi_lock, flags);
rc = init_crq_queue(adapter);
if (rc) {
dev_err(&dev->dev, "Couldn't initialize crq. rc=%d\n",
@ -5809,12 +5921,6 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
goto ibmvnic_dev_file_err;
netif_carrier_off(netdev);
rc = register_netdev(netdev);
if (rc) {
dev_err(&dev->dev, "failed to register netdev rc=%d\n", rc);
goto ibmvnic_register_fail;
}
dev_info(&dev->dev, "ibmvnic registered\n");
if (init_success) {
adapter->state = VNIC_PROBED;
@ -5827,6 +5933,16 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
adapter->wait_for_reset = false;
adapter->last_reset_time = jiffies;
rc = register_netdev(netdev);
if (rc) {
dev_err(&dev->dev, "failed to register netdev rc=%d\n", rc);
goto ibmvnic_register_fail;
}
dev_info(&dev->dev, "ibmvnic registered\n");
complete(&adapter->probe_done);
return 0;
ibmvnic_register_fail:
@ -5841,6 +5957,17 @@ ibmvnic_stats_fail:
ibmvnic_init_fail:
release_sub_crqs(adapter, 1);
release_crq_queue(adapter);
/* cleanup worker thread after releasing CRQ so we don't get
* transport events (i.e new work items for the worker thread).
*/
adapter->state = VNIC_REMOVING;
complete(&adapter->probe_done);
flush_work(&adapter->ibmvnic_reset);
flush_delayed_work(&adapter->ibmvnic_delayed_reset);
flush_reset_queue(adapter);
mutex_destroy(&adapter->fw_lock);
free_netdev(netdev);

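The reinit_init_done() helper above encodes an ordering rule worth calling out: the completion must be re-armed before the event source that may complete it is registered. A hedged sketch of that rule, with hypothetical demo_* names:

/* Sketch (hypothetical names): re-arm the completion *before*
 * registering the event source. A transport event that fires in the
 * window between registration and a later reinit_completion() would
 * otherwise be lost, and the waiter below would time out.
 */
static int demo_start(struct demo_adapter *ad)
{
    reinit_completion(&ad->init_done);    /* arm the waiter first */
    ad->init_done_rc = 0;

    demo_register_crq(ad);    /* events may fire from here on */

    if (!wait_for_completion_timeout(&ad->init_done,
                                     msecs_to_jiffies(5000)))
        return -ETIMEDOUT;
    return ad->init_done_rc;
}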

@ -930,6 +930,7 @@ struct ibmvnic_adapter {
struct ibmvnic_tx_pool *tx_pool;
struct ibmvnic_tx_pool *tso_pool;
struct completion probe_done;
struct completion init_done;
int init_done_rc;


@ -630,6 +630,7 @@ struct e1000_phy_info {
bool disable_polarity_correction;
bool is_mdix;
bool polarity_correction;
bool reset_disable;
bool speed_downgraded;
bool autoneg_wait_to_complete;
};


@ -2050,6 +2050,10 @@ static s32 e1000_check_reset_block_ich8lan(struct e1000_hw *hw)
bool blocked = false;
int i = 0;
/* Check the PHY (LCD) reset flag */
if (hw->phy.reset_disable)
return true;
while ((blocked = !(er32(FWSM) & E1000_ICH_FWSM_RSPCIPHY)) &&
(i++ < 30))
usleep_range(10000, 11000);
@ -4136,9 +4140,9 @@ static s32 e1000_validate_nvm_checksum_ich8lan(struct e1000_hw *hw)
return ret_val;
if (!(data & valid_csum_mask)) {
e_dbg("NVM Checksum Invalid\n");
e_dbg("NVM Checksum valid bit not set\n");
if (hw->mac.type < e1000_pch_cnp) {
if (hw->mac.type < e1000_pch_tgp) {
data |= valid_csum_mask;
ret_val = e1000_write_nvm(hw, word, 1, &data);
if (ret_val)


@ -271,6 +271,7 @@
#define I217_CGFREG_ENABLE_MTA_RESET 0x0002
#define I217_MEMPWR PHY_REG(772, 26)
#define I217_MEMPWR_DISABLE_SMB_RELEASE 0x0010
#define I217_MEMPWR_MOEM 0x1000
/* Receive Address Initial CRC Calculation */
#define E1000_PCH_RAICC(_n) (0x05F50 + ((_n) * 4))


@ -6987,8 +6987,21 @@ static __maybe_unused int e1000e_pm_suspend(struct device *dev)
struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev));
struct e1000_adapter *adapter = netdev_priv(netdev);
struct pci_dev *pdev = to_pci_dev(dev);
struct e1000_hw *hw = &adapter->hw;
u16 phy_data;
int rc;
if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID &&
hw->mac.type >= e1000_pch_adp) {
/* Mask OEM Bits / Gig Disable / Restart AN (772_26[12] = 1) */
e1e_rphy(hw, I217_MEMPWR, &phy_data);
phy_data |= I217_MEMPWR_MOEM;
e1e_wphy(hw, I217_MEMPWR, phy_data);
/* Disable LCD reset */
hw->phy.reset_disable = true;
}
e1000e_flush_lpic(pdev);
e1000e_pm_freeze(dev);
@ -7010,6 +7023,8 @@ static __maybe_unused int e1000e_pm_resume(struct device *dev)
struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev));
struct e1000_adapter *adapter = netdev_priv(netdev);
struct pci_dev *pdev = to_pci_dev(dev);
struct e1000_hw *hw = &adapter->hw;
u16 phy_data;
int rc;
/* Introduce S0ix implementation */
@ -7020,6 +7035,17 @@ static __maybe_unused int e1000e_pm_resume(struct device *dev)
if (rc)
return rc;
if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID &&
hw->mac.type >= e1000_pch_adp) {
/* Unmask OEM Bits / Gig Disable / Restart AN 772_26[12] = 0 */
e1e_rphy(hw, I217_MEMPWR, &phy_data);
phy_data &= ~I217_MEMPWR_MOEM;
e1e_wphy(hw, I217_MEMPWR, phy_data);
/* Enable LCD reset */
hw->phy.reset_disable = false;
}
return e1000e_pm_thaw(dev);
}


@ -201,6 +201,10 @@ enum iavf_state_t {
__IAVF_RUNNING, /* opened, working */
};
enum iavf_critical_section_t {
__IAVF_IN_REMOVE_TASK, /* device being removed */
};
#define IAVF_CLOUD_FIELD_OMAC 0x01
#define IAVF_CLOUD_FIELD_IMAC 0x02
#define IAVF_CLOUD_FIELD_IVLAN 0x04
@ -246,7 +250,6 @@ struct iavf_adapter {
struct list_head mac_filter_list;
struct mutex crit_lock;
struct mutex client_lock;
struct mutex remove_lock;
/* Lock to protect accesses to MAC and VLAN lists */
spinlock_t mac_vlan_list_lock;
char misc_vector_name[IFNAMSIZ + 9];
@ -284,6 +287,7 @@ struct iavf_adapter {
#define IAVF_FLAG_LEGACY_RX BIT(15)
#define IAVF_FLAG_REINIT_ITR_NEEDED BIT(16)
#define IAVF_FLAG_QUEUES_DISABLED BIT(17)
#define IAVF_FLAG_SETUP_NETDEV_FEATURES BIT(18)
/* duplicates for common code */
#define IAVF_FLAG_DCB_ENABLED 0
/* flags for admin queue service task */


@ -302,8 +302,9 @@ static irqreturn_t iavf_msix_aq(int irq, void *data)
rd32(hw, IAVF_VFINT_ICR01);
rd32(hw, IAVF_VFINT_ICR0_ENA1);
/* schedule work on the private workqueue */
queue_work(iavf_wq, &adapter->adminq_task);
if (adapter->state != __IAVF_REMOVE)
/* schedule work on the private workqueue */
queue_work(iavf_wq, &adapter->adminq_task);
return IRQ_HANDLED;
}
@ -1136,8 +1137,7 @@ void iavf_down(struct iavf_adapter *adapter)
rss->state = IAVF_ADV_RSS_DEL_REQUEST;
spin_unlock_bh(&adapter->adv_rss_lock);
if (!(adapter->flags & IAVF_FLAG_PF_COMMS_FAILED) &&
adapter->state != __IAVF_RESETTING) {
if (!(adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)) {
/* cancel any current operation */
adapter->current_op = VIRTCHNL_OP_UNKNOWN;
/* Schedule operations to close down the HW. Don't wait
@ -2374,17 +2374,22 @@ static void iavf_watchdog_task(struct work_struct *work)
struct iavf_hw *hw = &adapter->hw;
u32 reg_val;
if (!mutex_trylock(&adapter->crit_lock))
if (!mutex_trylock(&adapter->crit_lock)) {
if (adapter->state == __IAVF_REMOVE)
return;
goto restart_watchdog;
}
if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
iavf_change_state(adapter, __IAVF_COMM_FAILED);
if (adapter->flags & IAVF_FLAG_RESET_NEEDED &&
adapter->state != __IAVF_RESETTING) {
iavf_change_state(adapter, __IAVF_RESETTING);
if (adapter->flags & IAVF_FLAG_RESET_NEEDED) {
adapter->aq_required = 0;
adapter->current_op = VIRTCHNL_OP_UNKNOWN;
mutex_unlock(&adapter->crit_lock);
queue_work(iavf_wq, &adapter->reset_task);
return;
}
switch (adapter->state) {
@ -2419,6 +2424,15 @@ static void iavf_watchdog_task(struct work_struct *work)
msecs_to_jiffies(1));
return;
case __IAVF_INIT_FAILED:
if (test_bit(__IAVF_IN_REMOVE_TASK,
&adapter->crit_section)) {
/* Do not update the state and do not reschedule
* watchdog task, iavf_remove should handle this state
* as it can loop forever
*/
mutex_unlock(&adapter->crit_lock);
return;
}
if (++adapter->aq_wait_count > IAVF_AQ_MAX_ERR) {
dev_err(&adapter->pdev->dev,
"Failed to communicate with PF; waiting before retry\n");
@ -2435,6 +2449,17 @@ static void iavf_watchdog_task(struct work_struct *work)
queue_delayed_work(iavf_wq, &adapter->watchdog_task, HZ);
return;
case __IAVF_COMM_FAILED:
if (test_bit(__IAVF_IN_REMOVE_TASK,
&adapter->crit_section)) {
/* Set state to __IAVF_INIT_FAILED and perform remove
* steps. Remove IAVF_FLAG_PF_COMMS_FAILED so the task
* doesn't bring the state back to __IAVF_COMM_FAILED.
*/
iavf_change_state(adapter, __IAVF_INIT_FAILED);
adapter->flags &= ~IAVF_FLAG_PF_COMMS_FAILED;
mutex_unlock(&adapter->crit_lock);
return;
}
reg_val = rd32(hw, IAVF_VFGEN_RSTAT) &
IAVF_VFGEN_RSTAT_VFR_STATE_MASK;
if (reg_val == VIRTCHNL_VFR_VFACTIVE ||
@ -2507,7 +2532,8 @@ static void iavf_watchdog_task(struct work_struct *work)
schedule_delayed_work(&adapter->client_task, msecs_to_jiffies(5));
mutex_unlock(&adapter->crit_lock);
restart_watchdog:
queue_work(iavf_wq, &adapter->adminq_task);
if (adapter->state >= __IAVF_DOWN)
queue_work(iavf_wq, &adapter->adminq_task);
if (adapter->aq_required)
queue_delayed_work(iavf_wq, &adapter->watchdog_task,
msecs_to_jiffies(20));
@ -2601,13 +2627,13 @@ static void iavf_reset_task(struct work_struct *work)
/* When device is being removed it doesn't make sense to run the reset
* task, just return in such a case.
*/
if (mutex_is_locked(&adapter->remove_lock))
return;
if (!mutex_trylock(&adapter->crit_lock)) {
if (adapter->state != __IAVF_REMOVE)
queue_work(iavf_wq, &adapter->reset_task);
if (iavf_lock_timeout(&adapter->crit_lock, 200)) {
schedule_work(&adapter->reset_task);
return;
}
while (!mutex_trylock(&adapter->client_lock))
usleep_range(500, 1000);
if (CLIENT_ENABLED(adapter)) {
@ -2662,6 +2688,7 @@ static void iavf_reset_task(struct work_struct *work)
reg_val);
iavf_disable_vf(adapter);
mutex_unlock(&adapter->client_lock);
mutex_unlock(&adapter->crit_lock);
return; /* Do not attempt to reinit. It's dead, Jim. */
}
@ -2670,8 +2697,7 @@ continue_reset:
* ndo_open() returning, so we can't assume it means all our open
* tasks have finished, since we're not holding the rtnl_lock here.
*/
running = ((adapter->state == __IAVF_RUNNING) ||
(adapter->state == __IAVF_RESETTING));
running = adapter->state == __IAVF_RUNNING;
if (running) {
netdev->flags &= ~IFF_UP;
@ -2826,13 +2852,19 @@ static void iavf_adminq_task(struct work_struct *work)
if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
goto out;
if (!mutex_trylock(&adapter->crit_lock)) {
if (adapter->state == __IAVF_REMOVE)
return;
queue_work(iavf_wq, &adapter->adminq_task);
goto out;
}
event.buf_len = IAVF_MAX_AQ_BUF_SIZE;
event.msg_buf = kzalloc(event.buf_len, GFP_KERNEL);
if (!event.msg_buf)
goto out;
if (iavf_lock_timeout(&adapter->crit_lock, 200))
goto freedom;
do {
ret = iavf_clean_arq_element(hw, &event, &pending);
v_op = (enum virtchnl_ops)le32_to_cpu(event.desc.cookie_high);
@ -2848,6 +2880,24 @@ static void iavf_adminq_task(struct work_struct *work)
} while (pending);
mutex_unlock(&adapter->crit_lock);
if ((adapter->flags & IAVF_FLAG_SETUP_NETDEV_FEATURES)) {
if (adapter->netdev_registered ||
!test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section)) {
struct net_device *netdev = adapter->netdev;
rtnl_lock();
netdev_update_features(netdev);
rtnl_unlock();
/* Request VLAN offload settings */
if (VLAN_V2_ALLOWED(adapter))
iavf_set_vlan_offload_features
(adapter, 0, netdev->features);
iavf_set_queue_vlan_tag_loc(adapter);
}
adapter->flags &= ~IAVF_FLAG_SETUP_NETDEV_FEATURES;
}
if ((adapter->flags &
(IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED)) ||
adapter->state == __IAVF_RESETTING)
@ -3800,11 +3850,12 @@ static int iavf_close(struct net_device *netdev)
struct iavf_adapter *adapter = netdev_priv(netdev);
int status;
if (adapter->state <= __IAVF_DOWN_PENDING)
return 0;
mutex_lock(&adapter->crit_lock);
while (!mutex_trylock(&adapter->crit_lock))
usleep_range(500, 1000);
if (adapter->state <= __IAVF_DOWN_PENDING) {
mutex_unlock(&adapter->crit_lock);
return 0;
}
set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
if (CLIENT_ENABLED(adapter))
@ -3853,8 +3904,11 @@ static int iavf_change_mtu(struct net_device *netdev, int new_mtu)
iavf_notify_client_l2_params(&adapter->vsi);
adapter->flags |= IAVF_FLAG_SERVICE_CLIENT_REQUESTED;
}
adapter->flags |= IAVF_FLAG_RESET_NEEDED;
queue_work(iavf_wq, &adapter->reset_task);
if (netif_running(netdev)) {
adapter->flags |= IAVF_FLAG_RESET_NEEDED;
queue_work(iavf_wq, &adapter->reset_task);
}
return 0;
}
@ -4431,7 +4485,6 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
*/
mutex_init(&adapter->crit_lock);
mutex_init(&adapter->client_lock);
mutex_init(&adapter->remove_lock);
mutex_init(&hw->aq.asq_mutex);
mutex_init(&hw->aq.arq_mutex);
@ -4547,7 +4600,6 @@ static int __maybe_unused iavf_resume(struct device *dev_d)
static void iavf_remove(struct pci_dev *pdev)
{
struct iavf_adapter *adapter = iavf_pdev_to_adapter(pdev);
enum iavf_state_t prev_state = adapter->last_state;
struct net_device *netdev = adapter->netdev;
struct iavf_fdir_fltr *fdir, *fdirtmp;
struct iavf_vlan_filter *vlf, *vlftmp;
@ -4556,14 +4608,30 @@ static void iavf_remove(struct pci_dev *pdev)
struct iavf_cloud_filter *cf, *cftmp;
struct iavf_hw *hw = &adapter->hw;
int err;
/* Indicate we are in remove and not to run reset_task */
mutex_lock(&adapter->remove_lock);
cancel_work_sync(&adapter->reset_task);
set_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section);
/* Wait until port initialization is complete.
* There are flows where register/unregister netdev may race.
*/
while (1) {
mutex_lock(&adapter->crit_lock);
if (adapter->state == __IAVF_RUNNING ||
adapter->state == __IAVF_DOWN ||
adapter->state == __IAVF_INIT_FAILED) {
mutex_unlock(&adapter->crit_lock);
break;
}
mutex_unlock(&adapter->crit_lock);
usleep_range(500, 1000);
}
cancel_delayed_work_sync(&adapter->watchdog_task);
cancel_delayed_work_sync(&adapter->client_task);
if (adapter->netdev_registered) {
unregister_netdev(netdev);
rtnl_lock();
unregister_netdevice(netdev);
adapter->netdev_registered = false;
rtnl_unlock();
}
if (CLIENT_ALLOWED(adapter)) {
err = iavf_lan_del_device(adapter);
@ -4572,6 +4640,10 @@ static void iavf_remove(struct pci_dev *pdev)
err);
}
mutex_lock(&adapter->crit_lock);
dev_info(&adapter->pdev->dev, "Remove device\n");
iavf_change_state(adapter, __IAVF_REMOVE);
iavf_request_reset(adapter);
msleep(50);
/* If the FW isn't responding, kick it once, but only once. */
@ -4579,37 +4651,24 @@ static void iavf_remove(struct pci_dev *pdev)
iavf_request_reset(adapter);
msleep(50);
}
if (iavf_lock_timeout(&adapter->crit_lock, 5000))
dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", __FUNCTION__);
dev_info(&adapter->pdev->dev, "Removing device\n");
iavf_misc_irq_disable(adapter);
/* Shut down all the garbage mashers on the detention level */
iavf_change_state(adapter, __IAVF_REMOVE);
cancel_work_sync(&adapter->reset_task);
cancel_delayed_work_sync(&adapter->watchdog_task);
cancel_work_sync(&adapter->adminq_task);
cancel_delayed_work_sync(&adapter->client_task);
adapter->aq_required = 0;
adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
iavf_free_all_tx_resources(adapter);
iavf_free_all_rx_resources(adapter);
iavf_misc_irq_disable(adapter);
iavf_free_misc_irq(adapter);
/* In case we enter iavf_remove from erroneous state, free traffic irqs
* here, so as to not cause a kernel crash, when calling
* iavf_reset_interrupt_capability.
*/
if ((adapter->last_state == __IAVF_RESETTING &&
prev_state != __IAVF_DOWN) ||
(adapter->last_state == __IAVF_RUNNING &&
!(netdev->flags & IFF_UP)))
iavf_free_traffic_irqs(adapter);
iavf_reset_interrupt_capability(adapter);
iavf_free_q_vectors(adapter);
cancel_delayed_work_sync(&adapter->watchdog_task);
cancel_work_sync(&adapter->adminq_task);
iavf_free_rss(adapter);
if (hw->aq.asq.count)
@ -4621,8 +4680,6 @@ static void iavf_remove(struct pci_dev *pdev)
mutex_destroy(&adapter->client_lock);
mutex_unlock(&adapter->crit_lock);
mutex_destroy(&adapter->crit_lock);
mutex_unlock(&adapter->remove_lock);
mutex_destroy(&adapter->remove_lock);
iounmap(hw->hw_addr);
pci_release_regions(pdev);

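A common thread in the iavf hunks above: work items must not block on crit_lock, because iavf_remove() may hold it for the rest of the device's life. A rough sketch of the trylock-and-requeue pattern they converge on, using hypothetical demo_* names:

/* Sketch (hypothetical names): a work item racing with remove uses
 * mutex_trylock() plus a state check. If remove owns the lock, bail
 * out for good; on transient contention, requeue and retry later.
 */
static void demo_task(struct work_struct *work)
{
    struct demo_adapter *ad = container_of(work, struct demo_adapter,
                                           task);

    if (!mutex_trylock(&ad->crit_lock)) {
        if (ad->state == __DEMO_REMOVE)
            return;    /* remove in progress: give up */
        queue_work(demo_wq, &ad->task);    /* retry later */
        return;
    }

    /* ... do the actual work under crit_lock ... */

    mutex_unlock(&ad->crit_lock);
}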

@ -2146,29 +2146,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
sizeof(adapter->vlan_v2_caps)));
iavf_process_config(adapter);
/* unlock crit_lock before acquiring rtnl_lock as other
* processes holding rtnl_lock could be waiting for the same
* crit_lock
*/
mutex_unlock(&adapter->crit_lock);
/* VLAN capabilities can change during VFR, so make sure to
* update the netdev features with the new capabilities
*/
rtnl_lock();
netdev_update_features(netdev);
rtnl_unlock();
if (iavf_lock_timeout(&adapter->crit_lock, 10000))
dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n",
__FUNCTION__);
/* Request VLAN offload settings */
if (VLAN_V2_ALLOWED(adapter))
iavf_set_vlan_offload_features(adapter, 0,
netdev->features);
iavf_set_queue_vlan_tag_loc(adapter);
adapter->flags |= IAVF_FLAG_SETUP_NETDEV_FEATURES;
}
break;
case VIRTCHNL_OP_ENABLE_QUEUES:


@ -746,8 +746,6 @@ s32 igc_write_phy_reg_gpy(struct igc_hw *hw, u32 offset, u16 data)
if (ret_val)
return ret_val;
ret_val = igc_write_phy_reg_mdic(hw, offset, data);
if (ret_val)
return ret_val;
hw->phy.ops.release(hw);
} else {
ret_val = igc_write_xmdio_reg(hw, (u16)offset, dev_addr,
@ -779,8 +777,6 @@ s32 igc_read_phy_reg_gpy(struct igc_hw *hw, u32 offset, u16 *data)
if (ret_val)
return ret_val;
ret_val = igc_read_phy_reg_mdic(hw, offset, data);
if (ret_val)
return ret_val;
hw->phy.ops.release(hw);
} else {
ret_val = igc_read_xmdio_reg(hw, (u16)offset, dev_addr,

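The two deleted returns above are the whole igb/igc bug: bailing out between acquire() and release() leaks the HW semaphore. A minimal sketch of the corrected shape, with hypothetical demo_* names:

/* Sketch (hypothetical names): once the HW semaphore is acquired,
 * every path must reach release(). The MDIC result is kept in
 * ret_val and returned only after the semaphore is dropped.
 */
static s32 demo_write_phy_reg(struct demo_hw *hw, u32 offset, u16 data)
{
    s32 ret_val;

    ret_val = hw->phy.ops.acquire(hw);
    if (ret_val)
        return ret_val;    /* nothing held yet: safe to return */

    ret_val = demo_write_phy_reg_mdic(hw, offset, data);
    /* no early return here, or the semaphore would leak */

    hw->phy.ops.release(hw);
    return ret_val;
}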

@ -390,12 +390,14 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
u32 cmd_type;
while (budget-- > 0) {
if (unlikely(!ixgbe_desc_unused(xdp_ring)) ||
!netif_carrier_ok(xdp_ring->netdev)) {
if (unlikely(!ixgbe_desc_unused(xdp_ring))) {
work_done = false;
break;
}
if (!netif_carrier_ok(xdp_ring->netdev))
break;
if (!xsk_tx_peek_desc(pool, &desc))
break;


@ -16,6 +16,8 @@
#include <linux/phylink.h>
#include <linux/hrtimer.h>
#include "sparx5_main_regs.h"
/* Target chip type */
enum spx5_target_chiptype {
SPX5_TARGET_CT_7546 = 0x7546, /* SparX-5-64 Enterprise */


@ -58,16 +58,6 @@ int sparx5_vlan_vid_add(struct sparx5_port *port, u16 vid, bool pvid,
struct sparx5 *sparx5 = port->sparx5;
int ret;
/* Make the port a member of the VLAN */
set_bit(port->portno, sparx5->vlan_mask[vid]);
ret = sparx5_vlant_set_mask(sparx5, vid);
if (ret)
return ret;
/* Default ingress vlan classification */
if (pvid)
port->pvid = vid;
/* Untagged egress vlan classification */
if (untagged && port->vid != vid) {
if (port->vid) {
@ -79,6 +69,16 @@ int sparx5_vlan_vid_add(struct sparx5_port *port, u16 vid, bool pvid,
port->vid = vid;
}
/* Make the port a member of the VLAN */
set_bit(port->portno, sparx5->vlan_mask[vid]);
ret = sparx5_vlant_set_mask(sparx5, vid);
if (ret)
return ret;
/* Default ingress vlan classification */
if (pvid)
port->pvid = vid;
sparx5_vlan_port_apply(sparx5, port);
return 0;


@ -2285,18 +2285,18 @@ static int __init sxgbe_cmdline_opt(char *str)
char *opt;
if (!str || !*str)
return -EINVAL;
return 1;
while ((opt = strsep(&str, ",")) != NULL) {
if (!strncmp(opt, "eee_timer:", 10)) {
if (kstrtoint(opt + 10, 0, &eee_timer))
goto err;
}
}
return 0;
return 1;
err:
pr_err("%s: ERROR broken module parameter conversion\n", __func__);
return -EINVAL;
return 1;
}
__setup("sxgbeeth=", sxgbe_cmdline_opt);

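The return-value changes above (and the identical stmmac ones later) follow the __setup() contract: a handler returns 1 when it has consumed the boot option; returning 0 (or an errno) means "not mine", and the kernel then passes the unknown option along to init. A hedged sketch with a hypothetical option name:

/* Sketch (hypothetical "demoeth=" option): always return 1 once the
 * option is recognized, even on a parse error, so "demoeth=..." is
 * not forwarded to init as an unknown boot argument.
 */
static int __init demoeth_cmdline_opt(char *str)
{
    int val;

    if (!str || !*str)
        return 1;    /* recognized, nothing to parse */

    if (kstrtoint(str, 0, &val))
        pr_err("demoeth: broken parameter conversion\n");

    return 1;    /* consumed either way */
}
__setup("demoeth=", demoeth_cmdline_opt);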

@ -163,9 +163,9 @@ static void efx_mcdi_send_request(struct efx_nic *efx, unsigned cmd,
/* Serialise with efx_mcdi_ev_cpl() and efx_mcdi_ev_death() */
spin_lock_bh(&mcdi->iface_lock);
++mcdi->seqno;
seqno = mcdi->seqno & SEQ_MASK;
spin_unlock_bh(&mcdi->iface_lock);
seqno = mcdi->seqno & SEQ_MASK;
xflags = 0;
if (mcdi->mode == MCDI_MODE_EVENTS)
xflags |= MCDI_HEADER_XFLAGS_EVREQ;


@ -2262,6 +2262,23 @@ static void stmmac_stop_tx_dma(struct stmmac_priv *priv, u32 chan)
stmmac_stop_tx(priv, priv->ioaddr, chan);
}
static void stmmac_enable_all_dma_irq(struct stmmac_priv *priv)
{
u32 rx_channels_count = priv->plat->rx_queues_to_use;
u32 tx_channels_count = priv->plat->tx_queues_to_use;
u32 dma_csr_ch = max(rx_channels_count, tx_channels_count);
u32 chan;
for (chan = 0; chan < dma_csr_ch; chan++) {
struct stmmac_channel *ch = &priv->channel[chan];
unsigned long flags;
spin_lock_irqsave(&ch->lock, flags);
stmmac_enable_dma_irq(priv, priv->ioaddr, chan, 1, 1);
spin_unlock_irqrestore(&ch->lock, flags);
}
}
/**
* stmmac_start_all_dma - start all RX and TX DMA channels
* @priv: driver private structure
@ -2904,8 +2921,10 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
stmmac_axi(priv, priv->ioaddr, priv->plat->axi);
/* DMA CSR Channel configuration */
for (chan = 0; chan < dma_csr_ch; chan++)
for (chan = 0; chan < dma_csr_ch; chan++) {
stmmac_init_chan(priv, priv->ioaddr, priv->plat->dma_cfg, chan);
stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 1, 1);
}
/* DMA RX Channel Configuration */
for (chan = 0; chan < rx_channels_count; chan++) {
@ -3761,6 +3780,7 @@ static int stmmac_open(struct net_device *dev)
stmmac_enable_all_queues(priv);
netif_tx_start_all_queues(priv->dev);
stmmac_enable_all_dma_irq(priv);
return 0;
@ -6510,8 +6530,10 @@ int stmmac_xdp_open(struct net_device *dev)
}
/* DMA CSR Channel configuration */
for (chan = 0; chan < dma_csr_ch; chan++)
for (chan = 0; chan < dma_csr_ch; chan++) {
stmmac_init_chan(priv, priv->ioaddr, priv->plat->dma_cfg, chan);
stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 1, 1);
}
/* Adjust Split header */
sph_en = (priv->hw->rx_csum > 0) && priv->sph;
@ -6572,6 +6594,7 @@ int stmmac_xdp_open(struct net_device *dev)
stmmac_enable_all_queues(priv);
netif_carrier_on(dev);
netif_tx_start_all_queues(dev);
stmmac_enable_all_dma_irq(priv);
return 0;
@ -7451,6 +7474,7 @@ int stmmac_resume(struct device *dev)
stmmac_restore_hw_vlan_rx_fltr(priv, ndev, priv->hw);
stmmac_enable_all_queues(priv);
stmmac_enable_all_dma_irq(priv);
mutex_unlock(&priv->lock);
rtnl_unlock();
@ -7467,7 +7491,7 @@ static int __init stmmac_cmdline_opt(char *str)
char *opt;
if (!str || !*str)
return -EINVAL;
return 1;
while ((opt = strsep(&str, ",")) != NULL) {
if (!strncmp(opt, "debug:", 6)) {
if (kstrtoint(opt + 6, 0, &debug))
@ -7498,11 +7522,11 @@ static int __init stmmac_cmdline_opt(char *str)
goto err;
}
}
return 0;
return 1;
err:
pr_err("%s: ERROR broken module parameter conversion", __func__);
return -EINVAL;
return 1;
}
__setup("stmmaceth=", stmmac_cmdline_opt);

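Beyond the __setup() change, the stmmac hunks reorder interrupt enabling: per-channel DMA IRQs now stay masked through channel init and are unmasked only once the queues are running. A condensed sketch of the ordering, with hypothetical demo_* names:

/* Sketch (hypothetical names): keep per-channel DMA interrupts masked
 * while channels are being initialized, and unmask them as the very
 * last step of open, so an early IRQ cannot observe a half-set-up
 * channel.
 */
static int demo_open(struct net_device *dev)
{
    struct demo_priv *priv = netdev_priv(dev);
    u32 chan;

    for (chan = 0; chan < priv->num_chans; chan++) {
        demo_init_chan(priv, chan);
        demo_disable_dma_irq(priv, chan);    /* masked during setup */
    }

    demo_enable_all_queues(priv);
    netif_tx_start_all_queues(dev);

    for (chan = 0; chan < priv->num_chans; chan++)
        demo_enable_dma_irq(priv, chan);    /* ready: unmask last */

    return 0;
}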

@ -2,7 +2,9 @@ config QCOM_IPA
tristate "Qualcomm IPA support"
depends on NET && QCOM_SMEM
depends on ARCH_QCOM || COMPILE_TEST
depends on INTERCONNECT
depends on QCOM_RPROC_COMMON || (QCOM_RPROC_COMMON=n && COMPILE_TEST)
depends on QCOM_AOSS_QMP || QCOM_AOSS_QMP=n
select QCOM_MDT_LOADER if ARCH_QCOM
select QCOM_SCM
select QCOM_QMI_HELPERS


@ -5,3 +5,4 @@ obj-$(CONFIG_IPW2200) += ipw2x00/
obj-$(CONFIG_IWLEGACY) += iwlegacy/
obj-$(CONFIG_IWLWIFI) += iwlwifi/
obj-$(CONFIG_IWLMEI) += iwlwifi/


@ -553,8 +553,7 @@ static const struct ieee80211_sband_iftype_data iwl_he_capa[] = {
.has_he = true,
.he_cap_elem = {
.mac_cap_info[0] =
IEEE80211_HE_MAC_CAP0_HTC_HE |
IEEE80211_HE_MAC_CAP0_TWT_REQ,
IEEE80211_HE_MAC_CAP0_HTC_HE,
.mac_cap_info[1] =
IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_16US |
IEEE80211_HE_MAC_CAP1_MULTI_TID_AGG_RX_QOS_8,


@ -5,6 +5,7 @@
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
#include <linux/vmalloc.h>
#include <linux/err.h>
#include <linux/ieee80211.h>
#include <linux/netdevice.h>
@ -1857,7 +1858,6 @@ void iwl_mvm_sta_add_debugfs(struct ieee80211_hw *hw,
void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm)
{
struct dentry *bcast_dir __maybe_unused;
char buf[100];
spin_lock_init(&mvm->drv_stats_lock);
@ -1939,6 +1939,11 @@ void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm)
* Create a symlink with mac80211. It will be removed when mac80211
* exists (before the opmode exists which removes the target.)
*/
snprintf(buf, 100, "../../%pd2", mvm->debugfs_dir->d_parent);
debugfs_create_symlink("iwlwifi", mvm->hw->wiphy->debugfsdir, buf);
if (!IS_ERR(mvm->debugfs_dir)) {
char buf[100];
snprintf(buf, 100, "../../%pd2", mvm->debugfs_dir->d_parent);
debugfs_create_symlink("iwlwifi", mvm->hw->wiphy->debugfsdir,
buf);
}
}


@ -226,7 +226,6 @@ static const u8 he_if_types_ext_capa_sta[] = {
[0] = WLAN_EXT_CAPA1_EXT_CHANNEL_SWITCHING,
[2] = WLAN_EXT_CAPA3_MULTI_BSSID_SUPPORT,
[7] = WLAN_EXT_CAPA8_OPMODE_NOTIF,
[9] = WLAN_EXT_CAPA10_TWT_REQUESTER_SUPPORT,
};
static const struct wiphy_iftype_ext_capab he_iftypes_ext_capa[] = {


@ -71,12 +71,13 @@ static int iwl_mvm_vendor_host_get_ownership(struct wiphy *wiphy,
{
struct ieee80211_hw *hw = wiphy_to_ieee80211_hw(wiphy);
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
int ret;
mutex_lock(&mvm->mutex);
iwl_mvm_mei_get_ownership(mvm);
ret = iwl_mvm_mei_get_ownership(mvm);
mutex_unlock(&mvm->mutex);
return 0;
return ret;
}
static const struct wiphy_vendor_command iwl_mvm_vendor_commands[] = {


@ -842,6 +842,28 @@ static int xennet_close(struct net_device *dev)
return 0;
}
static void xennet_destroy_queues(struct netfront_info *info)
{
unsigned int i;
for (i = 0; i < info->netdev->real_num_tx_queues; i++) {
struct netfront_queue *queue = &info->queues[i];
if (netif_running(info->netdev))
napi_disable(&queue->napi);
netif_napi_del(&queue->napi);
}
kfree(info->queues);
info->queues = NULL;
}
static void xennet_uninit(struct net_device *dev)
{
struct netfront_info *np = netdev_priv(dev);
xennet_destroy_queues(np);
}
static void xennet_set_rx_rsp_cons(struct netfront_queue *queue, RING_IDX val)
{
unsigned long flags;
@ -1611,6 +1633,7 @@ static int xennet_xdp(struct net_device *dev, struct netdev_bpf *xdp)
}
static const struct net_device_ops xennet_netdev_ops = {
.ndo_uninit = xennet_uninit,
.ndo_open = xennet_open,
.ndo_stop = xennet_close,
.ndo_start_xmit = xennet_start_xmit,
@ -2103,22 +2126,6 @@ error:
return err;
}
static void xennet_destroy_queues(struct netfront_info *info)
{
unsigned int i;
for (i = 0; i < info->netdev->real_num_tx_queues; i++) {
struct netfront_queue *queue = &info->queues[i];
if (netif_running(info->netdev))
napi_disable(&queue->napi);
netif_napi_del(&queue->napi);
}
kfree(info->queues);
info->queues = NULL;
}
static int xennet_create_page_pool(struct netfront_queue *queue)

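Moving the teardown into .ndo_uninit matters because unregister_netdev() resets real_num_tx_queues; a teardown loop bounded by it that runs any later walks zero queues and leaks the NAPI instances. A hedged sketch of the wiring, with hypothetical demo_* names:

/* Sketch (hypothetical names): .ndo_uninit runs from within
 * unregister_netdevice() while real_num_tx_queues still holds the
 * live queue count, so the per-queue NAPI teardown sees every queue.
 */
static void demo_uninit(struct net_device *dev)
{
    struct demo_priv *priv = netdev_priv(dev);
    unsigned int i;

    for (i = 0; i < dev->real_num_tx_queues; i++)
        netif_napi_del(&priv->queues[i].napi);

    kfree(priv->queues);
    priv->queues = NULL;
}

static const struct net_device_ops demo_netdev_ops = {
    .ndo_uninit = demo_uninit,
    /* ... */
};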

@ -607,7 +607,7 @@ ptp_ocp_settime(struct ptp_clock_info *ptp_info, const struct timespec64 *ts)
}
static void
__ptp_ocp_adjtime_locked(struct ptp_ocp *bp, u64 adj_val)
__ptp_ocp_adjtime_locked(struct ptp_ocp *bp, u32 adj_val)
{
u32 select, ctrl;
@ -615,7 +615,7 @@ __ptp_ocp_adjtime_locked(struct ptp_ocp *bp, u64 adj_val)
iowrite32(OCP_SELECT_CLK_REG, &bp->reg->select);
iowrite32(adj_val, &bp->reg->offset_ns);
iowrite32(adj_val & 0x7f, &bp->reg->offset_window_ns);
iowrite32(NSEC_PER_SEC, &bp->reg->offset_window_ns);
ctrl = OCP_CTRL_ADJUST_OFFSET | OCP_CTRL_ENABLE;
iowrite32(ctrl, &bp->reg->ctrl);
@ -624,6 +624,22 @@ __ptp_ocp_adjtime_locked(struct ptp_ocp *bp, u64 adj_val)
iowrite32(select >> 16, &bp->reg->select);
}
static void
ptp_ocp_adjtime_coarse(struct ptp_ocp *bp, u64 delta_ns)
{
struct timespec64 ts;
unsigned long flags;
int err;
spin_lock_irqsave(&bp->lock, flags);
err = __ptp_ocp_gettime_locked(bp, &ts, NULL);
if (likely(!err)) {
timespec64_add_ns(&ts, delta_ns);
__ptp_ocp_settime_locked(bp, &ts);
}
spin_unlock_irqrestore(&bp->lock, flags);
}
static int
ptp_ocp_adjtime(struct ptp_clock_info *ptp_info, s64 delta_ns)
{
@ -631,6 +647,11 @@ ptp_ocp_adjtime(struct ptp_clock_info *ptp_info, s64 delta_ns)
unsigned long flags;
u32 adj_ns, sign;
if (delta_ns > NSEC_PER_SEC || -delta_ns > NSEC_PER_SEC) {
ptp_ocp_adjtime_coarse(bp, delta_ns);
return 0;
}
sign = delta_ns < 0 ? BIT(31) : 0;
adj_ns = sign ? -delta_ns : delta_ns;


@ -101,7 +101,11 @@ static inline struct sk_buff *nf_hook_egress(struct sk_buff *skb, int *rc,
nf_hook_state_init(&state, NF_NETDEV_EGRESS,
NFPROTO_NETDEV, dev, NULL, NULL,
dev_net(dev), NULL);
/* nf assumes rcu_read_lock, not just read_lock_bh */
rcu_read_lock();
ret = nf_hook_slow(skb, &state, e, 0);
rcu_read_unlock();
if (ret == 1) {
return skb;


@ -308,6 +308,11 @@ static inline bool rfkill_blocked(struct rfkill *rfkill)
return false;
}
static inline bool rfkill_soft_blocked(struct rfkill *rfkill)
{
return false;
}
static inline enum rfkill_type rfkill_find_type(const char *name)
{
return RFKILL_TYPE_ALL;


@ -506,8 +506,7 @@ static inline struct sk_buff *bt_skb_sendmmsg(struct sock *sk,
tmp = bt_skb_sendmsg(sk, msg, len, mtu, headroom, tailroom);
if (IS_ERR(tmp)) {
kfree_skb(skb);
return tmp;
return skb;
}
len -= tmp->len;


@ -1489,6 +1489,14 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
/* Extended advertising support */
#define ext_adv_capable(dev) (((dev)->le_features[1] & HCI_LE_EXT_ADV))
/* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E page 1789:
*
* C24: Mandatory if the LE Controller supports Connection State and either
* LE Feature (LL Privacy) or LE Feature (Extended Advertising) is supported
*/
#define use_enhanced_conn_complete(dev) (ll_privacy_capable(dev) || \
ext_adv_capable(dev))
/* ----- HCI protocols ----- */
#define HCI_PROTO_DEFER 0x01


@ -475,9 +475,9 @@ int igmp6_late_init(void);
void igmp6_cleanup(void);
void igmp6_late_cleanup(void);
int igmp6_event_query(struct sk_buff *skb);
void igmp6_event_query(struct sk_buff *skb);
int igmp6_event_report(struct sk_buff *skb);
void igmp6_event_report(struct sk_buff *skb);
#ifdef CONFIG_SYSCTL


@ -96,6 +96,7 @@ enum flow_offload_xmit_type {
FLOW_OFFLOAD_XMIT_NEIGH,
FLOW_OFFLOAD_XMIT_XFRM,
FLOW_OFFLOAD_XMIT_DIRECT,
FLOW_OFFLOAD_XMIT_TC,
};
#define NF_FLOW_TABLE_ENCAP_MAX 2
@ -127,7 +128,7 @@ struct flow_offload_tuple {
struct { } __hash;
u8 dir:2,
xmit_type:2,
xmit_type:3,
encap_num:2,
in_vlan_ingress:2;
u16 mtu;
@ -142,6 +143,9 @@ struct flow_offload_tuple {
u8 h_source[ETH_ALEN];
u8 h_dest[ETH_ALEN];
} out;
struct {
u32 iifidx;
} tc;
};
};


@ -37,7 +37,7 @@ void nf_register_queue_handler(const struct nf_queue_handler *qh);
void nf_unregister_queue_handler(void);
void nf_reinject(struct nf_queue_entry *entry, unsigned int verdict);
void nf_queue_entry_get_refs(struct nf_queue_entry *entry);
bool nf_queue_entry_get_refs(struct nf_queue_entry *entry);
void nf_queue_entry_free(struct nf_queue_entry *entry);
static inline void init_hashrandom(u32 *jhash_initval)


@ -1568,7 +1568,6 @@ void xfrm_sad_getinfo(struct net *net, struct xfrmk_sadinfo *si);
void xfrm_spd_getinfo(struct net *net, struct xfrmk_spdinfo *si);
u32 xfrm_replay_seqhi(struct xfrm_state *x, __be32 net_seq);
int xfrm_init_replay(struct xfrm_state *x);
u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu);
u32 xfrm_state_mtu(struct xfrm_state *x, int mtu);
int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload);
int xfrm_init_state(struct xfrm_state *x);
@ -1681,14 +1680,15 @@ int km_migrate(const struct xfrm_selector *sel, u8 dir, u8 type,
const struct xfrm_migrate *m, int num_bundles,
const struct xfrm_kmaddress *k,
const struct xfrm_encap_tmpl *encap);
struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net);
struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net,
u32 if_id);
struct xfrm_state *xfrm_state_migrate(struct xfrm_state *x,
struct xfrm_migrate *m,
struct xfrm_encap_tmpl *encap);
int xfrm_migrate(const struct xfrm_selector *sel, u8 dir, u8 type,
struct xfrm_migrate *m, int num_bundles,
struct xfrm_kmaddress *k, struct net *net,
struct xfrm_encap_tmpl *encap);
struct xfrm_encap_tmpl *encap, u32 if_id);
#endif
int km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport);


@ -511,6 +511,12 @@ struct xfrm_user_offload {
int ifindex;
__u8 flags;
};
/* This flag was exposed without any kernel code supporting it.
* Unfortunately, strongswan has code that sets this flag,
* which makes it impossible to reuse this bit.
*
* So leave it here to make sure that it won't be reused by mistake.
*/
#define XFRM_OFFLOAD_IPV6 1
#define XFRM_OFFLOAD_INBOUND 2


@ -149,22 +149,25 @@ static bool batadv_is_on_batman_iface(const struct net_device *net_dev)
struct net *net = dev_net(net_dev);
struct net_device *parent_dev;
struct net *parent_net;
int iflink;
bool ret;
/* check if this is a batman-adv mesh interface */
if (batadv_softif_is_valid(net_dev))
return true;
/* no more parents..stop recursion */
if (dev_get_iflink(net_dev) == 0 ||
dev_get_iflink(net_dev) == net_dev->ifindex)
iflink = dev_get_iflink(net_dev);
if (iflink == 0)
return false;
parent_net = batadv_getlink_net(net_dev, net);
/* iflink to itself, most likely physical device */
if (net == parent_net && iflink == net_dev->ifindex)
return false;
/* recurse over the parent device */
parent_dev = __dev_get_by_index((struct net *)parent_net,
dev_get_iflink(net_dev));
parent_dev = __dev_get_by_index((struct net *)parent_net, iflink);
/* if we got a NULL parent_dev there is something broken.. */
if (!parent_dev) {
pr_err("Cannot find parent device\n");
@ -214,14 +217,15 @@ static struct net_device *batadv_get_real_netdevice(struct net_device *netdev)
struct net_device *real_netdev = NULL;
struct net *real_net;
struct net *net;
int ifindex;
int iflink;
ASSERT_RTNL();
if (!netdev)
return NULL;
if (netdev->ifindex == dev_get_iflink(netdev)) {
iflink = dev_get_iflink(netdev);
if (iflink == 0) {
dev_hold(netdev);
return netdev;
}
@ -231,9 +235,16 @@ static struct net_device *batadv_get_real_netdevice(struct net_device *netdev)
goto out;
net = dev_net(hard_iface->soft_iface);
ifindex = dev_get_iflink(netdev);
real_net = batadv_getlink_net(netdev, net);
real_netdev = dev_get_by_index(real_net, ifindex);
/* iflink to itself, most likely physical device */
if (net == real_net && netdev->ifindex == iflink) {
real_netdev = netdev;
dev_hold(real_netdev);
goto out;
}
real_netdev = dev_get_by_index(real_net, iflink);
out:
batadv_hardif_put(hard_iface);

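Both batman-adv hunks apply the same ToCToU discipline: dev_get_iflink() is read once into a local, and every later check and lookup uses that cached value, paired with an explicit netns comparison instead of assuming iflink indices are unique across namespaces. A reduced sketch, with a hypothetical demo_* helper:

/* Sketch (hypothetical names): read iflink exactly once; a second
 * dev_get_iflink() call could return a different value between the
 * check and the lookup.
 */
static struct net_device *demo_get_parent(struct net_device *dev)
{
    struct net *net = dev_net(dev);
    struct net *parent_net = demo_getlink_net(dev, net);
    int iflink = dev_get_iflink(dev);    /* single read */

    if (iflink == 0)
        return NULL;    /* no parent at all */

    /* iflink to itself in the same netns: physical device */
    if (net == parent_net && iflink == dev->ifindex)
        return NULL;

    return __dev_get_by_index(parent_net, iflink);
}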

@ -2738,6 +2738,7 @@ void hci_release_dev(struct hci_dev *hdev)
hci_dev_unlock(hdev);
ida_simple_remove(&hci_index_ida, hdev->id);
kfree_skb(hdev->sent_cmd);
kfree(hdev);
}
EXPORT_SYMBOL(hci_release_dev);


@ -1841,6 +1841,7 @@ static u8 hci_update_accept_list_sync(struct hci_dev *hdev)
struct bdaddr_list *b, *t;
u8 num_entries = 0;
bool pend_conn, pend_report;
u8 filter_policy;
int err;
/* Pause advertising if resolving list can be used as controllers are
@ -1927,6 +1928,8 @@ static u8 hci_update_accept_list_sync(struct hci_dev *hdev)
err = -EINVAL;
done:
filter_policy = err ? 0x00 : 0x01;
/* Enable address resolution when LL Privacy is enabled. */
err = hci_le_set_addr_resolution_enable_sync(hdev, 0x01);
if (err)
@ -1937,7 +1940,7 @@ done:
hci_resume_advertising_sync(hdev);
/* Select filter policy to use accept list */
return err ? 0x00 : 0x01;
return filter_policy;
}
/* Returns true if an le connection is in the scanning state */
@ -3262,10 +3265,10 @@ static int hci_le_set_event_mask_sync(struct hci_dev *hdev)
if (hdev->le_features[0] & HCI_LE_DATA_LEN_EXT)
events[0] |= 0x40; /* LE Data Length Change */
/* If the controller supports LL Privacy feature, enable
* the corresponding event.
/* If the controller supports LL Privacy feature or LE Extended Adv,
* enable the corresponding event.
*/
if (hdev->le_features[0] & HCI_LE_LL_PRIVACY)
if (use_enhanced_conn_complete(hdev))
events[1] |= 0x02; /* LE Enhanced Connection Complete */
/* If the controller supports Extended Scanner Filter
@ -4106,9 +4109,9 @@ int hci_dev_close_sync(struct hci_dev *hdev)
hci_inquiry_cache_flush(hdev);
hci_pend_le_actions_clear(hdev);
hci_conn_hash_flush(hdev);
hci_dev_unlock(hdev);
/* Prevent data races on hdev->smp_data or hdev->smp_bredr_data */
smp_unregister(hdev);
hci_dev_unlock(hdev);
hci_sock_dev_event(hdev, HCI_DEV_DOWN);
@ -5185,7 +5188,7 @@ int hci_le_ext_create_conn_sync(struct hci_dev *hdev, struct hci_conn *conn,
return __hci_cmd_sync_status_sk(hdev, HCI_OP_LE_EXT_CREATE_CONN,
plen, data,
HCI_EV_LE_ENHANCED_CONN_COMPLETE,
HCI_CMD_TIMEOUT, NULL);
conn->conn_timeout, NULL);
}
int hci_le_create_conn_sync(struct hci_dev *hdev, struct hci_conn *conn)
@ -5270,9 +5273,18 @@ int hci_le_create_conn_sync(struct hci_dev *hdev, struct hci_conn *conn)
cp.min_ce_len = cpu_to_le16(0x0000);
cp.max_ce_len = cpu_to_le16(0x0000);
/* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E page 2261:
*
* If this event is unmasked and the HCI_LE_Connection_Complete event
* is unmasked, only the HCI_LE_Enhanced_Connection_Complete event is
* sent when a new connection has been created.
*/
err = __hci_cmd_sync_status_sk(hdev, HCI_OP_LE_CREATE_CONN,
sizeof(cp), &cp, HCI_EV_LE_CONN_COMPLETE,
HCI_CMD_TIMEOUT, NULL);
sizeof(cp), &cp,
use_enhanced_conn_complete(hdev) ?
HCI_EV_LE_ENHANCED_CONN_COMPLETE :
HCI_EV_LE_CONN_COMPLETE,
conn->conn_timeout, NULL);
done:
/* Re-enable advertising after the connection attempt is finished. */
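
The hci_update_accept_list_sync() hunk above is worth pausing on: the old code derived the filter policy from err at the very end, after err had been overwritten by the later address-resolution and resume-advertising steps, so an unrelated failure flipped the policy. The fix latches filter_policy at the done: label, before err is reused. A small sketch of the pattern, with hypothetical step_a()/step_b() in place of the HCI calls:

#include <stdio.h>

static int step_a(void) { return 0; }   /* determines the result */
static int step_b(void) { return -1; }  /* unrelated follow-up work */

static int configure(void)
{
    int err = step_a();
    int result = err ? 0x00 : 0x01; /* derive from step_a alone */

    err = step_b();                 /* err now reflects step_b */
    if (err)
        fprintf(stderr, "step_b failed: %d\n", err);

    return result;                  /* unaffected by step_b */
}

int main(void)
{
    printf("policy: 0x%02x\n", configure());
    return 0;
}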


@ -1218,7 +1218,13 @@ static int new_settings(struct hci_dev *hdev, struct sock *skip)
static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
{
struct mgmt_pending_cmd *cmd = data;
struct mgmt_mode *cp = cmd->param;
struct mgmt_mode *cp;
/* Make sure cmd still outstanding. */
if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
return;
cp = cmd->param;
bt_dev_dbg(hdev, "err %d", err);
@ -1242,7 +1248,7 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
mgmt_status(err));
}
mgmt_pending_free(cmd);
mgmt_pending_remove(cmd);
}
static int set_powered_sync(struct hci_dev *hdev, void *data)
@ -1281,7 +1287,7 @@ static int set_powered(struct sock *sk, struct hci_dev *hdev, void *data,
goto failed;
}
cmd = mgmt_pending_new(sk, MGMT_OP_SET_POWERED, hdev, data, len);
cmd = mgmt_pending_add(sk, MGMT_OP_SET_POWERED, hdev, data, len);
if (!cmd) {
err = -ENOMEM;
goto failed;
@ -1290,6 +1296,9 @@ static int set_powered(struct sock *sk, struct hci_dev *hdev, void *data,
err = hci_cmd_sync_queue(hdev, set_powered_sync, cmd,
mgmt_set_powered_complete);
if (err < 0)
mgmt_pending_remove(cmd);
failed:
hci_dev_unlock(hdev);
return err;
@ -1383,6 +1392,10 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,
bt_dev_dbg(hdev, "err %d", err);
/* Make sure cmd still outstanding. */
if (cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
return;
hci_dev_lock(hdev);
if (err) {
@ -1402,7 +1415,7 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,
new_settings(hdev, cmd->sk);
done:
mgmt_pending_free(cmd);
mgmt_pending_remove(cmd);
hci_dev_unlock(hdev);
}
@ -1511,7 +1524,7 @@ static int set_discoverable(struct sock *sk, struct hci_dev *hdev, void *data,
goto failed;
}
cmd = mgmt_pending_new(sk, MGMT_OP_SET_DISCOVERABLE, hdev, data, len);
cmd = mgmt_pending_add(sk, MGMT_OP_SET_DISCOVERABLE, hdev, data, len);
if (!cmd) {
err = -ENOMEM;
goto failed;
@ -1538,6 +1551,9 @@ static int set_discoverable(struct sock *sk, struct hci_dev *hdev, void *data,
err = hci_cmd_sync_queue(hdev, set_discoverable_sync, cmd,
mgmt_set_discoverable_complete);
if (err < 0)
mgmt_pending_remove(cmd);
failed:
hci_dev_unlock(hdev);
return err;
@ -1550,6 +1566,10 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,
bt_dev_dbg(hdev, "err %d", err);
/* Make sure cmd still outstanding. */
if (cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
return;
hci_dev_lock(hdev);
if (err) {
@ -1562,7 +1582,9 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,
new_settings(hdev, cmd->sk);
done:
mgmt_pending_free(cmd);
if (cmd)
mgmt_pending_remove(cmd);
hci_dev_unlock(hdev);
}
@ -1634,7 +1656,7 @@ static int set_connectable(struct sock *sk, struct hci_dev *hdev, void *data,
goto failed;
}
cmd = mgmt_pending_new(sk, MGMT_OP_SET_CONNECTABLE, hdev, data, len);
cmd = mgmt_pending_add(sk, MGMT_OP_SET_CONNECTABLE, hdev, data, len);
if (!cmd) {
err = -ENOMEM;
goto failed;
@ -1654,6 +1676,9 @@ static int set_connectable(struct sock *sk, struct hci_dev *hdev, void *data,
err = hci_cmd_sync_queue(hdev, set_connectable_sync, cmd,
mgmt_set_connectable_complete);
if (err < 0)
mgmt_pending_remove(cmd);
failed:
hci_dev_unlock(hdev);
return err;
@ -1774,6 +1799,10 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
u8 enable = cp->val;
bool changed;
/* Make sure cmd still outstanding. */
if (cmd != pending_find(MGMT_OP_SET_SSP, hdev))
return;
if (err) {
u8 mgmt_err = mgmt_status(err);
@ -3321,6 +3350,9 @@ static void set_name_complete(struct hci_dev *hdev, void *data, int err)
bt_dev_dbg(hdev, "err %d", err);
if (cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
return;
if (status) {
mgmt_cmd_status(cmd->sk, hdev->id, MGMT_OP_SET_LOCAL_NAME,
status);
@ -3493,6 +3525,9 @@ static void set_default_phy_complete(struct hci_dev *hdev, void *data, int err)
struct sk_buff *skb = cmd->skb;
u8 status = mgmt_status(err);
if (cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
return;
if (!status) {
if (!skb)
status = MGMT_STATUS_FAILED;
@ -3759,13 +3794,6 @@ static int set_wideband_speech(struct sock *sk, struct hci_dev *hdev,
hci_dev_lock(hdev);
if (pending_find(MGMT_OP_SET_WIDEBAND_SPEECH, hdev)) {
err = mgmt_cmd_status(sk, hdev->id,
MGMT_OP_SET_WIDEBAND_SPEECH,
MGMT_STATUS_BUSY);
goto unlock;
}
if (hdev_is_powered(hdev) &&
!!cp->val != hci_dev_test_flag(hdev,
HCI_WIDEBAND_SPEECH_ENABLED)) {
@ -5036,12 +5064,6 @@ static int read_local_oob_data(struct sock *sk, struct hci_dev *hdev,
goto unlock;
}
if (pending_find(MGMT_OP_READ_LOCAL_OOB_DATA, hdev)) {
err = mgmt_cmd_status(sk, hdev->id, MGMT_OP_READ_LOCAL_OOB_DATA,
MGMT_STATUS_BUSY);
goto unlock;
}
cmd = mgmt_pending_new(sk, MGMT_OP_READ_LOCAL_OOB_DATA, hdev, NULL, 0);
if (!cmd)
err = -ENOMEM;
@ -5261,11 +5283,16 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err)
{
struct mgmt_pending_cmd *cmd = data;
if (cmd != pending_find(MGMT_OP_START_DISCOVERY, hdev) &&
cmd != pending_find(MGMT_OP_START_LIMITED_DISCOVERY, hdev) &&
cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev))
return;
bt_dev_dbg(hdev, "err %d", err);
mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
cmd->param, 1);
mgmt_pending_free(cmd);
mgmt_pending_remove(cmd);
hci_discovery_set_state(hdev, err ? DISCOVERY_STOPPED:
DISCOVERY_FINDING);
@ -5327,7 +5354,7 @@ static int start_discovery_internal(struct sock *sk, struct hci_dev *hdev,
else
hdev->discovery.limited = false;
cmd = mgmt_pending_new(sk, op, hdev, data, len);
cmd = mgmt_pending_add(sk, op, hdev, data, len);
if (!cmd) {
err = -ENOMEM;
goto failed;
@ -5336,7 +5363,7 @@ static int start_discovery_internal(struct sock *sk, struct hci_dev *hdev,
err = hci_cmd_sync_queue(hdev, start_discovery_sync, cmd,
start_discovery_complete);
if (err < 0) {
mgmt_pending_free(cmd);
mgmt_pending_remove(cmd);
goto failed;
}
@ -5430,7 +5457,7 @@ static int start_service_discovery(struct sock *sk, struct hci_dev *hdev,
goto failed;
}
cmd = mgmt_pending_new(sk, MGMT_OP_START_SERVICE_DISCOVERY,
cmd = mgmt_pending_add(sk, MGMT_OP_START_SERVICE_DISCOVERY,
hdev, data, len);
if (!cmd) {
err = -ENOMEM;
@ -5463,7 +5490,7 @@ static int start_service_discovery(struct sock *sk, struct hci_dev *hdev,
err = hci_cmd_sync_queue(hdev, start_discovery_sync, cmd,
start_discovery_complete);
if (err < 0) {
mgmt_pending_free(cmd);
mgmt_pending_remove(cmd);
goto failed;
}
@ -5495,11 +5522,14 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err)
{
struct mgmt_pending_cmd *cmd = data;
if (cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
return;
bt_dev_dbg(hdev, "err %d", err);
mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
cmd->param, 1);
mgmt_pending_free(cmd);
mgmt_pending_remove(cmd);
if (!err)
hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
@ -5535,7 +5565,7 @@ static int stop_discovery(struct sock *sk, struct hci_dev *hdev, void *data,
goto unlock;
}
cmd = mgmt_pending_new(sk, MGMT_OP_STOP_DISCOVERY, hdev, data, len);
cmd = mgmt_pending_add(sk, MGMT_OP_STOP_DISCOVERY, hdev, data, len);
if (!cmd) {
err = -ENOMEM;
goto unlock;
@ -5544,7 +5574,7 @@ static int stop_discovery(struct sock *sk, struct hci_dev *hdev, void *data,
err = hci_cmd_sync_queue(hdev, stop_discovery_sync, cmd,
stop_discovery_complete);
if (err < 0) {
mgmt_pending_free(cmd);
mgmt_pending_remove(cmd);
goto unlock;
}
@ -7474,6 +7504,9 @@ static void read_local_oob_ext_data_complete(struct hci_dev *hdev, void *data,
u8 status = mgmt_status(err);
u16 eir_len;
if (cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
return;
if (!status) {
if (!skb)
status = MGMT_STATUS_FAILED;
@ -7969,11 +8002,7 @@ static bool requested_adv_flags_are_valid(struct hci_dev *hdev, u32 adv_flags)
static bool adv_busy(struct hci_dev *hdev)
{
return (pending_find(MGMT_OP_ADD_ADVERTISING, hdev) ||
pending_find(MGMT_OP_REMOVE_ADVERTISING, hdev) ||
pending_find(MGMT_OP_SET_LE, hdev) ||
pending_find(MGMT_OP_ADD_EXT_ADV_PARAMS, hdev) ||
pending_find(MGMT_OP_ADD_EXT_ADV_DATA, hdev));
return pending_find(MGMT_OP_SET_LE, hdev);
}
static void add_adv_complete(struct hci_dev *hdev, struct sock *sk, u8 instance,
@ -8563,9 +8592,7 @@ static int remove_advertising(struct sock *sk, struct hci_dev *hdev,
goto unlock;
}
if (pending_find(MGMT_OP_ADD_ADVERTISING, hdev) ||
pending_find(MGMT_OP_REMOVE_ADVERTISING, hdev) ||
pending_find(MGMT_OP_SET_LE, hdev)) {
if (pending_find(MGMT_OP_SET_LE, hdev)) {
err = mgmt_cmd_status(sk, hdev->id, MGMT_OP_REMOVE_ADVERTISING,
MGMT_STATUS_BUSY);
goto unlock;
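
The mgmt.c changes above all follow one pattern: commands are queued with mgmt_pending_add() instead of mgmt_pending_new(), so pending_find() can see them, and every completion handler first checks that the command is still in the pending queue before touching it. A command cancelled elsewhere (for instance on power-off) is then never completed twice. A toy userspace analogue of that guard; the list and names are illustrative, not the mgmt API:

#include <stdio.h>
#include <stdlib.h>

struct cmd {
    int opcode;
    struct cmd *next;
};

static struct cmd *pending; /* stand-in for the mgmt pending queue */

static struct cmd *pending_find(int opcode)
{
    struct cmd *c;

    for (c = pending; c; c = c->next)
        if (c->opcode == opcode)
            return c;
    return NULL;
}

static void pending_remove(struct cmd *c)
{
    struct cmd **p = &pending;

    while (*p && *p != c)
        p = &(*p)->next;
    if (*p)
        *p = c->next; /* unlink */
}

/* Completion handler: bail out unless the command is still queued,
 * mirroring the "Make sure cmd still outstanding" checks above. */
static void complete(struct cmd *c, int opcode, int err)
{
    if (c != pending_find(opcode))
        return;
    printf("opcode %d completed, err %d\n", opcode, err);
    pending_remove(c);
}

int main(void)
{
    struct cmd *c = calloc(1, sizeof(*c));

    if (!c)
        return 1;
    c->opcode = 1;
    pending = c;
    complete(c, 1, 0);  /* completes and unlinks */
    complete(c, 1, -5); /* no longer pending: silently ignored */
    free(c);
    return 0;
}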


@ -77,11 +77,12 @@ int mgmt_send_event_skb(unsigned short channel, struct sk_buff *skb, int flag,
{
struct hci_dev *hdev;
struct mgmt_hdr *hdr;
int len = skb->len;
int len;
if (!skb)
return -EINVAL;
len = skb->len;
hdev = bt_cb(skb)->mgmt.hdev;
/* Time stamp */
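
The fix above is the classic load-before-NULL-check bug: skb->len was read in the declaration, before the !skb test. A minimal sketch of the corrected ordering:

#include <stdio.h>

struct buf {
    int len;
};

static int send_event(struct buf *b)
{
    int len;

    if (!b)
        return -1;

    len = b->len; /* safe: b is known non-NULL here */
    printf("sending %d bytes\n", len);
    return 0;
}

int main(void)
{
    struct buf b = { .len = 42 };

    send_event(&b);
    send_event(NULL); /* rejected instead of dereferenced */
    return 0;
}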


@ -3876,6 +3876,7 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
list_skb = list_skb->next;
err = 0;
delta_truesize += nskb->truesize;
if (skb_shared(nskb)) {
tmp = skb_clone(nskb, GFP_ATOMIC);
if (tmp) {
@ -3900,7 +3901,6 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
tail = nskb;
delta_len += nskb->len;
delta_truesize += nskb->truesize;
skb_push(nskb, -skb_network_offset(nskb) + offset);


@ -1153,7 +1153,7 @@ static int sk_psock_verdict_recv(read_descriptor_t *desc, struct sk_buff *skb,
struct sk_psock *psock;
struct bpf_prog *prog;
int ret = __SK_DROP;
int len = skb->len;
int len = orig_len;
/* clone here so sk_eat_skb() in tcp_read_sock does not drop our data */
skb = skb_clone(skb, GFP_ATOMIC);


@ -2073,8 +2073,52 @@ u8 dcb_ieee_getapp_default_prio_mask(const struct net_device *dev)
}
EXPORT_SYMBOL(dcb_ieee_getapp_default_prio_mask);
static void dcbnl_flush_dev(struct net_device *dev)
{
struct dcb_app_type *itr, *tmp;
spin_lock_bh(&dcb_lock);
list_for_each_entry_safe(itr, tmp, &dcb_app_list, list) {
if (itr->ifindex == dev->ifindex) {
list_del(&itr->list);
kfree(itr);
}
}
spin_unlock_bh(&dcb_lock);
}
static int dcbnl_netdevice_event(struct notifier_block *nb,
unsigned long event, void *ptr)
{
struct net_device *dev = netdev_notifier_info_to_dev(ptr);
switch (event) {
case NETDEV_UNREGISTER:
if (!dev->dcbnl_ops)
return NOTIFY_DONE;
dcbnl_flush_dev(dev);
return NOTIFY_OK;
default:
return NOTIFY_DONE;
}
}
static struct notifier_block dcbnl_nb __read_mostly = {
.notifier_call = dcbnl_netdevice_event,
};
static int __init dcbnl_init(void)
{
int err;
err = register_netdevice_notifier(&dcbnl_nb);
if (err)
return err;
rtnl_register(PF_UNSPEC, RTM_GETDCB, dcb_doit, NULL, 0);
rtnl_register(PF_UNSPEC, RTM_SETDCB, dcb_doit, NULL, 0);


@ -1261,7 +1261,7 @@ int dsa_tree_change_tag_proto(struct dsa_switch_tree *dst,
info.tag_ops = tag_ops;
err = dsa_tree_notify(dst, DSA_NOTIFIER_TAG_PROTO, &info);
if (err)
return err;
goto out_unwind_tagger;
err = dsa_tree_bind_tag_proto(dst, tag_ops);
if (err)


@ -671,7 +671,7 @@ static int esp_output(struct xfrm_state *x, struct sk_buff *skb)
struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb);
u32 padto;
padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached));
padto = min(x->tfcpad, xfrm_state_mtu(x, dst->child_mtu_cached));
if (skb->len < padto)
esp.tfclen = padto - skb->len;
}


@ -1684,11 +1684,13 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
if (!copied)
copied = used;
break;
} else if (used <= len) {
seq += used;
copied += used;
offset += used;
}
if (WARN_ON_ONCE(used > len))
used = len;
seq += used;
copied += used;
offset += used;
/* If recv_actor drops the lock (e.g. TCP splice
* receive) the skb pointer might be invalid when
* getting here: tcp_collapse might have deleted it
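
The tcp_read_sock() change above stops trusting recv_actor to report a sane byte count: instead of only advancing when used <= len (and silently stalling otherwise), it warns once and clamps, so a buggy actor can no longer wedge the loop. A userspace sketch of the same defensive read loop; actor() deliberately overreports:

#include <stdio.h>

static int actor(const char *data, int len)
{
    (void)data;
    return len + 10; /* buggy callback: claims more than it was given */
}

int main(void)
{
    const char buf[] = "0123456789";
    int len = (int)sizeof(buf) - 1;
    int off = 0, copied = 0;

    while (off < len) {
        int used = actor(buf + off, len - off);

        if (used <= 0)
            break;
        if (used > len - off) { /* the WARN_ON_ONCE + clamp path */
            fprintf(stderr, "actor overreported: %d > %d\n",
                    used, len - off);
            used = len - off;
        }
        off += used;
        copied += used;
    }
    printf("copied %d of %d\n", copied, len);
    return 0;
}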


@ -3732,6 +3732,7 @@ static int addrconf_ifdown(struct net_device *dev, bool unregister)
struct inet6_dev *idev;
struct inet6_ifaddr *ifa, *tmp;
bool keep_addr = false;
bool was_ready;
int state, i;
ASSERT_RTNL();
@ -3797,7 +3798,10 @@ restart:
addrconf_del_rs_timer(idev);
/* Step 2: clear flags for stateless addrconf */
/* Step 2: clear flags for stateless addrconf, repeated down
* detection
*/
was_ready = idev->if_flags & IF_READY;
if (!unregister)
idev->if_flags &= ~(IF_RS_SENT|IF_RA_RCVD|IF_READY);
@ -3871,7 +3875,7 @@ restart:
if (unregister) {
ipv6_ac_destroy_dev(idev);
ipv6_mc_destroy_dev(idev);
} else {
} else if (was_ready) {
ipv6_mc_down(idev);
}
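
The addrconf_ifdown() hunk snapshots IF_READY into was_ready before clearing the flags, and only calls ipv6_mc_down() if the interface really was up, so a repeated down event can no longer unbalance the multicast accounting. The shape of that guard, reduced to a few lines:

#include <stdbool.h>
#include <stdio.h>

static bool ready;

static void mc_down(void)
{
    printf("multicast torn down\n");
}

static void ifdown(void)
{
    bool was_ready = ready; /* snapshot before clearing */

    ready = false;
    if (was_ready)          /* tear down at most once per up/down */
        mc_down();
}

int main(void)
{
    ready = true;
    ifdown(); /* tears down */
    ifdown(); /* repeated down: no-op */
    return 0;
}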


@ -707,7 +707,7 @@ static int esp6_output(struct xfrm_state *x, struct sk_buff *skb)
struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb);
u32 padto;
padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached));
padto = min(x->tfcpad, xfrm_state_mtu(x, dst->child_mtu_cached));
if (skb->len < padto)
esp.tfclen = padto - skb->len;
}


@ -1408,8 +1408,6 @@ static int ip6_setup_cork(struct sock *sk, struct inet_cork_full *cork,
if (np->frag_size)
mtu = np->frag_size;
}
if (mtu < IPV6_MIN_MTU)
return -EINVAL;
cork->base.fragsize = mtu;
cork->base.gso_size = ipc6->gso_size;
cork->base.tx_flags = 0;
@ -1471,8 +1469,6 @@ static int __ip6_append_data(struct sock *sk,
fragheaderlen = sizeof(struct ipv6hdr) + rt->rt6i_nfheader_len +
(opt ? opt->opt_nflen : 0);
maxfraglen = ((mtu - fragheaderlen) & ~7) + fragheaderlen -
sizeof(struct frag_hdr);
headersize = sizeof(struct ipv6hdr) +
(opt ? opt->opt_flen + opt->opt_nflen : 0) +
@ -1480,6 +1476,13 @@ static int __ip6_append_data(struct sock *sk,
sizeof(struct frag_hdr) : 0) +
rt->rt6i_nfheader_len;
if (mtu < fragheaderlen ||
((mtu - fragheaderlen) & ~7) + fragheaderlen < sizeof(struct frag_hdr))
goto emsgsize;
maxfraglen = ((mtu - fragheaderlen) & ~7) + fragheaderlen -
sizeof(struct frag_hdr);
/* as per RFC 7112 section 5, the entire IPv6 Header Chain must fit
* the first fragment
*/


@ -1371,27 +1371,23 @@ static void mld_process_v2(struct inet6_dev *idev, struct mld2_query *mld,
}
/* called with rcu_read_lock() */
int igmp6_event_query(struct sk_buff *skb)
void igmp6_event_query(struct sk_buff *skb)
{
struct inet6_dev *idev = __in6_dev_get(skb->dev);
if (!idev)
return -EINVAL;
if (idev->dead) {
kfree_skb(skb);
return -ENODEV;
}
if (!idev || idev->dead)
goto out;
spin_lock_bh(&idev->mc_query_lock);
if (skb_queue_len(&idev->mc_query_queue) < MLD_MAX_SKBS) {
__skb_queue_tail(&idev->mc_query_queue, skb);
if (!mod_delayed_work(mld_wq, &idev->mc_query_work, 0))
in6_dev_hold(idev);
skb = NULL;
}
spin_unlock_bh(&idev->mc_query_lock);
return 0;
out:
kfree_skb(skb);
}
static void __mld_query_work(struct sk_buff *skb)
@ -1542,27 +1538,23 @@ static void mld_query_work(struct work_struct *work)
}
/* called with rcu_read_lock() */
int igmp6_event_report(struct sk_buff *skb)
void igmp6_event_report(struct sk_buff *skb)
{
struct inet6_dev *idev = __in6_dev_get(skb->dev);
if (!idev)
return -EINVAL;
if (idev->dead) {
kfree_skb(skb);
return -ENODEV;
}
if (!idev || idev->dead)
goto out;
spin_lock_bh(&idev->mc_report_lock);
if (skb_queue_len(&idev->mc_report_queue) < MLD_MAX_SKBS) {
__skb_queue_tail(&idev->mc_report_queue, skb);
if (!mod_delayed_work(mld_wq, &idev->mc_report_work, 0))
in6_dev_hold(idev);
skb = NULL;
}
spin_unlock_bh(&idev->mc_report_lock);
return 0;
out:
kfree_skb(skb);
}
static void __mld_report_work(struct sk_buff *skb)
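
Both MLD handlers above become void and follow one ownership rule: if the skb fits under the MLD_MAX_SKBS bound it is queued and the local pointer is NULLed, otherwise it falls through to a single kfree_skb(). That bounds memory when the input queues back up and makes every path free the buffer exactly once. A userspace analogue of the bounded enqueue-or-drop, with illustrative names:

#include <stdio.h>
#include <stdlib.h>

#define MAX_ITEMS 4 /* stand-in for MLD_MAX_SKBS */

static void *queue[MAX_ITEMS];
static int queued;

static void event(void *item)
{
    if (queued < MAX_ITEMS) {
        queue[queued++] = item;
        item = NULL;     /* ownership transferred to the queue */
    }

    free(item);          /* no-op when queued, drop when full */
}

int main(void)
{
    int i;

    for (i = 0; i < 6; i++)
        event(malloc(16)); /* the last two are dropped */
    printf("%d of 6 queued\n", queued);
    while (queued > 0)
        free(queue[--queued]);
    return 0;
}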


@ -2623,7 +2623,7 @@ static int pfkey_migrate(struct sock *sk, struct sk_buff *skb,
}
return xfrm_migrate(&sel, dir, XFRM_POLICY_TYPE_MAIN, m, i,
kma ? &k : NULL, net, NULL);
kma ? &k : NULL, net, NULL, 0);
out:
return err;


@ -9,7 +9,7 @@
* Copyright 2007, Michael Wu <flamingice@sourmilk.net>
* Copyright 2007-2010, Intel Corporation
* Copyright(c) 2015-2017 Intel Deutschland GmbH
* Copyright (C) 2018 - 2021 Intel Corporation
* Copyright (C) 2018 - 2022 Intel Corporation
*/
#include <linux/ieee80211.h>
@ -626,6 +626,14 @@ int ieee80211_start_tx_ba_session(struct ieee80211_sta *pubsta, u16 tid,
return -EINVAL;
}
if (test_sta_flag(sta, WLAN_STA_MFP) &&
!test_sta_flag(sta, WLAN_STA_AUTHORIZED)) {
ht_dbg(sdata,
"MFP STA not authorized - deny BA session request %pM tid %d\n",
sta->sta.addr, tid);
return -EINVAL;
}
/*
* 802.11n-2009 11.5.1.1: If the initiating STA is an HT STA, is a
* member of an IBSS, and has no other existing Block Ack agreement


@ -376,7 +376,7 @@ struct ieee80211_mgd_auth_data {
u8 key[WLAN_KEY_LEN_WEP104];
u8 key_len, key_idx;
bool done;
bool done, waiting;
bool peer_confirmed;
bool timeout_started;


@ -37,6 +37,7 @@
#define IEEE80211_AUTH_TIMEOUT_SAE (HZ * 2)
#define IEEE80211_AUTH_MAX_TRIES 3
#define IEEE80211_AUTH_WAIT_ASSOC (HZ * 5)
#define IEEE80211_AUTH_WAIT_SAE_RETRY (HZ * 2)
#define IEEE80211_ASSOC_TIMEOUT (HZ / 5)
#define IEEE80211_ASSOC_TIMEOUT_LONG (HZ / 2)
#define IEEE80211_ASSOC_TIMEOUT_SHORT (HZ / 10)
@ -3011,8 +3012,15 @@ static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
(status_code == WLAN_STATUS_ANTI_CLOG_REQUIRED ||
(auth_transaction == 1 &&
(status_code == WLAN_STATUS_SAE_HASH_TO_ELEMENT ||
status_code == WLAN_STATUS_SAE_PK))))
status_code == WLAN_STATUS_SAE_PK)))) {
/* waiting for userspace now */
ifmgd->auth_data->waiting = true;
ifmgd->auth_data->timeout =
jiffies + IEEE80211_AUTH_WAIT_SAE_RETRY;
ifmgd->auth_data->timeout_started = true;
run_again(sdata, ifmgd->auth_data->timeout);
goto notify_driver;
}
sdata_info(sdata, "%pM denied authentication (status %d)\n",
mgmt->sa, status_code);
@ -4603,10 +4611,10 @@ void ieee80211_sta_work(struct ieee80211_sub_if_data *sdata)
if (ifmgd->auth_data && ifmgd->auth_data->timeout_started &&
time_after(jiffies, ifmgd->auth_data->timeout)) {
- if (ifmgd->auth_data->done) {
+ if (ifmgd->auth_data->done || ifmgd->auth_data->waiting) {
/*
- * ok ... we waited for assoc but userspace didn't,
- * so let's just kill the auth data
+ * ok ... we waited for assoc or continuation but
+ * userspace didn't do it, so kill the auth data
*/
ieee80211_destroy_auth_data(sdata, false);
} else if (ieee80211_auth(sdata)) {


@ -2607,7 +2607,8 @@ static void ieee80211_deliver_skb_to_local_stack(struct sk_buff *skb,
* address, so that the authenticator (e.g. hostapd) will see
* the frame, but bridge won't forward it anywhere else. Note
* that due to earlier filtering, the only other address can
- * be the PAE group address.
+ * be the PAE group address, unless the hardware allowed them
+ * through in 802.3 offloaded mode.
*/
if (unlikely(skb->protocol == sdata->control_port_protocol &&
!ether_addr_equal(ehdr->h_dest, sdata->vif.addr)))
@ -2922,13 +2923,13 @@ ieee80211_rx_h_mesh_fwding(struct ieee80211_rx_data *rx)
ether_addr_equal(sdata->vif.addr, hdr->addr3))
return RX_CONTINUE;
ac = ieee80211_select_queue_80211(sdata, skb, hdr);
ac = ieee802_1d_to_ac[skb->priority];
q = sdata->vif.hw_queue[ac];
if (ieee80211_queue_stopped(&local->hw, q)) {
IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, dropped_frames_congestion);
return RX_DROP_MONITOR;
}
skb_set_queue_mapping(skb, q);
skb_set_queue_mapping(skb, ac);
if (!--mesh_hdr->ttl) {
if (!is_multicast_ether_addr(hdr->addr1))
@ -4514,12 +4515,7 @@ static void ieee80211_rx_8023(struct ieee80211_rx_data *rx,
/* deliver to local stack */
skb->protocol = eth_type_trans(skb, fast_rx->dev);
memset(skb->cb, 0, sizeof(skb->cb));
if (rx->list)
list_add_tail(&skb->list, rx->list);
else
netif_receive_skb(skb);
ieee80211_deliver_skb_to_local_stack(skb, rx);
}
static bool ieee80211_invoke_fast_rx(struct ieee80211_rx_data *rx,


@ -466,9 +466,12 @@ static bool mptcp_pending_data_fin(struct sock *sk, u64 *seq)
static void mptcp_set_datafin_timeout(const struct sock *sk)
{
struct inet_connection_sock *icsk = inet_csk(sk);
u32 retransmits;
mptcp_sk(sk)->timer_ival = min(TCP_RTO_MAX,
TCP_RTO_MIN << icsk->icsk_retransmits);
retransmits = min_t(u32, icsk->icsk_retransmits,
ilog2(TCP_RTO_MAX / TCP_RTO_MIN));
mptcp_sk(sk)->timer_ival = TCP_RTO_MIN << retransmits;
}
static void __mptcp_set_timeout(struct sock *sk, long tout)
@ -3294,6 +3297,17 @@ static int mptcp_ioctl_outq(const struct mptcp_sock *msk, u64 v)
return 0;
delta = msk->write_seq - v;
if (__mptcp_check_fallback(msk) && msk->first) {
struct tcp_sock *tp = tcp_sk(msk->first);
/* the first subflow is disconnected after close - see
* __mptcp_close_ssk(). tcp_disconnect() moves the write_seq
* so ignore that status, too.
*/
if (!((1 << msk->first->sk_state) &
(TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_CLOSE)))
delta += READ_ONCE(tp->write_seq) - tp->snd_una;
}
if (delta > INT_MAX)
delta = INT_MAX;
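
The mptcp_set_datafin_timeout() fix clamps the shift count instead of the shifted result: once icsk_retransmits reaches the width of the type, TCP_RTO_MIN << retransmits is undefined, so the old min(TCP_RTO_MAX, ...) on the result could not save it. Clamping the exponent to ilog2(TCP_RTO_MAX / TCP_RTO_MIN) keeps the shift defined and the timeout at or below TCP_RTO_MAX. A worked userspace example, assuming HZ = 1000 so RTO_MIN is 200 ms and RTO_MAX 120 s (illustrative values, not the kernel constants):

#include <stdio.h>

#define HZ      1000
#define RTO_MIN (HZ / 5)    /* 200 ms */
#define RTO_MAX (120 * HZ)  /* 120 s */

static unsigned int ilog2(unsigned int v)
{
    unsigned int r = 0;

    while (v >>= 1)
        r++;
    return r;
}

static unsigned int datafin_timeout(unsigned int retransmits)
{
    unsigned int max_shift = ilog2(RTO_MAX / RTO_MIN); /* 9 here */

    if (retransmits > max_shift) /* clamp the exponent, not the result */
        retransmits = max_shift;
    return RTO_MIN << retransmits;
}

int main(void)
{
    unsigned int r;

    for (r = 0; r <= 40; r += 8)
        printf("retransmits=%2u -> timeout=%u ms\n",
               r, datafin_timeout(r));
    return 0;
}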


@ -428,14 +428,15 @@ static int __nf_register_net_hook(struct net *net, int pf,
p = nf_entry_dereference(*pp);
new_hooks = nf_hook_entries_grow(p, reg);
if (!IS_ERR(new_hooks))
if (!IS_ERR(new_hooks)) {
hooks_validate(new_hooks);
rcu_assign_pointer(*pp, new_hooks);
}
mutex_unlock(&nf_hook_mutex);
if (IS_ERR(new_hooks))
return PTR_ERR(new_hooks);
hooks_validate(new_hooks);
#ifdef CONFIG_NETFILTER_INGRESS
if (nf_ingress_hook(reg, pf))
net_inc_ingress_queue();
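
The core.c reorder above calls hooks_validate() on the new table while it is still private, under nf_hook_mutex and before rcu_assign_pointer() publishes it; in the old order validation ran after the unlock, by which point, as the fix's reasoning goes, a concurrent update could already have replaced and freed the freshly published table. A loose userspace sketch of validate-then-publish, using a C11 release store where the kernel uses RCU:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct hooks {
    int n;
};

static _Atomic(struct hooks *) active;

static void validate(struct hooks *h)
{
    if (h->n < 0) /* stand-in for hooks_validate() sanity checks */
        abort();
}

static void install(struct hooks *h)
{
    validate(h); /* still private: nobody else can free it yet */
    atomic_store_explicit(&active, h, memory_order_release);
}

int main(void)
{
    struct hooks *h = malloc(sizeof(*h));

    if (!h)
        return 1;
    h->n = 1;
    install(h);
    printf("active hooks: %d\n", atomic_load(&active)->n);
    free(h);
    return 0;
}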


@ -110,7 +110,11 @@ static int nf_flow_rule_match(struct nf_flow_match *match,
nf_flow_rule_lwt_match(match, tun_info);
}
key->meta.ingress_ifindex = tuple->iifidx;
if (tuple->xmit_type == FLOW_OFFLOAD_XMIT_TC)
key->meta.ingress_ifindex = tuple->tc.iifidx;
else
key->meta.ingress_ifindex = tuple->iifidx;
mask->meta.ingress_ifindex = 0xffffffff;
if (tuple->encap_num > 0 && !(tuple->in_vlan_ingress & BIT(0)) &&


@ -46,6 +46,15 @@ void nf_unregister_queue_handler(void)
}
EXPORT_SYMBOL(nf_unregister_queue_handler);
static void nf_queue_sock_put(struct sock *sk)
{
#ifdef CONFIG_INET
sock_gen_put(sk);
#else
sock_put(sk);
#endif
}
static void nf_queue_entry_release_refs(struct nf_queue_entry *entry)
{
struct nf_hook_state *state = &entry->state;
@ -54,7 +63,7 @@ static void nf_queue_entry_release_refs(struct nf_queue_entry *entry)
dev_put(state->in);
dev_put(state->out);
if (state->sk)
sock_put(state->sk);
nf_queue_sock_put(state->sk);
#if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
dev_put(entry->physin);
@ -87,19 +96,21 @@ static void __nf_queue_entry_init_physdevs(struct nf_queue_entry *entry)
}
/* Bump dev refs so they don't vanish while packet is out */
void nf_queue_entry_get_refs(struct nf_queue_entry *entry)
bool nf_queue_entry_get_refs(struct nf_queue_entry *entry)
{
struct nf_hook_state *state = &entry->state;
if (state->sk && !refcount_inc_not_zero(&state->sk->sk_refcnt))
return false;
dev_hold(state->in);
dev_hold(state->out);
if (state->sk)
sock_hold(state->sk);
#if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
dev_hold(entry->physin);
dev_hold(entry->physout);
#endif
return true;
}
EXPORT_SYMBOL_GPL(nf_queue_entry_get_refs);
@ -169,6 +180,18 @@ static int __nf_queue(struct sk_buff *skb, const struct nf_hook_state *state,
break;
}
if (skb_sk_is_prefetched(skb)) {
struct sock *sk = skb->sk;
if (!sk_is_refcounted(sk)) {
if (!refcount_inc_not_zero(&sk->sk_refcnt))
return -ENOTCONN;
/* drop refcount on skb_orphan */
skb->destructor = sock_edemux;
}
}
entry = kmalloc(sizeof(*entry) + route_key_size, GFP_ATOMIC);
if (!entry)
return -ENOMEM;
@ -187,7 +210,10 @@ static int __nf_queue(struct sk_buff *skb, const struct nf_hook_state *state,
__nf_queue_entry_init_physdevs(entry);
nf_queue_entry_get_refs(entry);
if (!nf_queue_entry_get_refs(entry)) {
kfree(entry);
return -ENOTCONN;
}
switch (entry->state.pf) {
case AF_INET:
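
nf_queue_entry_get_refs() turning bool is the heart of this series: a socket reference may only be taken while its refcount is still non-zero, otherwise the packet must not be queued at all (the -ENOTCONN paths above). A userspace approximation of refcount_inc_not_zero() with C11 atomics:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static bool get_ref_not_zero(atomic_int *refs)
{
    int old = atomic_load(refs);

    do {
        if (old == 0)
            return false; /* object already on its way to free */
    } while (!atomic_compare_exchange_weak(refs, &old, old + 1));

    return true; /* reference taken; pair with a later put */
}

int main(void)
{
    atomic_int live = 1, dead = 0;

    printf("live: %s\n", get_ref_not_zero(&live) ? "ref taken" : "refused");
    printf("dead: %s\n", get_ref_not_zero(&dead) ? "ref taken" : "refused");
    return 0;
}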


@ -4502,7 +4502,7 @@ static void nft_set_catchall_destroy(const struct nft_ctx *ctx,
list_for_each_entry_safe(catchall, next, &set->catchall_list, list) {
list_del_rcu(&catchall->list);
nft_set_elem_destroy(set, catchall->elem, true);
kfree_rcu(catchall);
kfree_rcu(catchall, rcu);
}
}
@ -5669,7 +5669,7 @@ static void nft_setelem_catchall_remove(const struct net *net,
list_for_each_entry_safe(catchall, next, &set->catchall_list, list) {
if (catchall->elem == elem->priv) {
list_del_rcu(&catchall->list);
kfree_rcu(catchall);
kfree_rcu(catchall, rcu);
break;
}
}


@ -710,9 +710,15 @@ static struct nf_queue_entry *
nf_queue_entry_dup(struct nf_queue_entry *e)
{
struct nf_queue_entry *entry = kmemdup(e, e->size, GFP_ATOMIC);
if (entry)
nf_queue_entry_get_refs(entry);
return entry;
if (!entry)
return NULL;
if (nf_queue_entry_get_refs(entry))
return entry;
kfree(entry);
return NULL;
}
#if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)


@ -361,6 +361,13 @@ static void tcf_ct_flow_table_put(struct tcf_ct_params *params)
}
}
static void tcf_ct_flow_tc_ifidx(struct flow_offload *entry,
struct nf_conn_act_ct_ext *act_ct_ext, u8 dir)
{
entry->tuplehash[dir].tuple.xmit_type = FLOW_OFFLOAD_XMIT_TC;
entry->tuplehash[dir].tuple.tc.iifidx = act_ct_ext->ifindex[dir];
}
static void tcf_ct_flow_table_add(struct tcf_ct_flow_table *ct_ft,
struct nf_conn *ct,
bool tcp)
@ -385,10 +392,8 @@ static void tcf_ct_flow_table_add(struct tcf_ct_flow_table *ct_ft,
act_ct_ext = nf_conn_act_ct_ext_find(ct);
if (act_ct_ext) {
entry->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.iifidx =
act_ct_ext->ifindex[IP_CT_DIR_ORIGINAL];
entry->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.iifidx =
act_ct_ext->ifindex[IP_CT_DIR_REPLY];
tcf_ct_flow_tc_ifidx(entry, act_ct_ext, FLOW_OFFLOAD_DIR_ORIGINAL);
tcf_ct_flow_tc_ifidx(entry, act_ct_ext, FLOW_OFFLOAD_DIR_REPLY);
}
err = flow_offload_add(&ct_ft->nf_ft, entry);


@ -183,7 +183,7 @@ static int smc_release(struct socket *sock)
{
struct sock *sk = sock->sk;
struct smc_sock *smc;
int rc = 0;
int old_state, rc = 0;
if (!sk)
goto out;
@ -191,8 +191,10 @@ static int smc_release(struct socket *sock)
sock_hold(sk); /* sock_put below */
smc = smc_sk(sk);
old_state = sk->sk_state;
/* cleanup for a dangling non-blocking connect */
if (smc->connect_nonblock && sk->sk_state == SMC_INIT)
if (smc->connect_nonblock && old_state == SMC_INIT)
tcp_abort(smc->clcsock->sk, ECONNABORTED);
if (cancel_work_sync(&smc->connect_work))
@ -206,6 +208,10 @@ static int smc_release(struct socket *sock)
else
lock_sock(sk);
if (old_state == SMC_INIT && sk->sk_state == SMC_ACTIVE &&
!smc->use_fallback)
smc_close_active_abort(smc);
rc = __smc_release(smc);
/* detach socket */
@ -3081,12 +3087,14 @@ static int __init smc_init(void)
rc = tcp_register_ulp(&smc_ulp_ops);
if (rc) {
pr_err("%s: tcp_ulp_register fails with %d\n", __func__, rc);
goto out_sock;
goto out_ib;
}
static_branch_enable(&tcp_have_smc);
return 0;
out_ib:
smc_ib_unregister_client();
out_sock:
sock_unregister(PF_SMC);
out_proto6:
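
The smc_init() fix adds the missing out_ib unwind step: when tcp_register_ulp() fails, everything registered before it (including the IB client) must be torn down in reverse order, and jumping straight to out_sock skipped one step. The goto-ladder shape, reduced to stubs:

#include <stdio.h>

static int reg_a(void) { return 0; }
static int reg_b(void) { return 0; }
static int reg_c(void) { return -1; } /* the step that fails */
static void unreg_b(void) { puts("unreg b"); }
static void unreg_a(void) { puts("unreg a"); }

static int init(void)
{
    int err;

    err = reg_a();
    if (err)
        return err;
    err = reg_b();
    if (err)
        goto out_a;
    err = reg_c();
    if (err)
        goto out_b; /* unwinds b, then a: reverse of setup order */
    return 0;

out_b:
    unreg_b();
out_a:
    unreg_a();
    return err;
}

int main(void)
{
    printf("init: %d\n", init());
    return 0;
}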


@ -1161,8 +1161,8 @@ void smc_conn_free(struct smc_connection *conn)
cancel_work_sync(&conn->abort_work);
}
if (!list_empty(&lgr->list)) {
smc_lgr_unregister_conn(conn);
smc_buf_unuse(conn, lgr); /* allow buffer reuse */
smc_lgr_unregister_conn(conn);
}
if (!lgr->conns_num)
@ -1864,7 +1864,8 @@ int smc_conn_create(struct smc_sock *smc, struct smc_init_info *ini)
(ini->smcd_version == SMC_V2 ||
lgr->vlan_id == ini->vlan_id) &&
(role == SMC_CLNT || ini->is_smcd ||
lgr->conns_num < SMC_RMBS_PER_LGR_MAX)) {
(lgr->conns_num < SMC_RMBS_PER_LGR_MAX &&
!bitmap_full(lgr->rtokens_used_mask, SMC_RMBS_PER_LGR_MAX)))) {
/* link group found */
ini->first_contact_local = 0;
conn->lgr = lgr;


@ -33,7 +33,7 @@ $(obj)/shipped-certs.c: $(wildcard $(srctree)/$(src)/certs/*.hex)
echo 'unsigned int shipped_regdb_certs_len = sizeof(shipped_regdb_certs);'; \
) > $@
$(obj)/extra-certs.c: $(CONFIG_CFG80211_EXTRA_REGDB_KEYDI) \
$(obj)/extra-certs.c: $(CONFIG_CFG80211_EXTRA_REGDB_KEYDIR) \
$(wildcard $(CONFIG_CFG80211_EXTRA_REGDB_KEYDIR)/*.x509)
@$(kecho) " GEN $@"
$(Q)(set -e; \


@ -13411,6 +13411,9 @@ static int handle_nan_filter(struct nlattr *attr_filter,
i = 0;
nla_for_each_nested(attr, attr_filter, rem) {
filter[i].filter = nla_memdup(attr, GFP_KERNEL);
if (!filter[i].filter)
goto err;
filter[i].len = nla_len(attr);
i++;
}
@ -13423,6 +13426,15 @@ static int handle_nan_filter(struct nlattr *attr_filter,
}
return 0;
err:
i = 0;
nla_for_each_nested(attr, attr_filter, rem) {
kfree(filter[i].filter);
i++;
}
kfree(filter);
return -ENOMEM;
}
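
handle_nan_filter() gains the matching unwind path: if one nla_memdup() fails mid-loop, every filter duplicated so far is freed before the array itself. A generic sketch of duplicate-all-or-free-all:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char **dup_all(const char *const *src, int n)
{
    char **out = calloc((size_t)n, sizeof(*out));
    int i;

    if (!out)
        return NULL;

    for (i = 0; i < n; i++) {
        out[i] = strdup(src[i]);
        if (!out[i])
            goto err;
    }
    return out;

err:
    while (i-- > 0)   /* free what was duplicated before failing */
        free(out[i]);
    free(out);
    return NULL;
}

int main(void)
{
    const char *const words[] = { "alpha", "beta", "gamma" };
    char **copy = dup_all(words, 3);
    int i;

    for (i = 0; copy && i < 3; i++) {
        puts(copy[i]);
        free(copy[i]);
    }
    free(copy);
    return 0;
}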
static int nl80211_nan_add_func(struct sk_buff *skb,
@ -17816,7 +17828,8 @@ void cfg80211_ch_switch_notify(struct net_device *dev,
wdev->chandef = *chandef;
wdev->preset_chandef = *chandef;
if (wdev->iftype == NL80211_IFTYPE_STATION &&
if ((wdev->iftype == NL80211_IFTYPE_STATION ||
wdev->iftype == NL80211_IFTYPE_P2P_CLIENT) &&
!WARN_ON(!wdev->current_bss))
cfg80211_update_assoc_bss_entry(wdev, chandef->chan);


@ -223,6 +223,9 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
if (x->encap || x->tfcpad)
return -EINVAL;
if (xuo->flags & ~(XFRM_OFFLOAD_IPV6 | XFRM_OFFLOAD_INBOUND))
return -EINVAL;
dev = dev_get_by_index(net, xuo->ifindex);
if (!dev) {
if (!(xuo->flags & XFRM_OFFLOAD_INBOUND)) {
@ -262,7 +265,8 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
netdev_tracker_alloc(dev, &xso->dev_tracker, GFP_ATOMIC);
xso->real_dev = dev;
xso->num_exthdrs = 1;
xso->flags = xuo->flags;
/* Don't forward bit that is not implemented */
xso->flags = xuo->flags & ~XFRM_OFFLOAD_IPV6;
err = dev->xfrmdev_ops->xdo_dev_state_add(x);
if (err) {
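
xfrm_dev_state_add() grows two flag guards: reject any bit the kernel does not know about, then strip the accepted-but-not-forwarded XFRM_OFFLOAD_IPV6 bit before storing. A sketch of that validate-then-mask step, using the uapi bit values (XFRM_OFFLOAD_IPV6 = 1, and XFRM_OFFLOAD_INBOUND = 2 as in the fragment at the top of this diff):

#include <stdio.h>

#define OFFLOAD_IPV6    1
#define OFFLOAD_INBOUND 2

static int state_add(unsigned int flags, unsigned int *stored)
{
    if (flags & ~(OFFLOAD_IPV6 | OFFLOAD_INBOUND))
        return -1;                   /* unknown bit set: reject */

    *stored = flags & ~OFFLOAD_IPV6; /* don't forward unimplemented bit */
    return 0;
}

int main(void)
{
    unsigned int stored = 0;

    printf("%d\n", state_add(OFFLOAD_IPV6 | OFFLOAD_INBOUND, &stored));
    printf("stored=0x%x\n", stored);         /* IPV6 bit stripped */
    printf("%d\n", state_add(0x8, &stored)); /* rejected */
    return 0;
}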


@ -673,12 +673,12 @@ static int xfrmi_changelink(struct net_device *dev, struct nlattr *tb[],
struct net *net = xi->net;
struct xfrm_if_parms p = {};
xfrmi_netlink_parms(data, &p);
if (!p.if_id) {
NL_SET_ERR_MSG(extack, "if_id must be non zero");
return -EINVAL;
}
xfrmi_netlink_parms(data, &p);
xi = xfrmi_locate(net, &p);
if (!xi) {
xi = netdev_priv(dev);


@ -4256,7 +4256,7 @@ static bool xfrm_migrate_selector_match(const struct xfrm_selector *sel_cmp,
}
static struct xfrm_policy *xfrm_migrate_policy_find(const struct xfrm_selector *sel,
u8 dir, u8 type, struct net *net)
u8 dir, u8 type, struct net *net, u32 if_id)
{
struct xfrm_policy *pol, *ret = NULL;
struct hlist_head *chain;
@ -4265,7 +4265,8 @@ static struct xfrm_policy *xfrm_migrate_policy_find(const struct xfrm_selector *
spin_lock_bh(&net->xfrm.xfrm_policy_lock);
chain = policy_hash_direct(net, &sel->daddr, &sel->saddr, sel->family, dir);
hlist_for_each_entry(pol, chain, bydst) {
if (xfrm_migrate_selector_match(sel, &pol->selector) &&
if ((if_id == 0 || pol->if_id == if_id) &&
xfrm_migrate_selector_match(sel, &pol->selector) &&
pol->type == type) {
ret = pol;
priority = ret->priority;
@ -4277,7 +4278,8 @@ static struct xfrm_policy *xfrm_migrate_policy_find(const struct xfrm_selector *
if ((pol->priority >= priority) && ret)
break;
if (xfrm_migrate_selector_match(sel, &pol->selector) &&
if ((if_id == 0 || pol->if_id == if_id) &&
xfrm_migrate_selector_match(sel, &pol->selector) &&
pol->type == type) {
ret = pol;
break;
@ -4393,7 +4395,7 @@ static int xfrm_migrate_check(const struct xfrm_migrate *m, int num_migrate)
int xfrm_migrate(const struct xfrm_selector *sel, u8 dir, u8 type,
struct xfrm_migrate *m, int num_migrate,
struct xfrm_kmaddress *k, struct net *net,
struct xfrm_encap_tmpl *encap)
struct xfrm_encap_tmpl *encap, u32 if_id)
{
int i, err, nx_cur = 0, nx_new = 0;
struct xfrm_policy *pol = NULL;
@ -4412,14 +4414,14 @@ int xfrm_migrate(const struct xfrm_selector *sel, u8 dir, u8 type,
}
/* Stage 1 - find policy */
if ((pol = xfrm_migrate_policy_find(sel, dir, type, net)) == NULL) {
if ((pol = xfrm_migrate_policy_find(sel, dir, type, net, if_id)) == NULL) {
err = -ENOENT;
goto out;
}
/* Stage 2 - find and update state(s) */
for (i = 0, mp = m; i < num_migrate; i++, mp++) {
if ((x = xfrm_migrate_state_find(mp, net))) {
if ((x = xfrm_migrate_state_find(mp, net, if_id))) {
x_cur[nx_cur] = x;
nx_cur++;
xc = xfrm_state_migrate(x, mp, encap);
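
Throughout the xfrm_migrate plumbing above, if_id follows one convention: 0 means no interface filter, anything else must match the policy or state exactly. The predicate, spelled out:

#include <stdbool.h>
#include <stdio.h>

static bool if_id_matches(unsigned int want, unsigned int have)
{
    return want == 0 || want == have; /* 0 acts as a wildcard */
}

int main(void)
{
    printf("%d %d %d\n",
           if_id_matches(0, 7),  /* 1: wildcard */
           if_id_matches(7, 7),  /* 1: exact match */
           if_id_matches(7, 9)); /* 0: mismatch */
    return 0;
}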


@ -1579,9 +1579,6 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig,
memcpy(&x->mark, &orig->mark, sizeof(x->mark));
memcpy(&x->props.smark, &orig->props.smark, sizeof(x->props.smark));
if (xfrm_init_state(x) < 0)
goto error;
x->props.flags = orig->props.flags;
x->props.extra_flags = orig->props.extra_flags;
@ -1606,7 +1603,8 @@ out:
return NULL;
}
struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net)
struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net,
u32 if_id)
{
unsigned int h;
struct xfrm_state *x = NULL;
@ -1622,6 +1620,8 @@ struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *n
continue;
if (m->reqid && x->props.reqid != m->reqid)
continue;
if (if_id != 0 && x->if_id != if_id)
continue;
if (!xfrm_addr_equal(&x->id.daddr, &m->old_daddr,
m->old_family) ||
!xfrm_addr_equal(&x->props.saddr, &m->old_saddr,
@ -1637,6 +1637,8 @@ struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *n
if (x->props.mode != m->mode ||
x->id.proto != m->proto)
continue;
if (if_id != 0 && x->if_id != if_id)
continue;
if (!xfrm_addr_equal(&x->id.daddr, &m->old_daddr,
m->old_family) ||
!xfrm_addr_equal(&x->props.saddr, &m->old_saddr,
@ -1663,6 +1665,11 @@ struct xfrm_state *xfrm_state_migrate(struct xfrm_state *x,
if (!xc)
return NULL;
xc->props.family = m->new_family;
if (xfrm_init_state(xc) < 0)
goto error;
memcpy(&xc->id.daddr, &m->new_daddr, sizeof(xc->id.daddr));
memcpy(&xc->props.saddr, &m->new_saddr, sizeof(xc->props.saddr));
@ -2572,7 +2579,7 @@ void xfrm_state_delete_tunnel(struct xfrm_state *x)
}
EXPORT_SYMBOL(xfrm_state_delete_tunnel);
u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu)
u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
{
const struct xfrm_type *type = READ_ONCE(x->type);
struct crypto_aead *aead;
@ -2603,17 +2610,7 @@ u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu)
return ((mtu - x->props.header_len - crypto_aead_authsize(aead) -
net_adj) & ~(blksize - 1)) + net_adj - 2;
}
EXPORT_SYMBOL_GPL(__xfrm_state_mtu);
u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
{
mtu = __xfrm_state_mtu(x, mtu);
if (x->props.family == AF_INET6 && mtu < IPV6_MIN_MTU)
return IPV6_MIN_MTU;
return mtu;
}
EXPORT_SYMBOL_GPL(xfrm_state_mtu);
int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload)
{


@ -2608,6 +2608,7 @@ static int xfrm_do_migrate(struct sk_buff *skb, struct nlmsghdr *nlh,
int n = 0;
struct net *net = sock_net(skb->sk);
struct xfrm_encap_tmpl *encap = NULL;
u32 if_id = 0;
if (attrs[XFRMA_MIGRATE] == NULL)
return -EINVAL;
@ -2632,7 +2633,10 @@ static int xfrm_do_migrate(struct sk_buff *skb, struct nlmsghdr *nlh,
return -ENOMEM;
}
err = xfrm_migrate(&pi->sel, pi->dir, type, m, n, kmp, net, encap);
if (attrs[XFRMA_IF_ID])
if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
err = xfrm_migrate(&pi->sel, pi->dir, type, m, n, kmp, net, encap, if_id);
kfree(encap);


@ -50,8 +50,8 @@ for current_test in ${TESTS:-$ALL_TESTS}; do
else
log_test "'$current_test' [$profile] overflow $target"
fi
RET_FIN=$(( RET_FIN || RET ))
done
RET_FIN=$(( RET_FIN || RET ))
done
done
current_test=""


@ -60,7 +60,8 @@ __tc_police_test()
tc_police_rules_create $count $should_fail
offload_count=$(tc filter show dev $swp1 ingress | grep in_hw | wc -l)
offload_count=$(tc -j filter show dev $swp1 ingress |
jq "[.[] | select(.options.in_hw == true)] | length")
((offload_count == count))
check_err_fail $should_fail $? "tc police offload count"
}


@ -763,8 +763,8 @@ run_tests_disconnect()
run_tests_lo "$ns1" "$ns1" dead:beef:1::1 1 "-I 3 -i $old_cin"
# restore previous status
cout=$old_cout
cout_disconnect="$cout".disconnect
sin=$old_sin
sin_disconnect="$cout".disconnect
cin=$old_cin
cin_disconnect="$cin".disconnect
connect_per_transfer=1


@ -1,2 +1,3 @@
# SPDX-License-Identifier: GPL-2.0-only
nf-queue
connect_close


@ -9,6 +9,6 @@ TEST_PROGS := nft_trans_stress.sh nft_fib.sh nft_nat.sh bridge_brouter.sh \
conntrack_vrf.sh nft_synproxy.sh
LDLIBS = -lmnl
TEST_GEN_FILES = nf-queue
TEST_GEN_FILES = nf-queue connect_close
include ../lib.mk


@ -0,0 +1,136 @@
// SPDX-License-Identifier: GPL-2.0
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#define PORT 12345
#define RUNTIME 10
static struct {
unsigned int timeout;
unsigned int port;
} opts = {
.timeout = RUNTIME,
.port = PORT,
};
static void handler(int sig)
{
_exit(sig == SIGALRM ? 0 : 1);
}
static void set_timeout(void)
{
struct sigaction action = {
.sa_handler = handler,
};
sigaction(SIGALRM, &action, NULL);
alarm(opts.timeout);
}
static void do_connect(const struct sockaddr_in *dst)
{
int s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if (s >= 0)
fcntl(s, F_SETFL, O_NONBLOCK);
connect(s, (struct sockaddr *)dst, sizeof(*dst));
close(s);
}
static void do_accept(const struct sockaddr_in *src)
{
int c, one = 1, s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if (s < 0)
return;
setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
setsockopt(s, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
bind(s, (struct sockaddr *)src, sizeof(*src));
listen(s, 16);
c = accept(s, NULL, NULL);
if (c >= 0)
close(c);
close(s);
}
static int accept_loop(void)
{
struct sockaddr_in src = {
.sin_family = AF_INET,
.sin_port = htons(opts.port),
};
inet_pton(AF_INET, "127.0.0.1", &src.sin_addr);
set_timeout();
for (;;)
do_accept(&src);
return 1;
}
static int connect_loop(void)
{
struct sockaddr_in dst = {
.sin_family = AF_INET,
.sin_port = htons(opts.port),
};
inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);
set_timeout();
for (;;)
do_connect(&dst);
return 1;
}
static void parse_opts(int argc, char **argv)
{
int c;
while ((c = getopt(argc, argv, "t:p:")) != -1) {
switch (c) {
case 't':
opts.timeout = atoi(optarg);
break;
case 'p':
opts.port = atoi(optarg);
break;
}
}
}
int main(int argc, char *argv[])
{
pid_t p;
parse_opts(argc, argv);
p = fork();
if (p < 0)
return 111;
if (p > 0)
return accept_loop();
return connect_loop();
}


@ -113,6 +113,7 @@ table inet $name {
chain output {
type filter hook output priority $prio; policy accept;
tcp dport 12345 queue num 3
tcp sport 23456 queue num 3
jump nfq
}
chain post {
@ -296,6 +297,23 @@ test_tcp_localhost()
wait 2>/dev/null
}
test_tcp_localhost_connectclose()
{
tmpfile=$(mktemp) || exit 1
ip netns exec ${nsrouter} ./connect_close -p 23456 -t $timeout &
ip netns exec ${nsrouter} ./nf-queue -q 3 -t $timeout &
local nfqpid=$!
sleep 1
rm -f "$tmpfile"
wait $rpid
[ $? -eq 0 ] && echo "PASS: tcp via loopback with connect/close"
wait 2>/dev/null
}
test_tcp_localhost_requeue()
{
ip netns exec ${nsrouter} nft -f /dev/stdin <<EOF
@ -424,6 +442,7 @@ test_queue 20
test_tcp_forward
test_tcp_localhost
test_tcp_localhost_connectclose
test_tcp_localhost_requeue
test_icmp_vrf