Networking fixes for 5.18-rc4, including fixes from xfrm and can.

Current release - regressions:
 
   - rxrpc: restore removed timer deletion
 
 Current release - new code bugs:
 
   - gre: fix device lookup for l3mdev use-case
 
   - xfrm: fix egress device lookup for l3mdev use-case
 
 Previous releases - regressions:
 
   - sched: cls_u32: fix netns refcount changes in u32_change()
 
   - smc: fix sock leak when release after smc_shutdown()
 
   - xfrm: limit skb_page_frag_refill use to a single page
 
   - eth: atlantic: invert deep par in pm functions, preventing null
      derefs
 
   - eth: stmmac: use readl_poll_timeout_atomic() in atomic state
 
 Previous releases - always broken:
 
   - gre: fix skb_under_panic on xmit
 
   - openvswitch: fix OOB access in reserve_sfa_size()
 
   - dsa: hellcreek: calculate checksums in tagger
 
   - eth: ice: fix crash in switchdev mode
 
   - eth: igc:
     - fix infinite loop in release_swfw_sync
     - fix scheduling while atomic
 
 Signed-off-by: Paolo Abeni <pabeni@redhat.com>
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEEg1AjqC77wbdLX2LbKSR5jcyPE6QFAmJhLbcSHHBhYmVuaUBy
 ZWRoYXQuY29tAAoJECkkeY3MjxOknxkP/jiAATyBt/HQFykQNiZ7cdrcv1gzJ3Lg
 BmrV1QmbrNwfffBtmBHRliP7x0vNF6fV8LUKjfyQh1YgJw8I9F/FDH/1fojhBZq/
 JJpZrh+TFikBBM4RDMJ0aQi6ssOEo8S9gfN4W48F/49O4S3Q/Gbgv7Lk0jL8utRz
 7RgGUVxX+xOSklvh2Tn/xHdYPeebPhLojiKhmH+l6xghyDEUHkemF3AkLwV9QMnq
 LXmNP4y100xcdCW1bLbyVcq0lbwdLSg4SL+2wC2bvgEDRR0MUezQyNxD6Oqrmusn
 sASZYgNK92R9ekLBqTX/QwV/XIT+17hclTk4u0eV8GnemnibqOq7DhDqtKyeAzbD
 mfU6Z5Ku6LuXA1U+2w1jxnd4cJTacA+dCRKcQD91ReguBbCd6zOweB996iBNLucK
 Kf+r6qWWLxt+JmhSexb/T+oQHsdgvIPSQXNHUH2W8w+2DdTB/EPcSL76DlbZUxrP
 up4EC3Nr3oxJjHbv7Iq4d9mHuRlwoOOpNJ3mARlfRDL6iuL0zECTweST3qT9YyIH
 Cz4FGj7kwEDTxGtufoTVia+/JmS39f2lBrMKuhbTVo+qcYhs3zJM4Ki9bAgOKXqI
 Qf+I73x9yQZ182afq4DsRXLnq/BajmRMyX2/kebY8KsARzRsPAktBhsT17SI6tUG
 3MiLiHiIb0qM
 =thBq
 -----END PGP SIGNATURE-----

Merge tag 'net-5.18-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from xfrm and can.

  Current release - regressions:

   - rxrpc: restore removed timer deletion

  Current release - new code bugs:

   - gre: fix device lookup for l3mdev use-case

   - xfrm: fix egress device lookup for l3mdev use-case

  Previous releases - regressions:

   - sched: cls_u32: fix netns refcount changes in u32_change()

   - smc: fix sock leak when release after smc_shutdown()

   - xfrm: limit skb_page_frag_refill use to a single page

   - eth: atlantic: invert deep par in pm functions, preventing null
     derefs

   - eth: stmmac: use readl_poll_timeout_atomic() in atomic state

  Previous releases - always broken:

   - gre: fix skb_under_panic on xmit

   - openvswitch: fix OOB access in reserve_sfa_size()

   - dsa: hellcreek: calculate checksums in tagger

   - eth: ice: fix crash in switchdev mode

   - eth: igc:
      - fix infinite loop in release_swfw_sync
      - fix scheduling while atomic"

* tag 'net-5.18-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (37 commits)
  drivers: net: hippi: Fix deadlock in rr_close()
  selftests: mlxsw: vxlan_flooding_ipv6: Prevent flooding of unwanted packets
  selftests: mlxsw: vxlan_flooding: Prevent flooding of unwanted packets
  nfc: MAINTAINERS: add Bug entry
  net: stmmac: Use readl_poll_timeout_atomic() in atomic state
  doc/ip-sysctl: add bc_forwarding
  netlink: reset network and mac headers in netlink_dump()
  net: mscc: ocelot: fix broken IP multicast flooding
  net: dsa: hellcreek: Calculate checksums in tagger
  net: atlantic: invert deep par in pm functions, preventing null derefs
  can: isotp: stop timeout monitoring when no first frame was sent
  bonding: do not discard lowest hash bit for non layer3+4 hashing
  net: lan966x: Make sure to release ptp interrupt
  ipv6: make ip6_rt_gc_expire an atomic_t
  net: Handle l3mdev in ip_tunnel_init_flow
  l3mdev: l3mdev_master_upper_ifindex_by_index_rcu should be using netdev_master_upper_dev_get_rcu
  net/sched: cls_u32: fix possible leak in u32_init_knode()
  net/sched: cls_u32: fix netns refcount changes in u32_change()
  powerpc: Update MAINTAINERS for ibmvnic and VAS
  net: restore alpha order to Ethernet devices in config
  ...

commit 59f0c2447e
Author: Linus Torvalds
Date:   2022-04-21 12:29:08 -07:00

40 changed files with 210 additions and 83 deletions

@@ -267,6 +267,13 @@ ipfrag_max_dist - INTEGER
from different IP datagrams, which could result in data corruption.
Default: 64
bc_forwarding - INTEGER
bc_forwarding enables the feature described in rfc1812#section-5.3.5.2
and rfc2644. It allows the router to forward directed broadcast.
To enable this feature, the 'all' entry and the input interface entry
should be set to 1.
Default: 0
INET peer storage
=================

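As a concrete example of the documentation added above, a minimal userspace sketch (not part of this series; the interface name eth0 is an assumption for illustration) that sets both the 'all' entry and the input interface entry to 1:

  #include <stdio.h>

  /* Write a single value to a procfs sysctl entry. */
  static int write_sysctl(const char *path, const char *val)
  {
      FILE *f = fopen(path, "w");

      if (!f) {
          perror(path);
          return -1;
      }
      fputs(val, f);
      return fclose(f);
  }

  int main(void)
  {
      int ret = 0;

      /* Both the 'all' entry and the ingress interface entry must be 1. */
      ret |= write_sysctl("/proc/sys/net/ipv4/conf/all/bc_forwarding", "1");
      ret |= write_sysctl("/proc/sys/net/ipv4/conf/eth0/bc_forwarding", "1");
      return ret ? 1 : 0;
  }
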
@@ -9337,14 +9337,12 @@ F: drivers/pci/hotplug/rpaphp*
IBM Power SRIOV Virtual NIC Device Driver
M: Dany Madden <drt@linux.ibm.com>
M: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
R: Thomas Falcon <tlfalcon@linux.ibm.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/ibm/ibmvnic.*
IBM Power Virtual Accelerator Switchboard
M: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
L: linuxppc-dev@lists.ozlabs.org
S: Supported
F: arch/powerpc/include/asm/vas.h
@@ -13823,6 +13821,7 @@ M: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
L: linux-nfc@lists.01.org (subscribers-only)
L: netdev@vger.kernel.org
S: Maintained
B: mailto:linux-nfc@lists.01.org
F: Documentation/devicetree/bindings/net/nfc/
F: drivers/nfc/
F: include/linux/platform_data/nfcmrvl.h

@@ -4027,14 +4027,19 @@ static bool bond_flow_dissect(struct bonding *bond, struct sk_buff *skb, const v
return true;
}
static u32 bond_ip_hash(u32 hash, struct flow_keys *flow)
static u32 bond_ip_hash(u32 hash, struct flow_keys *flow, int xmit_policy)
{
hash ^= (__force u32)flow_get_u32_dst(flow) ^
(__force u32)flow_get_u32_src(flow);
hash ^= (hash >> 16);
hash ^= (hash >> 8);
/* discard lowest hash bit to deal with the common even ports pattern */
return hash >> 1;
if (xmit_policy == BOND_XMIT_POLICY_LAYER34 ||
xmit_policy == BOND_XMIT_POLICY_ENCAP34)
return hash >> 1;
return hash;
}
/* Generate hash based on xmit policy. If @skb is given it is used to linearize
@@ -4064,7 +4069,7 @@ static u32 __bond_xmit_hash(struct bonding *bond, struct sk_buff *skb, const voi
memcpy(&hash, &flow.ports.ports, sizeof(hash));
}
return bond_ip_hash(hash, &flow);
return bond_ip_hash(hash, &flow, bond->params.xmit_policy);
}
/**
@@ -5259,7 +5264,7 @@ static u32 bond_sk_hash_l34(struct sock *sk)
/* L4 */
memcpy(&hash, &flow.ports.ports, sizeof(hash));
/* L3 */
return bond_ip_hash(hash, &flow);
return bond_ip_hash(hash, &flow, BOND_XMIT_POLICY_LAYER34);
}
static struct net_device *__bond_sk_get_lower_dev(struct bonding *bond,

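To see the effect of the hunk above in isolation, a standalone sketch (simplified userspace names, not the driver code) of the fold: only the layer3+4 policies drop the lowest bit, compensating for the common even-ports pattern, while L2/L2+3 hashing keeps the full value:

  #include <stdint.h>
  #include <stdio.h>

  enum { POLICY_LAYER23, POLICY_LAYER34 };  /* illustrative stand-ins */

  static uint32_t fold_ip_hash(uint32_t hash, int xmit_policy)
  {
      hash ^= hash >> 16;
      hash ^= hash >> 8;
      /* Ports are frequently even, so the lowest bit carries little entropy
       * for L3+4 policies; every other policy keeps all bits.
       */
      if (xmit_policy == POLICY_LAYER34)
          return hash >> 1;
      return hash;
  }

  int main(void)
  {
      uint32_t h = 0x0a000001u ^ 0x0a000002u;  /* saddr ^ daddr */

      printf("l3+4: %u  l2+3: %u\n",
             fold_ip_hash(h, POLICY_LAYER34),
             fold_ip_hash(h, POLICY_LAYER23));
      return 0;
  }
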
@@ -35,15 +35,6 @@ source "drivers/net/ethernet/aquantia/Kconfig"
source "drivers/net/ethernet/arc/Kconfig"
source "drivers/net/ethernet/asix/Kconfig"
source "drivers/net/ethernet/atheros/Kconfig"
source "drivers/net/ethernet/broadcom/Kconfig"
source "drivers/net/ethernet/brocade/Kconfig"
source "drivers/net/ethernet/cadence/Kconfig"
source "drivers/net/ethernet/calxeda/Kconfig"
source "drivers/net/ethernet/cavium/Kconfig"
source "drivers/net/ethernet/chelsio/Kconfig"
source "drivers/net/ethernet/cirrus/Kconfig"
source "drivers/net/ethernet/cisco/Kconfig"
source "drivers/net/ethernet/cortina/Kconfig"
config CX_ECAT
tristate "Beckhoff CX5020 EtherCAT master support"
@@ -57,6 +48,14 @@ config CX_ECAT
To compile this driver as a module, choose M here. The module
will be called ec_bhf.
source "drivers/net/ethernet/broadcom/Kconfig"
source "drivers/net/ethernet/cadence/Kconfig"
source "drivers/net/ethernet/calxeda/Kconfig"
source "drivers/net/ethernet/cavium/Kconfig"
source "drivers/net/ethernet/chelsio/Kconfig"
source "drivers/net/ethernet/cirrus/Kconfig"
source "drivers/net/ethernet/cisco/Kconfig"
source "drivers/net/ethernet/cortina/Kconfig"
source "drivers/net/ethernet/davicom/Kconfig"
config DNET
@@ -85,7 +84,6 @@ source "drivers/net/ethernet/huawei/Kconfig"
source "drivers/net/ethernet/i825xx/Kconfig"
source "drivers/net/ethernet/ibm/Kconfig"
source "drivers/net/ethernet/intel/Kconfig"
source "drivers/net/ethernet/microsoft/Kconfig"
source "drivers/net/ethernet/xscale/Kconfig"
config JME
@@ -128,8 +126,9 @@ source "drivers/net/ethernet/mediatek/Kconfig"
source "drivers/net/ethernet/mellanox/Kconfig"
source "drivers/net/ethernet/micrel/Kconfig"
source "drivers/net/ethernet/microchip/Kconfig"
source "drivers/net/ethernet/moxa/Kconfig"
source "drivers/net/ethernet/mscc/Kconfig"
source "drivers/net/ethernet/microsoft/Kconfig"
source "drivers/net/ethernet/moxa/Kconfig"
source "drivers/net/ethernet/myricom/Kconfig"
config FEALNX
@@ -141,10 +140,10 @@ config FEALNX
Say Y here to support the Myson MTD-800 family of PCI-based Ethernet
cards. <http://www.myson.com.tw/>
source "drivers/net/ethernet/ni/Kconfig"
source "drivers/net/ethernet/natsemi/Kconfig"
source "drivers/net/ethernet/neterion/Kconfig"
source "drivers/net/ethernet/netronome/Kconfig"
source "drivers/net/ethernet/ni/Kconfig"
source "drivers/net/ethernet/8390/Kconfig"
source "drivers/net/ethernet/nvidia/Kconfig"
source "drivers/net/ethernet/nxp/Kconfig"
@@ -164,6 +163,7 @@ source "drivers/net/ethernet/packetengines/Kconfig"
source "drivers/net/ethernet/pasemi/Kconfig"
source "drivers/net/ethernet/pensando/Kconfig"
source "drivers/net/ethernet/qlogic/Kconfig"
source "drivers/net/ethernet/brocade/Kconfig"
source "drivers/net/ethernet/qualcomm/Kconfig"
source "drivers/net/ethernet/rdc/Kconfig"
source "drivers/net/ethernet/realtek/Kconfig"
@@ -171,10 +171,10 @@ source "drivers/net/ethernet/renesas/Kconfig"
source "drivers/net/ethernet/rocker/Kconfig"
source "drivers/net/ethernet/samsung/Kconfig"
source "drivers/net/ethernet/seeq/Kconfig"
source "drivers/net/ethernet/sfc/Kconfig"
source "drivers/net/ethernet/sgi/Kconfig"
source "drivers/net/ethernet/silan/Kconfig"
source "drivers/net/ethernet/sis/Kconfig"
source "drivers/net/ethernet/sfc/Kconfig"
source "drivers/net/ethernet/smsc/Kconfig"
source "drivers/net/ethernet/socionext/Kconfig"
source "drivers/net/ethernet/stmicro/Kconfig"

@@ -444,22 +444,22 @@ err_exit:
static int aq_pm_freeze(struct device *dev)
{
return aq_suspend_common(dev, false);
return aq_suspend_common(dev, true);
}
static int aq_pm_suspend_poweroff(struct device *dev)
{
return aq_suspend_common(dev, true);
return aq_suspend_common(dev, false);
}
static int aq_pm_thaw(struct device *dev)
{
return atl_resume_common(dev, false);
return atl_resume_common(dev, true);
}
static int aq_pm_resume_restore(struct device *dev)
{
return atl_resume_common(dev, true);
return atl_resume_common(dev, false);
}
static const struct dev_pm_ops aq_pm_ops = {

@@ -1009,8 +1009,8 @@ static s32 e1000_platform_pm_pch_lpt(struct e1000_hw *hw, bool link)
{
u32 reg = link << (E1000_LTRV_REQ_SHIFT + E1000_LTRV_NOSNOOP_SHIFT) |
link << E1000_LTRV_REQ_SHIFT | E1000_LTRV_SEND;
u16 max_ltr_enc_d = 0; /* maximum LTR decoded by platform */
u16 lat_enc_d = 0; /* latency decoded */
u32 max_ltr_enc_d = 0; /* maximum LTR decoded by platform */
u32 lat_enc_d = 0; /* latency decoded */
u16 lat_enc = 0; /* latency encoded */
if (link) {

@@ -361,7 +361,8 @@ ice_eswitch_port_start_xmit(struct sk_buff *skb, struct net_device *netdev)
np = netdev_priv(netdev);
vsi = np->vsi;
if (ice_is_reset_in_progress(vsi->back->state))
if (ice_is_reset_in_progress(vsi->back->state) ||
test_bit(ICE_VF_DIS, vsi->back->state))
return NETDEV_TX_BUSY;
repr = ice_netdev_to_repr(netdev);

@@ -52,7 +52,7 @@ static inline void ice_eswitch_update_repr(struct ice_vsi *vsi) { }
static inline int ice_eswitch_configure(struct ice_pf *pf)
{
return -EOPNOTSUPP;
return 0;
}
static inline int ice_eswitch_rebuild(struct ice_pf *pf)

@@ -641,6 +641,7 @@ ice_get_orom_civd_data(struct ice_hw *hw, enum ice_bank_select bank,
status = ice_read_flash_module(hw, bank, ICE_SR_1ST_OROM_BANK_PTR, 0,
orom_data, hw->flash.banks.orom_size);
if (status) {
vfree(orom_data);
ice_debug(hw, ICE_DBG_NVM, "Unable to read Option ROM data\n");
return status;
}

@@ -415,8 +415,8 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
*/
static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
{
u32 nb_buffs_extra = 0, nb_buffs = 0;
union ice_32b_rx_flex_desc *rx_desc;
u32 nb_buffs_extra = 0, nb_buffs;
u16 ntu = rx_ring->next_to_use;
u16 total_count = count;
struct xdp_buff **xdp;
@@ -428,6 +428,10 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
nb_buffs_extra = ice_fill_rx_descs(rx_ring->xsk_pool, xdp,
rx_desc,
rx_ring->count - ntu);
if (nb_buffs_extra != rx_ring->count - ntu) {
ntu += nb_buffs_extra;
goto exit;
}
rx_desc = ICE_RX_DESC(rx_ring, 0);
xdp = ice_xdp_buf(rx_ring, 0);
ntu = 0;
@@ -441,6 +445,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
if (ntu == rx_ring->count)
ntu = 0;
exit:
if (rx_ring->next_to_use != ntu)
ice_release_rx_desc(rx_ring, ntu);

@@ -156,8 +156,15 @@ void igc_release_swfw_sync_i225(struct igc_hw *hw, u16 mask)
{
u32 swfw_sync;
while (igc_get_hw_semaphore_i225(hw))
; /* Empty */
/* Releasing the resource requires first getting the HW semaphore.
* If we fail to get the semaphore, there is nothing we can do,
* except log an error and quit. We are not allowed to hang here
* indefinitely, as it may cause denial of service or system crash.
*/
if (igc_get_hw_semaphore_i225(hw)) {
hw_dbg("Failed to release SW_FW_SYNC.\n");
return;
}
swfw_sync = rd32(IGC_SW_FW_SYNC);
swfw_sync &= ~mask;

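The hunk above replaces an unbounded spin on the hardware semaphore with a single attempt plus an error message. A rough userspace analogue of that pattern (hypothetical names, a pthread mutex standing in for the HW semaphore; compile with -pthread):

  #include <pthread.h>
  #include <stdio.h>

  static pthread_mutex_t hw_semaphore = PTHREAD_MUTEX_INITIALIZER;
  static unsigned int sw_fw_sync;

  /* Release resource bits without risking an endless wait on the semaphore. */
  static void release_swfw_sync(unsigned int mask)
  {
      if (pthread_mutex_trylock(&hw_semaphore) != 0) {
          /* Nothing more we can do: log and give up rather than hang. */
          fprintf(stderr, "failed to release SW_FW_SYNC\n");
          return;
      }
      sw_fw_sync &= ~mask;
      pthread_mutex_unlock(&hw_semaphore);
  }

  int main(void)
  {
      sw_fw_sync = 0x3;
      release_swfw_sync(0x1);
      printf("sw_fw_sync=0x%x\n", sw_fw_sync);
      return 0;
  }
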
@@ -581,7 +581,7 @@ static s32 igc_read_phy_reg_mdic(struct igc_hw *hw, u32 offset, u16 *data)
* the lower time out
*/
for (i = 0; i < IGC_GEN_POLL_TIMEOUT; i++) {
usleep_range(500, 1000);
udelay(50);
mdic = rd32(IGC_MDIC);
if (mdic & IGC_MDIC_READY)
break;
@@ -638,7 +638,7 @@ static s32 igc_write_phy_reg_mdic(struct igc_hw *hw, u32 offset, u16 data)
* the lower time out
*/
for (i = 0; i < IGC_GEN_POLL_TIMEOUT; i++) {
usleep_range(500, 1000);
udelay(50);
mdic = rd32(IGC_MDIC);
if (mdic & IGC_MDIC_READY)
break;

@@ -992,6 +992,17 @@ static void igc_ptp_time_restore(struct igc_adapter *adapter)
igc_ptp_write_i225(adapter, &ts);
}
static void igc_ptm_stop(struct igc_adapter *adapter)
{
struct igc_hw *hw = &adapter->hw;
u32 ctrl;
ctrl = rd32(IGC_PTM_CTRL);
ctrl &= ~IGC_PTM_CTRL_EN;
wr32(IGC_PTM_CTRL, ctrl);
}
/**
* igc_ptp_suspend - Disable PTP work items and prepare for suspend
* @adapter: Board private structure
@@ -1009,8 +1020,10 @@ void igc_ptp_suspend(struct igc_adapter *adapter)
adapter->ptp_tx_skb = NULL;
clear_bit_unlock(__IGC_PTP_TX_IN_PROGRESS, &adapter->state);
if (pci_device_is_present(adapter->pdev))
if (pci_device_is_present(adapter->pdev)) {
igc_ptp_time_save(adapter);
igc_ptm_stop(adapter);
}
}
/**

@@ -423,7 +423,7 @@ mlxsw_sp_span_gretap4_route(const struct net_device *to_dev,
parms = mlxsw_sp_ipip_netdev_parms4(to_dev);
ip_tunnel_init_flow(&fl4, parms.iph.protocol, *daddrp, *saddrp,
0, 0, parms.link, tun->fwmark, 0);
0, 0, dev_net(to_dev), parms.link, tun->fwmark, 0);
rt = ip_route_output_key(tun->net, &fl4);
if (IS_ERR(rt))

@@ -671,6 +671,9 @@ static void lan966x_cleanup_ports(struct lan966x *lan966x)
disable_irq(lan966x->ana_irq);
lan966x->ana_irq = -ENXIO;
}
if (lan966x->ptp_irq)
devm_free_irq(lan966x->dev, lan966x->ptp_irq, lan966x);
}
static int lan966x_probe_port(struct lan966x *lan966x, u32 p,

@@ -2859,6 +2859,8 @@ static void ocelot_port_set_mcast_flood(struct ocelot *ocelot, int port,
val = BIT(port);
ocelot_rmw_rix(ocelot, val, BIT(port), ANA_PGID_PGID, PGID_MC);
ocelot_rmw_rix(ocelot, val, BIT(port), ANA_PGID_PGID, PGID_MCIPV4);
ocelot_rmw_rix(ocelot, val, BIT(port), ANA_PGID_PGID, PGID_MCIPV6);
}
static void ocelot_port_set_bcast_flood(struct ocelot *ocelot, int port,

@@ -71,9 +71,9 @@ static int init_systime(void __iomem *ioaddr, u32 sec, u32 nsec)
writel(value, ioaddr + PTP_TCR);
/* wait for present system time initialize to complete */
return readl_poll_timeout(ioaddr + PTP_TCR, value,
return readl_poll_timeout_atomic(ioaddr + PTP_TCR, value,
!(value & PTP_TCR_TSINIT),
10000, 100000);
10, 100000);
}
static int config_addend(void __iomem *ioaddr, u32 addend)

@@ -1355,7 +1355,9 @@ static int rr_close(struct net_device *dev)
rrpriv->fw_running = 0;
spin_unlock_irqrestore(&rrpriv->lock, flags);
del_timer_sync(&rrpriv->timer);
spin_lock_irqsave(&rrpriv->lock, flags);
writel(0, &regs->TxPi);
writel(0, &regs->IpRxPi);

@@ -743,6 +743,7 @@ static struct phy_driver microchip_t1_phy_driver[] = {
{
PHY_ID_MATCH_MODEL(PHY_ID_LAN937X),
.name = "Microchip LAN937x T1",
.flags = PHY_POLL_CABLE_TEST,
.features = PHY_BASIC_T1_FEATURES,
.config_init = lan87xx_config_init,
.suspend = genphy_suspend,

@@ -4,8 +4,6 @@
#include <linux/skbuff.h>
#define ESP_SKB_FRAG_MAXSIZE (PAGE_SIZE << SKB_FRAG_PAGE_ORDER)
struct ip_esp_hdr;
static inline struct ip_esp_hdr *ip_esp_hdr(const struct sk_buff *skb)

@@ -243,11 +243,18 @@ static inline __be32 tunnel_id_to_key32(__be64 tun_id)
static inline void ip_tunnel_init_flow(struct flowi4 *fl4,
int proto,
__be32 daddr, __be32 saddr,
__be32 key, __u8 tos, int oif,
__be32 key, __u8 tos,
struct net *net, int oif,
__u32 mark, __u32 tun_inner_hash)
{
memset(fl4, 0, sizeof(*fl4));
fl4->flowi4_oif = oif;
if (oif) {
fl4->flowi4_l3mdev = l3mdev_master_upper_ifindex_by_index_rcu(net, oif);
/* Legacy VRF/l3mdev use case */
fl4->flowi4_oif = fl4->flowi4_l3mdev ? 0 : oif;
}
fl4->daddr = daddr;
fl4->saddr = saddr;
fl4->flowi4_tos = tos;

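The new oif handling above is easier to read in isolation. A small userspace sketch (stub lookup and simplified struct, purely illustrative): when the requested egress device is enslaved to an l3mdev/VRF master, the flow is steered by the master and the plain oif is cleared; otherwise the legacy oif-based lookup is kept:

  #include <stdio.h>

  struct flow {
      int oif;     /* egress device index */
      int l3mdev;  /* VRF/l3mdev master index, 0 if none */
  };

  /* Stub: pretend ifindex 5 is enslaved to a VRF with master ifindex 10. */
  static int master_l3mdev_of(int ifindex)
  {
      return ifindex == 5 ? 10 : 0;
  }

  static void init_flow(struct flow *fl, int oif)
  {
      fl->oif = 0;
      fl->l3mdev = 0;
      if (oif) {
          fl->l3mdev = master_l3mdev_of(oif);
          /* Keep the plain oif only when there is no l3mdev master. */
          fl->oif = fl->l3mdev ? 0 : oif;
      }
  }

  int main(void)
  {
      struct flow fl;

      init_flow(&fl, 5);
      printf("oif=%d l3mdev=%d\n", fl.oif, fl.l3mdev);
      return 0;
  }
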
@@ -75,8 +75,8 @@ struct netns_ipv6 {
struct list_head fib6_walkers;
rwlock_t fib6_walker_lock;
spinlock_t fib6_gc_lock;
unsigned int ip6_rt_gc_expire;
unsigned long ip6_rt_last_gc;
atomic_t ip6_rt_gc_expire;
unsigned long ip6_rt_last_gc;
unsigned char flowlabel_has_excl;
#ifdef CONFIG_IPV6_MULTIPLE_TABLES
bool fib6_has_custom_rules;

@@ -906,6 +906,7 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
struct canfd_frame *cf;
int ae = (so->opt.flags & CAN_ISOTP_EXTEND_ADDR) ? 1 : 0;
int wait_tx_done = (so->opt.flags & CAN_ISOTP_WAIT_TX_DONE) ? 1 : 0;
s64 hrtimer_sec = 0;
int off;
int err;
@@ -1004,7 +1005,9 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
isotp_create_fframe(cf, so, ae);
/* start timeout for FC */
hrtimer_start(&so->txtimer, ktime_set(1, 0), HRTIMER_MODE_REL_SOFT);
hrtimer_sec = 1;
hrtimer_start(&so->txtimer, ktime_set(hrtimer_sec, 0),
HRTIMER_MODE_REL_SOFT);
}
/* send the first or only CAN frame */
@@ -1017,6 +1020,11 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
if (err) {
pr_notice_once("can-isotp: %s: can_send_ret %pe\n",
__func__, ERR_PTR(err));
/* no transmission -> no timeout monitoring */
if (hrtimer_sec)
hrtimer_cancel(&so->txtimer);
goto err_out_drop;
}

@@ -21,6 +21,14 @@ static struct sk_buff *hellcreek_xmit(struct sk_buff *skb,
struct dsa_port *dp = dsa_slave_to_port(dev);
u8 *tag;
/* Calculate checksums (if required) before adding the trailer tag to
* avoid including it in calculations. That would lead to wrong
* checksums after the switch strips the tag.
*/
if (skb->ip_summed == CHECKSUM_PARTIAL &&
skb_checksum_help(skb))
return NULL;
/* Tag encoding */
tag = skb_put(skb, HELLCREEK_TAG_LEN);
*tag = BIT(dp->index);

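To illustrate why the tagger above finalizes checksums before adding the trailer, a self-contained sketch (hypothetical helper, plain ones'-complement sum): the checksum covers the payload only, so the tag byte appended afterwards (which the link partner strips again) never perturbs it:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /* Plain 16-bit ones'-complement sum, as used by IP-style checksums. */
  static uint16_t csum16(const uint8_t *buf, size_t len)
  {
      uint32_t sum = 0;
      size_t i;

      for (i = 0; i + 1 < len; i += 2)
          sum += (uint32_t)buf[i] << 8 | buf[i + 1];
      if (len & 1)
          sum += (uint32_t)buf[len - 1] << 8;
      while (sum >> 16)
          sum = (sum & 0xffff) + (sum >> 16);
      return (uint16_t)~sum;
  }

  int main(void)
  {
      uint8_t frame[64] = "example payload";
      size_t len = strlen((char *)frame);
      uint16_t sum;

      /* 1. Finalize the checksum over the payload only ... */
      sum = csum16(frame, len);
      /* 2. ... then append the one-byte trailer tag, e.g. BIT(port). */
      frame[len] = 0x01;
      printf("csum=0x%04x, tagged length=%zu\n", sum, len + 1);
      return 0;
  }
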
@@ -446,7 +446,6 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
struct page *page;
struct sk_buff *trailer;
int tailen = esp->tailen;
unsigned int allocsz;
/* this is non-NULL only with TCP/UDP Encapsulation */
if (x->encap) {
@@ -456,8 +455,8 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
return err;
}
allocsz = ALIGN(skb->data_len + tailen, L1_CACHE_BYTES);
if (allocsz > ESP_SKB_FRAG_MAXSIZE)
if (ALIGN(tailen, L1_CACHE_BYTES) > PAGE_SIZE ||
ALIGN(skb->data_len, L1_CACHE_BYTES) > PAGE_SIZE)
goto cow;
if (!skb_cloned(skb)) {

@@ -605,8 +605,8 @@ static int gre_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
key = &info->key;
ip_tunnel_init_flow(&fl4, IPPROTO_GRE, key->u.ipv4.dst, key->u.ipv4.src,
tunnel_id_to_key32(key->tun_id),
key->tos & ~INET_ECN_MASK, 0, skb->mark,
skb_get_hash(skb));
key->tos & ~INET_ECN_MASK, dev_net(dev), 0,
skb->mark, skb_get_hash(skb));
rt = ip_route_output_key(dev_net(dev), &fl4);
if (IS_ERR(rt))
return PTR_ERR(rt);

@@ -294,8 +294,8 @@ static int ip_tunnel_bind_dev(struct net_device *dev)
ip_tunnel_init_flow(&fl4, iph->protocol, iph->daddr,
iph->saddr, tunnel->parms.o_key,
RT_TOS(iph->tos), tunnel->parms.link,
tunnel->fwmark, 0);
RT_TOS(iph->tos), dev_net(dev),
tunnel->parms.link, tunnel->fwmark, 0);
rt = ip_route_output_key(tunnel->net, &fl4);
if (!IS_ERR(rt)) {
@@ -570,7 +570,7 @@ void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
}
ip_tunnel_init_flow(&fl4, proto, key->u.ipv4.dst, key->u.ipv4.src,
tunnel_id_to_key32(key->tun_id), RT_TOS(tos),
0, skb->mark, skb_get_hash(skb));
dev_net(dev), 0, skb->mark, skb_get_hash(skb));
if (tunnel->encap.type != TUNNEL_ENCAP_NONE)
goto tx_error;
@@ -726,7 +726,8 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
}
ip_tunnel_init_flow(&fl4, protocol, dst, tnl_params->saddr,
tunnel->parms.o_key, RT_TOS(tos), tunnel->parms.link,
tunnel->parms.o_key, RT_TOS(tos),
dev_net(dev), tunnel->parms.link,
tunnel->fwmark, skb_get_hash(skb));
if (ip_tunnel_encap(skb, tunnel, &protocol, &fl4) < 0)

@@ -482,7 +482,6 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info
struct page *page;
struct sk_buff *trailer;
int tailen = esp->tailen;
unsigned int allocsz;
if (x->encap) {
int err = esp6_output_encap(x, skb, esp);
@@ -491,8 +490,8 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info
return err;
}
allocsz = ALIGN(skb->data_len + tailen, L1_CACHE_BYTES);
if (allocsz > ESP_SKB_FRAG_MAXSIZE)
if (ALIGN(tailen, L1_CACHE_BYTES) > PAGE_SIZE ||
ALIGN(skb->data_len, L1_CACHE_BYTES) > PAGE_SIZE)
goto cow;
if (!skb_cloned(skb)) {

@@ -733,9 +733,6 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
else
fl6->daddr = tunnel->parms.raddr;
if (skb_cow_head(skb, dev->needed_headroom ?: tunnel->hlen))
return -ENOMEM;
/* Push GRE header. */
protocol = (dev->type == ARPHRD_ETHER) ? htons(ETH_P_TEB) : proto;
@@ -743,6 +740,7 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
struct ip_tunnel_info *tun_info;
const struct ip_tunnel_key *key;
__be16 flags;
int tun_hlen;
tun_info = skb_tunnel_info_txcheck(skb);
if (IS_ERR(tun_info) ||
@@ -760,9 +758,12 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
dsfield = key->tos;
flags = key->tun_flags &
(TUNNEL_CSUM | TUNNEL_KEY | TUNNEL_SEQ);
tunnel->tun_hlen = gre_calc_hlen(flags);
tun_hlen = gre_calc_hlen(flags);
gre_build_header(skb, tunnel->tun_hlen,
if (skb_cow_head(skb, dev->needed_headroom ?: tun_hlen + tunnel->encap_hlen))
return -ENOMEM;
gre_build_header(skb, tun_hlen,
flags, protocol,
tunnel_id_to_key32(tun_info->key.tun_id),
(flags & TUNNEL_SEQ) ? htonl(tunnel->o_seqno++)
@@ -772,6 +773,9 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
if (tunnel->parms.o_flags & TUNNEL_SEQ)
tunnel->o_seqno++;
if (skb_cow_head(skb, dev->needed_headroom ?: tunnel->hlen))
return -ENOMEM;
gre_build_header(skb, tunnel->tun_hlen, tunnel->parms.o_flags,
protocol, tunnel->parms.o_key,
htonl(tunnel->o_seqno));

@@ -3292,6 +3292,7 @@ static int ip6_dst_gc(struct dst_ops *ops)
int rt_elasticity = net->ipv6.sysctl.ip6_rt_gc_elasticity;
int rt_gc_timeout = net->ipv6.sysctl.ip6_rt_gc_timeout;
unsigned long rt_last_gc = net->ipv6.ip6_rt_last_gc;
unsigned int val;
int entries;
entries = dst_entries_get_fast(ops);
@@ -3302,13 +3303,13 @@ static int ip6_dst_gc(struct dst_ops *ops)
entries <= rt_max_size)
goto out;
net->ipv6.ip6_rt_gc_expire++;
fib6_run_gc(net->ipv6.ip6_rt_gc_expire, net, true);
fib6_run_gc(atomic_inc_return(&net->ipv6.ip6_rt_gc_expire), net, true);
entries = dst_entries_get_slow(ops);
if (entries < ops->gc_thresh)
net->ipv6.ip6_rt_gc_expire = rt_gc_timeout>>1;
atomic_set(&net->ipv6.ip6_rt_gc_expire, rt_gc_timeout >> 1);
out:
net->ipv6.ip6_rt_gc_expire -= net->ipv6.ip6_rt_gc_expire>>rt_elasticity;
val = atomic_read(&net->ipv6.ip6_rt_gc_expire);
atomic_set(&net->ipv6.ip6_rt_gc_expire, val - (val >> rt_elasticity));
return entries > rt_max_size;
}
@@ -6509,7 +6510,7 @@ static int __net_init ip6_route_net_init(struct net *net)
net->ipv6.sysctl.ip6_rt_min_advmss = IPV6_MIN_MTU - 20 - 40;
net->ipv6.sysctl.skip_notify_on_dev_down = 0;
net->ipv6.ip6_rt_gc_expire = 30*HZ;
atomic_set(&net->ipv6.ip6_rt_gc_expire, 30*HZ);
ret = 0;
out:

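For reference, the decay now applied to the atomic counter above, shown as a standalone C11 sketch (simplified names and example values; the real logic lives in ip6_dst_gc()): bump the budget, then shrink it by val >> elasticity on the way out:

  #include <stdatomic.h>
  #include <stdio.h>

  static atomic_uint ip6_rt_gc_expire;

  /* One GC pass: bump the expiry budget, then let it decay exponentially. */
  static void gc_pass(unsigned int rt_elasticity)
  {
      unsigned int val;

      atomic_fetch_add(&ip6_rt_gc_expire, 1);
      val = atomic_load(&ip6_rt_gc_expire);
      atomic_store(&ip6_rt_gc_expire, val - (val >> rt_elasticity));
  }

  int main(void)
  {
      int i;

      atomic_store(&ip6_rt_gc_expire, 30 * 100);  /* e.g. 30*HZ with HZ=100 */
      for (i = 0; i < 5; i++) {
          gc_pass(9);  /* 9 mirrors the default gc_elasticity (assumption) */
          printf("expire=%u\n", atomic_load(&ip6_rt_gc_expire));
      }
      return 0;
  }
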
@@ -147,7 +147,7 @@ int l3mdev_master_upper_ifindex_by_index_rcu(struct net *net, int ifindex)
dev = dev_get_by_index_rcu(net, ifindex);
while (dev && !netif_is_l3_master(dev))
dev = netdev_master_upper_dev_get(dev);
dev = netdev_master_upper_dev_get_rcu(dev);
return dev ? dev->ifindex : 0;
}

@@ -2263,6 +2263,13 @@ static int netlink_dump(struct sock *sk)
* single netdev. The outcome is MSG_TRUNC error.
*/
skb_reserve(skb, skb_tailroom(skb) - alloc_size);
/* Make sure malicious BPF programs can not read unitialized memory
* from skb->head -> skb->data
*/
skb_reset_network_header(skb);
skb_reset_mac_header(skb);
netlink_skb_set_owner_r(skb, sk);
if (nlk->dump_done_errno > 0) {

@@ -2465,7 +2465,7 @@ static struct nlattr *reserve_sfa_size(struct sw_flow_actions **sfa,
new_acts_size = max(next_offset + req_size, ksize(*sfa) * 2);
if (new_acts_size > MAX_ACTIONS_BUFSIZE) {
if ((MAX_ACTIONS_BUFSIZE - next_offset) < req_size) {
if ((next_offset + req_size) > MAX_ACTIONS_BUFSIZE) {
OVS_NLERR(log, "Flow action size exceeds max %u",
MAX_ACTIONS_BUFSIZE);
return ERR_PTR(-EMSGSIZE);

@@ -2858,8 +2858,9 @@ tpacket_error:
status = TP_STATUS_SEND_REQUEST;
err = po->xmit(skb);
if (unlikely(err > 0)) {
err = net_xmit_errno(err);
if (unlikely(err != 0)) {
if (err > 0)
err = net_xmit_errno(err);
if (err && __packet_get_status(po, ph) ==
TP_STATUS_AVAILABLE) {
/* skb was destructed already */
@@ -3060,8 +3061,12 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
skb->no_fcs = 1;
err = po->xmit(skb);
if (err > 0 && (err = net_xmit_errno(err)) != 0)
goto out_unlock;
if (unlikely(err != 0)) {
if (err > 0)
err = net_xmit_errno(err);
if (err)
goto out_unlock;
}
dev_put(dev);

@@ -113,7 +113,9 @@ static __net_exit void rxrpc_exit_net(struct net *net)
struct rxrpc_net *rxnet = rxrpc_net(net);
rxnet->live = false;
del_timer_sync(&rxnet->peer_keepalive_timer);
cancel_work_sync(&rxnet->peer_keepalive_work);
/* Remove the timer again as the worker may have restarted it. */
del_timer_sync(&rxnet->peer_keepalive_timer);
rxrpc_destroy_all_calls(rxnet);
rxrpc_destroy_all_connections(rxnet);

@@ -386,14 +386,19 @@ static int u32_init(struct tcf_proto *tp)
return 0;
}
static int u32_destroy_key(struct tc_u_knode *n, bool free_pf)
static void __u32_destroy_key(struct tc_u_knode *n)
{
struct tc_u_hnode *ht = rtnl_dereference(n->ht_down);
tcf_exts_destroy(&n->exts);
tcf_exts_put_net(&n->exts);
if (ht && --ht->refcnt == 0)
kfree(ht);
kfree(n);
}
static void u32_destroy_key(struct tc_u_knode *n, bool free_pf)
{
tcf_exts_put_net(&n->exts);
#ifdef CONFIG_CLS_U32_PERF
if (free_pf)
free_percpu(n->pf);
@@ -402,8 +407,7 @@ static int u32_destroy_key(struct tc_u_knode *n, bool free_pf)
if (free_pf)
free_percpu(n->pcpu_success);
#endif
kfree(n);
return 0;
__u32_destroy_key(n);
}
/* u32_delete_key_rcu should be called when free'ing a copied
@@ -811,10 +815,6 @@ static struct tc_u_knode *u32_init_knode(struct net *net, struct tcf_proto *tp,
new->flags = n->flags;
RCU_INIT_POINTER(new->ht_down, ht);
/* bump reference count as long as we hold pointer to structure */
if (ht)
ht->refcnt++;
#ifdef CONFIG_CLS_U32_PERF
/* Statistics may be incremented by readers during update
* so we must keep them in tact. When the node is later destroyed
@@ -836,6 +836,10 @@ static struct tc_u_knode *u32_init_knode(struct net *net, struct tcf_proto *tp,
return NULL;
}
/* bump reference count as long as we hold pointer to structure */
if (ht)
ht->refcnt++;
return new;
}
@@ -900,13 +904,13 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
extack);
if (err) {
u32_destroy_key(new, false);
__u32_destroy_key(new);
return err;
}
err = u32_replace_hw_knode(tp, new, flags, extack);
if (err) {
u32_destroy_key(new, false);
__u32_destroy_key(new);
return err;
}

@@ -2674,8 +2674,10 @@ static int smc_shutdown(struct socket *sock, int how)
if (smc->use_fallback) {
rc = kernel_sock_shutdown(smc->clcsock, how);
sk->sk_shutdown = smc->clcsock->sk->sk_shutdown;
if (sk->sk_shutdown == SHUTDOWN_MASK)
if (sk->sk_shutdown == SHUTDOWN_MASK) {
sk->sk_state = SMC_CLOSED;
sock_put(sk);
}
goto out;
}
switch (how) {

@@ -2593,12 +2593,14 @@ static struct dst_entry *xfrm_bundle_create(struct xfrm_policy *policy,
if (xfrm[i]->props.mode != XFRM_MODE_TRANSPORT) {
__u32 mark = 0;
int oif;
if (xfrm[i]->props.smark.v || xfrm[i]->props.smark.m)
mark = xfrm_smark_get(fl->flowi_mark, xfrm[i]);
family = xfrm[i]->props.family;
dst = xfrm_dst_lookup(xfrm[i], tos, fl->flowi_oif,
oif = fl->flowi_oif ? : fl->flowi_l3mdev;
dst = xfrm_dst_lookup(xfrm[i], tos, oif,
&saddr, &daddr, family, mark);
err = PTR_ERR(dst);
if (IS_ERR(dst))

@@ -159,6 +159,17 @@ flooding_remotes_add()
local lsb
local i
# Prevent unwanted packets from entering the bridge and interfering
# with the test.
tc qdisc add dev br0 clsact
tc filter add dev br0 egress protocol all pref 1 handle 1 \
matchall skip_hw action drop
tc qdisc add dev $h1 clsact
tc filter add dev $h1 egress protocol all pref 1 handle 1 \
flower skip_hw dst_mac de:ad:be:ef:13:37 action pass
tc filter add dev $h1 egress protocol all pref 2 handle 2 \
matchall skip_hw action drop
for i in $(eval echo {1..$num_remotes}); do
lsb=$((i + 1))
@@ -195,6 +206,12 @@ flooding_filters_del()
done
tc qdisc del dev $rp2 clsact
tc filter del dev $h1 egress protocol all pref 2 handle 2 matchall
tc filter del dev $h1 egress protocol all pref 1 handle 1 flower
tc qdisc del dev $h1 clsact
tc filter del dev br0 egress protocol all pref 1 handle 1 matchall
tc qdisc del dev br0 clsact
}
flooding_check_packets()

@@ -172,6 +172,17 @@ flooding_filters_add()
local lsb
local i
# Prevent unwanted packets from entering the bridge and interfering
# with the test.
tc qdisc add dev br0 clsact
tc filter add dev br0 egress protocol all pref 1 handle 1 \
matchall skip_hw action drop
tc qdisc add dev $h1 clsact
tc filter add dev $h1 egress protocol all pref 1 handle 1 \
flower skip_hw dst_mac de:ad:be:ef:13:37 action pass
tc filter add dev $h1 egress protocol all pref 2 handle 2 \
matchall skip_hw action drop
tc qdisc add dev $rp2 clsact
for i in $(eval echo {1..$num_remotes}); do
@@ -194,6 +205,12 @@ flooding_filters_del()
done
tc qdisc del dev $rp2 clsact
tc filter del dev $h1 egress protocol all pref 2 handle 2 matchall
tc filter del dev $h1 egress protocol all pref 1 handle 1 flower
tc qdisc del dev $h1 clsact
tc filter del dev br0 egress protocol all pref 1 handle 1 matchall
tc qdisc del dev br0 clsact
}
flooding_check_packets()