Merge tag 'net-5.12-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Networking fixes for 5.12-rc8, including fixes from netfilter and
  bpf. The BPF verifier changes stand out; otherwise things have
  slowed down.

  Current release - regressions:

   - gro: ensure frag0 meets IP header alignment

   - Revert "net: stmmac: re-init rx buffers when mac resume back"

   - ethernet: macb: fix the restore of cmp registers

  Previous releases - regressions:

   - ixgbe: fix NULL pointer dereference in ethtool loopback test

   - ixgbe: fix unbalanced device enable/disable in suspend/resume

   - phy: marvell: fix detection of PHY on Topaz switches

   - make tcp_allowed_congestion_control readonly in non-init netns

   - xen-netback: check for hotplug-status existence before watching

  Previous releases - always broken:

   - bpf: mitigate a speculative oob read of up to map value size by
     tightening the masking window

   - sctp: fix race condition in sctp_destroy_sock

   - sit, ip6_tunnel: unregister catch-all devices

   - netfilter: nftables: clone set element expression template

   - netfilter: flowtable: fix NAT IPv6 offload mangling

   - net: geneve: check skb is large enough for IPv4/IPv6 header

   - netlink: don't call ->netlink_bind with table lock held"

* tag 'net-5.12-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (52 commits)
  netlink: don't call ->netlink_bind with table lock held
  MAINTAINERS: update my email
  bpf: Update selftests to reflect new error states
  bpf: Tighten speculative pointer arithmetic mask
  bpf: Move sanitize_val_alu out of op switch
  bpf: Refactor and streamline bounds check into helper
  bpf: Improve verifier error messages for users
  bpf: Rework ptr_limit into alu_limit and add common error path
  bpf: Ensure off_reg has no mixed signed bounds for all types
  bpf: Move off_reg into sanitize_ptr_alu
  bpf: Use correct permission flag for mixed signed bounds arithmetic
  ch_ktls: do not send snd_una update to TCB in middle
  ch_ktls: tcb close causes tls connection failure
  ch_ktls: fix device connection close
  ch_ktls: Fix kernel panic
  i40e: fix the panic when running bpf in xdpdrv mode
  net/mlx5e: fix ingress_ifindex check in mlx5e_flower_parse_meta
  net/mlx5e: Fix setting of RS FEC mode
  net/mlx5: Fix setting of devlink traps in switchdev mode
  Revert "net: stmmac: re-init rx buffers when mac resume back"
  ...
This commit is contained in: commit 88a5af9439
@@ -1849,21 +1849,6 @@ ip6frag_low_thresh - INTEGER
 ip6frag_time - INTEGER
 	Time in seconds to keep an IPv6 fragment in memory.

-IPv6 Segment Routing:
-
-seg6_flowlabel - INTEGER
-	Controls the behaviour of computing the flowlabel of outer
-	IPv6 header in case of SR T.encaps
-
-	== =======================================================
-	-1  set flowlabel to zero.
-	 0  copy flowlabel from Inner packet in case of Inner IPv6
-	    (Set flowlabel to 0 in case IPv4/L2)
-	 1  Compute the flowlabel using seg6_make_flowlabel()
-	== =======================================================
-
-	Default is 0.
-
 ``conf/default/*``:
 	Change the interface-specific default settings.

@@ -24,3 +24,16 @@ seg6_require_hmac - INTEGER
 	* 1 - Drop SR packets without HMAC, validate SR packets with HMAC

 	Default is 0.
+
+seg6_flowlabel - INTEGER
+	Controls the behaviour of computing the flowlabel of outer
+	IPv6 header in case of SR T.encaps
+
+	== =======================================================
+	-1  set flowlabel to zero.
+	 0  copy flowlabel from Inner packet in case of Inner IPv6
+	    (Set flowlabel to 0 in case IPv4/L2)
+	 1  Compute the flowlabel using seg6_make_flowlabel()
+	== =======================================================
+
+	Default is 0.
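For illustration, the three seg6_flowlabel policies documented above can be modelled as a plain selection function. This is a user-space sketch of the documented behaviour, not kernel code; every name in it is made up:

```c
#include <stdint.h>

/* Illustrative model of the seg6_flowlabel policy values. */
static uint32_t outer_flowlabel(int policy, int inner_is_ipv6,
                                uint32_t inner_label, uint32_t hashed_label)
{
    switch (policy) {
    case -1:    /* always zero */
        return 0;
    case 0:     /* copy from inner IPv6; zero for IPv4/L2 payloads */
        return inner_is_ipv6 ? inner_label : 0;
    case 1:     /* derive from a flow hash, as seg6_make_flowlabel() does */
    default:
        return hashed_label;
    }
}
```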
@@ -7096,7 +7096,7 @@ S:	Maintained
 F:	drivers/i2c/busses/i2c-cpm.c

 FREESCALE IMX / MXC FEC DRIVER
-M:	Fugang Duan <fugang.duan@nxp.com>
+M:	Joakim Zhang <qiangqing.zhang@nxp.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/net/fsl-fec.txt
@@ -8524,9 +8524,9 @@ F:	drivers/pci/hotplug/rpaphp*

 IBM Power SRIOV Virtual NIC Device Driver
 M:	Dany Madden <drt@linux.ibm.com>
-M:	Lijun Pan <ljp@linux.ibm.com>
 M:	Sukadev Bhattiprolu <sukadev@linux.ibm.com>
 R:	Thomas Falcon <tlfalcon@linux.ibm.com>
+R:	Lijun Pan <lijunp213@gmail.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/ethernet/ibm/ibmvnic.*
@@ -3026,10 +3026,17 @@ out_resources:
 	return err;
 }

+/* prod_id for switch families which do not have a PHY model number */
+static const u16 family_prod_id_table[] = {
+	[MV88E6XXX_FAMILY_6341] = MV88E6XXX_PORT_SWITCH_ID_PROD_6341,
+	[MV88E6XXX_FAMILY_6390] = MV88E6XXX_PORT_SWITCH_ID_PROD_6390,
+};
+
 static int mv88e6xxx_mdio_read(struct mii_bus *bus, int phy, int reg)
 {
 	struct mv88e6xxx_mdio_bus *mdio_bus = bus->priv;
 	struct mv88e6xxx_chip *chip = mdio_bus->chip;
+	u16 prod_id;
 	u16 val;
 	int err;

@@ -3040,23 +3047,12 @@ static int mv88e6xxx_mdio_read(struct mii_bus *bus, int phy, int reg)
 	err = chip->info->ops->phy_read(chip, bus, phy, reg, &val);
 	mv88e6xxx_reg_unlock(chip);

-	if (reg == MII_PHYSID2) {
-		/* Some internal PHYs don't have a model number. */
-		if (chip->info->family != MV88E6XXX_FAMILY_6165)
-			/* Then there is the 6165 family. It gets is
-			 * PHYs correct. But it can also have two
-			 * SERDES interfaces in the PHY address
-			 * space. And these don't have a model
-			 * number. But they are not PHYs, so we don't
-			 * want to give them something a PHY driver
-			 * will recognise.
-			 *
-			 * Use the mv88e6390 family model number
-			 * instead, for anything which really could be
-			 * a PHY,
-			 */
-			if (!(val & 0x3f0))
-				val |= MV88E6XXX_PORT_SWITCH_ID_PROD_6390 >> 4;
+	/* Some internal PHYs don't have a model number. */
+	if (reg == MII_PHYSID2 && !(val & 0x3f0) &&
+	    chip->info->family < ARRAY_SIZE(family_prod_id_table)) {
+		prod_id = family_prod_id_table[chip->info->family];
+		if (prod_id)
+			val |= prod_id >> 4;
 	}

 	return err ? err : val;
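The Topaz fix above replaces a family-specific branch with a sparse, bounds-checked lookup table built from designated initializers: out-of-range families and gaps in the table both yield 0, so one check covers "unknown family" and "no override". A self-contained sketch of that pattern, with hypothetical IDs and values rather than the Marvell constants:

```c
#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Hypothetical family IDs standing in for the MV88E6XXX_* constants. */
enum { FAMILY_A = 1, FAMILY_B = 7 };

static const unsigned short prod_id_table[] = {
    [FAMILY_A] = 0x0410,
    [FAMILY_B] = 0x0f90,
};

static unsigned short lookup_prod_id(unsigned int family)
{
    /* Families past the end of the table and unlisted families both
     * return 0, which the caller treats as "no substitution". */
    if (family >= ARRAY_SIZE(prod_id_table))
        return 0;
    return prod_id_table[family];
}

int main(void)
{
    printf("%#x %#x\n", lookup_prod_id(FAMILY_B), lookup_prod_id(42));
    return 0;
}
```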
@@ -3918,6 +3918,7 @@ static int macb_init(struct platform_device *pdev)
 		reg = gem_readl(bp, DCFG8);
 		bp->max_tuples = min((GEM_BFEXT(SCR2CMP, reg) / 3),
 				     GEM_BFEXT(T2SCR, reg));
+		INIT_LIST_HEAD(&bp->rx_fs_list.list);
 		if (bp->max_tuples > 0) {
 			/* also needs one ethtype match to check IPv4 */
 			if (GEM_BFEXT(SCR2ETH, reg) > 0) {

@@ -3928,7 +3929,6 @@ static int macb_init(struct platform_device *pdev)
 				/* Filtering is supported in hw but don't enable it in kernel now */
 				dev->hw_features |= NETIF_F_NTUPLE;
 				/* init Rx flow definitions */
-				INIT_LIST_HEAD(&bp->rx_fs_list.list);
 				bp->rx_fs_list.count = 0;
 				spin_lock_init(&bp->rx_fs_lock);
 			} else
@@ -412,7 +412,7 @@
 	 | CN6XXX_INTR_M0UNWI_ERR	  \
 	 | CN6XXX_INTR_M1UPB0_ERR	  \
 	 | CN6XXX_INTR_M1UPWI_ERR	  \
-	 | CN6XXX_INTR_M1UPB0_ERR	  \
+	 | CN6XXX_INTR_M1UNB0_ERR	  \
 	 | CN6XXX_INTR_M1UNWI_ERR	  \
 	 | CN6XXX_INTR_INSTR_DB_OF_ERR	  \
 	 | CN6XXX_INTR_SLIST_DB_OF_ERR	  \
@@ -349,18 +349,6 @@ static int chcr_set_tcb_field(struct chcr_ktls_info *tx_info, u16 word,
 	return cxgb4_ofld_send(tx_info->netdev, skb);
 }

-/*
- * chcr_ktls_mark_tcb_close: mark tcb state to CLOSE
- * @tx_info - driver specific tls info.
- * return: NET_TX_OK/NET_XMIT_DROP.
- */
-static int chcr_ktls_mark_tcb_close(struct chcr_ktls_info *tx_info)
-{
-	return chcr_set_tcb_field(tx_info, TCB_T_STATE_W,
-				  TCB_T_STATE_V(TCB_T_STATE_M),
-				  CHCR_TCB_STATE_CLOSED, 1);
-}
-
 /*
  * chcr_ktls_dev_del:  call back for tls_dev_del.
  * Remove the tid and l2t entry and close the connection.

@@ -395,8 +383,6 @@ static void chcr_ktls_dev_del(struct net_device *netdev,

 	/* clear tid */
 	if (tx_info->tid != -1) {
-		/* clear tcb state and then release tid */
-		chcr_ktls_mark_tcb_close(tx_info);
 		cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
 				 tx_info->tid, tx_info->ip_family);
 	}

@@ -574,7 +560,6 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
 	return 0;

 free_tid:
-	chcr_ktls_mark_tcb_close(tx_info);
 #if IS_ENABLED(CONFIG_IPV6)
 	/* clear clip entry */
 	if (tx_info->ip_family == AF_INET6)

@@ -672,10 +657,6 @@ static int chcr_ktls_cpl_act_open_rpl(struct adapter *adap,
 	if (tx_info->pending_close) {
 		spin_unlock(&tx_info->lock);
 		if (!status) {
-			/* it's a late success, tcb status is established,
-			 * mark it close.
-			 */
-			chcr_ktls_mark_tcb_close(tx_info);
 			cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
 					 tid, tx_info->ip_family);
 		}

@@ -1663,54 +1644,6 @@ static void chcr_ktls_copy_record_in_skb(struct sk_buff *nskb,
 	refcount_add(nskb->truesize, &nskb->sk->sk_wmem_alloc);
 }

-/*
- * chcr_ktls_update_snd_una: Reset the SEND_UNA. It will be done to avoid
- * sending the same segment again. It will discard the segment which is before
- * the current tx max.
- * @tx_info - driver specific tls info.
- * @q - TX queue.
- * return: NET_TX_OK/NET_XMIT_DROP.
- */
-static int chcr_ktls_update_snd_una(struct chcr_ktls_info *tx_info,
-				    struct sge_eth_txq *q)
-{
-	struct fw_ulptx_wr *wr;
-	unsigned int ndesc;
-	int credits;
-	void *pos;
-	u32 len;
-
-	len = sizeof(*wr) + roundup(CHCR_SET_TCB_FIELD_LEN, 16);
-	ndesc = DIV_ROUND_UP(len, 64);
-
-	credits = chcr_txq_avail(&q->q) - ndesc;
-	if (unlikely(credits < 0)) {
-		chcr_eth_txq_stop(q);
-		return NETDEV_TX_BUSY;
-	}
-
-	pos = &q->q.desc[q->q.pidx];
-
-	wr = pos;
-	/* ULPTX wr */
-	wr->op_to_compl = htonl(FW_WR_OP_V(FW_ULPTX_WR));
-	wr->cookie = 0;
-	/* fill len in wr field */
-	wr->flowid_len16 = htonl(FW_WR_LEN16_V(DIV_ROUND_UP(len, 16)));
-
-	pos += sizeof(*wr);
-
-	pos = chcr_write_cpl_set_tcb_ulp(tx_info, q, tx_info->tid, pos,
-					 TCB_SND_UNA_RAW_W,
-					 TCB_SND_UNA_RAW_V(TCB_SND_UNA_RAW_M),
-					 TCB_SND_UNA_RAW_V(0), 0);
-
-	chcr_txq_advance(&q->q, ndesc);
-	cxgb4_ring_tx_db(tx_info->adap, &q->q, ndesc);
-
-	return 0;
-}
-
 /*
  * chcr_end_part_handler: This handler will handle the record which
  * is complete or if record's end part is received. T6 adapter has a issue that

@@ -1735,7 +1668,9 @@ static int chcr_end_part_handler(struct chcr_ktls_info *tx_info,
 				 struct sge_eth_txq *q, u32 skb_offset,
 				 u32 tls_end_offset, bool last_wr)
 {
+	bool free_skb_if_tx_fails = false;
 	struct sk_buff *nskb = NULL;
+
 	/* check if it is a complete record */
 	if (tls_end_offset == record->len) {
 		nskb = skb;

@@ -1758,6 +1693,8 @@ static int chcr_end_part_handler(struct chcr_ktls_info *tx_info,

 		if (last_wr)
 			dev_kfree_skb_any(skb);
+		else
+			free_skb_if_tx_fails = true;

 		last_wr = true;

@@ -1769,6 +1706,8 @@ static int chcr_end_part_handler(struct chcr_ktls_info *tx_info,
 				       record->num_frags,
 				       (last_wr && tcp_push_no_fin),
 				       mss)) {
+		if (free_skb_if_tx_fails)
+			dev_kfree_skb_any(skb);
 		goto out;
 	}
 	tx_info->prev_seq = record->end_seq;

@@ -1905,11 +1844,6 @@ static int chcr_short_record_handler(struct chcr_ktls_info *tx_info,
 			/* reset tcp_seq as per the prior_data_required len */
 			tcp_seq -= prior_data_len;
 		}
-		/* reset snd una, so the middle record won't send the already
-		 * sent part.
-		 */
-		if (chcr_ktls_update_snd_una(tx_info, q))
-			goto out;
 		atomic64_inc(&tx_info->adap->ch_ktls_stats.ktls_tx_middle_pkts);
 	} else {
 		atomic64_inc(&tx_info->adap->ch_ktls_stats.ktls_tx_start_pkts);

@@ -2010,12 +1944,11 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * we will send the complete record again.
 	 */

-	spin_lock_irqsave(&tx_ctx->base.lock, flags);
-
 	do {
-		int i;

 		cxgb4_reclaim_completed_tx(adap, &q->q, true);
+		/* lock taken */
+		spin_lock_irqsave(&tx_ctx->base.lock, flags);
 		/* fetch the tls record */
 		record = tls_get_record(&tx_ctx->base, tcp_seq,
 					&tx_info->record_no);

@@ -2074,11 +2007,11 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
 					    tls_end_offset, skb_offset,
 					    0);

+		spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
 		if (ret) {
 			/* free the refcount taken earlier */
 			if (tls_end_offset < data_len)
 				dev_kfree_skb_any(skb);
-			spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
 			goto out;
 		}

@@ -2088,16 +2021,6 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
 			continue;
 		}

-		/* increase page reference count of the record, so that there
-		 * won't be any chance of page free in middle if in case stack
-		 * receives ACK and try to delete the record.
-		 */
-		for (i = 0; i < record->num_frags; i++)
-			__skb_frag_ref(&record->frags[i]);
-		/* lock cleared */
-		spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
-
-
 		/* if a tls record is finishing in this SKB */
 		if (tls_end_offset <= data_len) {
 			ret = chcr_end_part_handler(tx_info, skb, record,

@@ -2122,13 +2045,9 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
 			data_len = 0;
 		}

-		/* clear the frag ref count which increased locally before */
-		for (i = 0; i < record->num_frags; i++) {
-			/* clear the frag ref count */
-			__skb_frag_unref(&record->frags[i]);
-		}
 		/* if any failure, come out from the loop. */
 		if (ret) {
+			spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
 			if (th->fin)
 				dev_kfree_skb_any(skb);

@@ -2143,6 +2062,7 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)

 	} while (data_len > 0);

+	spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
 	atomic64_inc(&port_stats->ktls_tx_encrypted_packets);
 	atomic64_add(skb_data_len, &port_stats->ktls_tx_encrypted_bytes);

@@ -1471,8 +1471,10 @@ dm9000_probe(struct platform_device *pdev)

 	/* Init network device */
 	ndev = alloc_etherdev(sizeof(struct board_info));
-	if (!ndev)
-		return -ENOMEM;
+	if (!ndev) {
+		ret = -ENOMEM;
+		goto out_regulator_disable;
+	}

 	SET_NETDEV_DEV(ndev, &pdev->dev);

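The dm9000 change is the classic goto-unwind fix: once an earlier resource (here, a regulator) has been acquired, a later failure must jump to the cleanup ladder instead of returning directly. A minimal compilable sketch of the idiom, with stub functions standing in for the driver's resources (all names here are illustrative):

```c
#include <errno.h>
#include <stddef.h>
#include <stdio.h>

/* Stubs standing in for the driver's resources. */
static int regulator_on(void)   { return 0; }
static void regulator_off(void) { }
static void *alloc_dev(void)    { return NULL; /* simulate failure */ }

static int probe_sketch(void)
{
    void *dev;
    int ret;

    ret = regulator_on();
    if (ret)
        return ret;         /* nothing acquired yet, plain return is fine */

    dev = alloc_dev();
    if (!dev) {
        ret = -ENOMEM;
        goto out_regulator_disable;  /* must undo the regulator */
    }
    return 0;

out_regulator_disable:
    regulator_off();
    return ret;
}

int main(void)
{
    printf("probe_sketch() = %d\n", probe_sketch());
    return 0;
}
```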
@@ -1149,19 +1149,13 @@ static int __ibmvnic_open(struct net_device *netdev)

 	rc = set_link_state(adapter, IBMVNIC_LOGICAL_LNK_UP);
 	if (rc) {
-		for (i = 0; i < adapter->req_rx_queues; i++)
-			napi_disable(&adapter->napi[i]);
+		ibmvnic_napi_disable(adapter);
 		release_resources(adapter);
 		return rc;
 	}

 	netif_tx_start_all_queues(netdev);

-	if (prev_state == VNIC_CLOSED) {
-		for (i = 0; i < adapter->req_rx_queues; i++)
-			napi_schedule(&adapter->napi[i]);
-	}
-
 	adapter->state = VNIC_OPEN;
 	return rc;
 }

@@ -1922,7 +1916,7 @@ static int do_reset(struct ibmvnic_adapter *adapter,
 	u64 old_num_rx_queues, old_num_tx_queues;
 	u64 old_num_rx_slots, old_num_tx_slots;
 	struct net_device *netdev = adapter->netdev;
-	int i, rc;
+	int rc;

 	netdev_dbg(adapter->netdev,
 		   "[S:%d FOP:%d] Reset reason %d, reset_state %d\n",

@@ -2111,10 +2105,6 @@ static int do_reset(struct ibmvnic_adapter *adapter,
 	/* refresh device's multicast list */
 	ibmvnic_set_multi(netdev);

-	/* kick napi */
-	for (i = 0; i < adapter->req_rx_queues; i++)
-		napi_schedule(&adapter->napi[i]);
-
 	if (adapter->reset_reason == VNIC_RESET_FAILOVER ||
 	    adapter->reset_reason == VNIC_RESET_MOBILITY)
 		__netdev_notify_peers(netdev);

@@ -3204,9 +3194,6 @@ restart_loop:

 		next = ibmvnic_next_scrq(adapter, scrq);
 		for (i = 0; i < next->tx_comp.num_comps; i++) {
-			if (next->tx_comp.rcs[i])
-				dev_err(dev, "tx error %x\n",
-					next->tx_comp.rcs[i]);
 			index = be32_to_cpu(next->tx_comp.correlators[i]);
 			if (index & IBMVNIC_TSO_POOL_MASK) {
 				tx_pool = &adapter->tso_pool[pool];

@@ -3220,7 +3207,13 @@ restart_loop:
 			num_entries += txbuff->num_entries;
 			if (txbuff->skb) {
 				total_bytes += txbuff->skb->len;
-				dev_consume_skb_irq(txbuff->skb);
+				if (next->tx_comp.rcs[i]) {
+					dev_err(dev, "tx error %x\n",
+						next->tx_comp.rcs[i]);
+					dev_kfree_skb_irq(txbuff->skb);
+				} else {
+					dev_consume_skb_irq(txbuff->skb);
+				}
 				txbuff->skb = NULL;
 			} else {
 				netdev_warn(adapter->netdev,

@@ -12357,6 +12357,7 @@ static int i40e_sw_init(struct i40e_pf *pf)
 {
 	int err = 0;
 	int size;
+	u16 pow;

 	/* Set default capability flags */
 	pf->flags = I40E_FLAG_RX_CSUM_ENABLED |

@@ -12375,6 +12376,11 @@ static int i40e_sw_init(struct i40e_pf *pf)
 	pf->rss_table_size = pf->hw.func_caps.rss_table_size;
 	pf->rss_size_max = min_t(int, pf->rss_size_max,
 				 pf->hw.func_caps.num_tx_qp);
+
+	/* find the next higher power-of-2 of num cpus */
+	pow = roundup_pow_of_two(num_online_cpus());
+	pf->rss_size_max = min_t(int, pf->rss_size_max, pow);
+
 	if (pf->hw.func_caps.rss) {
 		pf->flags |= I40E_FLAG_RSS_ENABLED;
 		pf->alloc_rss_size = min_t(int, pf->rss_size_max,

@@ -747,8 +747,8 @@ ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
 		   struct ice_port_info *pi)
 {
 	u32 status, tlv_status = le32_to_cpu(cee_cfg->tlv_status);
-	u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift;
-	u8 i, j, err, sync, oper, app_index, ice_app_sel_type;
+	u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift, j;
+	u8 i, err, sync, oper, app_index, ice_app_sel_type;
 	u16 app_prio = le16_to_cpu(cee_cfg->oper_app_prio);
 	u16 ice_aqc_cee_app_mask, ice_aqc_cee_app_shift;
 	struct ice_dcbx_cfg *cmp_dcbcfg, *dcbcfg;

@@ -6536,6 +6536,13 @@ err_setup_tx:
 	return err;
 }

+static int ixgbe_rx_napi_id(struct ixgbe_ring *rx_ring)
+{
+	struct ixgbe_q_vector *q_vector = rx_ring->q_vector;
+
+	return q_vector ? q_vector->napi.napi_id : 0;
+}
+
 /**
  * ixgbe_setup_rx_resources - allocate Rx resources (Descriptors)
  * @adapter: pointer to ixgbe_adapter

@@ -6583,7 +6590,7 @@ int ixgbe_setup_rx_resources(struct ixgbe_adapter *adapter,

 	/* XDP RX-queue info */
 	if (xdp_rxq_info_reg(&rx_ring->xdp_rxq, adapter->netdev,
-			     rx_ring->queue_index, rx_ring->q_vector->napi.napi_id) < 0)
+			     rx_ring->queue_index, ixgbe_rx_napi_id(rx_ring)) < 0)
 		goto err;

 	rx_ring->xdp_prog = adapter->xdp_prog;

@@ -6892,6 +6899,11 @@ static int __maybe_unused ixgbe_resume(struct device *dev_d)

 	adapter->hw.hw_addr = adapter->io_addr;

+	err = pci_enable_device_mem(pdev);
+	if (err) {
+		e_dev_err("Cannot enable PCI device from suspend\n");
+		return err;
+	}
 	smp_mb__before_atomic();
 	clear_bit(__IXGBE_DISABLED, &adapter->state);
 	pci_set_master(pdev);

@@ -246,6 +246,11 @@ static int mlx5_devlink_trap_action_set(struct devlink *devlink,
 	struct mlx5_devlink_trap *dl_trap;
 	int err = 0;

+	if (is_mdev_switchdev_mode(dev)) {
+		NL_SET_ERR_MSG_MOD(extack, "Devlink traps can't be set in switchdev mode");
+		return -EOPNOTSUPP;
+	}
+
 	dl_trap = mlx5_find_trap_by_id(dev, trap->id);
 	if (!dl_trap) {
 		mlx5_core_err(dev, "Devlink trap: Set action on invalid trap id 0x%x", trap->id);

@@ -387,21 +387,6 @@ enum mlx5e_fec_supported_link_mode {
 		*_policy = MLX5_GET(pplm_reg, _buf, fec_override_admin_##link); \
 } while (0)

-#define MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(buf, policy, write, link)		\
-	do {									\
-		unsigned long policy_long;					\
-		u16 *__policy = &(policy);					\
-		bool _write = (write);						\
-										\
-		policy_long = *__policy;					\
-		if (_write && *__policy)					\
-			*__policy = find_first_bit(&policy_long,		\
-						   sizeof(policy_long) * BITS_PER_BYTE);\
-		MLX5E_FEC_OVERRIDE_ADMIN_POLICY(buf, *__policy, _write, link);	\
-		if (!_write && *__policy)					\
-			*__policy = 1 << *__policy;				\
-	} while (0)
-
 /* get/set FEC admin field for a given speed */
 static int mlx5e_fec_admin_field(u32 *pplm, u16 *fec_policy, bool write,
 				 enum mlx5e_fec_supported_link_mode link_mode)

@@ -423,16 +408,16 @@ static int mlx5e_fec_admin_field(u32 *pplm, u16 *fec_policy, bool write,
 		MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 100g);
 		break;
 	case MLX5E_FEC_SUPPORTED_LINK_MODE_50G_1X:
-		MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 50g_1x);
+		MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 50g_1x);
 		break;
 	case MLX5E_FEC_SUPPORTED_LINK_MODE_100G_2X:
-		MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 100g_2x);
+		MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 100g_2x);
 		break;
 	case MLX5E_FEC_SUPPORTED_LINK_MODE_200G_4X:
-		MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 200g_4x);
+		MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 200g_4x);
 		break;
 	case MLX5E_FEC_SUPPORTED_LINK_MODE_400G_8X:
-		MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 400g_8x);
+		MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 400g_8x);
 		break;
 	default:
 		return -EINVAL;

@@ -1895,6 +1895,9 @@ static int mlx5e_flower_parse_meta(struct net_device *filter_dev,
 		return 0;

 	flow_rule_match_meta(rule, &match);
+	if (!match.mask->ingress_ifindex)
+		return 0;
+
 	if (match.mask->ingress_ifindex != 0xFFFFFFFF) {
 		NL_SET_ERR_MSG_MOD(extack, "Unsupported ingress ifindex mask");
 		return -EOPNOTSUPP;

@@ -2350,6 +2350,13 @@ static void rtl_jumbo_config(struct rtl8169_private *tp)

 	if (pci_is_pcie(tp->pci_dev) && tp->supports_gmii)
 		pcie_set_readrq(tp->pci_dev, readrq);
+
+	/* Chip doesn't support pause in jumbo mode */
+	linkmode_mod_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+			 tp->phydev->advertising, !jumbo);
+	linkmode_mod_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+			 tp->phydev->advertising, !jumbo);
+	phy_start_aneg(tp->phydev);
 }

 DECLARE_RTL_COND(rtl_chipcmd_cond)

@@ -4630,8 +4637,6 @@ static int r8169_phy_connect(struct rtl8169_private *tp)
 	if (!tp->supports_gmii)
 		phy_set_max_speed(phydev, SPEED_100);

-	phy_support_asym_pause(phydev);
-
 	phy_attached_info(phydev);

 	return 0;

@@ -1379,88 +1379,6 @@ static void stmmac_free_tx_buffer(struct stmmac_priv *priv, u32 queue, int i)
 	}
 }

-/**
- * stmmac_reinit_rx_buffers - reinit the RX descriptor buffer.
- * @priv: driver private structure
- * Description: this function is called to re-allocate a receive buffer, perform
- * the DMA mapping and init the descriptor.
- */
-static void stmmac_reinit_rx_buffers(struct stmmac_priv *priv)
-{
-	u32 rx_count = priv->plat->rx_queues_to_use;
-	u32 queue;
-	int i;
-
-	for (queue = 0; queue < rx_count; queue++) {
-		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
-
-		for (i = 0; i < priv->dma_rx_size; i++) {
-			struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
-
-			if (buf->page) {
-				page_pool_recycle_direct(rx_q->page_pool, buf->page);
-				buf->page = NULL;
-			}
-
-			if (priv->sph && buf->sec_page) {
-				page_pool_recycle_direct(rx_q->page_pool, buf->sec_page);
-				buf->sec_page = NULL;
-			}
-		}
-	}
-
-	for (queue = 0; queue < rx_count; queue++) {
-		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
-
-		for (i = 0; i < priv->dma_rx_size; i++) {
-			struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
-			struct dma_desc *p;
-
-			if (priv->extend_desc)
-				p = &((rx_q->dma_erx + i)->basic);
-			else
-				p = rx_q->dma_rx + i;
-
-			if (!buf->page) {
-				buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
-				if (!buf->page)
-					goto err_reinit_rx_buffers;
-
-				buf->addr = page_pool_get_dma_addr(buf->page);
-			}
-
-			if (priv->sph && !buf->sec_page) {
-				buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool);
-				if (!buf->sec_page)
-					goto err_reinit_rx_buffers;
-
-				buf->sec_addr = page_pool_get_dma_addr(buf->sec_page);
-			}
-
-			stmmac_set_desc_addr(priv, p, buf->addr);
-			if (priv->sph)
-				stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true);
-			else
-				stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false);
-			if (priv->dma_buf_sz == BUF_SIZE_16KiB)
-				stmmac_init_desc3(priv, p);
-		}
-	}
-
-	return;
-
-err_reinit_rx_buffers:
-	do {
-		while (--i >= 0)
-			stmmac_free_rx_buffer(priv, queue, i);
-
-		if (queue == 0)
-			break;
-
-		i = priv->dma_rx_size;
-	} while (queue-- > 0);
-}
-
 /**
  * init_dma_rx_desc_rings - init the RX descriptor rings
  * @dev: net device structure

@@ -5428,7 +5346,7 @@ int stmmac_resume(struct device *dev)
 	mutex_lock(&priv->lock);

 	stmmac_reset_queues_param(priv);
-	stmmac_reinit_rx_buffers(priv);
+
 	stmmac_free_tx_skbufs(priv);
 	stmmac_clear_descriptors(priv);

@@ -891,6 +891,9 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
 	__be16 sport;
 	int err;

+	if (!pskb_network_may_pull(skb, sizeof(struct iphdr)))
+		return -EINVAL;
+
 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
 	rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
 			      geneve->cfg.info.key.tp_dst, sport);

@@ -985,6 +988,9 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
 	__be16 sport;
 	int err;

+	if (!pskb_network_may_pull(skb, sizeof(struct ipv6hdr)))
+		return -EINVAL;
+
 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
 	dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
 				geneve->cfg.info.key.tp_dst, sport);

@@ -3021,9 +3021,34 @@ static struct phy_driver marvell_drivers[] = {
 		.get_stats = marvell_get_stats,
 	},
 	{
-		.phy_id = MARVELL_PHY_ID_88E6390,
+		.phy_id = MARVELL_PHY_ID_88E6341_FAMILY,
 		.phy_id_mask = MARVELL_PHY_ID_MASK,
-		.name = "Marvell 88E6390",
+		.name = "Marvell 88E6341 Family",
 		/* PHY_GBIT_FEATURES */
 		.flags = PHY_POLL_CABLE_TEST,
+		.probe = m88e1510_probe,
+		.config_init = marvell_config_init,
+		.config_aneg = m88e6390_config_aneg,
+		.read_status = marvell_read_status,
+		.config_intr = marvell_config_intr,
+		.handle_interrupt = marvell_handle_interrupt,
+		.resume = genphy_resume,
+		.suspend = genphy_suspend,
+		.read_page = marvell_read_page,
+		.write_page = marvell_write_page,
+		.get_sset_count = marvell_get_sset_count,
+		.get_strings = marvell_get_strings,
+		.get_stats = marvell_get_stats,
+		.get_tunable = m88e1540_get_tunable,
+		.set_tunable = m88e1540_set_tunable,
+		.cable_test_start = marvell_vct7_cable_test_start,
+		.cable_test_tdr_start = marvell_vct5_cable_test_tdr_start,
+		.cable_test_get_status = marvell_vct7_cable_test_get_status,
+	},
+	{
+		.phy_id = MARVELL_PHY_ID_88E6390_FAMILY,
+		.phy_id_mask = MARVELL_PHY_ID_MASK,
+		.name = "Marvell 88E6390 Family",
+		/* PHY_GBIT_FEATURES */
+		.flags = PHY_POLL_CABLE_TEST,
 		.probe = m88e6390_probe,

@@ -3107,7 +3132,8 @@ static struct mdio_device_id __maybe_unused marvell_tbl[] = {
 	{ MARVELL_PHY_ID_88E1540, MARVELL_PHY_ID_MASK },
 	{ MARVELL_PHY_ID_88E1545, MARVELL_PHY_ID_MASK },
 	{ MARVELL_PHY_ID_88E3016, MARVELL_PHY_ID_MASK },
-	{ MARVELL_PHY_ID_88E6390, MARVELL_PHY_ID_MASK },
+	{ MARVELL_PHY_ID_88E6341_FAMILY, MARVELL_PHY_ID_MASK },
+	{ MARVELL_PHY_ID_88E6390_FAMILY, MARVELL_PHY_ID_MASK },
 	{ MARVELL_PHY_ID_88E1340S, MARVELL_PHY_ID_MASK },
 	{ MARVELL_PHY_ID_88E1548P, MARVELL_PHY_ID_MASK },
 	{ }

@@ -471,9 +471,8 @@ static netdev_tx_t vrf_process_v6_outbound(struct sk_buff *skb,

 	skb_dst_drop(skb);

-	/* if dst.dev is loopback or the VRF device again this is locally
-	 * originated traffic destined to a local address. Short circuit
-	 * to Rx path
+	/* if dst.dev is the VRF device again this is locally originated traffic
+	 * destined to a local address. Short circuit to Rx path.
 	 */
 	if (dst->dev == dev)
 		return vrf_local_xmit(skb, dev, dst);

@@ -547,9 +546,8 @@ static netdev_tx_t vrf_process_v4_outbound(struct sk_buff *skb,

 	skb_dst_drop(skb);

-	/* if dst.dev is loopback or the VRF device again this is locally
-	 * originated traffic destined to a local address. Short circuit
-	 * to Rx path
+	/* if dst.dev is the VRF device again this is locally originated traffic
+	 * destined to a local address. Short circuit to Rx path.
 	 */
 	if (rt->dst.dev == vrf_dev)
 		return vrf_local_xmit(skb, vrf_dev, &rt->dst);

@@ -824,11 +824,15 @@ static void connect(struct backend_info *be)
 	xenvif_carrier_on(be->vif);

 	unregister_hotplug_status_watch(be);
-	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch, NULL,
-				   hotplug_status_changed,
-				   "%s/%s", dev->nodename, "hotplug-status");
-	if (!err)
+	if (xenbus_exists(XBT_NIL, dev->nodename, "hotplug-status")) {
+		err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
+					   NULL, hotplug_status_changed,
+					   "%s/%s", dev->nodename,
+					   "hotplug-status");
+		if (err)
+			goto err;
 		be->have_hotplug_status_watch = 1;
+	}

 	netif_tx_wake_all_queues(be->vif->dev);

@@ -28,11 +28,12 @@
 /* Marvel 88E1111 in Finisar SFP module with modified PHY ID */
 #define MARVELL_PHY_ID_88E1111_FINISAR	0x01ff0cc0

-/* The MV88e6390 Ethernet switch contains embedded PHYs. These PHYs do
+/* These Ethernet switch families contain embedded PHYs, but they do
  * not have a model ID. So the switch driver traps reads to the ID2
  * register and returns the switch family ID
  */
-#define MARVELL_PHY_ID_88E6390		0x01410f90
+#define MARVELL_PHY_ID_88E6341_FAMILY	0x01410f41
+#define MARVELL_PHY_ID_88E6390_FAMILY	0x01410f90

 #define MARVELL_PHY_FAMILY_ID(id)	((id) >> 4)

@@ -52,8 +52,9 @@ extern void *arpt_alloc_initial_table(const struct xt_table *);
 int arpt_register_table(struct net *net, const struct xt_table *table,
 			const struct arpt_replace *repl,
 			const struct nf_hook_ops *ops, struct xt_table **res);
-void arpt_unregister_table(struct net *net, struct xt_table *table,
-			   const struct nf_hook_ops *ops);
+void arpt_unregister_table(struct net *net, struct xt_table *table);
+void arpt_unregister_table_pre_exit(struct net *net, struct xt_table *table,
+				    const struct nf_hook_ops *ops);
 extern unsigned int arpt_do_table(struct sk_buff *skb,
 				  const struct nf_hook_state *state,
 				  struct xt_table *table);

@@ -110,8 +110,9 @@ extern int ebt_register_table(struct net *net,
 			      const struct ebt_table *table,
 			      const struct nf_hook_ops *ops,
 			      struct ebt_table **res);
-extern void ebt_unregister_table(struct net *net, struct ebt_table *table,
-				 const struct nf_hook_ops *);
+extern void ebt_unregister_table(struct net *net, struct ebt_table *table);
+void ebt_unregister_table_pre_exit(struct net *net, const char *tablename,
+				   const struct nf_hook_ops *ops);
 extern unsigned int ebt_do_table(struct sk_buff *skb,
 				 const struct nf_hook_state *state,
 				 struct ebt_table *table);

@@ -5856,40 +5856,51 @@ static struct bpf_insn_aux_data *cur_aux(struct bpf_verifier_env *env)
 	return &env->insn_aux_data[env->insn_idx];
 }

+enum {
+	REASON_BOUNDS	= -1,
+	REASON_TYPE	= -2,
+	REASON_PATHS	= -3,
+	REASON_LIMIT	= -4,
+	REASON_STACK	= -5,
+};
+
 static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
-			      u32 *ptr_limit, u8 opcode, bool off_is_neg)
+			      const struct bpf_reg_state *off_reg,
+			      u32 *alu_limit, u8 opcode)
 {
+	bool off_is_neg = off_reg->smin_value < 0;
 	bool mask_to_left = (opcode == BPF_ADD &&  off_is_neg) ||
 			    (opcode == BPF_SUB && !off_is_neg);
-	u32 off, max;
+	u32 max = 0, ptr_limit = 0;
+
+	if (!tnum_is_const(off_reg->var_off) &&
+	    (off_reg->smin_value < 0) != (off_reg->smax_value < 0))
+		return REASON_BOUNDS;

 	switch (ptr_reg->type) {
 	case PTR_TO_STACK:
 		/* Offset 0 is out-of-bounds, but acceptable start for the
-		 * left direction, see BPF_REG_FP.
+		 * left direction, see BPF_REG_FP. Also, unknown scalar
+		 * offset where we would need to deal with min/max bounds is
+		 * currently prohibited for unprivileged.
 		 */
 		max = MAX_BPF_STACK + mask_to_left;
-		/* Indirect variable offset stack access is prohibited in
-		 * unprivileged mode so it's not handled here.
-		 */
-		off = ptr_reg->off + ptr_reg->var_off.value;
-		if (mask_to_left)
-			*ptr_limit = MAX_BPF_STACK + off;
-		else
-			*ptr_limit = -off - 1;
-		return *ptr_limit >= max ? -ERANGE : 0;
+		ptr_limit = -(ptr_reg->var_off.value + ptr_reg->off);
+		break;
 	case PTR_TO_MAP_VALUE:
 		max = ptr_reg->map_ptr->value_size;
-		if (mask_to_left) {
-			*ptr_limit = ptr_reg->umax_value + ptr_reg->off;
-		} else {
-			off = ptr_reg->smin_value + ptr_reg->off;
-			*ptr_limit = ptr_reg->map_ptr->value_size - off - 1;
-		}
-		return *ptr_limit >= max ? -ERANGE : 0;
+		ptr_limit = (mask_to_left ?
+			     ptr_reg->smin_value :
+			     ptr_reg->umax_value) + ptr_reg->off;
+		break;
 	default:
-		return -EINVAL;
+		return REASON_TYPE;
 	}
+
+	if (ptr_limit >= max)
+		return REASON_LIMIT;
+	*alu_limit = ptr_limit;
+	return 0;
 }

 static bool can_skip_alu_sanitation(const struct bpf_verifier_env *env,

@@ -5907,7 +5918,7 @@ static int update_alu_sanitation_state(struct bpf_insn_aux_data *aux,
 	if (aux->alu_state &&
 	    (aux->alu_state != alu_state ||
 	     aux->alu_limit != alu_limit))
-		return -EACCES;
+		return REASON_PATHS;

 	/* Corresponding fixup done in fixup_bpf_calls(). */
 	aux->alu_state = alu_state;

@@ -5926,14 +5937,22 @@ static int sanitize_val_alu(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	return update_alu_sanitation_state(aux, BPF_ALU_NON_POINTER, 0);
 }

+static bool sanitize_needed(u8 opcode)
+{
+	return opcode == BPF_ADD || opcode == BPF_SUB;
+}
+
 static int sanitize_ptr_alu(struct bpf_verifier_env *env,
 			    struct bpf_insn *insn,
 			    const struct bpf_reg_state *ptr_reg,
+			    const struct bpf_reg_state *off_reg,
 			    struct bpf_reg_state *dst_reg,
-			    bool off_is_neg)
+			    struct bpf_insn_aux_data *tmp_aux,
+			    const bool commit_window)
 {
+	struct bpf_insn_aux_data *aux = commit_window ? cur_aux(env) : tmp_aux;
 	struct bpf_verifier_state *vstate = env->cur_state;
-	struct bpf_insn_aux_data *aux = cur_aux(env);
+	bool off_is_neg = off_reg->smin_value < 0;
 	bool ptr_is_dst_reg = ptr_reg == dst_reg;
 	u8 opcode = BPF_OP(insn->code);
 	u32 alu_state, alu_limit;

@@ -5951,18 +5970,33 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
 	if (vstate->speculative)
 		goto do_sim;

-	alu_state  = off_is_neg ? BPF_ALU_NEG_VALUE : 0;
-	alu_state |= ptr_is_dst_reg ?
-		     BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
-
-	err = retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg);
+	err = retrieve_ptr_limit(ptr_reg, off_reg, &alu_limit, opcode);
 	if (err < 0)
 		return err;

+	if (commit_window) {
+		/* In commit phase we narrow the masking window based on
+		 * the observed pointer move after the simulated operation.
+		 */
+		alu_state = tmp_aux->alu_state;
+		alu_limit = abs(tmp_aux->alu_limit - alu_limit);
+	} else {
+		alu_state  = off_is_neg ? BPF_ALU_NEG_VALUE : 0;
+		alu_state |= ptr_is_dst_reg ?
+			     BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
+	}
+
 	err = update_alu_sanitation_state(aux, alu_state, alu_limit);
 	if (err < 0)
 		return err;
 do_sim:
+	/* If we're in commit phase, we're done here given we already
+	 * pushed the truncated dst_reg into the speculative verification
+	 * stack.
+	 */
+	if (commit_window)
+		return 0;
+
 	/* Simulate and find potential out-of-bounds access under
 	 * speculative execution from truncation as a result of
 	 * masking when off was not within expected range. If off

@@ -5979,7 +6013,46 @@ do_sim:
 	ret = push_stack(env, env->insn_idx + 1, env->insn_idx, true);
 	if (!ptr_is_dst_reg && ret)
 		*dst_reg = tmp;
-	return !ret ? -EFAULT : 0;
+	return !ret ? REASON_STACK : 0;
+}
+
+static int sanitize_err(struct bpf_verifier_env *env,
+			const struct bpf_insn *insn, int reason,
+			const struct bpf_reg_state *off_reg,
+			const struct bpf_reg_state *dst_reg)
+{
+	static const char *err = "pointer arithmetic with it prohibited for !root";
+	const char *op = BPF_OP(insn->code) == BPF_ADD ? "add" : "sub";
+	u32 dst = insn->dst_reg, src = insn->src_reg;
+
+	switch (reason) {
+	case REASON_BOUNDS:
+		verbose(env, "R%d has unknown scalar with mixed signed bounds, %s\n",
+			off_reg == dst_reg ? dst : src, err);
+		break;
+	case REASON_TYPE:
+		verbose(env, "R%d has pointer with unsupported alu operation, %s\n",
+			off_reg == dst_reg ? src : dst, err);
+		break;
+	case REASON_PATHS:
+		verbose(env, "R%d tried to %s from different maps, paths or scalars, %s\n",
+			dst, op, err);
+		break;
+	case REASON_LIMIT:
+		verbose(env, "R%d tried to %s beyond pointer bounds, %s\n",
+			dst, op, err);
+		break;
+	case REASON_STACK:
+		verbose(env, "R%d could not be pushed for speculative verification, %s\n",
+			dst, err);
+		break;
+	default:
+		verbose(env, "verifier internal error: unknown reason (%d)\n",
+			reason);
+		break;
+	}
+
+	return -EACCES;
 }

 /* check that stack access falls within stack limits and that 'reg' doesn't

@@ -6016,6 +6089,37 @@ static int check_stack_access_for_ptr_arithmetic(
 	return 0;
 }

+static int sanitize_check_bounds(struct bpf_verifier_env *env,
+				 const struct bpf_insn *insn,
+				 const struct bpf_reg_state *dst_reg)
+{
+	u32 dst = insn->dst_reg;
+
+	/* For unprivileged we require that resulting offset must be in bounds
+	 * in order to be able to sanitize access later on.
+	 */
+	if (env->bypass_spec_v1)
+		return 0;
+
+	switch (dst_reg->type) {
+	case PTR_TO_STACK:
+		if (check_stack_access_for_ptr_arithmetic(env, dst, dst_reg,
+					dst_reg->off + dst_reg->var_off.value))
+			return -EACCES;
+		break;
+	case PTR_TO_MAP_VALUE:
+		if (check_map_access(env, dst, dst_reg->off, 1, false)) {
+			verbose(env, "R%d pointer arithmetic of map value goes out of range, "
+				"prohibited for !root\n", dst);
+			return -EACCES;
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
 /* Handles arithmetic on a pointer and a scalar: computes new min/max and var_off.
  * Caller should also handle BPF_MOV case separately.

@@ -6035,8 +6139,9 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 	    smin_ptr = ptr_reg->smin_value, smax_ptr = ptr_reg->smax_value;
 	u64 umin_val = off_reg->umin_value, umax_val = off_reg->umax_value,
 	    umin_ptr = ptr_reg->umin_value, umax_ptr = ptr_reg->umax_value;
-	u32 dst = insn->dst_reg, src = insn->src_reg;
+	struct bpf_insn_aux_data tmp_aux = {};
 	u8 opcode = BPF_OP(insn->code);
+	u32 dst = insn->dst_reg;
 	int ret;

 	dst_reg = &regs[dst];

@@ -6084,13 +6189,6 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 		verbose(env, "R%d pointer arithmetic on %s prohibited\n",
 			dst, reg_type_str[ptr_reg->type]);
 		return -EACCES;
-	case PTR_TO_MAP_VALUE:
-		if (!env->allow_ptr_leaks && !known && (smin_val < 0) != (smax_val < 0)) {
-			verbose(env, "R%d has unknown scalar with mixed signed bounds, pointer arithmetic with it prohibited for !root\n",
-				off_reg == dst_reg ? dst : src);
-			return -EACCES;
-		}
-		fallthrough;
 	default:
 		break;
 	}

@@ -6108,13 +6206,15 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 	/* pointer types do not carry 32-bit bounds at the moment. */
 	__mark_reg32_unbounded(dst_reg);

+	if (sanitize_needed(opcode)) {
+		ret = sanitize_ptr_alu(env, insn, ptr_reg, off_reg, dst_reg,
+				       &tmp_aux, false);
+		if (ret < 0)
+			return sanitize_err(env, insn, ret, off_reg, dst_reg);
+	}
+
 	switch (opcode) {
 	case BPF_ADD:
-		ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
-		if (ret < 0) {
-			verbose(env, "R%d tried to add from different maps, paths, or prohibited types\n", dst);
-			return ret;
-		}
 		/* We can take a fixed offset as long as it doesn't overflow
 		 * the s32 'off' field
 		 */

@@ -6165,11 +6265,6 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 		}
 		break;
 	case BPF_SUB:
-		ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
-		if (ret < 0) {
-			verbose(env, "R%d tried to sub from different maps, paths, or prohibited types\n", dst);
-			return ret;
-		}
 		if (dst_reg == off_reg) {
 			/* scalar -= pointer.  Creates an unknown scalar */
 			verbose(env, "R%d tried to subtract pointer from scalar\n",

@@ -6250,21 +6345,13 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 	__reg_deduce_bounds(dst_reg);
 	__reg_bound_offset(dst_reg);

-	/* For unprivileged we require that resulting offset must be in bounds
-	 * in order to be able to sanitize access later on.
-	 */
-	if (!env->bypass_spec_v1) {
-		if (dst_reg->type == PTR_TO_MAP_VALUE &&
-		    check_map_access(env, dst, dst_reg->off, 1, false)) {
-			verbose(env, "R%d pointer arithmetic of map value goes out of range, "
-				"prohibited for !root\n", dst);
-			return -EACCES;
-		} else if (dst_reg->type == PTR_TO_STACK &&
-			   check_stack_access_for_ptr_arithmetic(
-				   env, dst, dst_reg, dst_reg->off +
-				   dst_reg->var_off.value)) {
-			return -EACCES;
-		}
+	if (sanitize_check_bounds(env, insn, dst_reg) < 0)
+		return -EACCES;
+	if (sanitize_needed(opcode)) {
+		ret = sanitize_ptr_alu(env, insn, dst_reg, off_reg, dst_reg,
+				       &tmp_aux, true);
+		if (ret < 0)
+			return sanitize_err(env, insn, ret, off_reg, dst_reg);
 	}

 	return 0;

@@ -6858,9 +6945,8 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 	s32 s32_min_val, s32_max_val;
 	u32 u32_min_val, u32_max_val;
 	u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;
-	u32 dst = insn->dst_reg;
-	int ret;
 	bool alu32 = (BPF_CLASS(insn->code) != BPF_ALU64);
+	int ret;

 	smin_val = src_reg.smin_value;
 	smax_val = src_reg.smax_value;

@@ -6902,6 +6988,12 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 		return 0;
 	}

+	if (sanitize_needed(opcode)) {
+		ret = sanitize_val_alu(env, insn);
+		if (ret < 0)
+			return sanitize_err(env, insn, ret, NULL, NULL);
+	}
+
 	/* Calculate sign/unsigned bounds and tnum for alu32 and alu64 bit ops.
 	 * There are two classes of instructions: The first class we track both
 	 * alu32 and alu64 sign/unsigned bounds independently this provides the

@@ -6918,21 +7010,11 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 	 */
 	switch (opcode) {
 	case BPF_ADD:
-		ret = sanitize_val_alu(env, insn);
-		if (ret < 0) {
-			verbose(env, "R%d tried to add from different pointers or scalars\n", dst);
-			return ret;
-		}
 		scalar32_min_max_add(dst_reg, &src_reg);
 		scalar_min_max_add(dst_reg, &src_reg);
 		dst_reg->var_off = tnum_add(dst_reg->var_off, src_reg.var_off);
 		break;
 	case BPF_SUB:
-		ret = sanitize_val_alu(env, insn);
-		if (ret < 0) {
-			verbose(env, "R%d tried to sub from different pointers or scalars\n", dst);
-			return ret;
-		}
 		scalar32_min_max_sub(dst_reg, &src_reg);
 		scalar_min_max_sub(dst_reg, &src_reg);
 		dst_reg->var_off = tnum_sub(dst_reg->var_off, src_reg.var_off);
@@ -105,14 +105,20 @@ static int __net_init broute_net_init(struct net *net)
 				  &net->xt.broute_table);
 }

+static void __net_exit broute_net_pre_exit(struct net *net)
+{
+	ebt_unregister_table_pre_exit(net, "broute", &ebt_ops_broute);
+}
+
 static void __net_exit broute_net_exit(struct net *net)
 {
-	ebt_unregister_table(net, net->xt.broute_table, &ebt_ops_broute);
+	ebt_unregister_table(net, net->xt.broute_table);
 }

 static struct pernet_operations broute_net_ops = {
 	.init = broute_net_init,
 	.exit = broute_net_exit,
+	.pre_exit = broute_net_pre_exit,
 };

 static int __init ebtable_broute_init(void)

@@ -99,14 +99,20 @@ static int __net_init frame_filter_net_init(struct net *net)
 				  &net->xt.frame_filter);
 }

+static void __net_exit frame_filter_net_pre_exit(struct net *net)
+{
+	ebt_unregister_table_pre_exit(net, "filter", ebt_ops_filter);
+}
+
 static void __net_exit frame_filter_net_exit(struct net *net)
 {
-	ebt_unregister_table(net, net->xt.frame_filter, ebt_ops_filter);
+	ebt_unregister_table(net, net->xt.frame_filter);
 }

 static struct pernet_operations frame_filter_net_ops = {
 	.init = frame_filter_net_init,
 	.exit = frame_filter_net_exit,
+	.pre_exit = frame_filter_net_pre_exit,
 };

 static int __init ebtable_filter_init(void)

@@ -99,14 +99,20 @@ static int __net_init frame_nat_net_init(struct net *net)
 				  &net->xt.frame_nat);
 }

+static void __net_exit frame_nat_net_pre_exit(struct net *net)
+{
+	ebt_unregister_table_pre_exit(net, "nat", ebt_ops_nat);
+}
+
 static void __net_exit frame_nat_net_exit(struct net *net)
 {
-	ebt_unregister_table(net, net->xt.frame_nat, ebt_ops_nat);
+	ebt_unregister_table(net, net->xt.frame_nat);
 }

 static struct pernet_operations frame_nat_net_ops = {
 	.init = frame_nat_net_init,
 	.exit = frame_nat_net_exit,
+	.pre_exit = frame_nat_net_pre_exit,
 };

 static int __init ebtable_nat_init(void)

@@ -1232,10 +1232,34 @@ out:
 	return ret;
 }

-void ebt_unregister_table(struct net *net, struct ebt_table *table,
-			  const struct nf_hook_ops *ops)
+static struct ebt_table *__ebt_find_table(struct net *net, const char *name)
+{
+	struct ebt_table *t;
+
+	mutex_lock(&ebt_mutex);
+
+	list_for_each_entry(t, &net->xt.tables[NFPROTO_BRIDGE], list) {
+		if (strcmp(t->name, name) == 0) {
+			mutex_unlock(&ebt_mutex);
+			return t;
+		}
+	}
+
+	mutex_unlock(&ebt_mutex);
+	return NULL;
+}
+
+void ebt_unregister_table_pre_exit(struct net *net, const char *name, const struct nf_hook_ops *ops)
+{
+	struct ebt_table *table = __ebt_find_table(net, name);
+
+	if (table)
+		nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks));
+}
+EXPORT_SYMBOL(ebt_unregister_table_pre_exit);
+
+void ebt_unregister_table(struct net *net, struct ebt_table *table)
 {
-	nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks));
 	__ebt_unregister_table(net, table);
 }

@@ -5924,7 +5924,8 @@ static void skb_gro_reset_offset(struct sk_buff *skb)
 	NAPI_GRO_CB(skb)->frag0_len = 0;

 	if (!skb_headlen(skb) && pinfo->nr_frags &&
-	    !PageHighMem(skb_frag_page(frag0))) {
+	    !PageHighMem(skb_frag_page(frag0)) &&
+	    (!NET_IP_ALIGN || !(skb_frag_off(frag0) & 3))) {
 		NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0);
 		NAPI_GRO_CB(skb)->frag0_len = min_t(unsigned int,
 						    skb_frag_size(frag0),
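The GRO fix gates the frag0 fast path on the fragment offset keeping the IP header 4-byte aligned. A sketch of just that predicate; NET_IP_ALIGN is pinned to 2 here for demonstration, whereas in the kernel it is per-architecture:

```c
#include <stdbool.h>
#include <stdio.h>

#define NET_IP_ALIGN 2  /* 2 on arches needing aligned IP headers, else 0 */

/* The page fragment may only back the GRO header when its offset
 * keeps the headers aligned (or the arch doesn't care at all). */
static bool frag0_usable(unsigned int frag_off)
{
    return !NET_IP_ALIGN || !(frag_off & 3);
}

int main(void)
{
    /* offsets 0 and 4 pass; offset 2 would misalign the IP header */
    printf("%d %d %d\n", frag0_usable(0), frag0_usable(2), frag0_usable(4));
    return 0;
}
```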
@@ -36,9 +36,9 @@ static inline int ethnl_strz_size(const char *s)

 /**
  * ethnl_put_strz() - put string attribute with fixed size string
- * @skb:     skb with the message
- * @attrype: attribute type
- * @s:       ETH_GSTRING_LEN sized string (may not be null terminated)
+ * @skb:      skb with the message
+ * @attrtype: attribute type
+ * @s:        ETH_GSTRING_LEN sized string (may not be null terminated)
  *
  * Puts an attribute with null terminated string from @s into the message.
  *

@@ -38,16 +38,16 @@ static int pause_prepare_data(const struct ethnl_req_info *req_base,
 	if (!dev->ethtool_ops->get_pauseparam)
 		return -EOPNOTSUPP;

+	ethtool_stats_init((u64 *)&data->pausestat,
+			   sizeof(data->pausestat) / 8);
+
 	ret = ethnl_ops_begin(dev);
 	if (ret < 0)
 		return ret;
 	dev->ethtool_ops->get_pauseparam(dev, &data->pauseparam);
 	if (req_base->flags & ETHTOOL_FLAG_STATS &&
-	    dev->ethtool_ops->get_pause_stats) {
-		ethtool_stats_init((u64 *)&data->pausestat,
-				   sizeof(data->pausestat) / 8);
+	    dev->ethtool_ops->get_pause_stats)
 		dev->ethtool_ops->get_pause_stats(dev, &data->pausestat);
-	}
 	ethnl_ops_complete(dev);

 	return 0;

@@ -1193,6 +1193,8 @@ static int translate_compat_table(struct net *net,
 	if (!newinfo)
 		goto out_unlock;

+	memset(newinfo->entries, 0, size);
+
 	newinfo->number = compatr->num_entries;
 	for (i = 0; i < NF_ARP_NUMHOOKS; i++) {
 		newinfo->hook_entry[i] = compatr->hook_entry[i];

@@ -1539,10 +1541,15 @@ out_free:
 	return ret;
 }

-void arpt_unregister_table(struct net *net, struct xt_table *table,
-			   const struct nf_hook_ops *ops)
+void arpt_unregister_table_pre_exit(struct net *net, struct xt_table *table,
+				    const struct nf_hook_ops *ops)
 {
 	nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks));
+}
+EXPORT_SYMBOL(arpt_unregister_table_pre_exit);
+
+void arpt_unregister_table(struct net *net, struct xt_table *table)
+{
 	__arpt_unregister_table(net, table);
 }

@@ -56,16 +56,24 @@ static int __net_init arptable_filter_table_init(struct net *net)
 	return err;
 }

+static void __net_exit arptable_filter_net_pre_exit(struct net *net)
+{
+	if (net->ipv4.arptable_filter)
+		arpt_unregister_table_pre_exit(net, net->ipv4.arptable_filter,
+					       arpfilter_ops);
+}
+
 static void __net_exit arptable_filter_net_exit(struct net *net)
 {
 	if (!net->ipv4.arptable_filter)
 		return;
-	arpt_unregister_table(net, net->ipv4.arptable_filter, arpfilter_ops);
+	arpt_unregister_table(net, net->ipv4.arptable_filter);
 	net->ipv4.arptable_filter = NULL;
 }

 static struct pernet_operations arptable_filter_net_ops = {
 	.exit = arptable_filter_net_exit,
+	.pre_exit = arptable_filter_net_pre_exit,
 };

 static int __init arptable_filter_init(void)

@@ -1428,6 +1428,8 @@ translate_compat_table(struct net *net,
 	if (!newinfo)
 		goto out_unlock;

+	memset(newinfo->entries, 0, size);
+
 	newinfo->number = compatr->num_entries;
 	for (i = 0; i < NF_INET_NUMHOOKS; i++) {
 		newinfo->hook_entry[i] = compatr->hook_entry[i];

@@ -1378,9 +1378,19 @@ static __net_init int ipv4_sysctl_init_net(struct net *net)
                if (!table)
                        goto err_alloc;

-               /* Update the variables to point into the current struct net */
-               for (i = 0; i < ARRAY_SIZE(ipv4_net_table) - 1; i++)
-                       table[i].data += (void *)net - (void *)&init_net;
+               for (i = 0; i < ARRAY_SIZE(ipv4_net_table) - 1; i++) {
+                       if (table[i].data) {
+                               /* Update the variables to point into
+                                * the current struct net
+                                */
+                               table[i].data += (void *)net - (void *)&init_net;
+                       } else {
+                               /* Entries without data pointer are global;
+                                * Make them read-only in non-init_net ns
+                                */
+                               table[i].mode &= ~0222;
+                       }
+               }
        }

        net->ipv4.ipv4_hdr = register_net_sysctl(net, "net/ipv4", table);
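For the hunk above: ctl_table entries whose .data pointer is NULL are backed by global state shared across network namespaces, so a write from a non-init namespace would leak into every other one. Masking with ~0222 strips the three write bits, e.g. 0644 (rw-r--r--) becomes 0444 (r--r--r--). A two-line demonstration of the octal arithmetic:

    #include <stdio.h>

    int main(void)
    {
        unsigned int mode = 0644;    /* rw-r--r-- */

        mode &= ~0222u;              /* strip all write bits */
        printf("%04o\n", mode);      /* prints 0444: r--r--r-- */
        return 0;
    }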
@@ -2244,6 +2244,16 @@ static void __net_exit ip6_tnl_destroy_tunnels(struct net *net, struct list_head
                        t = rtnl_dereference(t->next);
                }
        }
+
+       t = rtnl_dereference(ip6n->tnls_wc[0]);
+       while (t) {
+               /* If dev is in the same netns, it has already
+                * been added to the list by the previous loop.
+                */
+               if (!net_eq(dev_net(t->dev), net))
+                       unregister_netdevice_queue(t->dev, list);
+               t = rtnl_dereference(t->next);
+       }
 }

 static int __net_init ip6_tnl_init_net(struct net *net)
@@ -1443,6 +1443,8 @@ translate_compat_table(struct net *net,
        if (!newinfo)
                goto out_unlock;

+       memset(newinfo->entries, 0, size);
+
        newinfo->number = compatr->num_entries;
        for (i = 0; i < NF_INET_NUMHOOKS; i++) {
                newinfo->hook_entry[i] = compatr->hook_entry[i];
@@ -1867,9 +1867,9 @@ static void __net_exit sit_destroy_tunnels(struct net *net,
                if (dev->rtnl_link_ops == &sit_link_ops)
                        unregister_netdevice_queue(dev, head);

-       for (prio = 1; prio < 4; prio++) {
+       for (prio = 0; prio < 4; prio++) {
                int h;
-               for (h = 0; h < IP6_SIT_HASH_SIZE; h++) {
+               for (h = 0; h < (prio ? IP6_SIT_HASH_SIZE : 1); h++) {
                        struct ip_tunnel *t;

                        t = rtnl_dereference(sitn->tunnels[prio][h]);
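In the hunk above, row 0 of sitn->tunnels holds only the single catch-all (fallback) tunnel, so the old loop starting at prio 1 never queued it for unregistration. The new bounds visit exactly one bucket for row 0 and a full hash table for rows 1-3. A standalone check of the bucket arithmetic; the hash size is assumed here for the demo, see the definition in net/ipv6/sit.c:

    #include <stdio.h>

    #define IP6_SIT_HASH_SIZE 16    /* assumed value for this demo */

    int main(void)
    {
        for (int prio = 0; prio < 4; prio++)
            printf("prio %d: scan %d bucket(s)\n",
                   prio, prio ? IP6_SIT_HASH_SIZE : 1);
        return 0;
    }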
@@ -266,6 +266,7 @@ static const char* l4proto_name(u16 proto)
        case IPPROTO_GRE: return "gre";
        case IPPROTO_SCTP: return "sctp";
        case IPPROTO_UDPLITE: return "udplite";
+       case IPPROTO_ICMPV6: return "icmpv6";
        }

        return "unknown";
@@ -305,12 +305,12 @@ static void flow_offload_ipv6_mangle(struct nf_flow_rule *flow_rule,
                                     const __be32 *addr, const __be32 *mask)
 {
        struct flow_action_entry *entry;
-       int i;
+       int i, j;

-       for (i = 0; i < sizeof(struct in6_addr) / sizeof(u32); i += sizeof(u32)) {
+       for (i = 0, j = 0; i < sizeof(struct in6_addr) / sizeof(u32); i += sizeof(u32), j++) {
                entry = flow_action_entry_next(flow_rule);
                flow_offload_mangle(entry, FLOW_ACT_MANGLE_HDR_TYPE_IP6,
-                                   offset + i, &addr[i], mask);
+                                   offset + i, &addr[j], mask);
        }
 }
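The fix above separates two roles the old code conflated: i is a byte offset written into the packet header, while addr[] is an array of 32-bit words. Indexing the array with the byte offset reads the wrong words, past the 4-word IPv6 address. A userspace illustration of byte offset versus word index (not kernel code; the loop bound is my own):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t addr[4] = { 0x20010db8, 0x0, 0x0, 0x1 };

        /* 'i' is the byte offset into the header, 'j' the word index
         * into addr[]. Using i to index addr[] (the old code) would
         * touch words 0, 4, 8, 12 -- mostly out of bounds. */
        for (size_t i = 0, j = 0; i < sizeof(addr); i += sizeof(uint32_t), j++)
            printf("header offset %2zu <- addr[%zu] = 0x%08x\n",
                   i, j, addr[j]);
        return 0;
    }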
@@ -5295,16 +5295,35 @@ err_expr:
        return -ENOMEM;
 }

-static void nft_set_elem_expr_setup(const struct nft_set_ext *ext, int i,
-                                   struct nft_expr *expr_array[])
+static int nft_set_elem_expr_setup(struct nft_ctx *ctx,
+                                  const struct nft_set_ext *ext,
+                                  struct nft_expr *expr_array[],
+                                  u32 num_exprs)
 {
        struct nft_set_elem_expr *elem_expr = nft_set_ext_expr(ext);
-       struct nft_expr *expr = nft_setelem_expr_at(elem_expr, elem_expr->size);
+       struct nft_expr *expr;
+       int i, err;

-       memcpy(expr, expr_array[i], expr_array[i]->ops->size);
-       elem_expr->size += expr_array[i]->ops->size;
-       kfree(expr_array[i]);
-       expr_array[i] = NULL;
+       for (i = 0; i < num_exprs; i++) {
+               expr = nft_setelem_expr_at(elem_expr, elem_expr->size);
+               err = nft_expr_clone(expr, expr_array[i]);
+               if (err < 0)
+                       goto err_elem_expr_setup;
+
+               elem_expr->size += expr_array[i]->ops->size;
+               nft_expr_destroy(ctx, expr_array[i]);
+               expr_array[i] = NULL;
+       }
+
+       return 0;
+
+err_elem_expr_setup:
+       for (; i < num_exprs; i++) {
+               nft_expr_destroy(ctx, expr_array[i]);
+               expr_array[i] = NULL;
+       }
+
+       return -ENOMEM;
 }

 static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
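The rewrite above replaces a raw memcpy() of the expression template with nft_expr_clone() and adds an unwind path for partial failure. A byte copy is wrong for stateful expressions: any per-instance allocation inside the template would end up owned by two objects and freed twice. A minimal userspace model of why a deep clone hook beats memcpy (types invented):

    #include <stdlib.h>

    struct counter {
        long *packets;    /* per-instance state lives behind a pointer */
    };

    static int counter_clone(struct counter *dst, const struct counter *src)
    {
        dst->packets = malloc(sizeof(*dst->packets));
        if (!dst->packets)
            return -1;
        *dst->packets = *src->packets;    /* copy the value, not the pointer */
        return 0;
    }

    int main(void)
    {
        struct counter tmpl = { .packets = malloc(sizeof(long)) };
        struct counter elem;

        if (!tmpl.packets)
            return 1;
        *tmpl.packets = 0;
        if (counter_clone(&elem, &tmpl) == 0)
            free(elem.packets);    /* each copy owns its own state;
                                    * memcpy() would double-free here */
        free(tmpl.packets);
        return 0;
    }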
@@ -5556,12 +5575,15 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
                *nft_set_ext_obj(ext) = obj;
                obj->use++;
        }
-       for (i = 0; i < num_exprs; i++)
-               nft_set_elem_expr_setup(ext, i, expr_array);
+       err = nft_set_elem_expr_setup(ctx, ext, expr_array, num_exprs);
+       if (err < 0)
+               goto err_elem_expr;

        trans = nft_trans_elem_alloc(ctx, NFT_MSG_NEWSETELEM, set);
-       if (trans == NULL)
-               goto err_trans;
+       if (trans == NULL) {
+               err = -ENOMEM;
+               goto err_elem_expr;
+       }

        ext->genmask = nft_genmask_cur(ctx->net) | NFT_SET_ELEM_BUSY_MASK;
        err = set->ops->insert(ctx->net, set, &elem, &ext2);
@@ -5605,7 +5627,7 @@ err_set_full:
        set->ops->remove(ctx->net, set, &elem);
 err_element_clash:
        kfree(trans);
-err_trans:
+err_elem_expr:
        if (obj)
                obj->use--;

@@ -76,13 +76,13 @@ static int nft_limit_init(struct nft_limit *limit,
                return -EOVERFLOW;

        if (pkts) {
-               tokens = div_u64(limit->nsecs, limit->rate) * limit->burst;
+               tokens = div64_u64(limit->nsecs, limit->rate) * limit->burst;
        } else {
                /* The token bucket size limits the number of tokens can be
                 * accumulated. tokens_max specifies the bucket size.
                 * tokens_max = unit * (rate + burst) / rate.
                 */
-               tokens = div_u64(limit->nsecs * (limit->rate + limit->burst),
+               tokens = div64_u64(limit->nsecs * (limit->rate + limit->burst),
                                 limit->rate);
        }

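On the helper swap above: div_u64() takes a 32-bit divisor (u64 div_u64(u64 dividend, u32 divisor) in include/linux/math64.h), so a user-supplied rate above U32_MAX is silently truncated, possibly to zero, which can trigger a divide error. div64_u64() keeps the full 64-bit divisor. A userspace sketch of the truncation:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t rate = 1ULL << 32;             /* user-controlled, > U32_MAX */
        uint32_t truncated = (uint32_t)rate;    /* what div_u64() would divide by */

        printf("u32 divisor = %u -> %s\n", truncated,
               truncated ? "wrong result" : "divide-by-zero");
        return 0;
    }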
@@ -733,7 +733,7 @@ void xt_compat_match_from_user(struct xt_entry_match *m, void **dstptr,
 {
        const struct xt_match *match = m->u.kernel.match;
        struct compat_xt_entry_match *cm = (struct compat_xt_entry_match *)m;
-       int pad, off = xt_compat_match_offset(match);
+       int off = xt_compat_match_offset(match);
        u_int16_t msize = cm->u.user.match_size;
        char name[sizeof(m->u.user.name)];

@@ -743,9 +743,6 @@ void xt_compat_match_from_user(struct xt_entry_match *m, void **dstptr,
                match->compat_from_user(m->data, cm->data);
        else
                memcpy(m->data, cm->data, msize - sizeof(*cm));
-       pad = XT_ALIGN(match->matchsize) - match->matchsize;
-       if (pad > 0)
-               memset(m->data + match->matchsize, 0, pad);

        msize += off;
        m->u.user.match_size = msize;
@@ -1116,7 +1113,7 @@ void xt_compat_target_from_user(struct xt_entry_target *t, void **dstptr,
 {
        const struct xt_target *target = t->u.kernel.target;
        struct compat_xt_entry_target *ct = (struct compat_xt_entry_target *)t;
-       int pad, off = xt_compat_target_offset(target);
+       int off = xt_compat_target_offset(target);
        u_int16_t tsize = ct->u.user.target_size;
        char name[sizeof(t->u.user.name)];

@@ -1126,9 +1123,6 @@ void xt_compat_target_from_user(struct xt_entry_target *t, void **dstptr,
                target->compat_from_user(t->data, ct->data);
        else
                memcpy(t->data, ct->data, tsize - sizeof(*ct));
-       pad = XT_ALIGN(target->targetsize) - target->targetsize;
-       if (pad > 0)
-               memset(t->data + target->targetsize, 0, pad);

        tsize += off;
        t->u.user.target_size = tsize;
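The pad-zeroing removed in the four hunks above pairs with the memset(newinfo->entries, 0, size) lines added to the three translate_compat_table() functions earlier: zeroing the whole destination blob once at allocation makes the per-match/per-target memset of the alignment pad unnecessary, and as I read the change, that per-entry memset could write past the destination under crafted compat layouts. An illustration of the alignment pad being discussed (my own macro, analogous in spirit to XT_ALIGN):

    #include <stdio.h>

    #define ALIGN8(x) (((x) + 7UL) & ~7UL)

    int main(void)
    {
        unsigned long matchsize = 13;    /* hypothetical payload size */
        unsigned long pad = ALIGN8(matchsize) - matchsize;

        printf("payload %lu, pad %lu (bytes that must read as zero)\n",
               matchsize, pad);
        return 0;
    }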
@@ -1019,7 +1019,6 @@ static int netlink_bind(struct socket *sock, struct sockaddr *addr,
                        return -EINVAL;
        }

-       netlink_lock_table();
        if (nlk->netlink_bind && groups) {
                int group;

@@ -1031,13 +1030,14 @@ static int netlink_bind(struct socket *sock, struct sockaddr *addr,
                        if (!err)
                                continue;
                        netlink_undo_bind(group, groups, sk);
-                       goto unlock;
+                       return err;
                }
        }

        /* No need for barriers here as we return to user-space without
         * using any of the bound attributes.
         */
+       netlink_lock_table();
        if (!bound) {
                err = nladdr->nl_pid ?
                        netlink_insert(sk, nladdr->nl_pid) :
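The two netlink hunks above reorder locking so that the per-protocol ->netlink_bind() callback, which may sleep or take further locks, runs before the table lock is taken; errors now return directly instead of jumping to an unlock label. A minimal userspace model of the reordering (pthread-based, names invented, not the kernel API):

    #include <pthread.h>

    static pthread_rwlock_t nl_table_lock = PTHREAD_RWLOCK_INITIALIZER;

    static int proto_bind_callback(void) { return 0; }   /* may sleep */
    static int table_insert(void)        { return 0; }

    static int netlink_bind_model(void)
    {
        int err = proto_bind_callback();   /* no table lock held here */
        if (err)
            return err;

        pthread_rwlock_rdlock(&nl_table_lock);   /* held only for insert */
        err = table_insert();
        pthread_rwlock_unlock(&nl_table_lock);
        return err;
    }

    int main(void) { return netlink_bind_model(); }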
@@ -1520,11 +1520,9 @@ static void sctp_close(struct sock *sk, long timeout)

        /* Supposedly, no process has access to the socket, but
         * the net layers still may.
-        * Also, sctp_destroy_sock() needs to be called with addr_wq_lock
-        * held and that should be grabbed before socket lock.
         */
-       spin_lock_bh(&net->sctp.addr_wq_lock);
-       bh_lock_sock_nested(sk);
+       local_bh_disable();
+       bh_lock_sock(sk);

        /* Hold the sock, since sk_common_release() will put sock_put()
         * and we have just a little more cleanup.
@@ -1533,7 +1531,7 @@ static void sctp_close(struct sock *sk, long timeout)
        sk_common_release(sk);

        bh_unlock_sock(sk);
-       spin_unlock_bh(&net->sctp.addr_wq_lock);
+       local_bh_enable();

        sock_put(sk);
@@ -4993,9 +4991,6 @@ static int sctp_init_sock(struct sock *sk)
        sk_sockets_allocated_inc(sk);
        sock_prot_inuse_add(net, sk->sk_prot, 1);

-       /* Nothing can fail after this block, otherwise
-        * sctp_destroy_sock() will be called without addr_wq_lock held
-        */
        if (net->sctp.default_auto_asconf) {
                spin_lock(&sock_net(sk)->sctp.addr_wq_lock);
                list_add_tail(&sp->auto_asconf_list,
@@ -5030,7 +5025,9 @@ static void sctp_destroy_sock(struct sock *sk)

        if (sp->do_auto_asconf) {
                sp->do_auto_asconf = 0;
+               spin_lock_bh(&sock_net(sk)->sctp.addr_wq_lock);
                list_del(&sp->auto_asconf_list);
+               spin_unlock_bh(&sock_net(sk)->sctp.addr_wq_lock);
        }
        sctp_endpoint_free(sp->ep);
        local_bh_disable();
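Across the four sctp hunks above, the addr_wq_lock critical section shrinks from wrapping the whole close path (which forced an awkward ordering against the socket lock) to guarding only the auto_asconf list manipulation in sctp_destroy_sock(). A simplified userspace model of the narrowed critical section:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t addr_wq_lock = PTHREAD_MUTEX_INITIALIZER;

    struct node { struct node *prev, *next; };

    static void destroy_sock_model(struct node *n)
    {
        pthread_mutex_lock(&addr_wq_lock);    /* held just for the unlink */
        n->prev->next = n->next;
        n->next->prev = n->prev;
        pthread_mutex_unlock(&addr_wq_lock);
        /* the rest of the teardown runs without addr_wq_lock */
    }

    int main(void)
    {
        struct node head, elem;

        head.next = head.prev = &elem;
        elem.next = elem.prev = &head;
        destroy_sock_model(&elem);
        printf("list empty: %s\n", head.next == &head ? "yes" : "no");
        return 0;
    }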
@@ -852,18 +852,19 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
                              struct xsk_ring_cons *comp,
                              const struct xsk_socket_config *usr_config)
 {
+       bool unmap, rx_setup_done = false, tx_setup_done = false;
        void *rx_map = NULL, *tx_map = NULL;
        struct sockaddr_xdp sxdp = {};
        struct xdp_mmap_offsets off;
        struct xsk_socket *xsk;
        struct xsk_ctx *ctx;
        int err, ifindex;
-       bool unmap = umem->fill_save != fill;
-       bool rx_setup_done = false, tx_setup_done = false;

        if (!umem || !xsk_ptr || !(rx || tx))
                return -EFAULT;

+       unmap = umem->fill_save != fill;
+
        xsk = calloc(1, sizeof(*xsk));
        if (!xsk)
                return -ENOMEM;
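The xsk hunk above fixes an order-of-operations bug: the old declaration initializer dereferenced umem before the NULL check could reject it. Moving the read below the validation is the whole fix. A userspace repro of the hazard pattern (names invented):

    #include <stddef.h>

    struct umem_model { void *fill_save; };

    int create_model(struct umem_model *umem, void *fill)
    {
        /* BAD: 'int unmap = umem->fill_save != fill;' here would
         * crash for umem == NULL -- the check below comes too late. */
        int unmap;

        if (!umem)
            return -1;

        unmap = umem->fill_save != fill;    /* GOOD: after validation */
        return unmap;
    }

    int main(void)
    {
        return create_model(NULL, NULL) == -1 ? 0 : 1;
    }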
@@ -261,8 +261,6 @@
        },
        .fixup_map_hash_8b = { 3 },
        /* not actually fully unbounded, but the bound is very high */
-       .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds, pointer arithmetic with it prohibited for !root",
-       .result_unpriv = REJECT,
        .errstr = "value -4294967168 makes map_value pointer be out of bounds",
        .result = REJECT,
 },
@@ -298,9 +296,6 @@
        BPF_EXIT_INSN(),
        },
        .fixup_map_hash_8b = { 3 },
-       /* not actually fully unbounded, but the bound is very high */
-       .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds, pointer arithmetic with it prohibited for !root",
-       .result_unpriv = REJECT,
        .errstr = "value -4294967168 makes map_value pointer be out of bounds",
        .result = REJECT,
 },
@@ -6,7 +6,7 @@
        BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
        BPF_EXIT_INSN(),
        },
-       .errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
+       .errstr_unpriv = "R1 has pointer with unsupported alu operation",
        .errstr = "R0 tried to subtract pointer from scalar",
        .result = REJECT,
 },
@@ -21,7 +21,7 @@
        BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
        BPF_EXIT_INSN(),
        },
-       .errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types",
+       .errstr_unpriv = "R1 has pointer with unsupported alu operation",
        .result_unpriv = REJECT,
        .result = ACCEPT,
        .retval = 1,
@@ -34,22 +34,23 @@
        BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
        BPF_EXIT_INSN(),
        },
-       .errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
+       .errstr_unpriv = "R1 has pointer with unsupported alu operation",
        .errstr = "R0 tried to subtract pointer from scalar",
        .result = REJECT,
 },
 {
        "check deducing bounds from const, 4",
        .insns = {
+       BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
        BPF_MOV64_IMM(BPF_REG_0, 0),
        BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 1),
        BPF_EXIT_INSN(),
        BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
        BPF_EXIT_INSN(),
-       BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
+       BPF_ALU64_REG(BPF_SUB, BPF_REG_6, BPF_REG_0),
        BPF_EXIT_INSN(),
        },
-       .errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types",
+       .errstr_unpriv = "R6 has pointer with unsupported alu operation",
        .result_unpriv = REJECT,
        .result = ACCEPT,
 },
@@ -61,7 +62,7 @@
        BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
        BPF_EXIT_INSN(),
        },
-       .errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
+       .errstr_unpriv = "R1 has pointer with unsupported alu operation",
        .errstr = "R0 tried to subtract pointer from scalar",
        .result = REJECT,
 },
@@ -74,7 +75,7 @@
        BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
        BPF_EXIT_INSN(),
        },
-       .errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
+       .errstr_unpriv = "R1 has pointer with unsupported alu operation",
        .errstr = "R0 tried to subtract pointer from scalar",
        .result = REJECT,
 },
@@ -88,7 +89,7 @@
                    offsetof(struct __sk_buff, mark)),
        BPF_EXIT_INSN(),
        },
-       .errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types",
+       .errstr_unpriv = "R1 has pointer with unsupported alu operation",
        .errstr = "dereference of modified ctx ptr",
        .result = REJECT,
        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
@@ -103,7 +104,7 @@
                    offsetof(struct __sk_buff, mark)),
        BPF_EXIT_INSN(),
        },
-       .errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
+       .errstr_unpriv = "R1 has pointer with unsupported alu operation",
        .errstr = "dereference of modified ctx ptr",
        .result = REJECT,
        .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
@@ -116,7 +117,7 @@
        BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
        BPF_EXIT_INSN(),
        },
-       .errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
+       .errstr_unpriv = "R1 has pointer with unsupported alu operation",
        .errstr = "R0 tried to subtract pointer from scalar",
        .result = REJECT,
 },
@@ -19,7 +19,6 @@
        },
        .fixup_map_hash_8b = { 3 },
        .errstr = "unbounded min value",
-       .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
        .result = REJECT,
 },
 {
@@ -43,7 +42,6 @@
        },
        .fixup_map_hash_8b = { 3 },
        .errstr = "unbounded min value",
-       .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
        .result = REJECT,
 },
 {
@@ -69,7 +67,6 @@
        },
        .fixup_map_hash_8b = { 3 },
        .errstr = "unbounded min value",
-       .errstr_unpriv = "R8 has unknown scalar with mixed signed bounds",
        .result = REJECT,
 },
 {
@@ -94,7 +91,6 @@
        },
        .fixup_map_hash_8b = { 3 },
        .errstr = "unbounded min value",
-       .errstr_unpriv = "R8 has unknown scalar with mixed signed bounds",
        .result = REJECT,
 },
 {
@@ -141,7 +137,6 @@
        },
        .fixup_map_hash_8b = { 3 },
        .errstr = "unbounded min value",
-       .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
        .result = REJECT,
 },
 {
@@ -210,7 +205,6 @@
        },
        .fixup_map_hash_8b = { 3 },
        .errstr = "unbounded min value",
-       .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
        .result = REJECT,
 },
 {
@@ -260,7 +254,6 @@
        },
        .fixup_map_hash_8b = { 3 },
        .errstr = "unbounded min value",
-       .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
        .result = REJECT,
 },
 {
@@ -287,7 +280,6 @@
        },
        .fixup_map_hash_8b = { 3 },
        .errstr = "unbounded min value",
-       .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
        .result = REJECT,
 },
 {
@@ -313,7 +305,6 @@
        },
        .fixup_map_hash_8b = { 3 },
        .errstr = "unbounded min value",
-       .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
        .result = REJECT,
 },
 {
@@ -342,7 +333,6 @@
        },
        .fixup_map_hash_8b = { 3 },
        .errstr = "unbounded min value",
-       .errstr_unpriv = "R7 has unknown scalar with mixed signed bounds",
        .result = REJECT,
 },
 {
@@ -372,7 +362,6 @@
        },
        .fixup_map_hash_8b = { 4 },
        .errstr = "unbounded min value",
-       .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
        .result = REJECT,
 },
 {
@@ -400,7 +389,5 @@
        },
        .fixup_map_hash_8b = { 3 },
        .errstr = "unbounded min value",
-       .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
        .result = REJECT,
-       .result_unpriv = REJECT,
 },
@@ -76,7 +76,7 @@
        },
        .fixup_map_hash_16b = { 4 },
        .result_unpriv = REJECT,
-       .errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
+       .errstr_unpriv = "R1 has pointer with unsupported alu operation",
        .result = ACCEPT,
 },
 {
@@ -94,6 +94,6 @@
        },
        .fixup_map_hash_16b = { 4 },
        .result_unpriv = REJECT,
-       .errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
+       .errstr_unpriv = "R0 has pointer with unsupported alu operation",
        .result = ACCEPT,
 },
@@ -505,7 +505,7 @@
        BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
        BPF_EXIT_INSN(),
        },
-       .errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
+       .errstr_unpriv = "R1 stack pointer arithmetic goes out of range",
        .result_unpriv = REJECT,
        .result = ACCEPT,
 },
@@ -21,8 +21,6 @@
        .fixup_map_hash_16b = { 5 },
        .fixup_map_array_48b = { 8 },
        .result = ACCEPT,
-       .result_unpriv = REJECT,
-       .errstr_unpriv = "R1 tried to add from different maps",
        .retval = 1,
 },
 {
@@ -122,7 +120,7 @@
        .fixup_map_array_48b = { 1 },
        .result = ACCEPT,
        .result_unpriv = REJECT,
-       .errstr_unpriv = "R2 tried to add from different pointers or scalars",
+       .errstr_unpriv = "R2 tried to add from different maps, paths or scalars",
        .retval = 0,
 },
 {
@@ -169,7 +167,7 @@
        .fixup_map_array_48b = { 1 },
        .result = ACCEPT,
        .result_unpriv = REJECT,
-       .errstr_unpriv = "R2 tried to add from different maps, paths, or prohibited types",
+       .errstr_unpriv = "R2 tried to add from different maps, paths or scalars",
        .retval = 0,
 },
 {