Merge tag 'net-6.1-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from netfilter, wifi, can and bpf.

 Current release - new code bugs:

  - can: af_can: can_exit(): add missing dev_remove_pack() of canxl_packet

 Previous releases - regressions:

  - bpf, sockmap: fix the sk->sk_forward_alloc warning
  - wifi: mac80211: fix general-protection-fault in ieee80211_subif_start_xmit()
  - can: af_can: fix NULL pointer dereference in can_rx_register()
  - can: dev: fix skb drop check, avoid o-o-b access
  - nfnetlink: fix potential dead lock in nfnetlink_rcv_msg()

 Previous releases - always broken:

  - bpf: fix wrong reg type conversion in release_reference()
  - gso: fix panic on frag_list with mixed head alloc types
  - wifi: brcmfmac: fix buffer overflow in brcmf_fweh_event_worker()
  - wifi: mac80211: set TWT Information Frame Disabled bit as 1
  - eth: macsec offload related fixes, make sure to clear the keys from memory
  - tun: fix memory leaks in the use of napi_get_frags
  - tun: call napi_schedule_prep() to ensure we own a napi
  - tcp: prohibit TCP_REPAIR_OPTIONS if data was already sent
  - ipv6: addrlabel: fix infoleak when sending struct ifaddrlblmsg to network
  - tipc: fix a msg->req tlv length check
  - sctp: clear out_curr if all frag chunks of current msg are pruned, avoid list corruption
  - mctp: fix an error handling path in mctp_init(), avoid leaks"

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

* tag 'net-6.1-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (101 commits)
  eth: sp7021: drop free_netdev() from spl2sw_init_netdev()
  MAINTAINERS: Move Vivien to CREDITS
  net: macvlan: fix memory leaks of macvlan_common_newlink
  ethernet: tundra: free irq when alloc ring failed in tsi108_open()
  net: mv643xx_eth: disable napi when init rxq or txq failed in mv643xx_eth_open()
  ethernet: s2io: disable napi when start nic failed in s2io_card_up()
  net: atlantic: macsec: clear encryption keys from the stack
  net: phy: mscc: macsec: clear encryption keys when freeing a flow
  stmmac: dwmac-loongson: fix missing of_node_put() while module exiting
  stmmac: dwmac-loongson: fix missing pci_disable_device() in loongson_dwmac_probe()
  stmmac: dwmac-loongson: fix missing pci_disable_msi() while module exiting
  cxgb4vf: shut down the adapter when t4vf_update_port_info() failed in cxgb4vf_open()
  mctp: Fix an error handling path in mctp_init()
  stmmac: intel: Update PCH PTP clock rate from 200MHz to 204.8MHz
  net: cxgb3_main: disable napi when bind qsets failed in cxgb_up()
  net: cpsw: disable napi in cpsw_ndo_open()
  iavf: Fix VF driver counting VLAN 0 filters
  ice: Fix spurious interrupt during removal of trusted VF
  net/mlx5e: TC, Fix slab-out-of-bounds in parse_tc_actions
  net/mlx5e: E-Switch, Fix comparing termination table instance
  ...
commit 4bbf3422df
@@ -918,6 +918,11 @@ S: Ottawa, Ontario
 S: K1N 6Z9
 S: CANADA
 
+N: Vivien Didelot
+E: vivien.didelot@gmail.com
+D: DSA framework and MV88E6XXX driver
+S: Montreal, Quebec, Canada
+
 N: Jeff Dike
 E: jdike@karaya.com
 W: http://user-mode-linux.sourceforge.net
@@ -47,7 +47,7 @@ properties:
 
 nvmem-cells: true
 
-nvmem-cells-names: true
+nvmem-cell-names: true
 
 phy-connection-type:
 enum:
@@ -12226,7 +12226,6 @@ F: arch/mips/boot/dts/img/pistachio*
 
 MARVELL 88E6XXX ETHERNET SWITCH FABRIC DRIVER
 M: Andrew Lunn <andrew@lunn.ch>
-M: Vivien Didelot <vivien.didelot@gmail.com>
 L: netdev@vger.kernel.org
 S: Maintained
 F: Documentation/devicetree/bindings/net/dsa/marvell.txt
@@ -14324,7 +14323,6 @@ F: drivers/net/wireless/
 
 NETWORKING [DSA]
 M: Andrew Lunn <andrew@lunn.ch>
-M: Vivien Didelot <vivien.didelot@gmail.com>
 M: Florian Fainelli <f.fainelli@gmail.com>
 M: Vladimir Oltean <olteanv@gmail.com>
 S: Maintained
@@ -452,7 +452,7 @@ static netdev_tx_t at91_start_xmit(struct sk_buff *skb, struct net_device *dev)
 unsigned int mb, prio;
 u32 reg_mid, reg_mcr;
 
-if (can_dropped_invalid_skb(dev, skb))
+if (can_dev_dropped_skb(dev, skb))
 return NETDEV_TX_OK;
 
 mb = get_tx_next_mb(priv);
@@ -457,7 +457,7 @@ static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,
 struct c_can_tx_ring *tx_ring = &priv->tx;
 u32 idx, obj, cmd = IF_COMM_TX;
 
-if (can_dropped_invalid_skb(dev, skb))
+if (can_dev_dropped_skb(dev, skb))
 return NETDEV_TX_OK;
 
 if (c_can_tx_busy(priv, tx_ring))
@@ -813,7 +813,7 @@ static netdev_tx_t can327_netdev_start_xmit(struct sk_buff *skb,
 struct can327 *elm = netdev_priv(dev);
 struct can_frame *frame = (struct can_frame *)skb->data;
 
-if (can_dropped_invalid_skb(dev, skb))
+if (can_dev_dropped_skb(dev, skb))
 return NETDEV_TX_OK;
 
 /* We shouldn't get here after a hardware fault:
@@ -429,7 +429,7 @@ static netdev_tx_t cc770_start_xmit(struct sk_buff *skb, struct net_device *dev)
 struct cc770_priv *priv = netdev_priv(dev);
 unsigned int mo = obj2msgobj(CC770_OBJ_TX);
 
-if (can_dropped_invalid_skb(dev, skb))
+if (can_dev_dropped_skb(dev, skb))
 return NETDEV_TX_OK;
 
 netif_stop_queue(dev);
@@ -600,7 +600,7 @@ static netdev_tx_t ctucan_start_xmit(struct sk_buff *skb, struct net_device *nde
 bool ok;
 unsigned long flags;
 
-if (can_dropped_invalid_skb(ndev, skb))
+if (can_dev_dropped_skb(ndev, skb))
 return NETDEV_TX_OK;
 
 if (unlikely(!CTU_CAN_FD_TXTNF(priv))) {
@@ -5,7 +5,6 @@
 */
 
 #include <linux/can/dev.h>
-#include <linux/can/netlink.h>
 #include <linux/module.h>
 
 #define MOD_DESC "CAN device driver interface"
@@ -337,8 +336,6 @@ static bool can_skb_headroom_valid(struct net_device *dev, struct sk_buff *skb)
 /* Drop a given socketbuffer if it does not contain a valid CAN frame. */
 bool can_dropped_invalid_skb(struct net_device *dev, struct sk_buff *skb)
 {
-struct can_priv *priv = netdev_priv(dev);
-
 switch (ntohs(skb->protocol)) {
 case ETH_P_CAN:
 if (!can_is_can_skb(skb))
@@ -359,13 +356,8 @@ bool can_dropped_invalid_skb(struct net_device *dev, struct sk_buff *skb)
 goto inval_skb;
 }
 
-if (!can_skb_headroom_valid(dev, skb)) {
+if (!can_skb_headroom_valid(dev, skb))
 goto inval_skb;
-} else if (priv->ctrlmode & CAN_CTRLMODE_LISTENONLY) {
-netdev_info_once(dev,
-"interface in listen only mode, dropping skb\n");
-goto inval_skb;
-}
 
 return false;
 
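The can_dropped_invalid_skb() hunks above move the listen-only check out of the generic validity helper (which no longer needs netdev_priv(), fixing the out-of-bounds access on virtual devices) and into a wrapper that real controller drivers call instead; that is why every driver hunk below renames the call to can_dev_dropped_skb(). A minimal user-space sketch of that split follows; the struct layouts and the length check are simplified stand-ins, not the kernel implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel structures. */
struct sk_buff { int len; };
struct net_device { unsigned int ctrlmode; };
#define CAN_CTRLMODE_LISTENONLY 0x02

/* Frame-validity check only: safe for any device, including virtual
 * ones that have no can_priv behind netdev_priv(). */
static bool can_dropped_invalid_skb(struct net_device *dev, struct sk_buff *skb)
{
	(void)dev;
	return skb->len <= 0;	/* drop malformed frames */
}

/* Wrapper for real CAN controller drivers: additionally drops TX
 * frames while the interface is in listen-only mode. */
static bool can_dev_dropped_skb(struct net_device *dev, struct sk_buff *skb)
{
	if (dev->ctrlmode & CAN_CTRLMODE_LISTENONLY) {
		fprintf(stderr, "listen-only mode, dropping skb\n");
		return true;
	}
	return can_dropped_invalid_skb(dev, skb);
}
```

The point of the split is that only drivers for real hardware ever reach the listen-only branch, so the generic helper never touches driver-private state.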
@@ -742,7 +742,7 @@ static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *de
 u32 ctrl = FLEXCAN_MB_CODE_TX_DATA | ((can_fd_len2dlc(cfd->len)) << 16);
 int i;
 
-if (can_dropped_invalid_skb(dev, skb))
+if (can_dev_dropped_skb(dev, skb))
 return NETDEV_TX_OK;
 
 netif_stop_queue(dev);
@@ -1345,7 +1345,7 @@ static netdev_tx_t grcan_start_xmit(struct sk_buff *skb,
 unsigned long flags;
 u32 oneshotmode = priv->can.ctrlmode & CAN_CTRLMODE_ONE_SHOT;
 
-if (can_dropped_invalid_skb(dev, skb))
+if (can_dev_dropped_skb(dev, skb))
 return NETDEV_TX_OK;
 
 /* Trying to transmit in silent mode will generate error interrupts, but
@@ -860,7 +860,7 @@ static netdev_tx_t ifi_canfd_start_xmit(struct sk_buff *skb,
 u32 txst, txid, txdlc;
 int i;
 
-if (can_dropped_invalid_skb(ndev, skb))
+if (can_dev_dropped_skb(ndev, skb))
 return NETDEV_TX_OK;
 
 /* Check if the TX buffer is full */
@@ -1693,7 +1693,7 @@ static netdev_tx_t ican3_xmit(struct sk_buff *skb, struct net_device *ndev)
 void __iomem *desc_addr;
 unsigned long flags;
 
-if (can_dropped_invalid_skb(ndev, skb))
+if (can_dev_dropped_skb(ndev, skb))
 return NETDEV_TX_OK;
 
 spin_lock_irqsave(&mod->lock, flags);
@@ -772,7 +772,7 @@ static netdev_tx_t kvaser_pciefd_start_xmit(struct sk_buff *skb,
 int nwords;
 u8 count;
 
-if (can_dropped_invalid_skb(netdev, skb))
+if (can_dev_dropped_skb(netdev, skb))
 return NETDEV_TX_OK;
 
 nwords = kvaser_pciefd_prepare_tx_packet(&packet, can, skb);
@@ -1721,7 +1721,7 @@ static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
 {
 struct m_can_classdev *cdev = netdev_priv(dev);
 
-if (can_dropped_invalid_skb(dev, skb))
+if (can_dev_dropped_skb(dev, skb))
 return NETDEV_TX_OK;
 
 if (cdev->is_peripheral) {
@@ -191,7 +191,7 @@ static netdev_tx_t mscan_start_xmit(struct sk_buff *skb, struct net_device *dev)
 int i, rtr, buf_id;
 u32 can_id;
 
-if (can_dropped_invalid_skb(dev, skb))
+if (can_dev_dropped_skb(dev, skb))
 return NETDEV_TX_OK;
 
 out_8(&regs->cantier, 0);
@@ -882,7 +882,7 @@ static netdev_tx_t pch_xmit(struct sk_buff *skb, struct net_device *ndev)
 int i;
 u32 id2;
 
-if (can_dropped_invalid_skb(ndev, skb))
+if (can_dev_dropped_skb(ndev, skb))
 return NETDEV_TX_OK;
 
 tx_obj_no = priv->tx_obj;
@@ -651,7 +651,7 @@ static netdev_tx_t peak_canfd_start_xmit(struct sk_buff *skb,
 int room_left;
 u8 len;
 
-if (can_dropped_invalid_skb(ndev, skb))
+if (can_dev_dropped_skb(ndev, skb))
 return NETDEV_TX_OK;
 
 msg_size = ALIGN(sizeof(*msg) + cf->len, 4);
@@ -590,7 +590,7 @@ static netdev_tx_t rcar_can_start_xmit(struct sk_buff *skb,
 struct can_frame *cf = (struct can_frame *)skb->data;
 u32 data, i;
 
-if (can_dropped_invalid_skb(ndev, skb))
+if (can_dev_dropped_skb(ndev, skb))
 return NETDEV_TX_OK;
 
 if (cf->can_id & CAN_EFF_FLAG) /* Extended frame format */
@@ -81,8 +81,7 @@ enum rcanfd_chip_id {
 
 /* RSCFDnCFDGERFL / RSCFDnGERFL */
 #define RCANFD_GERFL_EEF0_7 GENMASK(23, 16)
-#define RCANFD_GERFL_EEF1 BIT(17)
-#define RCANFD_GERFL_EEF0 BIT(16)
+#define RCANFD_GERFL_EEF(ch) BIT(16 + (ch))
 #define RCANFD_GERFL_CMPOF BIT(3) /* CAN FD only */
 #define RCANFD_GERFL_THLES BIT(2)
 #define RCANFD_GERFL_MES BIT(1)
@@ -90,7 +89,7 @@ enum rcanfd_chip_id {
 
 #define RCANFD_GERFL_ERR(gpriv, x) \
 ((x) & (reg_v3u(gpriv, RCANFD_GERFL_EEF0_7, \
-RCANFD_GERFL_EEF0 | RCANFD_GERFL_EEF1) | \
+RCANFD_GERFL_EEF(0) | RCANFD_GERFL_EEF(1)) | \
 RCANFD_GERFL_MES | \
 ((gpriv)->fdmode ? RCANFD_GERFL_CMPOF : 0)))
 
@@ -936,12 +935,8 @@ static void rcar_canfd_global_error(struct net_device *ndev)
 u32 ridx = ch + RCANFD_RFFIFO_IDX;
 
 gerfl = rcar_canfd_read(priv->base, RCANFD_GERFL);
-if ((gerfl & RCANFD_GERFL_EEF0) && (ch == 0)) {
-netdev_dbg(ndev, "Ch0: ECC Error flag\n");
-stats->tx_dropped++;
-}
-if ((gerfl & RCANFD_GERFL_EEF1) && (ch == 1)) {
-netdev_dbg(ndev, "Ch1: ECC Error flag\n");
+if (gerfl & RCANFD_GERFL_EEF(ch)) {
+netdev_dbg(ndev, "Ch%u: ECC Error flag\n", ch);
 stats->tx_dropped++;
 }
 if (gerfl & RCANFD_GERFL_MES) {
@@ -1481,7 +1476,7 @@ static netdev_tx_t rcar_canfd_start_xmit(struct sk_buff *skb,
 unsigned long flags;
 u32 ch = priv->channel;
 
-if (can_dropped_invalid_skb(ndev, skb))
+if (can_dev_dropped_skb(ndev, skb))
 return NETDEV_TX_OK;
 
 if (cf->can_id & CAN_EFF_FLAG) {
@@ -291,7 +291,7 @@ static netdev_tx_t sja1000_start_xmit(struct sk_buff *skb,
 u8 cmd_reg_val = 0x00;
 int i;
 
-if (can_dropped_invalid_skb(dev, skb))
+if (can_dev_dropped_skb(dev, skb))
 return NETDEV_TX_OK;
 
 netif_stop_queue(dev);
@@ -594,7 +594,7 @@ static netdev_tx_t slcan_netdev_xmit(struct sk_buff *skb,
 {
 struct slcan *sl = netdev_priv(dev);
 
-if (can_dropped_invalid_skb(dev, skb))
+if (can_dev_dropped_skb(dev, skb))
 return NETDEV_TX_OK;
 
 spin_lock(&sl->lock);
@@ -60,7 +60,7 @@ static netdev_tx_t softing_netdev_start_xmit(struct sk_buff *skb,
 struct can_frame *cf = (struct can_frame *)skb->data;
 uint8_t buf[DPRAM_TX_SIZE];
 
-if (can_dropped_invalid_skb(dev, skb))
+if (can_dev_dropped_skb(dev, skb))
 return NETDEV_TX_OK;
 
 spin_lock(&card->spin);
@@ -373,7 +373,7 @@ static netdev_tx_t hi3110_hard_start_xmit(struct sk_buff *skb,
 return NETDEV_TX_BUSY;
 }
 
-if (can_dropped_invalid_skb(net, skb))
+if (can_dev_dropped_skb(net, skb))
 return NETDEV_TX_OK;
 
 netif_stop_queue(net);
@@ -789,7 +789,7 @@ static netdev_tx_t mcp251x_hard_start_xmit(struct sk_buff *skb,
 return NETDEV_TX_BUSY;
 }
 
-if (can_dropped_invalid_skb(net, skb))
+if (can_dev_dropped_skb(net, skb))
 return NETDEV_TX_OK;
 
 netif_stop_queue(net);
@@ -172,7 +172,7 @@ netdev_tx_t mcp251xfd_start_xmit(struct sk_buff *skb,
 u8 tx_head;
 int err;
 
-if (can_dropped_invalid_skb(ndev, skb))
+if (can_dev_dropped_skb(ndev, skb))
 return NETDEV_TX_OK;
 
 if (mcp251xfd_tx_busy(priv, tx_ring))
@@ -429,7 +429,7 @@ static netdev_tx_t sun4ican_start_xmit(struct sk_buff *skb, struct net_device *d
 canid_t id;
 int i;
 
-if (can_dropped_invalid_skb(dev, skb))
+if (can_dev_dropped_skb(dev, skb))
 return NETDEV_TX_OK;
 
 netif_stop_queue(dev);
@@ -470,7 +470,7 @@ static netdev_tx_t ti_hecc_xmit(struct sk_buff *skb, struct net_device *ndev)
 u32 mbxno, mbx_mask, data;
 unsigned long flags;
 
-if (can_dropped_invalid_skb(ndev, skb))
+if (can_dev_dropped_skb(ndev, skb))
 return NETDEV_TX_OK;
 
 mbxno = get_tx_head_mb(priv);
@@ -747,7 +747,7 @@ static netdev_tx_t ems_usb_start_xmit(struct sk_buff *skb, struct net_device *ne
 size_t size = CPC_HEADER_SIZE + CPC_MSG_HEADER_LEN
 + sizeof(struct cpc_can_msg);
 
-if (can_dropped_invalid_skb(netdev, skb))
+if (can_dev_dropped_skb(netdev, skb))
 return NETDEV_TX_OK;
 
 /* create a URB, and a buffer for it, and copy the data to the URB */
@@ -725,7 +725,7 @@ static netdev_tx_t esd_usb_start_xmit(struct sk_buff *skb,
 int ret = NETDEV_TX_OK;
 size_t size = sizeof(struct esd_usb_msg);
 
-if (can_dropped_invalid_skb(netdev, skb))
+if (can_dev_dropped_skb(netdev, skb))
 return NETDEV_TX_OK;
 
 /* create a URB, and a buffer for it, and copy the data to the URB */
@@ -1913,7 +1913,7 @@ static netdev_tx_t es58x_start_xmit(struct sk_buff *skb,
 unsigned int frame_len;
 int ret;
 
-if (can_dropped_invalid_skb(netdev, skb)) {
+if (can_dev_dropped_skb(netdev, skb)) {
 if (priv->tx_urb)
 goto xmit_commit;
 return NETDEV_TX_OK;
@@ -723,7 +723,7 @@ static netdev_tx_t gs_can_start_xmit(struct sk_buff *skb,
 unsigned int idx;
 struct gs_tx_context *txc;
 
-if (can_dropped_invalid_skb(netdev, skb))
+if (can_dev_dropped_skb(netdev, skb))
 return NETDEV_TX_OK;
 
 /* find an empty context to keep track of transmission */
@@ -570,7 +570,7 @@ static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb,
 unsigned int i;
 unsigned long flags;
 
-if (can_dropped_invalid_skb(netdev, skb))
+if (can_dev_dropped_skb(netdev, skb))
 return NETDEV_TX_OK;
 
 urb = usb_alloc_urb(0, GFP_ATOMIC);
@@ -311,7 +311,7 @@ static netdev_tx_t mcba_usb_start_xmit(struct sk_buff *skb,
 .cmd_id = MBCA_CMD_TRANSMIT_MESSAGE_EV
 };
 
-if (can_dropped_invalid_skb(netdev, skb))
+if (can_dev_dropped_skb(netdev, skb))
 return NETDEV_TX_OK;
 
 ctx = mcba_usb_get_free_ctx(priv, cf);
@@ -351,7 +351,7 @@ static netdev_tx_t peak_usb_ndo_start_xmit(struct sk_buff *skb,
 int i, err;
 size_t size = dev->adapter->tx_buffer_size;
 
-if (can_dropped_invalid_skb(netdev, skb))
+if (can_dev_dropped_skb(netdev, skb))
 return NETDEV_TX_OK;
 
 for (i = 0; i < PCAN_USB_MAX_TX_URBS; i++)
@@ -1120,7 +1120,7 @@ static netdev_tx_t ucan_start_xmit(struct sk_buff *skb,
 struct can_frame *cf = (struct can_frame *)skb->data;
 
 /* check skb */
-if (can_dropped_invalid_skb(netdev, skb))
+if (can_dev_dropped_skb(netdev, skb))
 return NETDEV_TX_OK;
 
 /* allocate a context and slow down tx path, if fifo state is low */
@@ -602,7 +602,7 @@ static netdev_tx_t usb_8dev_start_xmit(struct sk_buff *skb,
 int i, err;
 size_t size = sizeof(struct usb_8dev_tx_msg);
 
-if (can_dropped_invalid_skb(netdev, skb))
+if (can_dev_dropped_skb(netdev, skb))
 return NETDEV_TX_OK;
 
 /* create a URB, and a buffer for it, and copy the data to the URB */
@@ -743,7 +743,7 @@ static netdev_tx_t xcan_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 struct xcan_priv *priv = netdev_priv(ndev);
 int ret;
 
-if (can_dropped_invalid_skb(ndev, skb))
+if (can_dev_dropped_skb(ndev, skb))
 return NETDEV_TX_OK;
 
 if (priv->devtype.flags & XCAN_FLAG_TX_MAILBOXES)
@@ -1004,8 +1004,10 @@ static int xgene_enet_open(struct net_device *ndev)
 
 xgene_enet_napi_enable(pdata);
 ret = xgene_enet_register_irq(ndev);
-if (ret)
+if (ret) {
+xgene_enet_napi_disable(pdata);
 return ret;
+}
 
 if (ndev->phydev) {
 phy_start(ndev->phydev);
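The xgene_enet_open() hunk is one instance of a pattern repeated across this pull (mv643xx_eth, s2io, cxgb3, cpsw): every setup step taken before a failing call must be rolled back on the error path, here re-disabling napi when the IRQ request fails. A generic user-space sketch of that unwind ordering, with made-up step names:

```c
#include <assert.h>
#include <stdbool.h>

static int enabled;	/* tracks whether the "napi" step is active */

static void napi_enable(void)  { enabled = 1; }
static void napi_disable(void) { enabled = 0; }

/* Hypothetical open() that may fail at the IRQ step; on failure it
 * must undo the napi_enable() performed just before. */
static int device_open(bool irq_ok)
{
	napi_enable();
	if (!irq_ok) {
		napi_disable();	/* unwind the step above */
		return -1;
	}
	return 0;
}
```

The general rule these fixes enforce is that the error path mirrors the setup path in reverse order.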
@@ -570,6 +570,7 @@ static int aq_update_txsa(struct aq_nic_s *nic, const unsigned int sc_idx,
 
 ret = aq_mss_set_egress_sakey_record(hw, &key_rec, sa_idx);
 
+memzero_explicit(&key_rec, sizeof(key_rec));
 return ret;
 }
 
@@ -899,6 +900,7 @@ static int aq_update_rxsa(struct aq_nic_s *nic, const unsigned int sc_idx,
 
 ret = aq_mss_set_ingress_sakey_record(hw, &sa_key_record, sa_idx);
 
+memzero_explicit(&sa_key_record, sizeof(sa_key_record));
 return ret;
 }
 
@@ -757,6 +757,7 @@ set_ingress_sakey_record(struct aq_hw_s *hw,
 u16 table_index)
 {
 u16 packed_record[18];
+int ret;
 
 if (table_index >= NUMROWS_INGRESSSAKEYRECORD)
 return -EINVAL;
@@ -789,9 +790,12 @@ set_ingress_sakey_record(struct aq_hw_s *hw,
 
 packed_record[16] = rec->key_len & 0x3;
 
-return set_raw_ingress_record(hw, packed_record, 18, 2,
+ret = set_raw_ingress_record(hw, packed_record, 18, 2,
 ROWOFFSET_INGRESSSAKEYRECORD +
 table_index);
+
+memzero_explicit(packed_record, sizeof(packed_record));
+return ret;
 }
 
 int aq_mss_set_ingress_sakey_record(struct aq_hw_s *hw,
@@ -1739,14 +1743,14 @@ static int set_egress_sakey_record(struct aq_hw_s *hw,
 ret = set_raw_egress_record(hw, packed_record, 8, 2,
 ROWOFFSET_EGRESSSAKEYRECORD + table_index);
 if (unlikely(ret))
-return ret;
+goto clear_key;
 ret = set_raw_egress_record(hw, packed_record + 8, 8, 2,
 ROWOFFSET_EGRESSSAKEYRECORD + table_index -
 32);
-if (unlikely(ret))
-return ret;
 
-return 0;
+clear_key:
+memzero_explicit(packed_record, sizeof(packed_record));
+return ret;
 }
 
 int aq_mss_set_egress_sakey_record(struct aq_hw_s *hw,
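The aq_update_txsa()/aq_update_rxsa() hunks wipe macsec key material from stack buffers after use. A plain memset() before return can be elided by the optimizer as a dead store, which is why the kernel provides memzero_explicit(). A standalone user-space approximation of that guarantee, using a volatile function pointer (an assumption for this demo; the kernel helper uses a compiler barrier instead):

```c
#include <assert.h>
#include <string.h>

/* Routing memset through a volatile function pointer prevents the
 * compiler from proving the store is dead and deleting it. */
static void *(*volatile memset_v)(void *, int, size_t) = memset;

static void memzero_explicit_demo(void *buf, size_t len)
{
	memset_v(buf, 0, len);
}

/* Put (fake) key material in a stack buffer, use it, then wipe it
 * before returning so the secret does not linger on the stack. */
static int use_key(unsigned char *probe)
{
	unsigned char key[16];
	int ret;

	memset(key, 0xA5, sizeof(key));	/* pretend this is secret material */
	ret = (key[0] == 0xA5) ? 0 : -1;	/* "use" the key */
	memzero_explicit_demo(key, sizeof(key));
	memcpy(probe, key, sizeof(key));	/* expose the wiped buffer for inspection */
	return ret;
}
```

The diff applies the same idea at every exit path, including the error path (`goto clear_key`), so the packed key record is cleared whether or not the hardware write succeeded.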
@@ -77,7 +77,7 @@ config BCMGENET
 select BCM7XXX_PHY
 select MDIO_BCM_UNIMAC
 select DIMLIB
-select BROADCOM_PHY if ARCH_BCM2835
+select BROADCOM_PHY if (ARCH_BCM2835 && PTP_1588_CLOCK_OPTIONAL)
 help
 This driver supports the built-in Ethernet MACs found in the
 Broadcom BCM7xxx Set Top Box family chipset.
@@ -9983,17 +9983,12 @@ static int bnxt_try_recover_fw(struct bnxt *bp)
 return -ENODEV;
 }
 
-int bnxt_cancel_reservations(struct bnxt *bp, bool fw_reset)
+static void bnxt_clear_reservations(struct bnxt *bp, bool fw_reset)
 {
 struct bnxt_hw_resc *hw_resc = &bp->hw_resc;
-int rc;
 
 if (!BNXT_NEW_RM(bp))
-return 0; /* no resource reservations required */
-
-rc = bnxt_hwrm_func_resc_qcaps(bp, true);
-if (rc)
-netdev_err(bp->dev, "resc_qcaps failed\n");
+return; /* no resource reservations required */
 
 hw_resc->resv_cp_rings = 0;
 hw_resc->resv_stat_ctxs = 0;
@@ -10006,6 +10001,20 @@ int bnxt_cancel_reservations(struct bnxt *bp, bool fw_reset)
 bp->tx_nr_rings = 0;
 bp->rx_nr_rings = 0;
 }
+}
+
+int bnxt_cancel_reservations(struct bnxt *bp, bool fw_reset)
+{
+int rc;
+
+if (!BNXT_NEW_RM(bp))
+return 0; /* no resource reservations required */
+
+rc = bnxt_hwrm_func_resc_qcaps(bp, true);
+if (rc)
+netdev_err(bp->dev, "resc_qcaps failed\n");
+
+bnxt_clear_reservations(bp, fw_reset);
 
 return rc;
 }
@@ -12894,8 +12903,8 @@ static int bnxt_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
 rcu_read_lock();
 hlist_for_each_entry_rcu(fltr, head, hash) {
 if (bnxt_fltr_match(fltr, new_fltr)) {
+rc = fltr->sw_id;
 rcu_read_unlock();
-rc = 0;
 goto err_free;
 }
 }
@@ -13913,7 +13922,9 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev)
 	pci_ers_result_t result = PCI_ERS_RESULT_DISCONNECT;
 	struct net_device *netdev = pci_get_drvdata(pdev);
 	struct bnxt *bp = netdev_priv(netdev);
-	int err = 0, off;
+	int retry = 0;
+	int err = 0;
+	int off;
 
 	netdev_info(bp->dev, "PCI Slot Reset\n");
 
@@ -13941,11 +13952,36 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev)
 		pci_restore_state(pdev);
 		pci_save_state(pdev);
 
+		bnxt_inv_fw_health_reg(bp);
+		bnxt_try_map_fw_health_reg(bp);
+
+		/* In some PCIe AER scenarios, firmware may take up to
+		 * 10 seconds to become ready in the worst case.
+		 */
+		do {
+			err = bnxt_try_recover_fw(bp);
+			if (!err)
+				break;
+			retry++;
+		} while (retry < BNXT_FW_SLOT_RESET_RETRY);
+
+		if (err) {
+			dev_err(&pdev->dev, "Firmware not ready\n");
+			goto reset_exit;
+		}
+
 		err = bnxt_hwrm_func_reset(bp);
 		if (!err)
 			result = PCI_ERS_RESULT_RECOVERED;
+
+		bnxt_ulp_irq_stop(bp);
+		bnxt_clear_int_mode(bp);
+		err = bnxt_init_int_mode(bp);
+		bnxt_ulp_irq_restart(bp, err);
 	}
 
+reset_exit:
+	bnxt_clear_reservations(bp, true);
 	rtnl_unlock();
 
 	return result;
@@ -1621,6 +1621,7 @@ struct bnxt_fw_health {
 
 #define BNXT_FW_RETRY			5
 #define BNXT_FW_IF_RETRY		10
+#define BNXT_FW_SLOT_RESET_RETRY	4
 
 enum board_idx {
 	BCM57301,
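Aside: the slot-reset change above caps firmware recovery at `BNXT_FW_SLOT_RESET_RETRY` attempts instead of looping forever. A minimal userspace sketch of the same bounded-retry shape (the helper names and the fake recovery function are illustrative, not from the driver):

```c
#include <assert.h>

#define FW_SLOT_RESET_RETRY 4	/* mirrors BNXT_FW_SLOT_RESET_RETRY */

/* Stand-in for bnxt_try_recover_fw(): succeeds on the Nth attempt. */
static int attempts_needed;
static int attempts_made;

static int try_recover_fw(void)
{
	attempts_made++;
	return (attempts_made >= attempts_needed) ? 0 : -11 /* -EAGAIN */;
}

/* Same shape as the loop added to bnxt_io_slot_reset(): retry a bounded
 * number of times, breaking out early on success. */
static int recover_with_retries(void)
{
	int retry = 0;
	int err;

	do {
		err = try_recover_fw();
		if (!err)
			break;
		retry++;
	} while (retry < FW_SLOT_RESET_RETRY);

	return err;
}
```

The bound matters in the AER path: a firmware that never comes back must not wedge the reset handler, so the caller can still report `PCI_ERS_RESULT_DISCONNECT`.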
@@ -162,7 +162,7 @@ static int bnxt_set_coalesce(struct net_device *dev,
 	}
 
 reset_coalesce:
-	if (netif_running(dev)) {
+	if (test_bit(BNXT_STATE_OPEN, &bp->state)) {
 		if (update_stats) {
 			rc = bnxt_close_nic(bp, true, false);
 			if (!rc)
@@ -476,7 +476,8 @@ static int __hwrm_send(struct bnxt *bp, struct bnxt_hwrm_ctx *ctx)
 		memset(ctx->resp, 0, PAGE_SIZE);
 
 	req_type = le16_to_cpu(ctx->req->req_type);
-	if (BNXT_NO_FW_ACCESS(bp) && req_type != HWRM_FUNC_RESET) {
+	if (BNXT_NO_FW_ACCESS(bp) &&
+	    (req_type != HWRM_FUNC_RESET && req_type != HWRM_VER_GET)) {
 		netdev_dbg(bp->dev, "hwrm req_type 0x%x skipped, FW channel down\n",
 			   req_type);
 		goto exit;
@@ -1301,6 +1301,7 @@ static int cxgb_up(struct adapter *adap)
 		if (ret < 0) {
 			CH_ERR(adap, "failed to bind qsets, err %d\n", ret);
 			t3_intr_disable(adap);
+			quiesce_rx(adap);
 			free_irq_resources(adap);
 			err = ret;
 			goto out;
@@ -858,7 +858,7 @@ static int cxgb4vf_open(struct net_device *dev)
 	 */
 	err = t4vf_update_port_info(pi);
 	if (err < 0)
-		return err;
+		goto err_unwind;
 
 	/*
 	 * Note that this interface is up and start everything up ...
@@ -487,12 +487,21 @@ _return_of_node_put:
 	return err;
 }
 
+static int mac_remove(struct platform_device *pdev)
+{
+	struct mac_device *mac_dev = platform_get_drvdata(pdev);
+
+	platform_device_unregister(mac_dev->priv->eth_dev);
+	return 0;
+}
+
 static struct platform_driver mac_driver = {
 	.driver = {
 		.name		= KBUILD_MODNAME,
 		.of_match_table	= mac_match,
 	},
 	.probe		= mac_probe,
+	.remove		= mac_remove,
 };
 
 builtin_platform_driver(mac_driver);
@@ -12984,14 +12984,16 @@ static void hclge_clean_vport_config(struct hnae3_ae_dev *ae_dev, int num_vfs)
 static int hclge_get_dscp_prio(struct hnae3_handle *h, u8 dscp, u8 *tc_mode,
 			       u8 *priority)
 {
+	struct hclge_vport *vport = hclge_get_vport(h);
+
 	if (dscp >= HNAE3_MAX_DSCP)
 		return -EINVAL;
 
 	if (tc_mode)
-		*tc_mode = h->kinfo.tc_map_mode;
+		*tc_mode = vport->nic.kinfo.tc_map_mode;
 	if (priority)
-		*priority = h->kinfo.dscp_prio[dscp] == HNAE3_PRIO_ID_INVALID ? 0 :
-			    h->kinfo.dscp_prio[dscp];
+		*priority = vport->nic.kinfo.dscp_prio[dscp] == HNAE3_PRIO_ID_INVALID ? 0 :
+			    vport->nic.kinfo.dscp_prio[dscp];
 
 	return 0;
 }
@@ -1757,7 +1757,8 @@ static int ibmveth_probe(struct vio_dev *dev, const struct vio_device_id *id)
 		kobject_uevent(kobj, KOBJ_ADD);
 	}
 
-	rc = netif_set_real_num_tx_queues(netdev, ibmveth_real_max_tx_queues());
+	rc = netif_set_real_num_tx_queues(netdev, min(num_online_cpus(),
+						      IBMVETH_DEFAULT_QUEUES));
 	if (rc) {
 		netdev_dbg(netdev, "failed to set number of tx queues rc=%d\n",
 			   rc);
@@ -100,6 +100,7 @@ static inline long h_illan_attributes(unsigned long unit_address,
 #define IBMVETH_MAX_BUF_SIZE		(1024 * 128)
 #define IBMVETH_MAX_TX_BUF_SIZE		(1024 * 64)
 #define IBMVETH_MAX_QUEUES		16U
+#define IBMVETH_DEFAULT_QUEUES		8U
 
 static int pool_size[] = { 512, 1024 * 2, 1024 * 16, 1024 * 32, 1024 * 64 };
 static int pool_count[] = { 256, 512, 256, 256, 256 };
@@ -2438,6 +2438,8 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
 		list_for_each_entry(f, &adapter->vlan_filter_list, list) {
 			if (f->is_new_vlan) {
 				f->is_new_vlan = false;
+				if (!f->vlan.vid)
+					continue;
 				if (f->vlan.tpid == ETH_P_8021Q)
 					set_bit(f->vlan.vid,
 						adapter->vsi.active_cvlans);
@@ -958,7 +958,7 @@ ice_vsi_stop_tx_ring(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,
 	 * associated to the queue to schedule NAPI handler
 	 */
 	q_vector = ring->q_vector;
-	if (q_vector)
+	if (q_vector && !(vsi->vf && ice_is_vf_disabled(vsi->vf)))
 		ice_trigger_sw_intr(hw, q_vector);
 
 	status = ice_dis_vsi_txq(vsi->port_info, txq_meta->vsi_idx,
@@ -2239,6 +2239,31 @@ int ice_vsi_stop_xdp_tx_rings(struct ice_vsi *vsi)
 	return ice_vsi_stop_tx_rings(vsi, ICE_NO_RESET, 0, vsi->xdp_rings, vsi->num_xdp_txq);
 }
 
+/**
+ * ice_vsi_is_rx_queue_active
+ * @vsi: the VSI being configured
+ *
+ * Return true if at least one queue is active.
+ */
+bool ice_vsi_is_rx_queue_active(struct ice_vsi *vsi)
+{
+	struct ice_pf *pf = vsi->back;
+	struct ice_hw *hw = &pf->hw;
+	int i;
+
+	ice_for_each_rxq(vsi, i) {
+		u32 rx_reg;
+		int pf_q;
+
+		pf_q = vsi->rxq_map[i];
+		rx_reg = rd32(hw, QRX_CTRL(pf_q));
+		if (rx_reg & QRX_CTRL_QENA_STAT_M)
+			return true;
+	}
+
+	return false;
+}
+
 /**
  * ice_vsi_is_vlan_pruning_ena - check if VLAN pruning is enabled or not
  * @vsi: VSI to check whether or not VLAN pruning is enabled.
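The new `ice_vsi_is_rx_queue_active()` helper above is an early-exit scan: walk every Rx queue's control register and report true as soon as one has the enabled status bit set. A self-contained sketch of that pattern over an in-memory array (the bit position and names here are illustrative stand-ins for `QRX_CTRL_QENA_STAT_M`, not the real register layout):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define QENA_STAT_BIT (1u << 2)	/* illustrative stand-in for QRX_CTRL_QENA_STAT_M */

/* Same idea as ice_vsi_is_rx_queue_active(): return true as soon as any
 * queue's control word carries the "queue enabled" status bit. */
static bool any_rx_queue_active(const uint32_t *qrx_ctrl, int nqueues)
{
	for (int i = 0; i < nqueues; i++) {
		if (qrx_ctrl[i] & QENA_STAT_BIT)
			return true;
	}
	return false;
}
```

In the VF-reset path this check lets the driver skip `ice_vsi_stop_all_rx_rings()` when nothing is actually enabled, avoiding errors on already-disabled queues.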
@@ -129,4 +129,5 @@ u16 ice_vsi_num_non_zero_vlans(struct ice_vsi *vsi);
 bool ice_is_feature_supported(struct ice_pf *pf, enum ice_feature f);
 void ice_clear_feature_support(struct ice_pf *pf, enum ice_feature f);
 void ice_init_feature_support(struct ice_pf *pf);
+bool ice_vsi_is_rx_queue_active(struct ice_vsi *vsi);
 #endif /* !_ICE_LIB_H_ */
@@ -576,7 +576,10 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags)
 			return -EINVAL;
 		}
 		ice_vsi_stop_lan_tx_rings(vsi, ICE_NO_RESET, vf->vf_id);
-		ice_vsi_stop_all_rx_rings(vsi);
+
+		if (ice_vsi_is_rx_queue_active(vsi))
+			ice_vsi_stop_all_rx_rings(vsi);
+
 		dev_dbg(dev, "VF is already disabled, there is no need for resetting it, telling VM, all is fine %d\n",
 			vf->vf_id);
 		return 0;
@@ -2481,6 +2481,8 @@ out_free:
 	for (i = 0; i < mp->rxq_count; i++)
 		rxq_deinit(mp->rxq + i);
 out:
+	napi_disable(&mp->napi);
 	free_irq(dev->irq, dev);
 
 	return err;
@@ -32,10 +32,12 @@ config OCTEONTX2_PF
	tristate "Marvell OcteonTX2 NIC Physical Function driver"
	select OCTEONTX2_MBOX
	select NET_DEVLINK
+	depends on MACSEC || !MACSEC
	depends on (64BIT && COMPILE_TEST) || ARM64
	select DIMLIB
	depends on PCI
	depends on PTP_1588_CLOCK_OPTIONAL
+	depends on MACSEC || !MACSEC
	help
	  This driver supports Marvell's OcteonTX2 NIC physical function.
 
@@ -898,6 +898,7 @@ static int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura)
 	}
 
 	sq->head = 0;
+	sq->cons_head = 0;
 	sq->sqe_per_sqb = (pfvf->hw.sqb_size / sq->sqe_size) - 1;
 	sq->num_sqbs = (qset->sqe_cnt + sq->sqe_per_sqb) / sq->sqe_per_sqb;
 	/* Set SQE threshold to 10% of total SQEs */
@@ -15,6 +15,7 @@
 #include <net/ip.h>
 #include <linux/bpf.h>
 #include <linux/bpf_trace.h>
+#include <linux/bitfield.h>
 
 #include "otx2_reg.h"
 #include "otx2_common.h"
@@ -1171,6 +1172,59 @@ int otx2_set_real_num_queues(struct net_device *netdev,
 }
 EXPORT_SYMBOL(otx2_set_real_num_queues);
 
+static char *nix_sqoperr_e_str[NIX_SQOPERR_MAX] = {
+	"NIX_SQOPERR_OOR",
+	"NIX_SQOPERR_CTX_FAULT",
+	"NIX_SQOPERR_CTX_POISON",
+	"NIX_SQOPERR_DISABLED",
+	"NIX_SQOPERR_SIZE_ERR",
+	"NIX_SQOPERR_OFLOW",
+	"NIX_SQOPERR_SQB_NULL",
+	"NIX_SQOPERR_SQB_FAULT",
+	"NIX_SQOPERR_SQE_SZ_ZERO",
+};
+
+static char *nix_mnqerr_e_str[NIX_MNQERR_MAX] = {
+	"NIX_MNQERR_SQ_CTX_FAULT",
+	"NIX_MNQERR_SQ_CTX_POISON",
+	"NIX_MNQERR_SQB_FAULT",
+	"NIX_MNQERR_SQB_POISON",
+	"NIX_MNQERR_TOTAL_ERR",
+	"NIX_MNQERR_LSO_ERR",
+	"NIX_MNQERR_CQ_QUERY_ERR",
+	"NIX_MNQERR_MAX_SQE_SIZE_ERR",
+	"NIX_MNQERR_MAXLEN_ERR",
+	"NIX_MNQERR_SQE_SIZEM1_ZERO",
+};
+
+static char *nix_snd_status_e_str[NIX_SND_STATUS_MAX] = {
+	"NIX_SND_STATUS_GOOD",
+	"NIX_SND_STATUS_SQ_CTX_FAULT",
+	"NIX_SND_STATUS_SQ_CTX_POISON",
+	"NIX_SND_STATUS_SQB_FAULT",
+	"NIX_SND_STATUS_SQB_POISON",
+	"NIX_SND_STATUS_HDR_ERR",
+	"NIX_SND_STATUS_EXT_ERR",
+	"NIX_SND_STATUS_JUMP_FAULT",
+	"NIX_SND_STATUS_JUMP_POISON",
+	"NIX_SND_STATUS_CRC_ERR",
+	"NIX_SND_STATUS_IMM_ERR",
+	"NIX_SND_STATUS_SG_ERR",
+	"NIX_SND_STATUS_MEM_ERR",
+	"NIX_SND_STATUS_INVALID_SUBDC",
+	"NIX_SND_STATUS_SUBDC_ORDER_ERR",
+	"NIX_SND_STATUS_DATA_FAULT",
+	"NIX_SND_STATUS_DATA_POISON",
+	"NIX_SND_STATUS_NPC_DROP_ACTION",
+	"NIX_SND_STATUS_LOCK_VIOL",
+	"NIX_SND_STATUS_NPC_UCAST_CHAN_ERR",
+	"NIX_SND_STATUS_NPC_MCAST_CHAN_ERR",
+	"NIX_SND_STATUS_NPC_MCAST_ABORT",
+	"NIX_SND_STATUS_NPC_VTAG_PTR_ERR",
+	"NIX_SND_STATUS_NPC_VTAG_SIZE_ERR",
+	"NIX_SND_STATUS_SEND_STATS_ERR",
+};
+
 static irqreturn_t otx2_q_intr_handler(int irq, void *data)
 {
 	struct otx2_nic *pf = data;
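The string tables added above are indexed directly by the hardware error code, with the enum's `_MAX` value sizing the array. A minimal sketch of that enum-indexed lookup, with the bounds check a caller should apply before indexing (this trimmed-down enum and the `"UNKNOWN"` fallback are illustrative, not from the driver):

```c
#include <assert.h>
#include <string.h>

/* Illustrative subset of the nix_sqoperr_e / nix_sqoperr_e_str pairing:
 * the enum value is the index into the string table. */
enum sqoperr {
	SQOPERR_OOR = 0,
	SQOPERR_CTX_FAULT = 1,
	SQOPERR_SQB_NULL = 2,
	SQOPERR_MAX,
};

static const char *sqoperr_str[SQOPERR_MAX] = {
	"OOR",
	"CTX_FAULT",
	"SQB_NULL",
};

/* Bounds-checked lookup: never index past the table for a code the
 * hardware might report that software does not know about. */
static const char *sqoperr_name(unsigned int code)
{
	if (code >= SQOPERR_MAX)
		return "UNKNOWN";
	return sqoperr_str[code];
}
```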
@@ -1204,46 +1258,67 @@ static irqreturn_t otx2_q_intr_handler(int irq, void *data)
 
 	/* SQ */
 	for (qidx = 0; qidx < pf->hw.tot_tx_queues; qidx++) {
+		u64 sq_op_err_dbg, mnq_err_dbg, snd_err_dbg;
+		u8 sq_op_err_code, mnq_err_code, snd_err_code;
+
+		/* Below debug registers captures first errors corresponding to
+		 * those registers. We don't have to check against SQ qid as
+		 * these are fatal errors.
+		 */
+
 		ptr = otx2_get_regaddr(pf, NIX_LF_SQ_OP_INT);
 		val = otx2_atomic64_add((qidx << 44), ptr);
 		otx2_write64(pf, NIX_LF_SQ_OP_INT, (qidx << 44) |
 			     (val & NIX_SQINT_BITS));
 
-		if (!(val & (NIX_SQINT_BITS | BIT_ULL(42))))
-			continue;
-
 		if (val & BIT_ULL(42)) {
 			netdev_err(pf->netdev, "SQ%lld: error reading NIX_LF_SQ_OP_INT, NIX_LF_ERR_INT 0x%llx\n",
 				   qidx, otx2_read64(pf, NIX_LF_ERR_INT));
-		} else {
-			if (val & BIT_ULL(NIX_SQINT_LMT_ERR)) {
-				netdev_err(pf->netdev, "SQ%lld: LMT store error NIX_LF_SQ_OP_ERR_DBG:0x%llx",
-					   qidx,
-					   otx2_read64(pf,
						       NIX_LF_SQ_OP_ERR_DBG));
-				otx2_write64(pf, NIX_LF_SQ_OP_ERR_DBG,
-					     BIT_ULL(44));
-			}
-			if (val & BIT_ULL(NIX_SQINT_MNQ_ERR)) {
-				netdev_err(pf->netdev, "SQ%lld: Meta-descriptor enqueue error NIX_LF_MNQ_ERR_DGB:0x%llx\n",
-					   qidx,
-					   otx2_read64(pf, NIX_LF_MNQ_ERR_DBG));
-				otx2_write64(pf, NIX_LF_MNQ_ERR_DBG,
-					     BIT_ULL(44));
-			}
-			if (val & BIT_ULL(NIX_SQINT_SEND_ERR)) {
-				netdev_err(pf->netdev, "SQ%lld: Send error, NIX_LF_SEND_ERR_DBG 0x%llx",
-					   qidx,
-					   otx2_read64(pf,
						       NIX_LF_SEND_ERR_DBG));
-				otx2_write64(pf, NIX_LF_SEND_ERR_DBG,
-					     BIT_ULL(44));
-			}
-			if (val & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL))
-				netdev_err(pf->netdev, "SQ%lld: SQB allocation failed",
-					   qidx);
+			goto done;
 		}
 
+		sq_op_err_dbg = otx2_read64(pf, NIX_LF_SQ_OP_ERR_DBG);
+		if (!(sq_op_err_dbg & BIT(44)))
+			goto chk_mnq_err_dbg;
+
+		sq_op_err_code = FIELD_GET(GENMASK(7, 0), sq_op_err_dbg);
+		netdev_err(pf->netdev, "SQ%lld: NIX_LF_SQ_OP_ERR_DBG(%llx) err=%s\n",
+			   qidx, sq_op_err_dbg, nix_sqoperr_e_str[sq_op_err_code]);
+
+		otx2_write64(pf, NIX_LF_SQ_OP_ERR_DBG, BIT_ULL(44));
+
+		if (sq_op_err_code == NIX_SQOPERR_SQB_NULL)
+			goto chk_mnq_err_dbg;
+
+		/* Err is not NIX_SQOPERR_SQB_NULL, call aq function to read SQ structure.
+		 * TODO: But we are in irq context. How to call mbox functions which does sleep
+		 */
+
+chk_mnq_err_dbg:
+		mnq_err_dbg = otx2_read64(pf, NIX_LF_MNQ_ERR_DBG);
+		if (!(mnq_err_dbg & BIT(44)))
+			goto chk_snd_err_dbg;
+
+		mnq_err_code = FIELD_GET(GENMASK(7, 0), mnq_err_dbg);
+		netdev_err(pf->netdev, "SQ%lld: NIX_LF_MNQ_ERR_DBG(%llx) err=%s\n",
+			   qidx, mnq_err_dbg, nix_mnqerr_e_str[mnq_err_code]);
+		otx2_write64(pf, NIX_LF_MNQ_ERR_DBG, BIT_ULL(44));
+
+chk_snd_err_dbg:
+		snd_err_dbg = otx2_read64(pf, NIX_LF_SEND_ERR_DBG);
+		if (snd_err_dbg & BIT(44)) {
+			snd_err_code = FIELD_GET(GENMASK(7, 0), snd_err_dbg);
+			netdev_err(pf->netdev, "SQ%lld: NIX_LF_SND_ERR_DBG:0x%llx err=%s\n",
+				   qidx, snd_err_dbg, nix_snd_status_e_str[snd_err_code]);
+			otx2_write64(pf, NIX_LF_SEND_ERR_DBG, BIT_ULL(44));
+		}
+
+done:
+		/* Print values and reset */
+		if (val & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL))
+			netdev_err(pf->netdev, "SQ%lld: SQB allocation failed",
				   qidx);
+
 		schedule_work(&pf->reset_task);
 	}
 
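The reworked handler above pulls the 8-bit error code out of each debug register with `FIELD_GET(GENMASK(7, 0), reg)` (hence the new `<linux/bitfield.h>` include). A userspace sketch of what those helpers compute, assuming only a contiguous mask (these are simplified stand-ins, not the kernel macros themselves):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace equivalent of the kernel's GENMASK(h, l): a contiguous mask
 * covering bits h..l of a 64-bit word. */
#define GENMASK_U64(h, l) (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

/* Userspace equivalent of FIELD_GET(): mask the field, then shift it down
 * to bit 0. 'mask & -mask' isolates the lowest set bit of the mask, so
 * dividing by it performs the right shift for any contiguous mask. */
static uint64_t field_get(uint64_t mask, uint64_t reg)
{
	return (reg & mask) / (mask & -mask);
}
```

This is why the patch can log `err=%s` directly: the extracted code indexes the `nix_*_e_str` tables added earlier in the file.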
@@ -281,4 +281,61 @@ enum nix_sqint_e {
 			    BIT_ULL(NIX_SQINT_SEND_ERR) | \
 			    BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL))
 
+enum nix_sqoperr_e {
+	NIX_SQOPERR_OOR = 0,
+	NIX_SQOPERR_CTX_FAULT = 1,
+	NIX_SQOPERR_CTX_POISON = 2,
+	NIX_SQOPERR_DISABLED = 3,
+	NIX_SQOPERR_SIZE_ERR = 4,
+	NIX_SQOPERR_OFLOW = 5,
+	NIX_SQOPERR_SQB_NULL = 6,
+	NIX_SQOPERR_SQB_FAULT = 7,
+	NIX_SQOPERR_SQE_SZ_ZERO = 8,
+	NIX_SQOPERR_MAX,
+};
+
+enum nix_mnqerr_e {
+	NIX_MNQERR_SQ_CTX_FAULT = 0,
+	NIX_MNQERR_SQ_CTX_POISON = 1,
+	NIX_MNQERR_SQB_FAULT = 2,
+	NIX_MNQERR_SQB_POISON = 3,
+	NIX_MNQERR_TOTAL_ERR = 4,
+	NIX_MNQERR_LSO_ERR = 5,
+	NIX_MNQERR_CQ_QUERY_ERR = 6,
+	NIX_MNQERR_MAX_SQE_SIZE_ERR = 7,
+	NIX_MNQERR_MAXLEN_ERR = 8,
+	NIX_MNQERR_SQE_SIZEM1_ZERO = 9,
+	NIX_MNQERR_MAX,
+};
+
+enum nix_snd_status_e {
+	NIX_SND_STATUS_GOOD = 0x0,
+	NIX_SND_STATUS_SQ_CTX_FAULT = 0x1,
+	NIX_SND_STATUS_SQ_CTX_POISON = 0x2,
+	NIX_SND_STATUS_SQB_FAULT = 0x3,
+	NIX_SND_STATUS_SQB_POISON = 0x4,
+	NIX_SND_STATUS_HDR_ERR = 0x5,
+	NIX_SND_STATUS_EXT_ERR = 0x6,
+	NIX_SND_STATUS_JUMP_FAULT = 0x7,
+	NIX_SND_STATUS_JUMP_POISON = 0x8,
+	NIX_SND_STATUS_CRC_ERR = 0x9,
+	NIX_SND_STATUS_IMM_ERR = 0x10,
+	NIX_SND_STATUS_SG_ERR = 0x11,
+	NIX_SND_STATUS_MEM_ERR = 0x12,
+	NIX_SND_STATUS_INVALID_SUBDC = 0x13,
+	NIX_SND_STATUS_SUBDC_ORDER_ERR = 0x14,
+	NIX_SND_STATUS_DATA_FAULT = 0x15,
+	NIX_SND_STATUS_DATA_POISON = 0x16,
+	NIX_SND_STATUS_NPC_DROP_ACTION = 0x17,
+	NIX_SND_STATUS_LOCK_VIOL = 0x18,
+	NIX_SND_STATUS_NPC_UCAST_CHAN_ERR = 0x19,
+	NIX_SND_STATUS_NPC_MCAST_CHAN_ERR = 0x20,
+	NIX_SND_STATUS_NPC_MCAST_ABORT = 0x21,
+	NIX_SND_STATUS_NPC_VTAG_PTR_ERR = 0x22,
+	NIX_SND_STATUS_NPC_VTAG_SIZE_ERR = 0x23,
+	NIX_SND_STATUS_SEND_MEM_FAULT = 0x24,
+	NIX_SND_STATUS_SEND_STATS_ERR = 0x25,
+	NIX_SND_STATUS_MAX,
+};
+
 #endif /* OTX2_STRUCT_H */
@@ -441,6 +441,7 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf,
 					struct otx2_cq_queue *cq, int budget)
 {
 	int tx_pkts = 0, tx_bytes = 0, qidx;
+	struct otx2_snd_queue *sq;
 	struct nix_cqe_tx_s *cqe;
 	int processed_cqe = 0;
 
@@ -451,6 +452,9 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf,
 		return 0;
 
 process_cqe:
+	qidx = cq->cq_idx - pfvf->hw.rx_queues;
+	sq = &pfvf->qset.sq[qidx];
+
 	while (likely(processed_cqe < budget) && cq->pend_cqe) {
 		cqe = (struct nix_cqe_tx_s *)otx2_get_next_cqe(cq);
 		if (unlikely(!cqe)) {
@@ -458,18 +462,20 @@ process_cqe:
 				return 0;
 			break;
 		}
 
 		if (cq->cq_type == CQ_XDP) {
-			qidx = cq->cq_idx - pfvf->hw.rx_queues;
-			otx2_xdp_snd_pkt_handler(pfvf, &pfvf->qset.sq[qidx],
-						 cqe);
+			otx2_xdp_snd_pkt_handler(pfvf, sq, cqe);
 		} else {
-			otx2_snd_pkt_handler(pfvf, cq,
-					     &pfvf->qset.sq[cq->cint_idx],
-					     cqe, budget, &tx_pkts, &tx_bytes);
+			otx2_snd_pkt_handler(pfvf, cq, sq, cqe, budget,
+					     &tx_pkts, &tx_bytes);
 		}
 
 		cqe->hdr.cqe_type = NIX_XQE_TYPE_INVALID;
 		processed_cqe++;
 		cq->pend_cqe--;
+
+		sq->cons_head++;
+		sq->cons_head &= (sq->sqe_cnt - 1);
 	}
 
 	/* Free CQEs to HW */
@@ -1072,17 +1078,17 @@ bool otx2_sq_append_skb(struct net_device *netdev, struct otx2_snd_queue *sq,
 {
 	struct netdev_queue *txq = netdev_get_tx_queue(netdev, qidx);
 	struct otx2_nic *pfvf = netdev_priv(netdev);
-	int offset, num_segs, free_sqe;
+	int offset, num_segs, free_desc;
 	struct nix_sqe_hdr_s *sqe_hdr;
 
-	/* Check if there is room for new SQE.
-	 * 'Num of SQBs freed to SQ's pool - SQ's Aura count'
-	 * will give free SQE count.
+	/* Check if there is enough room between producer
+	 * and consumer index.
 	 */
-	free_sqe = (sq->num_sqbs - *sq->aura_fc_addr) * sq->sqe_per_sqb;
+	free_desc = (sq->cons_head - sq->head - 1 + sq->sqe_cnt) & (sq->sqe_cnt - 1);
+	if (free_desc < sq->sqe_thresh)
+		return false;
 
-	if (free_sqe < sq->sqe_thresh ||
-	    free_sqe < otx2_get_sqe_count(pfvf, skb))
+	if (free_desc < otx2_get_sqe_count(pfvf, skb))
 		return false;
 
 	num_segs = skb_shinfo(skb)->nr_frags + 1;
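The new `free_desc` computation above is the classic power-of-two ring-buffer formula: the producer (`head`) chases the consumer (`cons_head`), one slot is sacrificed so "full" and "empty" remain distinguishable, and the mask handles wraparound. A standalone sketch of just that arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* Free descriptors in a power-of-two ring where 'head' is the producer
 * index and 'cons_head' the consumer index, as in otx2_sq_append_skb().
 * One slot is kept empty, so an idle ring reports sqe_cnt - 1 free. */
static uint32_t ring_free_desc(uint32_t cons_head, uint32_t head,
			       uint32_t sqe_cnt)
{
	return (cons_head - head - 1 + sqe_cnt) & (sqe_cnt - 1);
}
```

Compared with the old aura-counter estimate (`num_sqbs - *aura_fc_addr`), this is exact: the consumer index only advances in the completion path (`sq->cons_head++` in the hunk above), so the producer can never overrun unreclaimed descriptors.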
@@ -79,6 +79,7 @@ struct sg_list {
 struct otx2_snd_queue {
 	u8			aura_id;
 	u16			head;
+	u16			cons_head;
 	u16			sqe_size;
 	u32			sqe_cnt;
 	u16			num_sqbs;
@@ -776,6 +776,7 @@ tx_done:
 int prestera_rxtx_switch_init(struct prestera_switch *sw)
 {
 	struct prestera_rxtx *rxtx;
+	int err;
 
 	rxtx = kzalloc(sizeof(*rxtx), GFP_KERNEL);
 	if (!rxtx)
@@ -783,7 +784,11 @@ int prestera_rxtx_switch_init(struct prestera_switch *sw)
 
 	sw->rxtx = rxtx;
 
-	return prestera_sdma_switch_init(sw);
+	err = prestera_sdma_switch_init(sw);
+	if (err)
+		kfree(rxtx);
+
+	return err;
 }
 
 void prestera_rxtx_switch_fini(struct prestera_switch *sw)
@@ -1026,6 +1026,8 @@ static int mtk_star_enable(struct net_device *ndev)
 	return 0;
 
 err_free_irq:
+	napi_disable(&priv->rx_napi);
+	napi_disable(&priv->tx_napi);
 	free_irq(ndev->irq, ndev);
 err_free_skbs:
 	mtk_star_free_rx_skbs(priv);
@@ -1770,12 +1770,17 @@ void mlx5_cmd_flush(struct mlx5_core_dev *dev)
 	struct mlx5_cmd *cmd = &dev->cmd;
 	int i;
 
-	for (i = 0; i < cmd->max_reg_cmds; i++)
-		while (down_trylock(&cmd->sem))
+	for (i = 0; i < cmd->max_reg_cmds; i++) {
+		while (down_trylock(&cmd->sem)) {
 			mlx5_cmd_trigger_completions(dev);
+			cond_resched();
+		}
+	}
 
-	while (down_trylock(&cmd->pages_sem))
+	while (down_trylock(&cmd->pages_sem)) {
 		mlx5_cmd_trigger_completions(dev);
+		cond_resched();
+	}
 
 	/* Unlock cmdif */
 	up(&cmd->pages_sem);
@@ -164,6 +164,36 @@ static int mlx5_esw_bridge_port_changeupper(struct notifier_block *nb, void *ptr
 	return err;
 }
 
+static int
+mlx5_esw_bridge_changeupper_validate_netdev(void *ptr)
+{
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+	struct netdev_notifier_changeupper_info *info = ptr;
+	struct net_device *upper = info->upper_dev;
+	struct net_device *lower;
+	struct list_head *iter;
+
+	if (!netif_is_bridge_master(upper) || !netif_is_lag_master(dev))
+		return 0;
+
+	netdev_for_each_lower_dev(dev, lower, iter) {
+		struct mlx5_core_dev *mdev;
+		struct mlx5e_priv *priv;
+
+		if (!mlx5e_eswitch_rep(lower))
+			continue;
+
+		priv = netdev_priv(lower);
+		mdev = priv->mdev;
+		if (!mlx5_lag_is_active(mdev))
+			return -EAGAIN;
+		if (!mlx5_lag_is_shared_fdb(mdev))
+			return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
+
 static int mlx5_esw_bridge_switchdev_port_event(struct notifier_block *nb,
 						unsigned long event, void *ptr)
 {
@@ -171,6 +201,7 @@ static int mlx5_esw_bridge_switchdev_port_event(struct notifier_block *nb,
 
 	switch (event) {
 	case NETDEV_PRECHANGEUPPER:
+		err = mlx5_esw_bridge_changeupper_validate_netdev(ptr);
 		break;
 
 	case NETDEV_CHANGEUPPER:
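The next hunk converts the mlx5e `tc_acts_fdb`/`tc_acts_nic` tables from positional entries, which silently misalign whenever `enum flow_action_id` grows, to C99 designated initializers, where each slot is named and unnamed slots default to NULL. A small illustrative version of that pattern (the enum and handler strings here are made up for the example):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical action ids standing in for enum flow_action_id. */
enum action_id { ACT_ACCEPT, ACT_DROP, ACT_GOTO, NUM_ACTIONS };

/* Designated initializers: each entry is bound to its enum value, and any
 * unlisted index (ACT_GOTO here) is zero-initialized to NULL, matching the
 * "action not supported" convention of the lookup tables in the patch. */
static const char *handlers[NUM_ACTIONS] = {
	[ACT_ACCEPT] = "accept",
	[ACT_DROP] = "drop",
};
```

With positional initializers, inserting a new value in the middle of the enum shifts every later handler by one slot; with designated initializers the table stays correct without the fragile `/* Must be aligned with enum flow_action_id. */` comment the patch deletes.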
@@ -6,70 +6,42 @@
 #include "en/tc_priv.h"
 #include "mlx5_core.h"

-/* Must be aligned with enum flow_action_id. */
 static struct mlx5e_tc_act *tc_acts_fdb[NUM_FLOW_ACTIONS] = {
-	&mlx5e_tc_act_accept,
-	&mlx5e_tc_act_drop,
-	&mlx5e_tc_act_trap,
-	&mlx5e_tc_act_goto,
-	&mlx5e_tc_act_mirred,
-	&mlx5e_tc_act_mirred,
-	&mlx5e_tc_act_redirect_ingress,
-	NULL, /* FLOW_ACTION_MIRRED_INGRESS, */
-	&mlx5e_tc_act_vlan,
-	&mlx5e_tc_act_vlan,
-	&mlx5e_tc_act_vlan_mangle,
-	&mlx5e_tc_act_tun_encap,
-	&mlx5e_tc_act_tun_decap,
-	&mlx5e_tc_act_pedit,
-	&mlx5e_tc_act_pedit,
-	&mlx5e_tc_act_csum,
-	NULL, /* FLOW_ACTION_MARK, */
-	&mlx5e_tc_act_ptype,
-	NULL, /* FLOW_ACTION_PRIORITY, */
-	NULL, /* FLOW_ACTION_WAKE, */
-	NULL, /* FLOW_ACTION_QUEUE, */
-	&mlx5e_tc_act_sample,
-	&mlx5e_tc_act_police,
-	&mlx5e_tc_act_ct,
-	NULL, /* FLOW_ACTION_CT_METADATA, */
-	&mlx5e_tc_act_mpls_push,
-	&mlx5e_tc_act_mpls_pop,
-	NULL, /* FLOW_ACTION_MPLS_MANGLE, */
-	NULL, /* FLOW_ACTION_GATE, */
-	NULL, /* FLOW_ACTION_PPPOE_PUSH, */
-	NULL, /* FLOW_ACTION_JUMP, */
-	NULL, /* FLOW_ACTION_PIPE, */
-	&mlx5e_tc_act_vlan,
-	&mlx5e_tc_act_vlan,
+	[FLOW_ACTION_ACCEPT] = &mlx5e_tc_act_accept,
+	[FLOW_ACTION_DROP] = &mlx5e_tc_act_drop,
+	[FLOW_ACTION_TRAP] = &mlx5e_tc_act_trap,
+	[FLOW_ACTION_GOTO] = &mlx5e_tc_act_goto,
+	[FLOW_ACTION_REDIRECT] = &mlx5e_tc_act_mirred,
+	[FLOW_ACTION_MIRRED] = &mlx5e_tc_act_mirred,
+	[FLOW_ACTION_REDIRECT_INGRESS] = &mlx5e_tc_act_redirect_ingress,
+	[FLOW_ACTION_VLAN_PUSH] = &mlx5e_tc_act_vlan,
+	[FLOW_ACTION_VLAN_POP] = &mlx5e_tc_act_vlan,
+	[FLOW_ACTION_VLAN_MANGLE] = &mlx5e_tc_act_vlan_mangle,
+	[FLOW_ACTION_TUNNEL_ENCAP] = &mlx5e_tc_act_tun_encap,
+	[FLOW_ACTION_TUNNEL_DECAP] = &mlx5e_tc_act_tun_decap,
+	[FLOW_ACTION_MANGLE] = &mlx5e_tc_act_pedit,
+	[FLOW_ACTION_ADD] = &mlx5e_tc_act_pedit,
+	[FLOW_ACTION_CSUM] = &mlx5e_tc_act_csum,
+	[FLOW_ACTION_PTYPE] = &mlx5e_tc_act_ptype,
+	[FLOW_ACTION_SAMPLE] = &mlx5e_tc_act_sample,
+	[FLOW_ACTION_POLICE] = &mlx5e_tc_act_police,
+	[FLOW_ACTION_CT] = &mlx5e_tc_act_ct,
+	[FLOW_ACTION_MPLS_PUSH] = &mlx5e_tc_act_mpls_push,
+	[FLOW_ACTION_MPLS_POP] = &mlx5e_tc_act_mpls_pop,
+	[FLOW_ACTION_VLAN_PUSH_ETH] = &mlx5e_tc_act_vlan,
+	[FLOW_ACTION_VLAN_POP_ETH] = &mlx5e_tc_act_vlan,
 };

-/* Must be aligned with enum flow_action_id. */
 static struct mlx5e_tc_act *tc_acts_nic[NUM_FLOW_ACTIONS] = {
-	&mlx5e_tc_act_accept,
-	&mlx5e_tc_act_drop,
-	NULL, /* FLOW_ACTION_TRAP, */
-	&mlx5e_tc_act_goto,
-	&mlx5e_tc_act_mirred_nic,
-	NULL, /* FLOW_ACTION_MIRRED, */
-	NULL, /* FLOW_ACTION_REDIRECT_INGRESS, */
-	NULL, /* FLOW_ACTION_MIRRED_INGRESS, */
-	NULL, /* FLOW_ACTION_VLAN_PUSH, */
-	NULL, /* FLOW_ACTION_VLAN_POP, */
-	NULL, /* FLOW_ACTION_VLAN_MANGLE, */
-	NULL, /* FLOW_ACTION_TUNNEL_ENCAP, */
-	NULL, /* FLOW_ACTION_TUNNEL_DECAP, */
-	&mlx5e_tc_act_pedit,
-	&mlx5e_tc_act_pedit,
-	&mlx5e_tc_act_csum,
-	&mlx5e_tc_act_mark,
-	NULL, /* FLOW_ACTION_PTYPE, */
-	NULL, /* FLOW_ACTION_PRIORITY, */
-	NULL, /* FLOW_ACTION_WAKE, */
-	NULL, /* FLOW_ACTION_QUEUE, */
-	NULL, /* FLOW_ACTION_SAMPLE, */
-	NULL, /* FLOW_ACTION_POLICE, */
-	&mlx5e_tc_act_ct,
+	[FLOW_ACTION_ACCEPT] = &mlx5e_tc_act_accept,
+	[FLOW_ACTION_DROP] = &mlx5e_tc_act_drop,
+	[FLOW_ACTION_GOTO] = &mlx5e_tc_act_goto,
+	[FLOW_ACTION_REDIRECT] = &mlx5e_tc_act_mirred_nic,
+	[FLOW_ACTION_MANGLE] = &mlx5e_tc_act_pedit,
+	[FLOW_ACTION_ADD] = &mlx5e_tc_act_pedit,
+	[FLOW_ACTION_CSUM] = &mlx5e_tc_act_csum,
+	[FLOW_ACTION_MARK] = &mlx5e_tc_act_mark,
+	[FLOW_ACTION_CT] = &mlx5e_tc_act_ct,
 };

 /**
@@ -11,6 +11,27 @@

 #define INL_HDR_START_SZ (sizeof(((struct mlx5_wqe_eth_seg *)NULL)->inline_hdr.start))

+/* IPSEC inline data includes:
+ * 1. ESP trailer: up to 255 bytes of padding, 1 byte for pad length, 1 byte for
+ * next header.
+ * 2. ESP authentication data: 16 bytes for ICV.
+ */
+#define MLX5E_MAX_TX_IPSEC_DS DIV_ROUND_UP(sizeof(struct mlx5_wqe_inline_seg) + \
+					   255 + 1 + 1 + 16, MLX5_SEND_WQE_DS)
+
+/* 366 should be big enough to cover all L2, L3 and L4 headers with possible
+ * encapsulations.
+ */
+#define MLX5E_MAX_TX_INLINE_DS DIV_ROUND_UP(366 - INL_HDR_START_SZ + VLAN_HLEN, \
+					    MLX5_SEND_WQE_DS)
+
+/* Sync the calculation with mlx5e_sq_calc_wqe_attr. */
+#define MLX5E_MAX_TX_WQEBBS DIV_ROUND_UP(MLX5E_TX_WQE_EMPTY_DS_COUNT + \
+					 MLX5E_MAX_TX_INLINE_DS + \
+					 MLX5E_MAX_TX_IPSEC_DS + \
+					 MAX_SKB_FRAGS + 1, \
+					 MLX5_SEND_WQEBB_NUM_DS)
+
 #define MLX5E_RX_ERR_CQE(cqe) (get_cqe_opcode(cqe) != MLX5_CQE_RESP_SEND)

 static inline
@@ -424,6 +445,8 @@ mlx5e_set_eseg_swp(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg,

 static inline u16 mlx5e_stop_room_for_wqe(struct mlx5_core_dev *mdev, u16 wqe_size)
 {
+	WARN_ON_ONCE(PAGE_SIZE / MLX5_SEND_WQE_BB < mlx5e_get_max_sq_wqebbs(mdev));
+
 	/* A WQE must not cross the page boundary, hence two conditions:
 	 * 1. Its size must not exceed the page size.
 	 * 2. If the WQE size is X, and the space remaining in a page is less
@@ -436,7 +459,6 @@ static inline u16 mlx5e_stop_room_for_wqe(struct mlx5_core_dev *mdev, u16 wqe_si
 		  "wqe_size %u is greater than max SQ WQEBBs %u",
 		  wqe_size, mlx5e_get_max_sq_wqebbs(mdev));

-
 	return MLX5E_STOP_ROOM(wqe_size);
 }
@@ -117,7 +117,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 		xdpi.page.rq = rq;

 		dma_addr = page_pool_get_dma_addr(page) + (xdpf->data - (void *)xdpf);
-		dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd.len, DMA_TO_DEVICE);
+		dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd.len, DMA_BIDIRECTIONAL);

 		if (unlikely(xdp_frame_has_frags(xdpf))) {
 			sinfo = xdp_get_shared_info_from_frame(xdpf);
@@ -131,7 +131,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 					   skb_frag_off(frag);
 				len = skb_frag_size(frag);
 				dma_sync_single_for_device(sq->pdev, addr, len,
-							   DMA_TO_DEVICE);
+							   DMA_BIDIRECTIONAL);
 			}
 		}
@@ -5694,6 +5694,13 @@ int mlx5e_attach_netdev(struct mlx5e_priv *priv)
 		mlx5e_fs_set_state_destroy(priv->fs,
 					   !test_bit(MLX5E_STATE_DESTROYING, &priv->state));

+	/* Validate the max_wqe_size_sq capability. */
+	if (WARN_ON_ONCE(mlx5e_get_max_sq_wqebbs(priv->mdev) < MLX5E_MAX_TX_WQEBBS)) {
+		mlx5_core_warn(priv->mdev, "MLX5E: Max SQ WQEBBs firmware capability: %u, needed %lu\n",
+			       mlx5e_get_max_sq_wqebbs(priv->mdev), MLX5E_MAX_TX_WQEBBS);
+		return -EIO;
+	}
+
 	/* max number of channels may have changed */
 	max_nch = mlx5e_calc_max_nch(priv->mdev, priv->netdev, profile);
 	if (priv->channels.params.num_channels > max_nch) {
@@ -266,7 +266,7 @@ static inline bool mlx5e_rx_cache_get(struct mlx5e_rq *rq, union mlx5e_alloc_uni

 	addr = page_pool_get_dma_addr(au->page);
 	/* Non-XSK always uses PAGE_SIZE. */
-	dma_sync_single_for_device(rq->pdev, addr, PAGE_SIZE, DMA_FROM_DEVICE);
+	dma_sync_single_for_device(rq->pdev, addr, PAGE_SIZE, rq->buff.map_dir);
 	return true;
 }
@@ -282,8 +282,7 @@ static inline int mlx5e_page_alloc_pool(struct mlx5e_rq *rq, union mlx5e_alloc_u
 		return -ENOMEM;

 	/* Non-XSK always uses PAGE_SIZE. */
-	addr = dma_map_page_attrs(rq->pdev, au->page, 0, PAGE_SIZE,
-				  rq->buff.map_dir, DMA_ATTR_SKIP_CPU_SYNC);
+	addr = dma_map_page(rq->pdev, au->page, 0, PAGE_SIZE, rq->buff.map_dir);
 	if (unlikely(dma_mapping_error(rq->pdev, addr))) {
 		page_pool_recycle_direct(rq->page_pool, au->page);
 		au->page = NULL;
@@ -427,14 +426,15 @@ mlx5e_add_skb_frag(struct mlx5e_rq *rq, struct sk_buff *skb,
 {
 	dma_addr_t addr = page_pool_get_dma_addr(au->page);

-	dma_sync_single_for_cpu(rq->pdev, addr + frag_offset, len, DMA_FROM_DEVICE);
+	dma_sync_single_for_cpu(rq->pdev, addr + frag_offset, len,
+				rq->buff.map_dir);
 	page_ref_inc(au->page);
 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
 			au->page, frag_offset, len, truesize);
 }

 static inline void
-mlx5e_copy_skb_header(struct device *pdev, struct sk_buff *skb,
+mlx5e_copy_skb_header(struct mlx5e_rq *rq, struct sk_buff *skb,
 		      struct page *page, dma_addr_t addr,
 		      int offset_from, int dma_offset, u32 headlen)
 {
@@ -442,7 +442,8 @@ mlx5e_copy_skb_header(struct device *pdev, struct sk_buff *skb,
 	/* Aligning len to sizeof(long) optimizes memcpy performance */
 	unsigned int len = ALIGN(headlen, sizeof(long));

-	dma_sync_single_for_cpu(pdev, addr + dma_offset, len, DMA_FROM_DEVICE);
+	dma_sync_single_for_cpu(rq->pdev, addr + dma_offset, len,
+				rq->buff.map_dir);
 	skb_copy_to_linear_data(skb, from, len);
 }
@@ -1538,7 +1539,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,

 	addr = page_pool_get_dma_addr(au->page);
 	dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset,
-				      frag_size, DMA_FROM_DEVICE);
+				      frag_size, rq->buff.map_dir);
 	net_prefetch(data);

 	prog = rcu_dereference(rq->xdp_prog);
@@ -1587,7 +1588,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi

 	addr = page_pool_get_dma_addr(au->page);
 	dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset,
-				      rq->buff.frame0_sz, DMA_FROM_DEVICE);
+				      rq->buff.frame0_sz, rq->buff.map_dir);
 	net_prefetchw(va); /* xdp_frame data area */
 	net_prefetch(va + rx_headroom);
@@ -1608,7 +1609,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi

 		addr = page_pool_get_dma_addr(au->page);
 		dma_sync_single_for_cpu(rq->pdev, addr + wi->offset,
-					frag_consumed_bytes, DMA_FROM_DEVICE);
+					frag_consumed_bytes, rq->buff.map_dir);

 		if (!xdp_buff_has_frags(&xdp)) {
 			/* Init on the first fragment to avoid cold cache access
@@ -1905,7 +1906,7 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 	mlx5e_fill_skb_data(skb, rq, au, byte_cnt, frag_offset);
 	/* copy header */
 	addr = page_pool_get_dma_addr(head_au->page);
-	mlx5e_copy_skb_header(rq->pdev, skb, head_au->page, addr,
+	mlx5e_copy_skb_header(rq, skb, head_au->page, addr,
 			      head_offset, head_offset, headlen);
 	/* skb linear part was allocated with headlen and aligned to long */
 	skb->tail += headlen;
@@ -1939,7 +1940,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,

 	addr = page_pool_get_dma_addr(au->page);
 	dma_sync_single_range_for_cpu(rq->pdev, addr, head_offset,
-				      frag_size, DMA_FROM_DEVICE);
+				      frag_size, rq->buff.map_dir);
 	net_prefetch(data);

 	prog = rcu_dereference(rq->xdp_prog);
@@ -1987,7 +1988,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,

 	if (likely(frag_size <= BIT(MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE))) {
 		/* build SKB around header */
-		dma_sync_single_range_for_cpu(rq->pdev, head->addr, 0, frag_size, DMA_FROM_DEVICE);
+		dma_sync_single_range_for_cpu(rq->pdev, head->addr, 0, frag_size, rq->buff.map_dir);
 		prefetchw(hdr);
 		prefetch(data);
 		skb = mlx5e_build_linear_skb(rq, hdr, frag_size, rx_headroom, head_size, 0);
@@ -2009,7 +2010,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 	}

 	prefetchw(skb->data);
-	mlx5e_copy_skb_header(rq->pdev, skb, head->page, head->addr,
+	mlx5e_copy_skb_header(rq, skb, head->page, head->addr,
 			      head_offset + rx_headroom,
 			      rx_headroom, head_size);
 	/* skb linear part was allocated with headlen and aligned to long */
@@ -3633,10 +3633,14 @@ mlx5e_clone_flow_attr_for_post_act(struct mlx5_flow_attr *attr,
 	attr2->action = 0;
 	attr2->flags = 0;
 	attr2->parse_attr = parse_attr;
-	attr2->esw_attr->out_count = 0;
-	attr2->esw_attr->split_count = 0;
 	attr2->dest_chain = 0;
 	attr2->dest_ft = NULL;

+	if (ns_type == MLX5_FLOW_NAMESPACE_FDB) {
+		attr2->esw_attr->out_count = 0;
+		attr2->esw_attr->split_count = 0;
+	}
+
 	return attr2;
 }
@@ -4758,12 +4762,6 @@ int mlx5e_policer_validate(const struct flow_action *action,
 		return -EOPNOTSUPP;
 	}

-	if (act->police.rate_pkt_ps) {
-		NL_SET_ERR_MSG_MOD(extack,
-				   "QoS offload not support packets per second");
-		return -EOPNOTSUPP;
-	}
-
 	return 0;
 }
@@ -305,6 +305,8 @@ static void mlx5e_sq_calc_wqe_attr(struct sk_buff *skb, const struct mlx5e_tx_at
 	u16 ds_cnt_inl = 0;
 	u16 ds_cnt_ids = 0;

+	/* Sync the calculation with MLX5E_MAX_TX_WQEBBS. */
+
 	if (attr->insz)
 		ds_cnt_ids = DIV_ROUND_UP(sizeof(struct mlx5_wqe_inline_seg) + attr->insz,
 					  MLX5_SEND_WQE_DS);
@@ -317,6 +319,9 @@ static void mlx5e_sq_calc_wqe_attr(struct sk_buff *skb, const struct mlx5e_tx_at
 			inl += VLAN_HLEN;

 		ds_cnt_inl = DIV_ROUND_UP(inl, MLX5_SEND_WQE_DS);
+		if (WARN_ON_ONCE(ds_cnt_inl > MLX5E_MAX_TX_INLINE_DS))
+			netdev_warn(skb->dev, "ds_cnt_inl = %u > max %u\n", ds_cnt_inl,
+				    (u16)MLX5E_MAX_TX_INLINE_DS);
 		ds_cnt += ds_cnt_inl;
 	}
@@ -1387,12 +1387,14 @@ void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw)
 		 esw->mode == MLX5_ESWITCH_LEGACY ? "LEGACY" : "OFFLOADS",
 		 esw->esw_funcs.num_vfs, esw->enabled_vports);

-	esw->fdb_table.flags &= ~MLX5_ESW_FDB_CREATED;
-	if (esw->mode == MLX5_ESWITCH_OFFLOADS)
-		esw_offloads_disable(esw);
-	else if (esw->mode == MLX5_ESWITCH_LEGACY)
-		esw_legacy_disable(esw);
-	mlx5_esw_acls_ns_cleanup(esw);
+	if (esw->fdb_table.flags & MLX5_ESW_FDB_CREATED) {
+		esw->fdb_table.flags &= ~MLX5_ESW_FDB_CREATED;
+		if (esw->mode == MLX5_ESWITCH_OFFLOADS)
+			esw_offloads_disable(esw);
+		else if (esw->mode == MLX5_ESWITCH_LEGACY)
+			esw_legacy_disable(esw);
+		mlx5_esw_acls_ns_cleanup(esw);
+	}

 	if (esw->mode == MLX5_ESWITCH_OFFLOADS)
 		devl_rate_nodes_destroy(devlink);
@@ -2310,7 +2310,7 @@ out_free:
 static int esw_offloads_start(struct mlx5_eswitch *esw,
 			      struct netlink_ext_ack *extack)
 {
-	int err, err1;
+	int err;

 	esw->mode = MLX5_ESWITCH_OFFLOADS;
 	err = mlx5_eswitch_enable_locked(esw, esw->dev->priv.sriov.num_vfs);
@@ -2318,11 +2318,6 @@ static int esw_offloads_start(struct mlx5_eswitch *esw,
 		NL_SET_ERR_MSG_MOD(extack,
 				   "Failed setting eswitch to offloads");
 		esw->mode = MLX5_ESWITCH_LEGACY;
-		err1 = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_IGNORE_NUM_VFS);
-		if (err1) {
-			NL_SET_ERR_MSG_MOD(extack,
-					   "Failed setting eswitch back to legacy");
-		}
 		mlx5_rescan_drivers(esw->dev);
 	}
 	if (esw->offloads.inline_mode == MLX5_INLINE_MODE_NONE) {
@@ -3389,19 +3384,12 @@ err_metadata:
 static int esw_offloads_stop(struct mlx5_eswitch *esw,
 			     struct netlink_ext_ack *extack)
 {
-	int err, err1;
+	int err;

 	esw->mode = MLX5_ESWITCH_LEGACY;
 	err = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_IGNORE_NUM_VFS);
-	if (err) {
+	if (err)
 		NL_SET_ERR_MSG_MOD(extack, "Failed setting eswitch to legacy");
-		esw->mode = MLX5_ESWITCH_OFFLOADS;
-		err1 = mlx5_eswitch_enable_locked(esw, MLX5_ESWITCH_IGNORE_NUM_VFS);
-		if (err1) {
-			NL_SET_ERR_MSG_MOD(extack,
-					   "Failed setting eswitch back to offloads");
-		}
-	}

 	return err;
 }
@@ -30,9 +30,9 @@ mlx5_eswitch_termtbl_hash(struct mlx5_flow_act *flow_act,
 		     sizeof(dest->vport.num), hash);
 	hash = jhash((const void *)&dest->vport.vhca_id,
 		     sizeof(dest->vport.num), hash);
-	if (dest->vport.pkt_reformat)
-		hash = jhash(dest->vport.pkt_reformat,
-			     sizeof(*dest->vport.pkt_reformat),
+	if (flow_act->pkt_reformat)
+		hash = jhash(flow_act->pkt_reformat,
+			     sizeof(*flow_act->pkt_reformat),
 			     hash);
 	return hash;
 }
@@ -53,9 +53,11 @@ mlx5_eswitch_termtbl_cmp(struct mlx5_flow_act *flow_act1,
 	if (ret)
 		return ret;

-	return dest1->vport.pkt_reformat && dest2->vport.pkt_reformat ?
-	       memcmp(dest1->vport.pkt_reformat, dest2->vport.pkt_reformat,
-		      sizeof(*dest1->vport.pkt_reformat)) : 0;
+	if (flow_act1->pkt_reformat && flow_act2->pkt_reformat)
+		return memcmp(flow_act1->pkt_reformat, flow_act2->pkt_reformat,
+			      sizeof(*flow_act1->pkt_reformat));
+
+	return !(flow_act1->pkt_reformat == flow_act2->pkt_reformat);
 }

 static int
@@ -152,7 +152,8 @@ static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev)
 		mlx5_unload_one(dev);
 		if (mlx5_health_wait_pci_up(dev))
 			mlx5_core_err(dev, "reset reload flow aborted, PCI reads still not working\n");
-		mlx5_load_one(dev, false);
+		else
+			mlx5_load_one(dev, false);
 		devlink_remote_reload_actions_performed(priv_to_devlink(dev), 0,
 							BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) |
 							BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE));
@@ -7128,9 +7128,8 @@ static int s2io_card_up(struct s2io_nic *sp)
 		if (ret) {
 			DBG_PRINT(ERR_DBG, "%s: Out of memory in Open\n",
 				  dev->name);
-			s2io_reset(sp);
-			free_rx_buffers(sp);
-			return -ENOMEM;
+			ret = -ENOMEM;
+			goto err_fill_buff;
 		}
 		DBG_PRINT(INFO_DBG, "Buf in ring:%d is %d:\n", i,
 			  ring->rx_bufs_left);
@@ -7168,18 +7167,16 @@ static int s2io_card_up(struct s2io_nic *sp)
 	/* Enable Rx Traffic and interrupts on the NIC */
 	if (start_nic(sp)) {
 		DBG_PRINT(ERR_DBG, "%s: Starting NIC failed\n", dev->name);
-		s2io_reset(sp);
-		free_rx_buffers(sp);
-		return -ENODEV;
+		ret = -ENODEV;
+		goto err_out;
 	}

 	/* Add interrupt service routine */
 	if (s2io_add_isr(sp) != 0) {
 		if (sp->config.intr_type == MSI_X)
 			s2io_rem_isr(sp);
-		s2io_reset(sp);
-		free_rx_buffers(sp);
-		return -ENODEV;
+		ret = -ENODEV;
+		goto err_out;
 	}

 	timer_setup(&sp->alarm_timer, s2io_alarm_handle, 0);
@@ -7199,6 +7196,20 @@ static int s2io_card_up(struct s2io_nic *sp)
 	}

 	return 0;
+
+err_out:
+	if (config->napi) {
+		if (config->intr_type == MSI_X) {
+			for (i = 0; i < sp->config.rx_ring_num; i++)
+				napi_disable(&sp->mac_control.rings[i].napi);
+		} else {
+			napi_disable(&sp->napi);
+		}
+	}
+err_fill_buff:
+	s2io_reset(sp);
+	free_rx_buffers(sp);
+	return ret;
 }

 /**
@@ -900,6 +900,7 @@ static int nixge_open(struct net_device *ndev)
 err_rx_irq:
 	free_irq(priv->tx_irq, ndev);
 err_tx_irq:
+	napi_disable(&priv->napi);
 	phy_stop(phy);
 	phy_disconnect(phy);
 	tasklet_kill(&priv->dma_err_tasklet);
@@ -629,7 +629,6 @@ static int ehl_common_data(struct pci_dev *pdev,
 {
 	plat->rx_queues_to_use = 8;
 	plat->tx_queues_to_use = 8;
-	plat->clk_ptp_rate = 200000000;
 	plat->use_phy_wol = 1;

 	plat->safety_feat_cfg->tsoee = 1;
@@ -654,6 +653,8 @@ static int ehl_sgmii_data(struct pci_dev *pdev,
 	plat->serdes_powerup = intel_serdes_powerup;
 	plat->serdes_powerdown = intel_serdes_powerdown;

+	plat->clk_ptp_rate = 204800000;
+
 	return ehl_common_data(pdev, plat);
 }
@@ -667,6 +668,8 @@ static int ehl_rgmii_data(struct pci_dev *pdev,
 	plat->bus_id = 1;
 	plat->phy_interface = PHY_INTERFACE_MODE_RGMII;

+	plat->clk_ptp_rate = 204800000;
+
 	return ehl_common_data(pdev, plat);
 }
@@ -683,6 +686,8 @@ static int ehl_pse0_common_data(struct pci_dev *pdev,
 	plat->bus_id = 2;
 	plat->addr64 = 32;

+	plat->clk_ptp_rate = 200000000;
+
 	intel_mgbe_pse_crossts_adj(intel_priv, EHL_PSE_ART_MHZ);

 	return ehl_common_data(pdev, plat);
@@ -722,6 +727,8 @@ static int ehl_pse1_common_data(struct pci_dev *pdev,
 	plat->bus_id = 3;
 	plat->addr64 = 32;

+	plat->clk_ptp_rate = 200000000;
+
 	intel_mgbe_pse_crossts_adj(intel_priv, EHL_PSE_ART_MHZ);

 	return ehl_common_data(pdev, plat);
@@ -757,7 +764,7 @@ static int tgl_common_data(struct pci_dev *pdev,
 {
 	plat->rx_queues_to_use = 6;
 	plat->tx_queues_to_use = 4;
-	plat->clk_ptp_rate = 200000000;
+	plat->clk_ptp_rate = 204800000;
 	plat->speed_mode_2500 = intel_speed_mode_2500;

 	plat->safety_feat_cfg->tsoee = 1;
@@ -75,20 +75,24 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
 		plat->mdio_bus_data = devm_kzalloc(&pdev->dev,
 						   sizeof(*plat->mdio_bus_data),
 						   GFP_KERNEL);
-		if (!plat->mdio_bus_data)
-			return -ENOMEM;
+		if (!plat->mdio_bus_data) {
+			ret = -ENOMEM;
+			goto err_put_node;
+		}
 		plat->mdio_bus_data->needs_reset = true;
 	}

 	plat->dma_cfg = devm_kzalloc(&pdev->dev, sizeof(*plat->dma_cfg), GFP_KERNEL);
-	if (!plat->dma_cfg)
-		return -ENOMEM;
+	if (!plat->dma_cfg) {
+		ret = -ENOMEM;
+		goto err_put_node;
+	}

 	/* Enable pci device */
 	ret = pci_enable_device(pdev);
 	if (ret) {
 		dev_err(&pdev->dev, "%s: ERROR: failed to enable device\n", __func__);
-		return ret;
+		goto err_put_node;
 	}
@@ -97,7 +101,7 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
 			continue;
 		ret = pcim_iomap_regions(pdev, BIT(0), pci_name(pdev));
 		if (ret)
-			return ret;
+			goto err_disable_device;
 		break;
 	}
@@ -108,7 +112,8 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
 	phy_mode = device_get_phy_mode(&pdev->dev);
 	if (phy_mode < 0) {
 		dev_err(&pdev->dev, "phy_mode not found\n");
-		return phy_mode;
+		ret = phy_mode;
+		goto err_disable_device;
 	}

 	plat->phy_interface = phy_mode;
@@ -125,6 +130,7 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
 	if (res.irq < 0) {
 		dev_err(&pdev->dev, "IRQ macirq not found\n");
 		ret = -ENODEV;
+		goto err_disable_msi;
 	}

 	res.wol_irq = of_irq_get_byname(np, "eth_wake_irq");
@@ -137,15 +143,31 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
 	if (res.lpi_irq < 0) {
 		dev_err(&pdev->dev, "IRQ eth_lpi not found\n");
 		ret = -ENODEV;
+		goto err_disable_msi;
 	}

-	return stmmac_dvr_probe(&pdev->dev, plat, &res);
+	ret = stmmac_dvr_probe(&pdev->dev, plat, &res);
+	if (ret)
+		goto err_disable_msi;
+
+	return ret;
+
+err_disable_msi:
+	pci_disable_msi(pdev);
+err_disable_device:
+	pci_disable_device(pdev);
+err_put_node:
+	of_node_put(plat->mdio_node);
+	return ret;
 }

 static void loongson_dwmac_remove(struct pci_dev *pdev)
 {
+	struct net_device *ndev = dev_get_drvdata(&pdev->dev);
+	struct stmmac_priv *priv = netdev_priv(ndev);
 	int i;

+	of_node_put(priv->plat->mdio_node);
 	stmmac_dvr_remove(&pdev->dev);

 	for (i = 0; i < PCI_STD_NUM_BARS; i++) {
@ -155,6 +177,7 @@ static void loongson_dwmac_remove(struct pci_dev *pdev)
|
|||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
pci_disable_msi(pdev);
|
||||||
pci_disable_device(pdev);
|
pci_disable_device(pdev);
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -272,11 +272,9 @@ static int meson8b_devm_clk_prepare_enable(struct meson8b_dwmac *dwmac,
 	if (ret)
 		return ret;
 
-	devm_add_action_or_reset(dwmac->dev,
+	return devm_add_action_or_reset(dwmac->dev,
 				 (void(*)(void *))clk_disable_unprepare,
-				 dwmac->rgmii_tx_clk);
-
-	return 0;
+				 clk);
 }
 
 static int meson8b_init_rgmii_delays(struct meson8b_dwmac *dwmac)
@@ -287,7 +287,6 @@ static u32 spl2sw_init_netdev(struct platform_device *pdev, u8 *mac_addr,
 	if (ret) {
 		dev_err(&pdev->dev, "Failed to register net device \"%s\"!\n",
 			ndev->name);
-		free_netdev(ndev);
 		*r_ndev = NULL;
 		return ret;
 	}
@@ -2823,7 +2823,6 @@ static int am65_cpsw_nuss_remove(struct platform_device *pdev)
 	if (ret < 0)
 		return ret;
 
-	am65_cpsw_nuss_phylink_cleanup(common);
 	am65_cpsw_unregister_devlink(common);
 	am65_cpsw_unregister_notifiers(common);
 
@@ -2831,6 +2830,7 @@ static int am65_cpsw_nuss_remove(struct platform_device *pdev)
 	 * dma_deconfigure(dev) before devres_release_all(dev)
 	 */
 	am65_cpsw_nuss_cleanup_ndev(common);
+	am65_cpsw_nuss_phylink_cleanup(common);
 
 	of_platform_device_destroy(common->mdio_dev, NULL);
 
@@ -854,6 +854,8 @@ static int cpsw_ndo_open(struct net_device *ndev)
 
 err_cleanup:
 	if (!cpsw->usage_count) {
+		napi_disable(&cpsw->napi_rx);
+		napi_disable(&cpsw->napi_tx);
 		cpdma_ctlr_stop(cpsw->dma);
 		cpsw_destroy_xdp_rxqs(cpsw);
 	}
@@ -1290,12 +1290,15 @@ static int tsi108_open(struct net_device *dev)
 
 	data->rxring = dma_alloc_coherent(&data->pdev->dev, rxring_size,
 					  &data->rxdma, GFP_KERNEL);
-	if (!data->rxring)
+	if (!data->rxring) {
+		free_irq(data->irq_num, dev);
 		return -ENOMEM;
+	}
 
 	data->txring = dma_alloc_coherent(&data->pdev->dev, txring_size,
 					  &data->txdma, GFP_KERNEL);
 	if (!data->txring) {
+		free_irq(data->irq_num, dev);
 		dma_free_coherent(&data->pdev->dev, rxring_size, data->rxring,
 				  data->rxdma);
 		return -ENOMEM;
@@ -533,7 +533,7 @@ static int bpq_device_event(struct notifier_block *this,
 	if (!net_eq(dev_net(dev), &init_net))
 		return NOTIFY_DONE;
 
-	if (!dev_is_ethdev(dev))
+	if (!dev_is_ethdev(dev) && !bpq_get_ax25_dev(dev))
 		return NOTIFY_DONE;
 
 	switch (event) {
@@ -1413,7 +1413,8 @@ static struct macsec_rx_sc *del_rx_sc(struct macsec_secy *secy, sci_t sci)
 	return NULL;
 }
 
-static struct macsec_rx_sc *create_rx_sc(struct net_device *dev, sci_t sci)
+static struct macsec_rx_sc *create_rx_sc(struct net_device *dev, sci_t sci,
+					 bool active)
 {
 	struct macsec_rx_sc *rx_sc;
 	struct macsec_dev *macsec;
@@ -1437,7 +1438,7 @@ static struct macsec_rx_sc *create_rx_sc(struct net_device *dev, sci_t sci)
 	}
 
 	rx_sc->sci = sci;
-	rx_sc->active = true;
+	rx_sc->active = active;
 	refcount_set(&rx_sc->refcnt, 1);
 
 	secy = &macsec_priv(dev)->secy;
@@ -1838,6 +1839,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
 			   secy->key_len);
 
 		err = macsec_offload(ops->mdo_add_rxsa, &ctx);
+		memzero_explicit(ctx.sa.key, secy->key_len);
 		if (err)
 			goto cleanup;
 	}
@@ -1876,7 +1878,7 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
 	struct macsec_rx_sc *rx_sc;
 	struct nlattr *tb_rxsc[MACSEC_RXSC_ATTR_MAX + 1];
 	struct macsec_secy *secy;
-	bool was_active;
+	bool active = true;
 	int ret;
 
 	if (!attrs[MACSEC_ATTR_IFINDEX])
@@ -1898,16 +1900,15 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
 	secy = &macsec_priv(dev)->secy;
 	sci = nla_get_sci(tb_rxsc[MACSEC_RXSC_ATTR_SCI]);
 
-	rx_sc = create_rx_sc(dev, sci);
+	if (tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE])
+		active = nla_get_u8(tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE]);
+
+	rx_sc = create_rx_sc(dev, sci, active);
 	if (IS_ERR(rx_sc)) {
 		rtnl_unlock();
 		return PTR_ERR(rx_sc);
 	}
 
-	was_active = rx_sc->active;
-	if (tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE])
-		rx_sc->active = !!nla_get_u8(tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE]);
-
 	if (macsec_is_offloaded(netdev_priv(dev))) {
 		const struct macsec_ops *ops;
 		struct macsec_context ctx;
@@ -1931,7 +1932,8 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
 	return 0;
 
 cleanup:
-	rx_sc->active = was_active;
+	del_rx_sc(secy, sci);
+	free_rx_sc(rx_sc);
 	rtnl_unlock();
 	return ret;
 }
@@ -2080,6 +2082,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
 			   secy->key_len);
 
 		err = macsec_offload(ops->mdo_add_txsa, &ctx);
+		memzero_explicit(ctx.sa.key, secy->key_len);
 		if (err)
 			goto cleanup;
 	}
@@ -2570,7 +2573,7 @@ static bool macsec_is_configured(struct macsec_dev *macsec)
 	struct macsec_tx_sc *tx_sc = &secy->tx_sc;
 	int i;
 
-	if (secy->n_rx_sc > 0)
+	if (secy->rx_sc)
 		return true;
 
 	for (i = 0; i < MACSEC_NUM_AN; i++)
@@ -2654,11 +2657,6 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
 	if (ret)
 		goto rollback;
 
-	/* Force features update, since they are different for SW MACSec and
-	 * HW offloading cases.
-	 */
-	netdev_update_features(dev);
-
 	rtnl_unlock();
 	return 0;
 
@@ -3432,16 +3430,9 @@ static netdev_tx_t macsec_start_xmit(struct sk_buff *skb,
 	return ret;
 }
 
-#define SW_MACSEC_FEATURES \
+#define MACSEC_FEATURES \
 	(NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST)
 
-/* If h/w offloading is enabled, use real device features save for
- * VLAN_FEATURES - they require additional ops
- * HW_MACSEC - no reason to report it
- */
-#define REAL_DEV_FEATURES(dev) \
-	((dev)->features & ~(NETIF_F_VLAN_FEATURES | NETIF_F_HW_MACSEC))
-
 static int macsec_dev_init(struct net_device *dev)
 {
 	struct macsec_dev *macsec = macsec_priv(dev);
@@ -3458,12 +3449,8 @@ static int macsec_dev_init(struct net_device *dev)
 		return err;
 	}
 
-	if (macsec_is_offloaded(macsec)) {
-		dev->features = REAL_DEV_FEATURES(real_dev);
-	} else {
-		dev->features = real_dev->features & SW_MACSEC_FEATURES;
-		dev->features |= NETIF_F_LLTX | NETIF_F_GSO_SOFTWARE;
-	}
+	dev->features = real_dev->features & MACSEC_FEATURES;
+	dev->features |= NETIF_F_LLTX | NETIF_F_GSO_SOFTWARE;
 
 	dev->needed_headroom = real_dev->needed_headroom +
 			       MACSEC_NEEDED_HEADROOM;
@@ -3495,10 +3482,7 @@ static netdev_features_t macsec_fix_features(struct net_device *dev,
 	struct macsec_dev *macsec = macsec_priv(dev);
 	struct net_device *real_dev = macsec->real_dev;
 
-	if (macsec_is_offloaded(macsec))
-		return REAL_DEV_FEATURES(real_dev);
-
-	features &= (real_dev->features & SW_MACSEC_FEATURES) |
+	features &= (real_dev->features & MACSEC_FEATURES) |
 		    NETIF_F_GSO_SOFTWARE | NETIF_F_SOFT_FEATURES;
 	features |= NETIF_F_LLTX;
 
@@ -1533,8 +1533,10 @@ destroy_macvlan_port:
 	/* the macvlan port may be freed by macvlan_uninit when fail to register.
 	 * so we destroy the macvlan port only when it's valid.
 	 */
-	if (create && macvlan_port_get_rtnl(lowerdev))
+	if (create && macvlan_port_get_rtnl(lowerdev)) {
+		macvlan_flush_sources(port, vlan);
 		macvlan_port_destroy(port->dev);
+	}
 	return err;
 }
 EXPORT_SYMBOL_GPL(macvlan_common_newlink);
@@ -632,6 +632,7 @@ static void vsc8584_macsec_free_flow(struct vsc8531_private *priv,
 
 	list_del(&flow->list);
 	clear_bit(flow->index, bitmap);
+	memzero_explicit(flow->key, sizeof(flow->key));
 	kfree(flow);
 }
 
@@ -1967,17 +1967,25 @@ drop:
 					  skb_headlen(skb));
 
 		if (unlikely(headlen > skb_headlen(skb))) {
+			WARN_ON_ONCE(1);
+			err = -ENOMEM;
 			dev_core_stats_rx_dropped_inc(tun->dev);
+napi_busy:
 			napi_free_frags(&tfile->napi);
 			rcu_read_unlock();
 			mutex_unlock(&tfile->napi_mutex);
-			WARN_ON(1);
-			return -ENOMEM;
+			return err;
 		}
 
-		local_bh_disable();
-		napi_gro_frags(&tfile->napi);
-		local_bh_enable();
+		if (likely(napi_schedule_prep(&tfile->napi))) {
+			local_bh_disable();
+			napi_gro_frags(&tfile->napi);
+			napi_complete(&tfile->napi);
+			local_bh_enable();
+		} else {
+			err = -EBUSY;
+			goto napi_busy;
+		}
 		mutex_unlock(&tfile->napi_mutex);
 	} else if (tfile->napi_enabled) {
 		struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
@@ -325,6 +325,7 @@ static int lapbeth_open(struct net_device *dev)
 
 	err = lapb_register(dev, &lapbeth_callbacks);
 	if (err != LAPB_OK) {
+		napi_disable(&lapbeth->napi);
 		pr_err("lapb_register error: %d\n", err);
 		return -ENODEV;
 	}
@@ -446,7 +447,7 @@ static int lapbeth_device_event(struct notifier_block *this,
 	if (dev_net(dev) != &init_net)
 		return NOTIFY_DONE;
 
-	if (!dev_is_ethdev(dev))
+	if (!dev_is_ethdev(dev) && !lapbeth_get_x25_dev(dev))
 		return NOTIFY_DONE;
 
 	switch (event) {
@@ -27,7 +27,7 @@
 #define ATH11K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01	52
 #define ATH11K_QMI_CALDB_SIZE			0x480000
 #define ATH11K_QMI_BDF_EXT_STR_LENGTH		0x20
-#define ATH11K_QMI_FW_MEM_REQ_SEGMENT_CNT	3
+#define ATH11K_QMI_FW_MEM_REQ_SEGMENT_CNT	5
 
 #define QMI_WLFW_REQUEST_MEM_IND_V01		0x0035
 #define QMI_WLFW_FW_MEM_READY_IND_V01		0x0037
@@ -287,11 +287,7 @@ int ath11k_regd_update(struct ath11k *ar)
 		goto err;
 	}
 
-	rtnl_lock();
-	wiphy_lock(ar->hw->wiphy);
-	ret = regulatory_set_wiphy_regd_sync(ar->hw->wiphy, regd_copy);
-	wiphy_unlock(ar->hw->wiphy);
-	rtnl_unlock();
+	ret = regulatory_set_wiphy_regd(ar->hw->wiphy, regd_copy);
 
 	kfree(regd_copy);
 
@@ -228,6 +228,10 @@ static void brcmf_fweh_event_worker(struct work_struct *work)
 			  brcmf_fweh_event_name(event->code), event->code,
 			  event->emsg.ifidx, event->emsg.bsscfgidx,
 			  event->emsg.addr);
+		if (event->emsg.bsscfgidx >= BRCMF_MAX_IFS) {
+			bphy_err(drvr, "invalid bsscfg index: %u\n", event->emsg.bsscfgidx);
+			goto event_free;
+		}
 
 		/* convert event message */
 		emsg_be = &event->emsg;
@@ -5232,7 +5232,7 @@ static int get_wep_tx_idx(struct airo_info *ai)
 	return -1;
 }
 
-static int set_wep_key(struct airo_info *ai, u16 index, const char *key,
+static int set_wep_key(struct airo_info *ai, u16 index, const u8 *key,
 		       u16 keylen, int perm, int lock)
 {
 	static const unsigned char macaddr[ETH_ALEN] = { 0x01, 0, 0, 0, 0, 0 };
@@ -5283,7 +5283,7 @@ static void proc_wepkey_on_close(struct inode *inode, struct file *file)
 	struct net_device *dev = pde_data(inode);
 	struct airo_info *ai = dev->ml_priv;
 	int i, rc;
-	char key[16];
+	u8 key[16];
 	u16 index = 0;
 	int j = 0;
 
@@ -5311,12 +5311,22 @@ static void proc_wepkey_on_close(struct inode *inode, struct file *file)
 	}
 
 	for (i = 0; i < 16*3 && data->wbuffer[i+j]; i++) {
+		int val;
+
+		if (i % 3 == 2)
+			continue;
+
+		val = hex_to_bin(data->wbuffer[i+j]);
+		if (val < 0) {
+			airo_print_err(ai->dev->name, "WebKey passed invalid key hex");
+			return;
+		}
 		switch(i%3) {
 		case 0:
-			key[i/3] = hex_to_bin(data->wbuffer[i+j])<<4;
+			key[i/3] = (u8)val << 4;
 			break;
 		case 1:
-			key[i/3] |= hex_to_bin(data->wbuffer[i+j]);
+			key[i/3] |= (u8)val;
 			break;
 		}
 	}
@@ -910,6 +910,7 @@ static void hwsim_send_nullfunc(struct mac80211_hwsim_data *data, u8 *mac,
 	struct hwsim_vif_priv *vp = (void *)vif->drv_priv;
 	struct sk_buff *skb;
 	struct ieee80211_hdr *hdr;
+	struct ieee80211_tx_info *cb;
 
 	if (!vp->assoc)
 		return;
@@ -931,6 +932,10 @@ static void hwsim_send_nullfunc(struct mac80211_hwsim_data *data, u8 *mac,
 	memcpy(hdr->addr2, mac, ETH_ALEN);
 	memcpy(hdr->addr3, vp->bssid, ETH_ALEN);
 
+	cb = IEEE80211_SKB_CB(skb);
+	cb->control.rates[0].count = 1;
+	cb->control.rates[1].idx = -1;
+
 	rcu_read_lock();
 	mac80211_hwsim_tx_frame(data->hw, skb,
 				rcu_dereference(vif->bss_conf.chanctx_conf)->def.chan);
@@ -1023,9 +1023,9 @@ static int rt2400pci_set_state(struct rt2x00_dev *rt2x00dev,
 {
 	u32 reg, reg2;
 	unsigned int i;
-	char put_to_sleep;
-	char bbp_state;
-	char rf_state;
+	bool put_to_sleep;
+	u8 bbp_state;
+	u8 rf_state;
 
 	put_to_sleep = (state != STATE_AWAKE);
 
@@ -1561,7 +1561,7 @@ static int rt2400pci_probe_hw_mode(struct rt2x00_dev *rt2x00dev)
 {
 	struct hw_mode_spec *spec = &rt2x00dev->spec;
 	struct channel_info *info;
-	char *tx_power;
+	u8 *tx_power;
 	unsigned int i;
 
 	/*