Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue

Jeff Kirsher says:

====================
40GbE Intel Wired LAN Driver Updates 2018-09-18

This series contains changes to i40evf so that it becomes a more
generic virtual function driver for current and future silicon.

While renaming i40evf to the more generic name iavf, we also put
the driver on a severe diet, since much of its code was unused or
unneeded.  The outcome is a lean and mean virtual function driver
that continues to work on existing 40GbE (i40e) virtual devices
and is prepped for future supported devices, like the 100GbE (ice)
virtual devices.

This solves two issues, one already present and one we saw coming.
The first was the constant code duplication between i40e and
i40evf, even though much of the duplicated code in i40evf was
unused or unneeded.  The second was the confusion that would come
from future VF devices that are not "40GbE"-only devices being
supported by i40evf.

The thought is that iavf will be the virtual function driver for
all future devices, so it should have a "generic" name to properly
represent that it is the VF driver for multiple generations of
devices.

The last patch in this series is unrelated to the iavf conversion;
it simply corrects a MODULE_LICENSE string.

Known Caveats:
Existing user space configurations may have to change, but the module
alias in patch 1 helps a bit here.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller committed 2018-09-18 19:27:40 -07:00
commit 89f4b9a6e4
48 changed files with 5098 additions and 9233 deletions
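A note on the "module alias" mentioned in the Known Caveats: the mechanism is MODULE_ALIAS(), which records an extra name in the module's metadata; depmod copies it into modules.alias, so "modprobe i40evf" still resolves to the renamed iavf module. Below is a minimal, self-contained sketch of the mechanism as a hypothetical demo module; it is not the actual patch 1 code, which carries the alias inside the iavf driver itself.

/* alias_demo.c - hypothetical module demonstrating a compatibility alias. */
#include <linux/init.h>
#include <linux/module.h>

static int __init alias_demo_init(void)
{
	return 0;	/* nothing to do; the interesting part is the metadata */
}

static void __exit alias_demo_exit(void)
{
}

module_init(alias_demo_init);
module_exit(alias_demo_exit);

/* depmod turns this into an "alias i40evf" entry in modules.alias, so
 * "modprobe i40evf" loads this module despite its new file name.
 */
MODULE_ALIAS("i40evf");
MODULE_LICENSE("GPL v2");

Scripts and udev rules that still reference the old name therefore keep working, which is why the caveat is only a "may have to change".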


@@ -94,8 +94,8 @@ gianfar.txt
 	- Gianfar Ethernet Driver.
 i40e.txt
 	- README for the Intel Ethernet Controller XL710 Driver (i40e).
-i40evf.txt
-	- Short note on the Driver for the Intel(R) XL710 X710 Virtual Function
+iavf.txt
+	- README for the Intel Ethernet Adaptive Virtual Function Driver (iavf).
 ieee802154.txt
 	- Linux IEEE 802.15.4 implementation, API and drivers
 igb.txt


@@ -2,7 +2,7 @@ Linux* Base Driver for Intel(R) Network Connection
 ==================================================
 
 Intel Ethernet Adaptive Virtual Function Linux driver.
-Copyright(c) 2013-2017 Intel Corporation.
+Copyright(c) 2013-2018 Intel Corporation.
 
 Contents
 ========
@@ -11,20 +11,21 @@ Contents
 - Identifying Your Adapter
 - Known Issues/Troubleshooting
 - Support
 
-This file describes the i40evf Linux* Base Driver.
+This file describes the iavf Linux* Base Driver. This driver
+was formerly called i40evf.
 
-The i40evf driver supports the below mentioned virtual function
+The iavf driver supports the below mentioned virtual function
 devices and can only be activated on kernels running the i40e or
 newer Physical Function (PF) driver compiled with CONFIG_PCI_IOV.
-The i40evf driver requires CONFIG_PCI_MSI to be enabled.
+The iavf driver requires CONFIG_PCI_MSI to be enabled.
 
-The guest OS loading the i40evf driver must support MSI-X interrupts.
+The guest OS loading the iavf driver must support MSI-X interrupts.
 
 Supported Hardware
 ==================
-Intel XL710 X710 Virtual Function
-Intel X722 Virtual Function
+Intel Ethernet Adaptive Virtual Function
+Intel Ethernet Adaptive Virtual Function
 
 Identifying Your Adapter
 ========================
@@ -32,7 +33,8 @@ Identifying Your Adapter
 For more information on how to identify your adapter, go to the
 Adapter & Driver ID Guide at:
 
-http://support.intel.com/support/go/network/adapter/idguide.htm
+https://www.intel.com/content/www/us/en/support/articles/000005584/network-and-i-o/ethernet-products.html
 
 Known Issues/Troubleshooting
 ============================


@@ -7348,7 +7348,7 @@ F:	Documentation/networking/ixgb.txt
 F:	Documentation/networking/ixgbe.txt
 F:	Documentation/networking/ixgbevf.txt
 F:	Documentation/networking/i40e.txt
-F:	Documentation/networking/i40evf.txt
+F:	Documentation/networking/iavf.txt
 F:	Documentation/networking/ice.txt
 F:	drivers/net/ethernet/intel/
 F:	drivers/net/ethernet/intel/*/


@@ -235,20 +235,27 @@ config I40E_DCB
 
 	  If unsure, say N.
 
+# this is here to allow seamless migration from I40EVF --> IAVF name
+# so that CONFIG_IAVF symbol will always mirror the state of CONFIG_I40EVF
+config IAVF
+	tristate
 config I40EVF
 	tristate "Intel(R) Ethernet Adaptive Virtual Function support"
+	select IAVF
 	depends on PCI_MSI
 	---help---
 	  This driver supports virtual functions for Intel XL710,
-	  X710, X722, and all devices advertising support for Intel
-	  Ethernet Adaptive Virtual Function devices. For more
+	  X710, X722, XXV710, and all devices advertising support for
+	  Intel Ethernet Adaptive Virtual Function devices. For more
 	  information on how to identify your adapter, go to the Adapter
 	  & Driver ID Guide that can be located at:
 
-	  <http://support.intel.com>
+	  <https://support.intel.com>
 
+	  This driver was formerly named i40evf.
+
 	  To compile this driver as a module, choose M here. The module
-	  will be called i40evf. MSI-X interrupt support is required
+	  will be called iavf. MSI-X interrupt support is required
 	  for this driver to work correctly.
 
 config ICE
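The hidden IAVF symbol above is never prompted for; it is selected by the user-visible I40EVF option, so an old .config answering CONFIG_I40EVF=m silently gains CONFIG_IAVF=m, which is what the updated Makefile rule in the next hunk keys off of. A hedged C illustration of the invariant this shim guarantees (hypothetical helper, not part of the series):

/* Because "config I40EVF" selects IAVF, the two symbols always agree.
 * IS_ENABLED() evaluates to 1 for both built-in (=y) and modular (=m)
 * configurations, so this hypothetical check always returns true.
 */
#include <linux/kconfig.h>
#include <linux/types.h>

static inline bool iavf_kconfig_shim_consistent(void)
{
	return IS_ENABLED(CONFIG_IAVF) == IS_ENABLED(CONFIG_I40EVF);
}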


@@ -12,6 +12,6 @@ obj-$(CONFIG_IXGBE) += ixgbe/
 obj-$(CONFIG_IXGBEVF) += ixgbevf/
 obj-$(CONFIG_I40E) += i40e/
 obj-$(CONFIG_IXGB) += ixgb/
-obj-$(CONFIG_I40EVF) += i40evf/
+obj-$(CONFIG_IAVF) += iavf/
 obj-$(CONFIG_FM10K) += fm10k/
 obj-$(CONFIG_ICE) += ice/


@@ -164,7 +164,7 @@
 
 MODULE_DESCRIPTION(DRV_DESCRIPTION);
 MODULE_AUTHOR(DRV_COPYRIGHT);
-MODULE_LICENSE("GPL");
+MODULE_LICENSE("GPL v2");
 MODULE_VERSION(DRV_VERSION);
 MODULE_FIRMWARE(FIRMWARE_D101M);
 MODULE_FIRMWARE(FIRMWARE_D101S);


@@ -195,7 +195,7 @@ static struct pci_driver e1000_driver = {
 MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
 MODULE_DESCRIPTION("Intel(R) PRO/1000 Network Driver");
-MODULE_LICENSE("GPL");
+MODULE_LICENSE("GPL v2");
 MODULE_VERSION(DRV_VERSION);
 
 #define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV|NETIF_MSG_PROBE|NETIF_MSG_LINK)


@@ -7592,7 +7592,7 @@ module_exit(e1000_exit_module);
 MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
 MODULE_DESCRIPTION("Intel(R) PRO/1000 Network Driver");
-MODULE_LICENSE("GPL");
+MODULE_LICENSE("GPL v2");
 MODULE_VERSION(DRV_VERSION);
 
 /* netdev.c */


@@ -21,7 +21,7 @@ static const char fm10k_copyright[] =
 MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
 MODULE_DESCRIPTION(DRV_SUMMARY);
-MODULE_LICENSE("GPL");
+MODULE_LICENSE("GPL v2");
 MODULE_VERSION(DRV_VERSION);
 
 /* single workqueue for entire fm10k driver */


@@ -91,7 +91,7 @@ MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all), Debug mask (0x8XXXXXXX
 MODULE_AUTHOR("Intel Corporation, <e1000-devel@lists.sourceforge.net>");
 MODULE_DESCRIPTION("Intel(R) Ethernet Connection XL710 Network Driver");
-MODULE_LICENSE("GPL");
+MODULE_LICENSE("GPL v2");
 MODULE_VERSION(DRV_VERSION);
 
 static struct workqueue_struct *i40e_wq;
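For context on the five MODULE_LICENSE hunks above: in kernels of this era, the table in include/linux/module.h defined "GPL" as "GNU Public License v2 or later" and "GPL v2" as "GNU Public License v2", and these files appear to carry SPDX GPL-2.0 identifiers, so "GPL v2" is the string that actually matches their license. A sketch of the relevant strings (paraphrased from the module.h comment block; treat the exact wording as approximate):

#include <linux/module.h>

/* Free-software strings accepted by MODULE_LICENSE(), per module.h:
 *
 *	"GPL"				GNU Public License v2 or later
 *	"GPL v2"			GNU Public License v2
 *	"GPL and additional rights"	GNU Public License v2 rights and more
 *	"Dual BSD/GPL"			GNU Public License v2 or BSD license choice
 *	"Dual MIT/GPL"			GNU Public License v2 or MIT license choice
 *	"Dual MPL/GPL"			GNU Public License v2 or MPL license choice
 */
MODULE_LICENSE("GPL v2");	/* matches SPDX-License-Identifier: GPL-2.0 */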


@@ -1,16 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
# Copyright(c) 2013 - 2018 Intel Corporation.
#
## Makefile for the Intel(R) 40GbE VF driver
#
#
ccflags-y += -I$(src)
subdir-ccflags-y += -I$(src)
obj-$(CONFIG_I40EVF) += i40evf.o
i40evf-objs := i40evf_main.o i40evf_ethtool.o i40evf_virtchnl.o \
i40e_txrx.o i40e_common.o i40e_adminq.o i40evf_client.o

File diff suppressed because it is too large.


@@ -1,35 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _I40E_ALLOC_H_
#define _I40E_ALLOC_H_
struct i40e_hw;
/* Memory allocation types */
enum i40e_memory_type {
i40e_mem_arq_buf = 0, /* ARQ indirect command buffer */
i40e_mem_asq_buf = 1,
i40e_mem_atq_buf = 2, /* ATQ indirect command buffer */
i40e_mem_arq_ring = 3, /* ARQ descriptor ring */
i40e_mem_atq_ring = 4, /* ATQ descriptor ring */
i40e_mem_pd = 5, /* Page Descriptor */
i40e_mem_bp = 6, /* Backing Page - 4KB */
i40e_mem_bp_jumbo = 7, /* Backing Page - > 4KB */
i40e_mem_reserved
};
/* prototype for functions used for dynamic memory allocation */
i40e_status i40e_allocate_dma_mem(struct i40e_hw *hw,
struct i40e_dma_mem *mem,
enum i40e_memory_type type,
u64 size, u32 alignment);
i40e_status i40e_free_dma_mem(struct i40e_hw *hw,
struct i40e_dma_mem *mem);
i40e_status i40e_allocate_virt_mem(struct i40e_hw *hw,
struct i40e_virt_mem *mem,
u32 size);
i40e_status i40e_free_virt_mem(struct i40e_hw *hw,
struct i40e_virt_mem *mem);
#endif /* _I40E_ALLOC_H_ */

File diff suppressed because it is too large.


@@ -1,34 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _I40E_DEVIDS_H_
#define _I40E_DEVIDS_H_
/* Device IDs */
#define I40E_DEV_ID_SFP_XL710 0x1572
#define I40E_DEV_ID_QEMU 0x1574
#define I40E_DEV_ID_KX_B 0x1580
#define I40E_DEV_ID_KX_C 0x1581
#define I40E_DEV_ID_QSFP_A 0x1583
#define I40E_DEV_ID_QSFP_B 0x1584
#define I40E_DEV_ID_QSFP_C 0x1585
#define I40E_DEV_ID_10G_BASE_T 0x1586
#define I40E_DEV_ID_20G_KR2 0x1587
#define I40E_DEV_ID_20G_KR2_A 0x1588
#define I40E_DEV_ID_10G_BASE_T4 0x1589
#define I40E_DEV_ID_25G_B 0x158A
#define I40E_DEV_ID_25G_SFP28 0x158B
#define I40E_DEV_ID_VF 0x154C
#define I40E_DEV_ID_VF_HV 0x1571
#define I40E_DEV_ID_ADAPTIVE_VF 0x1889
#define I40E_DEV_ID_SFP_X722 0x37D0
#define I40E_DEV_ID_1G_BASE_T_X722 0x37D1
#define I40E_DEV_ID_10G_BASE_T_X722 0x37D2
#define I40E_DEV_ID_SFP_I_X722 0x37D3
#define I40E_DEV_ID_X722_VF 0x37CD
#define i40e_is_40G_device(d) ((d) == I40E_DEV_ID_QSFP_A || \
(d) == I40E_DEV_ID_QSFP_B || \
(d) == I40E_DEV_ID_QSFP_C)
#endif /* _I40E_DEVIDS_H_ */


@@ -1,215 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _I40E_HMC_H_
#define _I40E_HMC_H_
#define I40E_HMC_MAX_BP_COUNT 512
/* forward-declare the HW struct for the compiler */
struct i40e_hw;
#define I40E_HMC_INFO_SIGNATURE 0x484D5347 /* HMSG */
#define I40E_HMC_PD_CNT_IN_SD 512
#define I40E_HMC_DIRECT_BP_SIZE 0x200000 /* 2M */
#define I40E_HMC_PAGED_BP_SIZE 4096
#define I40E_HMC_PD_BP_BUF_ALIGNMENT 4096
#define I40E_FIRST_VF_FPM_ID 16
struct i40e_hmc_obj_info {
u64 base; /* base addr in FPM */
u32 max_cnt; /* max count available for this hmc func */
u32 cnt; /* count of objects driver actually wants to create */
u64 size; /* size in bytes of one object */
};
enum i40e_sd_entry_type {
I40E_SD_TYPE_INVALID = 0,
I40E_SD_TYPE_PAGED = 1,
I40E_SD_TYPE_DIRECT = 2
};
struct i40e_hmc_bp {
enum i40e_sd_entry_type entry_type;
struct i40e_dma_mem addr; /* populate to be used by hw */
u32 sd_pd_index;
u32 ref_cnt;
};
struct i40e_hmc_pd_entry {
struct i40e_hmc_bp bp;
u32 sd_index;
bool rsrc_pg;
bool valid;
};
struct i40e_hmc_pd_table {
struct i40e_dma_mem pd_page_addr; /* populate to be used by hw */
struct i40e_hmc_pd_entry *pd_entry; /* [512] for sw book keeping */
struct i40e_virt_mem pd_entry_virt_mem; /* virt mem for pd_entry */
u32 ref_cnt;
u32 sd_index;
};
struct i40e_hmc_sd_entry {
enum i40e_sd_entry_type entry_type;
bool valid;
union {
struct i40e_hmc_pd_table pd_table;
struct i40e_hmc_bp bp;
} u;
};
struct i40e_hmc_sd_table {
struct i40e_virt_mem addr; /* used to track sd_entry allocations */
u32 sd_cnt;
u32 ref_cnt;
struct i40e_hmc_sd_entry *sd_entry; /* (sd_cnt*512) entries max */
};
struct i40e_hmc_info {
u32 signature;
/* equals to pci func num for PF and dynamically allocated for VFs */
u8 hmc_fn_id;
u16 first_sd_index; /* index of the first available SD */
/* hmc objects */
struct i40e_hmc_obj_info *hmc_obj;
struct i40e_virt_mem hmc_obj_virt_mem;
struct i40e_hmc_sd_table sd_table;
};
#define I40E_INC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt++)
#define I40E_INC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt++)
#define I40E_INC_BP_REFCNT(bp) ((bp)->ref_cnt++)
#define I40E_DEC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt--)
#define I40E_DEC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt--)
#define I40E_DEC_BP_REFCNT(bp) ((bp)->ref_cnt--)
/**
* I40E_SET_PF_SD_ENTRY - marks the sd entry as valid in the hardware
* @hw: pointer to our hw struct
* @pa: pointer to physical address
* @sd_index: segment descriptor index
* @type: if sd entry is direct or paged
**/
#define I40E_SET_PF_SD_ENTRY(hw, pa, sd_index, type) \
{ \
u32 val1, val2, val3; \
val1 = (u32)(upper_32_bits(pa)); \
val2 = (u32)(pa) | (I40E_HMC_MAX_BP_COUNT << \
I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) | \
((((type) == I40E_SD_TYPE_PAGED) ? 0 : 1) << \
I40E_PFHMC_SDDATALOW_PMSDTYPE_SHIFT) | \
BIT(I40E_PFHMC_SDDATALOW_PMSDVALID_SHIFT); \
val3 = (sd_index) | BIT_ULL(I40E_PFHMC_SDCMD_PMSDWR_SHIFT); \
wr32((hw), I40E_PFHMC_SDDATAHIGH, val1); \
wr32((hw), I40E_PFHMC_SDDATALOW, val2); \
wr32((hw), I40E_PFHMC_SDCMD, val3); \
}
/**
* I40E_CLEAR_PF_SD_ENTRY - marks the sd entry as invalid in the hardware
* @hw: pointer to our hw struct
* @sd_index: segment descriptor index
* @type: if sd entry is direct or paged
**/
#define I40E_CLEAR_PF_SD_ENTRY(hw, sd_index, type) \
{ \
u32 val2, val3; \
val2 = (I40E_HMC_MAX_BP_COUNT << \
I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) | \
((((type) == I40E_SD_TYPE_PAGED) ? 0 : 1) << \
I40E_PFHMC_SDDATALOW_PMSDTYPE_SHIFT); \
val3 = (sd_index) | BIT_ULL(I40E_PFHMC_SDCMD_PMSDWR_SHIFT); \
wr32((hw), I40E_PFHMC_SDDATAHIGH, 0); \
wr32((hw), I40E_PFHMC_SDDATALOW, val2); \
wr32((hw), I40E_PFHMC_SDCMD, val3); \
}
/**
* I40E_INVALIDATE_PF_HMC_PD - Invalidates the pd cache in the hardware
* @hw: pointer to our hw struct
* @sd_idx: segment descriptor index
* @pd_idx: page descriptor index
**/
#define I40E_INVALIDATE_PF_HMC_PD(hw, sd_idx, pd_idx) \
wr32((hw), I40E_PFHMC_PDINV, \
(((sd_idx) << I40E_PFHMC_PDINV_PMSDIDX_SHIFT) | \
((pd_idx) << I40E_PFHMC_PDINV_PMPDIDX_SHIFT)))
/**
* I40E_FIND_SD_INDEX_LIMIT - finds segment descriptor index limit
* @hmc_info: pointer to the HMC configuration information structure
* @type: type of HMC resources we're searching
* @index: starting index for the object
* @cnt: number of objects we're trying to create
* @sd_idx: pointer to return index of the segment descriptor in question
* @sd_limit: pointer to return the maximum number of segment descriptors
*
* This function calculates the segment descriptor index and index limit
* for the resource defined by i40e_hmc_rsrc_type.
**/
#define I40E_FIND_SD_INDEX_LIMIT(hmc_info, type, index, cnt, sd_idx, sd_limit)\
{ \
u64 fpm_addr, fpm_limit; \
fpm_addr = (hmc_info)->hmc_obj[(type)].base + \
(hmc_info)->hmc_obj[(type)].size * (index); \
fpm_limit = fpm_addr + (hmc_info)->hmc_obj[(type)].size * (cnt);\
*(sd_idx) = (u32)(fpm_addr / I40E_HMC_DIRECT_BP_SIZE); \
*(sd_limit) = (u32)((fpm_limit - 1) / I40E_HMC_DIRECT_BP_SIZE); \
/* add one more to the limit to correct our range */ \
*(sd_limit) += 1; \
}
/**
* I40E_FIND_PD_INDEX_LIMIT - finds page descriptor index limit
* @hmc_info: pointer to the HMC configuration information struct
* @type: HMC resource type we're examining
* @idx: starting index for the object
* @cnt: number of objects we're trying to create
* @pd_index: pointer to return page descriptor index
* @pd_limit: pointer to return page descriptor index limit
*
* Calculates the page descriptor index and index limit for the resource
* defined by i40e_hmc_rsrc_type.
**/
#define I40E_FIND_PD_INDEX_LIMIT(hmc_info, type, idx, cnt, pd_index, pd_limit)\
{ \
u64 fpm_adr, fpm_limit; \
fpm_adr = (hmc_info)->hmc_obj[(type)].base + \
(hmc_info)->hmc_obj[(type)].size * (idx); \
fpm_limit = fpm_adr + (hmc_info)->hmc_obj[(type)].size * (cnt); \
*(pd_index) = (u32)(fpm_adr / I40E_HMC_PAGED_BP_SIZE); \
*(pd_limit) = (u32)((fpm_limit - 1) / I40E_HMC_PAGED_BP_SIZE); \
/* add one more to the limit to correct our range */ \
*(pd_limit) += 1; \
}
i40e_status i40e_add_sd_table_entry(struct i40e_hw *hw,
struct i40e_hmc_info *hmc_info,
u32 sd_index,
enum i40e_sd_entry_type type,
u64 direct_mode_sz);
i40e_status i40e_add_pd_table_entry(struct i40e_hw *hw,
struct i40e_hmc_info *hmc_info,
u32 pd_index,
struct i40e_dma_mem *rsrc_pg);
i40e_status i40e_remove_pd_bp(struct i40e_hw *hw,
struct i40e_hmc_info *hmc_info,
u32 idx);
i40e_status i40e_prep_remove_sd_bp(struct i40e_hmc_info *hmc_info,
u32 idx);
i40e_status i40e_remove_sd_bp_new(struct i40e_hw *hw,
struct i40e_hmc_info *hmc_info,
u32 idx, bool is_pf);
i40e_status i40e_prep_remove_pd_page(struct i40e_hmc_info *hmc_info,
u32 idx);
i40e_status i40e_remove_pd_page_new(struct i40e_hw *hw,
struct i40e_hmc_info *hmc_info,
u32 idx, bool is_pf);
#endif /* _I40E_HMC_H_ */


@@ -1,158 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _I40E_LAN_HMC_H_
#define _I40E_LAN_HMC_H_
/* forward-declare the HW struct for the compiler */
struct i40e_hw;
/* HMC element context information */
/* Rx queue context data
*
* The sizes of the variables may be larger than needed due to crossing byte
* boundaries. If we do not have the width of the variable set to the correct
* size then we could end up shifting bits off the top of the variable when the
* variable is at the top of a byte and crosses over into the next byte.
*/
struct i40e_hmc_obj_rxq {
u16 head;
u16 cpuid; /* bigger than needed, see above for reason */
u64 base;
u16 qlen;
#define I40E_RXQ_CTX_DBUFF_SHIFT 7
u16 dbuff; /* bigger than needed, see above for reason */
#define I40E_RXQ_CTX_HBUFF_SHIFT 6
u16 hbuff; /* bigger than needed, see above for reason */
u8 dtype;
u8 dsize;
u8 crcstrip;
u8 fc_ena;
u8 l2tsel;
u8 hsplit_0;
u8 hsplit_1;
u8 showiv;
u32 rxmax; /* bigger than needed, see above for reason */
u8 tphrdesc_ena;
u8 tphwdesc_ena;
u8 tphdata_ena;
u8 tphhead_ena;
u16 lrxqthresh; /* bigger than needed, see above for reason */
u8 prefena; /* NOTE: normally must be set to 1 at init */
};
/* Tx queue context data
*
* The sizes of the variables may be larger than needed due to crossing byte
* boundaries. If we do not have the width of the variable set to the correct
* size then we could end up shifting bits off the top of the variable when the
* variable is at the top of a byte and crosses over into the next byte.
*/
struct i40e_hmc_obj_txq {
u16 head;
u8 new_context;
u64 base;
u8 fc_ena;
u8 timesync_ena;
u8 fd_ena;
u8 alt_vlan_ena;
u16 thead_wb;
u8 cpuid;
u8 head_wb_ena;
u16 qlen;
u8 tphrdesc_ena;
u8 tphrpacket_ena;
u8 tphwdesc_ena;
u64 head_wb_addr;
u32 crc;
u16 rdylist;
u8 rdylist_act;
};
/* for hsplit_0 field of Rx HMC context */
enum i40e_hmc_obj_rx_hsplit_0 {
I40E_HMC_OBJ_RX_HSPLIT_0_NO_SPLIT = 0,
I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_L2 = 1,
I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_IP = 2,
I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_TCP_UDP = 4,
I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_SCTP = 8,
};
/* fcoe_cntx and fcoe_filt are for debugging purpose only */
struct i40e_hmc_obj_fcoe_cntx {
u32 rsv[32];
};
struct i40e_hmc_obj_fcoe_filt {
u32 rsv[8];
};
/* Context sizes for LAN objects */
enum i40e_hmc_lan_object_size {
I40E_HMC_LAN_OBJ_SZ_8 = 0x3,
I40E_HMC_LAN_OBJ_SZ_16 = 0x4,
I40E_HMC_LAN_OBJ_SZ_32 = 0x5,
I40E_HMC_LAN_OBJ_SZ_64 = 0x6,
I40E_HMC_LAN_OBJ_SZ_128 = 0x7,
I40E_HMC_LAN_OBJ_SZ_256 = 0x8,
I40E_HMC_LAN_OBJ_SZ_512 = 0x9,
};
#define I40E_HMC_L2OBJ_BASE_ALIGNMENT 512
#define I40E_HMC_OBJ_SIZE_TXQ 128
#define I40E_HMC_OBJ_SIZE_RXQ 32
#define I40E_HMC_OBJ_SIZE_FCOE_CNTX 128
#define I40E_HMC_OBJ_SIZE_FCOE_FILT 64
enum i40e_hmc_lan_rsrc_type {
I40E_HMC_LAN_FULL = 0,
I40E_HMC_LAN_TX = 1,
I40E_HMC_LAN_RX = 2,
I40E_HMC_FCOE_CTX = 3,
I40E_HMC_FCOE_FILT = 4,
I40E_HMC_LAN_MAX = 5
};
enum i40e_hmc_model {
I40E_HMC_MODEL_DIRECT_PREFERRED = 0,
I40E_HMC_MODEL_DIRECT_ONLY = 1,
I40E_HMC_MODEL_PAGED_ONLY = 2,
I40E_HMC_MODEL_UNKNOWN,
};
struct i40e_hmc_lan_create_obj_info {
struct i40e_hmc_info *hmc_info;
u32 rsrc_type;
u32 start_idx;
u32 count;
enum i40e_sd_entry_type entry_type;
u64 direct_mode_sz;
};
struct i40e_hmc_lan_delete_obj_info {
struct i40e_hmc_info *hmc_info;
u32 rsrc_type;
u32 start_idx;
u32 count;
};
i40e_status i40e_init_lan_hmc(struct i40e_hw *hw, u32 txq_num,
u32 rxq_num, u32 fcoe_cntx_num,
u32 fcoe_filt_num);
i40e_status i40e_configure_lan_hmc(struct i40e_hw *hw,
enum i40e_hmc_model model);
i40e_status i40e_shutdown_lan_hmc(struct i40e_hw *hw);
i40e_status i40e_clear_lan_tx_queue_context(struct i40e_hw *hw,
u16 queue);
i40e_status i40e_set_lan_tx_queue_context(struct i40e_hw *hw,
u16 queue,
struct i40e_hmc_obj_txq *s);
i40e_status i40e_clear_lan_rx_queue_context(struct i40e_hw *hw,
u16 queue);
i40e_status i40e_set_lan_rx_queue_context(struct i40e_hw *hw,
u16 queue,
struct i40e_hmc_obj_rxq *s);
#endif /* _I40E_LAN_HMC_H_ */


@@ -1,130 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _I40E_PROTOTYPE_H_
#define _I40E_PROTOTYPE_H_
#include "i40e_type.h"
#include "i40e_alloc.h"
#include <linux/avf/virtchnl.h>
/* Prototypes for shared code functions that are not in
* the standard function pointer structures. These are
* mostly because they are needed even before the init
* has happened and will assist in the early SW and FW
* setup.
*/
/* adminq functions */
i40e_status i40evf_init_adminq(struct i40e_hw *hw);
i40e_status i40evf_shutdown_adminq(struct i40e_hw *hw);
void i40e_adminq_init_ring_data(struct i40e_hw *hw);
i40e_status i40evf_clean_arq_element(struct i40e_hw *hw,
struct i40e_arq_event_info *e,
u16 *events_pending);
i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
struct i40e_aq_desc *desc,
void *buff, /* can be NULL */
u16 buff_size,
struct i40e_asq_cmd_details *cmd_details);
bool i40evf_asq_done(struct i40e_hw *hw);
/* debug function for adminq */
void i40evf_debug_aq(struct i40e_hw *hw, enum i40e_debug_mask mask,
void *desc, void *buffer, u16 buf_len);
void i40e_idle_aq(struct i40e_hw *hw);
void i40evf_resume_aq(struct i40e_hw *hw);
bool i40evf_check_asq_alive(struct i40e_hw *hw);
i40e_status i40evf_aq_queue_shutdown(struct i40e_hw *hw, bool unloading);
const char *i40evf_aq_str(struct i40e_hw *hw, enum i40e_admin_queue_err aq_err);
const char *i40evf_stat_str(struct i40e_hw *hw, i40e_status stat_err);
i40e_status i40evf_aq_get_rss_lut(struct i40e_hw *hw, u16 seid,
bool pf_lut, u8 *lut, u16 lut_size);
i40e_status i40evf_aq_set_rss_lut(struct i40e_hw *hw, u16 seid,
bool pf_lut, u8 *lut, u16 lut_size);
i40e_status i40evf_aq_get_rss_key(struct i40e_hw *hw,
u16 seid,
struct i40e_aqc_get_set_rss_key_data *key);
i40e_status i40evf_aq_set_rss_key(struct i40e_hw *hw,
u16 seid,
struct i40e_aqc_get_set_rss_key_data *key);
i40e_status i40e_set_mac_type(struct i40e_hw *hw);
extern struct i40e_rx_ptype_decoded i40evf_ptype_lookup[];
static inline struct i40e_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
{
return i40evf_ptype_lookup[ptype];
}
/* prototype for functions used for SW locks */
/* i40e_common for VF drivers*/
void i40e_vf_parse_hw_config(struct i40e_hw *hw,
struct virtchnl_vf_resource *msg);
i40e_status i40e_vf_reset(struct i40e_hw *hw);
i40e_status i40e_aq_send_msg_to_pf(struct i40e_hw *hw,
enum virtchnl_ops v_opcode,
i40e_status v_retval,
u8 *msg, u16 msglen,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_set_filter_control(struct i40e_hw *hw,
struct i40e_filter_control_settings *settings);
i40e_status i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw,
u8 *mac_addr, u16 ethtype, u16 flags,
u16 vsi_seid, u16 queue, bool is_add,
struct i40e_control_filter_stats *stats,
struct i40e_asq_cmd_details *cmd_details);
void i40e_add_filter_to_drop_tx_flow_control_frames(struct i40e_hw *hw,
u16 vsi_seid);
i40e_status i40evf_aq_rx_ctl_read_register(struct i40e_hw *hw,
u32 reg_addr, u32 *reg_val,
struct i40e_asq_cmd_details *cmd_details);
u32 i40evf_read_rx_ctl(struct i40e_hw *hw, u32 reg_addr);
i40e_status i40evf_aq_rx_ctl_write_register(struct i40e_hw *hw,
u32 reg_addr, u32 reg_val,
struct i40e_asq_cmd_details *cmd_details);
void i40evf_write_rx_ctl(struct i40e_hw *hw, u32 reg_addr, u32 reg_val);
i40e_status i40e_aq_set_phy_register(struct i40e_hw *hw,
u8 phy_select, u8 dev_addr,
u32 reg_addr, u32 reg_val,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_get_phy_register(struct i40e_hw *hw,
u8 phy_select, u8 dev_addr,
u32 reg_addr, u32 *reg_val,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_read_phy_register(struct i40e_hw *hw, u8 page,
u16 reg, u8 phy_addr, u16 *value);
i40e_status i40e_write_phy_register(struct i40e_hw *hw, u8 page,
u16 reg, u8 phy_addr, u16 value);
i40e_status i40e_read_phy_register(struct i40e_hw *hw, u8 page, u16 reg,
u8 phy_addr, u16 *value);
i40e_status i40e_write_phy_register(struct i40e_hw *hw, u8 page, u16 reg,
u8 phy_addr, u16 value);
u8 i40e_get_phy_address(struct i40e_hw *hw, u8 dev_num);
i40e_status i40e_blink_phy_link_led(struct i40e_hw *hw,
u32 time, u32 interval);
i40e_status i40evf_aq_write_ddp(struct i40e_hw *hw, void *buff,
u16 buff_size, u32 track_id,
u32 *error_offset, u32 *error_info,
struct i40e_asq_cmd_details *
cmd_details);
i40e_status i40evf_aq_get_ddp_list(struct i40e_hw *hw, void *buff,
u16 buff_size, u8 flags,
struct i40e_asq_cmd_details *
cmd_details);
struct i40e_generic_seg_header *
i40evf_find_segment_in_package(u32 segment_type,
struct i40e_package_header *pkg_header);
enum i40e_status_code
i40evf_write_profile(struct i40e_hw *hw, struct i40e_profile_segment *i40e_seg,
u32 track_id);
enum i40e_status_code
i40evf_add_pinfo_to_list(struct i40e_hw *hw,
struct i40e_profile_segment *profile,
u8 *profile_info_sec, u32 track_id);
#endif /* _I40E_PROTOTYPE_H_ */


@@ -1,313 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _I40E_REGISTER_H_
#define _I40E_REGISTER_H_
#define I40E_VFMSIX_PBA1(_i) (0x00002000 + ((_i) * 4)) /* _i=0...19 */ /* Reset: VFLR */
#define I40E_VFMSIX_PBA1_MAX_INDEX 19
#define I40E_VFMSIX_PBA1_PENBIT_SHIFT 0
#define I40E_VFMSIX_PBA1_PENBIT_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_PBA1_PENBIT_SHIFT)
#define I40E_VFMSIX_TADD1(_i) (0x00002100 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
#define I40E_VFMSIX_TADD1_MAX_INDEX 639
#define I40E_VFMSIX_TADD1_MSIXTADD10_SHIFT 0
#define I40E_VFMSIX_TADD1_MSIXTADD10_MASK I40E_MASK(0x3, I40E_VFMSIX_TADD1_MSIXTADD10_SHIFT)
#define I40E_VFMSIX_TADD1_MSIXTADD_SHIFT 2
#define I40E_VFMSIX_TADD1_MSIXTADD_MASK I40E_MASK(0x3FFFFFFF, I40E_VFMSIX_TADD1_MSIXTADD_SHIFT)
#define I40E_VFMSIX_TMSG1(_i) (0x00002108 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
#define I40E_VFMSIX_TMSG1_MAX_INDEX 639
#define I40E_VFMSIX_TMSG1_MSIXTMSG_SHIFT 0
#define I40E_VFMSIX_TMSG1_MSIXTMSG_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_TMSG1_MSIXTMSG_SHIFT)
#define I40E_VFMSIX_TUADD1(_i) (0x00002104 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
#define I40E_VFMSIX_TUADD1_MAX_INDEX 639
#define I40E_VFMSIX_TUADD1_MSIXTUADD_SHIFT 0
#define I40E_VFMSIX_TUADD1_MSIXTUADD_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_TUADD1_MSIXTUADD_SHIFT)
#define I40E_VFMSIX_TVCTRL1(_i) (0x0000210C + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
#define I40E_VFMSIX_TVCTRL1_MAX_INDEX 639
#define I40E_VFMSIX_TVCTRL1_MASK_SHIFT 0
#define I40E_VFMSIX_TVCTRL1_MASK_MASK I40E_MASK(0x1, I40E_VFMSIX_TVCTRL1_MASK_SHIFT)
#define I40E_VF_ARQBAH1 0x00006000 /* Reset: EMPR */
#define I40E_VF_ARQBAH1_ARQBAH_SHIFT 0
#define I40E_VF_ARQBAH1_ARQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ARQBAH1_ARQBAH_SHIFT)
#define I40E_VF_ARQBAL1 0x00006C00 /* Reset: EMPR */
#define I40E_VF_ARQBAL1_ARQBAL_SHIFT 0
#define I40E_VF_ARQBAL1_ARQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ARQBAL1_ARQBAL_SHIFT)
#define I40E_VF_ARQH1 0x00007400 /* Reset: EMPR */
#define I40E_VF_ARQH1_ARQH_SHIFT 0
#define I40E_VF_ARQH1_ARQH_MASK I40E_MASK(0x3FF, I40E_VF_ARQH1_ARQH_SHIFT)
#define I40E_VF_ARQLEN1 0x00008000 /* Reset: EMPR */
#define I40E_VF_ARQLEN1_ARQLEN_SHIFT 0
#define I40E_VF_ARQLEN1_ARQLEN_MASK I40E_MASK(0x3FF, I40E_VF_ARQLEN1_ARQLEN_SHIFT)
#define I40E_VF_ARQLEN1_ARQVFE_SHIFT 28
#define I40E_VF_ARQLEN1_ARQVFE_MASK I40E_MASK(0x1, I40E_VF_ARQLEN1_ARQVFE_SHIFT)
#define I40E_VF_ARQLEN1_ARQOVFL_SHIFT 29
#define I40E_VF_ARQLEN1_ARQOVFL_MASK I40E_MASK(0x1, I40E_VF_ARQLEN1_ARQOVFL_SHIFT)
#define I40E_VF_ARQLEN1_ARQCRIT_SHIFT 30
#define I40E_VF_ARQLEN1_ARQCRIT_MASK I40E_MASK(0x1, I40E_VF_ARQLEN1_ARQCRIT_SHIFT)
#define I40E_VF_ARQLEN1_ARQENABLE_SHIFT 31
#define I40E_VF_ARQLEN1_ARQENABLE_MASK I40E_MASK(0x1, I40E_VF_ARQLEN1_ARQENABLE_SHIFT)
#define I40E_VF_ARQT1 0x00007000 /* Reset: EMPR */
#define I40E_VF_ARQT1_ARQT_SHIFT 0
#define I40E_VF_ARQT1_ARQT_MASK I40E_MASK(0x3FF, I40E_VF_ARQT1_ARQT_SHIFT)
#define I40E_VF_ATQBAH1 0x00007800 /* Reset: EMPR */
#define I40E_VF_ATQBAH1_ATQBAH_SHIFT 0
#define I40E_VF_ATQBAH1_ATQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ATQBAH1_ATQBAH_SHIFT)
#define I40E_VF_ATQBAL1 0x00007C00 /* Reset: EMPR */
#define I40E_VF_ATQBAL1_ATQBAL_SHIFT 0
#define I40E_VF_ATQBAL1_ATQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ATQBAL1_ATQBAL_SHIFT)
#define I40E_VF_ATQH1 0x00006400 /* Reset: EMPR */
#define I40E_VF_ATQH1_ATQH_SHIFT 0
#define I40E_VF_ATQH1_ATQH_MASK I40E_MASK(0x3FF, I40E_VF_ATQH1_ATQH_SHIFT)
#define I40E_VF_ATQLEN1 0x00006800 /* Reset: EMPR */
#define I40E_VF_ATQLEN1_ATQLEN_SHIFT 0
#define I40E_VF_ATQLEN1_ATQLEN_MASK I40E_MASK(0x3FF, I40E_VF_ATQLEN1_ATQLEN_SHIFT)
#define I40E_VF_ATQLEN1_ATQVFE_SHIFT 28
#define I40E_VF_ATQLEN1_ATQVFE_MASK I40E_MASK(0x1, I40E_VF_ATQLEN1_ATQVFE_SHIFT)
#define I40E_VF_ATQLEN1_ATQOVFL_SHIFT 29
#define I40E_VF_ATQLEN1_ATQOVFL_MASK I40E_MASK(0x1, I40E_VF_ATQLEN1_ATQOVFL_SHIFT)
#define I40E_VF_ATQLEN1_ATQCRIT_SHIFT 30
#define I40E_VF_ATQLEN1_ATQCRIT_MASK I40E_MASK(0x1, I40E_VF_ATQLEN1_ATQCRIT_SHIFT)
#define I40E_VF_ATQLEN1_ATQENABLE_SHIFT 31
#define I40E_VF_ATQLEN1_ATQENABLE_MASK I40E_MASK(0x1, I40E_VF_ATQLEN1_ATQENABLE_SHIFT)
#define I40E_VF_ATQT1 0x00008400 /* Reset: EMPR */
#define I40E_VF_ATQT1_ATQT_SHIFT 0
#define I40E_VF_ATQT1_ATQT_MASK I40E_MASK(0x3FF, I40E_VF_ATQT1_ATQT_SHIFT)
#define I40E_VFGEN_RSTAT 0x00008800 /* Reset: VFR */
#define I40E_VFGEN_RSTAT_VFR_STATE_SHIFT 0
#define I40E_VFGEN_RSTAT_VFR_STATE_MASK I40E_MASK(0x3, I40E_VFGEN_RSTAT_VFR_STATE_SHIFT)
#define I40E_VFINT_DYN_CTL01 0x00005C00 /* Reset: VFR */
#define I40E_VFINT_DYN_CTL01_INTENA_SHIFT 0
#define I40E_VFINT_DYN_CTL01_INTENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_INTENA_SHIFT)
#define I40E_VFINT_DYN_CTL01_CLEARPBA_SHIFT 1
#define I40E_VFINT_DYN_CTL01_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_CLEARPBA_SHIFT)
#define I40E_VFINT_DYN_CTL01_SWINT_TRIG_SHIFT 2
#define I40E_VFINT_DYN_CTL01_SWINT_TRIG_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_SWINT_TRIG_SHIFT)
#define I40E_VFINT_DYN_CTL01_ITR_INDX_SHIFT 3
#define I40E_VFINT_DYN_CTL01_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTL01_ITR_INDX_SHIFT)
#define I40E_VFINT_DYN_CTL01_INTERVAL_SHIFT 5
#define I40E_VFINT_DYN_CTL01_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_DYN_CTL01_INTERVAL_SHIFT)
#define I40E_VFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT 24
#define I40E_VFINT_DYN_CTL01_SW_ITR_INDX_ENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT)
#define I40E_VFINT_DYN_CTL01_SW_ITR_INDX_SHIFT 25
#define I40E_VFINT_DYN_CTL01_SW_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTL01_SW_ITR_INDX_SHIFT)
#define I40E_VFINT_DYN_CTL01_INTENA_MSK_SHIFT 31
#define I40E_VFINT_DYN_CTL01_INTENA_MSK_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_INTENA_MSK_SHIFT)
#define I40E_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */
#define I40E_VFINT_DYN_CTLN1_MAX_INDEX 15
#define I40E_VFINT_DYN_CTLN1_INTENA_SHIFT 0
#define I40E_VFINT_DYN_CTLN1_INTENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_INTENA_SHIFT)
#define I40E_VFINT_DYN_CTLN1_CLEARPBA_SHIFT 1
#define I40E_VFINT_DYN_CTLN1_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_CLEARPBA_SHIFT)
#define I40E_VFINT_DYN_CTLN1_SWINT_TRIG_SHIFT 2
#define I40E_VFINT_DYN_CTLN1_SWINT_TRIG_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_SWINT_TRIG_SHIFT)
#define I40E_VFINT_DYN_CTLN1_ITR_INDX_SHIFT 3
#define I40E_VFINT_DYN_CTLN1_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTLN1_ITR_INDX_SHIFT)
#define I40E_VFINT_DYN_CTLN1_INTERVAL_SHIFT 5
#define I40E_VFINT_DYN_CTLN1_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_DYN_CTLN1_INTERVAL_SHIFT)
#define I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT 24
#define I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_ENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT)
#define I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT 25
#define I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT)
#define I40E_VFINT_DYN_CTLN1_INTENA_MSK_SHIFT 31
#define I40E_VFINT_DYN_CTLN1_INTENA_MSK_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_INTENA_MSK_SHIFT)
#define I40E_VFINT_ICR0_ENA1 0x00005000 /* Reset: CORER */
#define I40E_VFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT 25
#define I40E_VFINT_ICR0_ENA1_LINK_STAT_CHANGE_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT)
#define I40E_VFINT_ICR0_ENA1_ADMINQ_SHIFT 30
#define I40E_VFINT_ICR0_ENA1_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ENA1_ADMINQ_SHIFT)
#define I40E_VFINT_ICR0_ENA1_RSVD_SHIFT 31
#define I40E_VFINT_ICR0_ENA1_RSVD_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ENA1_RSVD_SHIFT)
#define I40E_VFINT_ICR01 0x00004800 /* Reset: CORER */
#define I40E_VFINT_ICR01_INTEVENT_SHIFT 0
#define I40E_VFINT_ICR01_INTEVENT_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_INTEVENT_SHIFT)
#define I40E_VFINT_ICR01_QUEUE_0_SHIFT 1
#define I40E_VFINT_ICR01_QUEUE_0_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_QUEUE_0_SHIFT)
#define I40E_VFINT_ICR01_QUEUE_1_SHIFT 2
#define I40E_VFINT_ICR01_QUEUE_1_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_QUEUE_1_SHIFT)
#define I40E_VFINT_ICR01_QUEUE_2_SHIFT 3
#define I40E_VFINT_ICR01_QUEUE_2_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_QUEUE_2_SHIFT)
#define I40E_VFINT_ICR01_QUEUE_3_SHIFT 4
#define I40E_VFINT_ICR01_QUEUE_3_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_QUEUE_3_SHIFT)
#define I40E_VFINT_ICR01_LINK_STAT_CHANGE_SHIFT 25
#define I40E_VFINT_ICR01_LINK_STAT_CHANGE_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_LINK_STAT_CHANGE_SHIFT)
#define I40E_VFINT_ICR01_ADMINQ_SHIFT 30
#define I40E_VFINT_ICR01_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_ADMINQ_SHIFT)
#define I40E_VFINT_ICR01_SWINT_SHIFT 31
#define I40E_VFINT_ICR01_SWINT_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_SWINT_SHIFT)
#define I40E_VFINT_ITR01(_i) (0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset: VFR */
#define I40E_VFINT_ITR01_MAX_INDEX 2
#define I40E_VFINT_ITR01_INTERVAL_SHIFT 0
#define I40E_VFINT_ITR01_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_ITR01_INTERVAL_SHIFT)
#define I40E_VFINT_ITRN1(_i, _INTVF) (0x00002800 + ((_i) * 64 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...15 */ /* Reset: VFR */
#define I40E_VFINT_ITRN1_MAX_INDEX 2
#define I40E_VFINT_ITRN1_INTERVAL_SHIFT 0
#define I40E_VFINT_ITRN1_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_ITRN1_INTERVAL_SHIFT)
#define I40E_VFINT_STAT_CTL01 0x00005400 /* Reset: CORER */
#define I40E_VFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT 2
#define I40E_VFINT_STAT_CTL01_OTHER_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT)
#define I40E_QRX_TAIL1(_Q) (0x00002000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: CORER */
#define I40E_QRX_TAIL1_MAX_INDEX 15
#define I40E_QRX_TAIL1_TAIL_SHIFT 0
#define I40E_QRX_TAIL1_TAIL_MASK I40E_MASK(0x1FFF, I40E_QRX_TAIL1_TAIL_SHIFT)
#define I40E_QTX_TAIL1(_Q) (0x00000000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: PFR */
#define I40E_QTX_TAIL1_MAX_INDEX 15
#define I40E_QTX_TAIL1_TAIL_SHIFT 0
#define I40E_QTX_TAIL1_TAIL_MASK I40E_MASK(0x1FFF, I40E_QTX_TAIL1_TAIL_SHIFT)
#define I40E_VFMSIX_PBA 0x00002000 /* Reset: VFLR */
#define I40E_VFMSIX_PBA_PENBIT_SHIFT 0
#define I40E_VFMSIX_PBA_PENBIT_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_PBA_PENBIT_SHIFT)
#define I40E_VFMSIX_TADD(_i) (0x00000000 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
#define I40E_VFMSIX_TADD_MAX_INDEX 16
#define I40E_VFMSIX_TADD_MSIXTADD10_SHIFT 0
#define I40E_VFMSIX_TADD_MSIXTADD10_MASK I40E_MASK(0x3, I40E_VFMSIX_TADD_MSIXTADD10_SHIFT)
#define I40E_VFMSIX_TADD_MSIXTADD_SHIFT 2
#define I40E_VFMSIX_TADD_MSIXTADD_MASK I40E_MASK(0x3FFFFFFF, I40E_VFMSIX_TADD_MSIXTADD_SHIFT)
#define I40E_VFMSIX_TMSG(_i) (0x00000008 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
#define I40E_VFMSIX_TMSG_MAX_INDEX 16
#define I40E_VFMSIX_TMSG_MSIXTMSG_SHIFT 0
#define I40E_VFMSIX_TMSG_MSIXTMSG_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_TMSG_MSIXTMSG_SHIFT)
#define I40E_VFMSIX_TUADD(_i) (0x00000004 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
#define I40E_VFMSIX_TUADD_MAX_INDEX 16
#define I40E_VFMSIX_TUADD_MSIXTUADD_SHIFT 0
#define I40E_VFMSIX_TUADD_MSIXTUADD_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_TUADD_MSIXTUADD_SHIFT)
#define I40E_VFMSIX_TVCTRL(_i) (0x0000000C + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
#define I40E_VFMSIX_TVCTRL_MAX_INDEX 16
#define I40E_VFMSIX_TVCTRL_MASK_SHIFT 0
#define I40E_VFMSIX_TVCTRL_MASK_MASK I40E_MASK(0x1, I40E_VFMSIX_TVCTRL_MASK_SHIFT)
#define I40E_VFCM_PE_ERRDATA 0x0000DC00 /* Reset: VFR */
#define I40E_VFCM_PE_ERRDATA_ERROR_CODE_SHIFT 0
#define I40E_VFCM_PE_ERRDATA_ERROR_CODE_MASK I40E_MASK(0xF, I40E_VFCM_PE_ERRDATA_ERROR_CODE_SHIFT)
#define I40E_VFCM_PE_ERRDATA_Q_TYPE_SHIFT 4
#define I40E_VFCM_PE_ERRDATA_Q_TYPE_MASK I40E_MASK(0x7, I40E_VFCM_PE_ERRDATA_Q_TYPE_SHIFT)
#define I40E_VFCM_PE_ERRDATA_Q_NUM_SHIFT 8
#define I40E_VFCM_PE_ERRDATA_Q_NUM_MASK I40E_MASK(0x3FFFF, I40E_VFCM_PE_ERRDATA_Q_NUM_SHIFT)
#define I40E_VFCM_PE_ERRINFO 0x0000D800 /* Reset: VFR */
#define I40E_VFCM_PE_ERRINFO_ERROR_VALID_SHIFT 0
#define I40E_VFCM_PE_ERRINFO_ERROR_VALID_MASK I40E_MASK(0x1, I40E_VFCM_PE_ERRINFO_ERROR_VALID_SHIFT)
#define I40E_VFCM_PE_ERRINFO_ERROR_INST_SHIFT 4
#define I40E_VFCM_PE_ERRINFO_ERROR_INST_MASK I40E_MASK(0x7, I40E_VFCM_PE_ERRINFO_ERROR_INST_SHIFT)
#define I40E_VFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT 8
#define I40E_VFCM_PE_ERRINFO_DBL_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_VFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT)
#define I40E_VFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT 16
#define I40E_VFCM_PE_ERRINFO_RLU_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_VFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT)
#define I40E_VFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT 24
#define I40E_VFCM_PE_ERRINFO_RLS_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_VFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT)
#define I40E_VFQF_HENA(_i) (0x0000C400 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */
#define I40E_VFQF_HENA_MAX_INDEX 1
#define I40E_VFQF_HENA_PTYPE_ENA_SHIFT 0
#define I40E_VFQF_HENA_PTYPE_ENA_MASK I40E_MASK(0xFFFFFFFF, I40E_VFQF_HENA_PTYPE_ENA_SHIFT)
#define I40E_VFQF_HKEY(_i) (0x0000CC00 + ((_i) * 4)) /* _i=0...12 */ /* Reset: CORER */
#define I40E_VFQF_HKEY_MAX_INDEX 12
#define I40E_VFQF_HKEY_KEY_0_SHIFT 0
#define I40E_VFQF_HKEY_KEY_0_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY_KEY_0_SHIFT)
#define I40E_VFQF_HKEY_KEY_1_SHIFT 8
#define I40E_VFQF_HKEY_KEY_1_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY_KEY_1_SHIFT)
#define I40E_VFQF_HKEY_KEY_2_SHIFT 16
#define I40E_VFQF_HKEY_KEY_2_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY_KEY_2_SHIFT)
#define I40E_VFQF_HKEY_KEY_3_SHIFT 24
#define I40E_VFQF_HKEY_KEY_3_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY_KEY_3_SHIFT)
#define I40E_VFQF_HLUT(_i) (0x0000D000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
#define I40E_VFQF_HLUT_MAX_INDEX 15
#define I40E_VFQF_HLUT_LUT0_SHIFT 0
#define I40E_VFQF_HLUT_LUT0_MASK I40E_MASK(0xF, I40E_VFQF_HLUT_LUT0_SHIFT)
#define I40E_VFQF_HLUT_LUT1_SHIFT 8
#define I40E_VFQF_HLUT_LUT1_MASK I40E_MASK(0xF, I40E_VFQF_HLUT_LUT1_SHIFT)
#define I40E_VFQF_HLUT_LUT2_SHIFT 16
#define I40E_VFQF_HLUT_LUT2_MASK I40E_MASK(0xF, I40E_VFQF_HLUT_LUT2_SHIFT)
#define I40E_VFQF_HLUT_LUT3_SHIFT 24
#define I40E_VFQF_HLUT_LUT3_MASK I40E_MASK(0xF, I40E_VFQF_HLUT_LUT3_SHIFT)
#define I40E_VFQF_HREGION(_i) (0x0000D400 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */
#define I40E_VFQF_HREGION_MAX_INDEX 7
#define I40E_VFQF_HREGION_OVERRIDE_ENA_0_SHIFT 0
#define I40E_VFQF_HREGION_OVERRIDE_ENA_0_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_0_SHIFT)
#define I40E_VFQF_HREGION_REGION_0_SHIFT 1
#define I40E_VFQF_HREGION_REGION_0_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_0_SHIFT)
#define I40E_VFQF_HREGION_OVERRIDE_ENA_1_SHIFT 4
#define I40E_VFQF_HREGION_OVERRIDE_ENA_1_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_1_SHIFT)
#define I40E_VFQF_HREGION_REGION_1_SHIFT 5
#define I40E_VFQF_HREGION_REGION_1_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_1_SHIFT)
#define I40E_VFQF_HREGION_OVERRIDE_ENA_2_SHIFT 8
#define I40E_VFQF_HREGION_OVERRIDE_ENA_2_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_2_SHIFT)
#define I40E_VFQF_HREGION_REGION_2_SHIFT 9
#define I40E_VFQF_HREGION_REGION_2_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_2_SHIFT)
#define I40E_VFQF_HREGION_OVERRIDE_ENA_3_SHIFT 12
#define I40E_VFQF_HREGION_OVERRIDE_ENA_3_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_3_SHIFT)
#define I40E_VFQF_HREGION_REGION_3_SHIFT 13
#define I40E_VFQF_HREGION_REGION_3_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_3_SHIFT)
#define I40E_VFQF_HREGION_OVERRIDE_ENA_4_SHIFT 16
#define I40E_VFQF_HREGION_OVERRIDE_ENA_4_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_4_SHIFT)
#define I40E_VFQF_HREGION_REGION_4_SHIFT 17
#define I40E_VFQF_HREGION_REGION_4_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_4_SHIFT)
#define I40E_VFQF_HREGION_OVERRIDE_ENA_5_SHIFT 20
#define I40E_VFQF_HREGION_OVERRIDE_ENA_5_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_5_SHIFT)
#define I40E_VFQF_HREGION_REGION_5_SHIFT 21
#define I40E_VFQF_HREGION_REGION_5_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_5_SHIFT)
#define I40E_VFQF_HREGION_OVERRIDE_ENA_6_SHIFT 24
#define I40E_VFQF_HREGION_OVERRIDE_ENA_6_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_6_SHIFT)
#define I40E_VFQF_HREGION_REGION_6_SHIFT 25
#define I40E_VFQF_HREGION_REGION_6_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_6_SHIFT)
#define I40E_VFQF_HREGION_OVERRIDE_ENA_7_SHIFT 28
#define I40E_VFQF_HREGION_OVERRIDE_ENA_7_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_7_SHIFT)
#define I40E_VFQF_HREGION_REGION_7_SHIFT 29
#define I40E_VFQF_HREGION_REGION_7_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_7_SHIFT)
#define I40E_VFINT_DYN_CTL01_WB_ON_ITR_SHIFT 30
#define I40E_VFINT_DYN_CTL01_WB_ON_ITR_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_WB_ON_ITR_SHIFT)
#define I40E_VFINT_DYN_CTLN1_WB_ON_ITR_SHIFT 30
#define I40E_VFINT_DYN_CTLN1_WB_ON_ITR_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_WB_ON_ITR_SHIFT)
#define I40E_VFPE_AEQALLOC1 0x0000A400 /* Reset: VFR */
#define I40E_VFPE_AEQALLOC1_AECOUNT_SHIFT 0
#define I40E_VFPE_AEQALLOC1_AECOUNT_MASK I40E_MASK(0xFFFFFFFF, I40E_VFPE_AEQALLOC1_AECOUNT_SHIFT)
#define I40E_VFPE_CCQPHIGH1 0x00009800 /* Reset: VFR */
#define I40E_VFPE_CCQPHIGH1_PECCQPHIGH_SHIFT 0
#define I40E_VFPE_CCQPHIGH1_PECCQPHIGH_MASK I40E_MASK(0xFFFFFFFF, I40E_VFPE_CCQPHIGH1_PECCQPHIGH_SHIFT)
#define I40E_VFPE_CCQPLOW1 0x0000AC00 /* Reset: VFR */
#define I40E_VFPE_CCQPLOW1_PECCQPLOW_SHIFT 0
#define I40E_VFPE_CCQPLOW1_PECCQPLOW_MASK I40E_MASK(0xFFFFFFFF, I40E_VFPE_CCQPLOW1_PECCQPLOW_SHIFT)
#define I40E_VFPE_CCQPSTATUS1 0x0000B800 /* Reset: VFR */
#define I40E_VFPE_CCQPSTATUS1_CCQP_DONE_SHIFT 0
#define I40E_VFPE_CCQPSTATUS1_CCQP_DONE_MASK I40E_MASK(0x1, I40E_VFPE_CCQPSTATUS1_CCQP_DONE_SHIFT)
#define I40E_VFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT 4
#define I40E_VFPE_CCQPSTATUS1_HMC_PROFILE_MASK I40E_MASK(0x7, I40E_VFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT)
#define I40E_VFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT 16
#define I40E_VFPE_CCQPSTATUS1_RDMA_EN_VFS_MASK I40E_MASK(0x3F, I40E_VFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT)
#define I40E_VFPE_CCQPSTATUS1_CCQP_ERR_SHIFT 31
#define I40E_VFPE_CCQPSTATUS1_CCQP_ERR_MASK I40E_MASK(0x1, I40E_VFPE_CCQPSTATUS1_CCQP_ERR_SHIFT)
#define I40E_VFPE_CQACK1 0x0000B000 /* Reset: VFR */
#define I40E_VFPE_CQACK1_PECQID_SHIFT 0
#define I40E_VFPE_CQACK1_PECQID_MASK I40E_MASK(0x1FFFF, I40E_VFPE_CQACK1_PECQID_SHIFT)
#define I40E_VFPE_CQARM1 0x0000B400 /* Reset: VFR */
#define I40E_VFPE_CQARM1_PECQID_SHIFT 0
#define I40E_VFPE_CQARM1_PECQID_MASK I40E_MASK(0x1FFFF, I40E_VFPE_CQARM1_PECQID_SHIFT)
#define I40E_VFPE_CQPDB1 0x0000BC00 /* Reset: VFR */
#define I40E_VFPE_CQPDB1_WQHEAD_SHIFT 0
#define I40E_VFPE_CQPDB1_WQHEAD_MASK I40E_MASK(0x7FF, I40E_VFPE_CQPDB1_WQHEAD_SHIFT)
#define I40E_VFPE_CQPERRCODES1 0x00009C00 /* Reset: VFR */
#define I40E_VFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT 0
#define I40E_VFPE_CQPERRCODES1_CQP_MINOR_CODE_MASK I40E_MASK(0xFFFF, I40E_VFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT)
#define I40E_VFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT 16
#define I40E_VFPE_CQPERRCODES1_CQP_MAJOR_CODE_MASK I40E_MASK(0xFFFF, I40E_VFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT)
#define I40E_VFPE_CQPTAIL1 0x0000A000 /* Reset: VFR */
#define I40E_VFPE_CQPTAIL1_WQTAIL_SHIFT 0
#define I40E_VFPE_CQPTAIL1_WQTAIL_MASK I40E_MASK(0x7FF, I40E_VFPE_CQPTAIL1_WQTAIL_SHIFT)
#define I40E_VFPE_CQPTAIL1_CQP_OP_ERR_SHIFT 31
#define I40E_VFPE_CQPTAIL1_CQP_OP_ERR_MASK I40E_MASK(0x1, I40E_VFPE_CQPTAIL1_CQP_OP_ERR_SHIFT)
#define I40E_VFPE_IPCONFIG01 0x00008C00 /* Reset: VFR */
#define I40E_VFPE_IPCONFIG01_PEIPID_SHIFT 0
#define I40E_VFPE_IPCONFIG01_PEIPID_MASK I40E_MASK(0xFFFF, I40E_VFPE_IPCONFIG01_PEIPID_SHIFT)
#define I40E_VFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT 16
#define I40E_VFPE_IPCONFIG01_USEENTIREIDRANGE_MASK I40E_MASK(0x1, I40E_VFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT)
#define I40E_VFPE_MRTEIDXMASK1 0x00009000 /* Reset: VFR */
#define I40E_VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT 0
#define I40E_VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_MASK I40E_MASK(0x1F, I40E_VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT)
#define I40E_VFPE_RCVUNEXPECTEDERROR1 0x00009400 /* Reset: VFR */
#define I40E_VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT 0
#define I40E_VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_MASK I40E_MASK(0xFFFFFF, I40E_VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT)
#define I40E_VFPE_TCPNOWTIMER1 0x0000A800 /* Reset: VFR */
#define I40E_VFPE_TCPNOWTIMER1_TCP_NOW_SHIFT 0
#define I40E_VFPE_TCPNOWTIMER1_TCP_NOW_MASK I40E_MASK(0xFFFFFFFF, I40E_VFPE_TCPNOWTIMER1_TCP_NOW_SHIFT)
#define I40E_VFPE_WQEALLOC1 0x0000C000 /* Reset: VFR */
#define I40E_VFPE_WQEALLOC1_PEQPID_SHIFT 0
#define I40E_VFPE_WQEALLOC1_PEQPID_MASK I40E_MASK(0x3FFFF, I40E_VFPE_WQEALLOC1_PEQPID_SHIFT)
#define I40E_VFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT 20
#define I40E_VFPE_WQEALLOC1_WQE_DESC_INDEX_MASK I40E_MASK(0xFFF, I40E_VFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT)
#endif /* _I40E_REGISTER_H_ */

File diff suppressed because it is too large.


@@ -1,427 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _I40EVF_H_
#define _I40EVF_H_
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/aer.h>
#include <linux/netdevice.h>
#include <linux/vmalloc.h>
#include <linux/interrupt.h>
#include <linux/ethtool.h>
#include <linux/if_vlan.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/sctp.h>
#include <linux/ipv6.h>
#include <linux/kernel.h>
#include <linux/bitops.h>
#include <linux/timer.h>
#include <linux/workqueue.h>
#include <linux/wait.h>
#include <linux/delay.h>
#include <linux/gfp.h>
#include <linux/skbuff.h>
#include <linux/dma-mapping.h>
#include <linux/etherdevice.h>
#include <linux/socket.h>
#include <linux/jiffies.h>
#include <net/ip6_checksum.h>
#include <net/pkt_cls.h>
#include <net/udp.h>
#include <net/tc_act/tc_gact.h>
#include <net/tc_act/tc_mirred.h>
#include "i40e_type.h"
#include <linux/avf/virtchnl.h>
#include "i40e_txrx.h"
#define DEFAULT_DEBUG_LEVEL_SHIFT 3
#define PFX "i40evf: "
/* VSI state flags shared with common code */
enum i40evf_vsi_state_t {
__I40E_VSI_DOWN,
/* This must be last as it determines the size of the BITMAP */
__I40E_VSI_STATE_SIZE__,
};
/* dummy struct to make common code less painful */
struct i40e_vsi {
struct i40evf_adapter *back;
struct net_device *netdev;
unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
u16 seid;
u16 id;
DECLARE_BITMAP(state, __I40E_VSI_STATE_SIZE__);
int base_vector;
u16 work_limit;
u16 qs_handle;
void *priv; /* client driver data reference. */
};
/* How many Rx Buffers do we bundle into one write to the hardware ? */
#define I40EVF_RX_BUFFER_WRITE 16 /* Must be power of 2 */
#define I40EVF_DEFAULT_TXD 512
#define I40EVF_DEFAULT_RXD 512
#define I40EVF_MAX_TXD 4096
#define I40EVF_MIN_TXD 64
#define I40EVF_MAX_RXD 4096
#define I40EVF_MIN_RXD 64
#define I40EVF_REQ_DESCRIPTOR_MULTIPLE 32
#define I40EVF_MAX_AQ_BUF_SIZE 4096
#define I40EVF_AQ_LEN 32
#define I40EVF_AQ_MAX_ERR 20 /* times to try before resetting AQ */
#define MAXIMUM_ETHERNET_VLAN_SIZE (VLAN_ETH_FRAME_LEN + ETH_FCS_LEN)
#define I40E_RX_DESC(R, i) (&(((union i40e_32byte_rx_desc *)((R)->desc))[i]))
#define I40E_TX_DESC(R, i) (&(((struct i40e_tx_desc *)((R)->desc))[i]))
#define I40E_TX_CTXTDESC(R, i) \
(&(((struct i40e_tx_context_desc *)((R)->desc))[i]))
#define I40EVF_MAX_REQ_QUEUES 4
#define I40EVF_HKEY_ARRAY_SIZE ((I40E_VFQF_HKEY_MAX_INDEX + 1) * 4)
#define I40EVF_HLUT_ARRAY_SIZE ((I40E_VFQF_HLUT_MAX_INDEX + 1) * 4)
#define I40EVF_MBPS_DIVISOR 125000 /* divisor to convert to Mbps */
/* MAX_MSIX_Q_VECTORS of these are allocated,
* but we only use one per queue-specific vector.
*/
struct i40e_q_vector {
struct i40evf_adapter *adapter;
struct i40e_vsi *vsi;
struct napi_struct napi;
struct i40e_ring_container rx;
struct i40e_ring_container tx;
u32 ring_mask;
u8 itr_countdown; /* when 0 should adjust adaptive ITR */
u8 num_ringpairs; /* total number of ring pairs in vector */
u16 v_idx; /* index in the vsi->q_vector array. */
u16 reg_idx; /* register index of the interrupt */
char name[IFNAMSIZ + 15];
bool arm_wb_state;
cpumask_t affinity_mask;
struct irq_affinity_notify affinity_notify;
};
/* Helper macros to switch between ints/sec and what the register uses.
* And yes, it's the same math going both ways. The lowest value
* supported by all of the i40e hardware is 8.
*/
#define EITR_INTS_PER_SEC_TO_REG(_eitr) \
((_eitr) ? (1000000000 / ((_eitr) * 256)) : 8)
#define EITR_REG_TO_INTS_PER_SEC EITR_INTS_PER_SEC_TO_REG
#define I40EVF_DESC_UNUSED(R) \
((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
(R)->next_to_clean - (R)->next_to_use - 1)
#define I40EVF_RX_DESC_ADV(R, i) \
(&(((union i40e_adv_rx_desc *)((R).desc))[i]))
#define I40EVF_TX_DESC_ADV(R, i) \
(&(((union i40e_adv_tx_desc *)((R).desc))[i]))
#define I40EVF_TX_CTXTDESC_ADV(R, i) \
(&(((struct i40e_adv_tx_context_desc *)((R).desc))[i]))
#define OTHER_VECTOR 1
#define NONQ_VECS (OTHER_VECTOR)
#define MIN_MSIX_Q_VECTORS 1
#define MIN_MSIX_COUNT (MIN_MSIX_Q_VECTORS + NONQ_VECS)
#define I40EVF_QUEUE_END_OF_LIST 0x7FF
#define I40EVF_FREE_VECTOR 0x7FFF
struct i40evf_mac_filter {
struct list_head list;
u8 macaddr[ETH_ALEN];
bool remove; /* filter needs to be removed */
bool add; /* filter needs to be added */
};
struct i40evf_vlan_filter {
struct list_head list;
u16 vlan;
bool remove; /* filter needs to be removed */
bool add; /* filter needs to be added */
};
#define I40EVF_MAX_TRAFFIC_CLASS 4
/* State of traffic class creation */
enum i40evf_tc_state_t {
__I40EVF_TC_INVALID, /* no traffic class, default state */
__I40EVF_TC_RUNNING, /* traffic classes have been created */
};
/* channel info */
struct i40evf_channel_config {
struct virtchnl_channel_info ch_info[I40EVF_MAX_TRAFFIC_CLASS];
enum i40evf_tc_state_t state;
u8 total_qps;
};
/* State of cloud filter */
enum i40evf_cloud_filter_state_t {
__I40EVF_CF_INVALID, /* cloud filter not added */
__I40EVF_CF_ADD_PENDING, /* cloud filter pending add by the PF */
__I40EVF_CF_DEL_PENDING, /* cloud filter pending del by the PF */
__I40EVF_CF_ACTIVE, /* cloud filter is active */
};
/* Driver state. The order of these is important! */
enum i40evf_state_t {
__I40EVF_STARTUP, /* driver loaded, probe complete */
__I40EVF_REMOVE, /* driver is being unloaded */
__I40EVF_INIT_VERSION_CHECK, /* aq msg sent, awaiting reply */
__I40EVF_INIT_GET_RESOURCES, /* aq msg sent, awaiting reply */
__I40EVF_INIT_SW, /* got resources, setting up structs */
__I40EVF_RESETTING, /* in reset */
/* Below here, watchdog is running */
__I40EVF_DOWN, /* ready, can be opened */
__I40EVF_DOWN_PENDING, /* descending, waiting for watchdog */
__I40EVF_TESTING, /* in ethtool self-test */
__I40EVF_RUNNING, /* opened, working */
};
enum i40evf_critical_section_t {
__I40EVF_IN_CRITICAL_TASK, /* cannot be interrupted */
__I40EVF_IN_CLIENT_TASK,
__I40EVF_IN_REMOVE_TASK, /* device being removed */
};
#define I40EVF_CLOUD_FIELD_OMAC 0x01
#define I40EVF_CLOUD_FIELD_IMAC 0x02
#define I40EVF_CLOUD_FIELD_IVLAN 0x04
#define I40EVF_CLOUD_FIELD_TEN_ID 0x08
#define I40EVF_CLOUD_FIELD_IIP 0x10
#define I40EVF_CF_FLAGS_OMAC I40EVF_CLOUD_FIELD_OMAC
#define I40EVF_CF_FLAGS_IMAC I40EVF_CLOUD_FIELD_IMAC
#define I40EVF_CF_FLAGS_IMAC_IVLAN (I40EVF_CLOUD_FIELD_IMAC |\
I40EVF_CLOUD_FIELD_IVLAN)
#define I40EVF_CF_FLAGS_IMAC_TEN_ID (I40EVF_CLOUD_FIELD_IMAC |\
I40EVF_CLOUD_FIELD_TEN_ID)
#define I40EVF_CF_FLAGS_OMAC_TEN_ID_IMAC (I40EVF_CLOUD_FIELD_OMAC |\
I40EVF_CLOUD_FIELD_IMAC |\
I40EVF_CLOUD_FIELD_TEN_ID)
#define I40EVF_CF_FLAGS_IMAC_IVLAN_TEN_ID (I40EVF_CLOUD_FIELD_IMAC |\
I40EVF_CLOUD_FIELD_IVLAN |\
I40EVF_CLOUD_FIELD_TEN_ID)
#define I40EVF_CF_FLAGS_IIP I40E_CLOUD_FIELD_IIP
/* bookkeeping of cloud filters */
struct i40evf_cloud_filter {
enum i40evf_cloud_filter_state_t state;
struct list_head list;
struct virtchnl_filter f;
unsigned long cookie;
bool del; /* filter needs to be deleted */
bool add; /* filter needs to be added */
};
/* board specific private data structure */
struct i40evf_adapter {
struct timer_list watchdog_timer;
struct work_struct reset_task;
struct work_struct adminq_task;
struct delayed_work client_task;
struct delayed_work init_task;
wait_queue_head_t down_waitqueue;
struct i40e_q_vector *q_vectors;
struct list_head vlan_filter_list;
struct list_head mac_filter_list;
/* Lock to protect accesses to MAC and VLAN lists */
spinlock_t mac_vlan_list_lock;
char misc_vector_name[IFNAMSIZ + 9];
int num_active_queues;
int num_req_queues;
/* TX */
struct i40e_ring *tx_rings;
u32 tx_timeout_count;
u32 tx_desc_count;
/* RX */
struct i40e_ring *rx_rings;
u64 hw_csum_rx_error;
u32 rx_desc_count;
int num_msix_vectors;
int num_iwarp_msix;
int iwarp_base_vector;
u32 client_pending;
struct i40e_client_instance *cinst;
struct msix_entry *msix_entries;
u32 flags;
#define I40EVF_FLAG_RX_CSUM_ENABLED BIT(0)
#define I40EVF_FLAG_PF_COMMS_FAILED BIT(3)
#define I40EVF_FLAG_RESET_PENDING BIT(4)
#define I40EVF_FLAG_RESET_NEEDED BIT(5)
#define I40EVF_FLAG_WB_ON_ITR_CAPABLE BIT(6)
#define I40EVF_FLAG_ADDR_SET_BY_PF BIT(8)
#define I40EVF_FLAG_SERVICE_CLIENT_REQUESTED BIT(9)
#define I40EVF_FLAG_CLIENT_NEEDS_OPEN BIT(10)
#define I40EVF_FLAG_CLIENT_NEEDS_CLOSE BIT(11)
#define I40EVF_FLAG_CLIENT_NEEDS_L2_PARAMS BIT(12)
#define I40EVF_FLAG_PROMISC_ON BIT(13)
#define I40EVF_FLAG_ALLMULTI_ON BIT(14)
#define I40EVF_FLAG_LEGACY_RX BIT(15)
#define I40EVF_FLAG_REINIT_ITR_NEEDED BIT(16)
#define I40EVF_FLAG_QUEUES_DISABLED BIT(17)
/* duplicates for common code */
#define I40E_FLAG_DCB_ENABLED 0
#define I40E_FLAG_RX_CSUM_ENABLED I40EVF_FLAG_RX_CSUM_ENABLED
#define I40E_FLAG_LEGACY_RX I40EVF_FLAG_LEGACY_RX
/* flags for admin queue service task */
u32 aq_required;
#define I40EVF_FLAG_AQ_ENABLE_QUEUES BIT(0)
#define I40EVF_FLAG_AQ_DISABLE_QUEUES BIT(1)
#define I40EVF_FLAG_AQ_ADD_MAC_FILTER BIT(2)
#define I40EVF_FLAG_AQ_ADD_VLAN_FILTER BIT(3)
#define I40EVF_FLAG_AQ_DEL_MAC_FILTER BIT(4)
#define I40EVF_FLAG_AQ_DEL_VLAN_FILTER BIT(5)
#define I40EVF_FLAG_AQ_CONFIGURE_QUEUES BIT(6)
#define I40EVF_FLAG_AQ_MAP_VECTORS BIT(7)
#define I40EVF_FLAG_AQ_HANDLE_RESET BIT(8)
#define I40EVF_FLAG_AQ_CONFIGURE_RSS BIT(9) /* direct AQ config */
#define I40EVF_FLAG_AQ_GET_CONFIG BIT(10)
/* Newer style, RSS done by the PF so we can ignore hardware vagaries. */
#define I40EVF_FLAG_AQ_GET_HENA BIT(11)
#define I40EVF_FLAG_AQ_SET_HENA BIT(12)
#define I40EVF_FLAG_AQ_SET_RSS_KEY BIT(13)
#define I40EVF_FLAG_AQ_SET_RSS_LUT BIT(14)
#define I40EVF_FLAG_AQ_REQUEST_PROMISC BIT(15)
#define I40EVF_FLAG_AQ_RELEASE_PROMISC BIT(16)
#define I40EVF_FLAG_AQ_REQUEST_ALLMULTI BIT(17)
#define I40EVF_FLAG_AQ_RELEASE_ALLMULTI BIT(18)
#define I40EVF_FLAG_AQ_ENABLE_VLAN_STRIPPING BIT(19)
#define I40EVF_FLAG_AQ_DISABLE_VLAN_STRIPPING BIT(20)
#define I40EVF_FLAG_AQ_ENABLE_CHANNELS BIT(21)
#define I40EVF_FLAG_AQ_DISABLE_CHANNELS BIT(22)
#define I40EVF_FLAG_AQ_ADD_CLOUD_FILTER BIT(23)
#define I40EVF_FLAG_AQ_DEL_CLOUD_FILTER BIT(24)
/* OS defined structs */
struct net_device *netdev;
struct pci_dev *pdev;
struct i40e_hw hw; /* defined in i40e_type.h */
enum i40evf_state_t state;
unsigned long crit_section;
struct work_struct watchdog_task;
bool netdev_registered;
bool link_up;
enum virtchnl_link_speed link_speed;
enum virtchnl_ops current_op;
#define CLIENT_ALLOWED(_a) ((_a)->vf_res ? \
(_a)->vf_res->vf_cap_flags & \
VIRTCHNL_VF_OFFLOAD_IWARP : \
0)
#define CLIENT_ENABLED(_a) ((_a)->cinst)
/* RSS by the PF should be preferred over RSS via other methods. */
#define RSS_PF(_a) ((_a)->vf_res->vf_cap_flags & \
VIRTCHNL_VF_OFFLOAD_RSS_PF)
#define RSS_AQ(_a) ((_a)->vf_res->vf_cap_flags & \
VIRTCHNL_VF_OFFLOAD_RSS_AQ)
#define RSS_REG(_a) (!((_a)->vf_res->vf_cap_flags & \
(VIRTCHNL_VF_OFFLOAD_RSS_AQ | \
VIRTCHNL_VF_OFFLOAD_RSS_PF)))
#define VLAN_ALLOWED(_a) ((_a)->vf_res->vf_cap_flags & \
VIRTCHNL_VF_OFFLOAD_VLAN)
struct virtchnl_vf_resource *vf_res; /* incl. all VSIs */
struct virtchnl_vsi_resource *vsi_res; /* our LAN VSI */
struct virtchnl_version_info pf_version;
#define PF_IS_V11(_a) (((_a)->pf_version.major == 1) && \
((_a)->pf_version.minor == 1))
u16 msg_enable;
struct i40e_eth_stats current_stats;
struct i40e_vsi vsi;
u32 aq_wait_count;
/* RSS stuff */
u64 hena;
u16 rss_key_size;
u16 rss_lut_size;
u8 *rss_key;
u8 *rss_lut;
/* ADQ related members */
struct i40evf_channel_config ch_config;
u8 num_tc;
struct list_head cloud_filter_list;
/* lock to protect access to the cloud filter list */
spinlock_t cloud_filter_list_lock;
u16 num_cloud_filters;
};
/* Ethtool Private Flags */
/* lan device */
struct i40e_device {
struct list_head list;
struct i40evf_adapter *vf;
};
/* needed by i40evf_ethtool.c */
extern char i40evf_driver_name[];
extern const char i40evf_driver_version[];
int i40evf_up(struct i40evf_adapter *adapter);
void i40evf_down(struct i40evf_adapter *adapter);
int i40evf_process_config(struct i40evf_adapter *adapter);
void i40evf_schedule_reset(struct i40evf_adapter *adapter);
void i40evf_reset(struct i40evf_adapter *adapter);
void i40evf_set_ethtool_ops(struct net_device *netdev);
void i40evf_update_stats(struct i40evf_adapter *adapter);
void i40evf_reset_interrupt_capability(struct i40evf_adapter *adapter);
int i40evf_init_interrupt_scheme(struct i40evf_adapter *adapter);
void i40evf_irq_enable_queues(struct i40evf_adapter *adapter, u32 mask);
void i40evf_free_all_tx_resources(struct i40evf_adapter *adapter);
void i40evf_free_all_rx_resources(struct i40evf_adapter *adapter);
void i40e_napi_add_all(struct i40evf_adapter *adapter);
void i40e_napi_del_all(struct i40evf_adapter *adapter);
int i40evf_send_api_ver(struct i40evf_adapter *adapter);
int i40evf_verify_api_ver(struct i40evf_adapter *adapter);
int i40evf_send_vf_config_msg(struct i40evf_adapter *adapter);
int i40evf_get_vf_config(struct i40evf_adapter *adapter);
void i40evf_irq_enable(struct i40evf_adapter *adapter, bool flush);
void i40evf_configure_queues(struct i40evf_adapter *adapter);
void i40evf_deconfigure_queues(struct i40evf_adapter *adapter);
void i40evf_enable_queues(struct i40evf_adapter *adapter);
void i40evf_disable_queues(struct i40evf_adapter *adapter);
void i40evf_map_queues(struct i40evf_adapter *adapter);
int i40evf_request_queues(struct i40evf_adapter *adapter, int num);
void i40evf_add_ether_addrs(struct i40evf_adapter *adapter);
void i40evf_del_ether_addrs(struct i40evf_adapter *adapter);
void i40evf_add_vlans(struct i40evf_adapter *adapter);
void i40evf_del_vlans(struct i40evf_adapter *adapter);
void i40evf_set_promiscuous(struct i40evf_adapter *adapter, int flags);
void i40evf_request_stats(struct i40evf_adapter *adapter);
void i40evf_request_reset(struct i40evf_adapter *adapter);
void i40evf_get_hena(struct i40evf_adapter *adapter);
void i40evf_set_hena(struct i40evf_adapter *adapter);
void i40evf_set_rss_key(struct i40evf_adapter *adapter);
void i40evf_set_rss_lut(struct i40evf_adapter *adapter);
void i40evf_enable_vlan_stripping(struct i40evf_adapter *adapter);
void i40evf_disable_vlan_stripping(struct i40evf_adapter *adapter);
void i40evf_virtchnl_completion(struct i40evf_adapter *adapter,
enum virtchnl_ops v_opcode,
i40e_status v_retval, u8 *msg, u16 msglen);
int i40evf_config_rss(struct i40evf_adapter *adapter);
int i40evf_lan_add_device(struct i40evf_adapter *adapter);
int i40evf_lan_del_device(struct i40evf_adapter *adapter);
void i40evf_client_subtask(struct i40evf_adapter *adapter);
void i40evf_notify_client_message(struct i40e_vsi *vsi, u8 *msg, u16 len);
void i40evf_notify_client_l2_params(struct i40e_vsi *vsi);
void i40evf_notify_client_open(struct i40e_vsi *vsi);
void i40evf_notify_client_close(struct i40e_vsi *vsi, bool reset);
void i40evf_enable_channels(struct i40evf_adapter *adapter);
void i40evf_disable_channels(struct i40evf_adapter *adapter);
void i40evf_add_cloud_filter(struct i40evf_adapter *adapter);
void i40evf_del_cloud_filter(struct i40evf_adapter *adapter);
#endif /* _I40EVF_H_ */

View File

@ -0,0 +1,15 @@
# SPDX-License-Identifier: GPL-2.0
# Copyright(c) 2013 - 2018 Intel Corporation.
#
# Makefile for the Intel(R) Ethernet Adaptive Virtual Function (iavf)
# driver
#
#
ccflags-y += -I$(src)
subdir-ccflags-y += -I$(src)
obj-$(CONFIG_IAVF) += iavf.o
iavf-objs := iavf_main.o iavf_ethtool.o iavf_virtchnl.o \
iavf_txrx.o iavf_common.o i40e_adminq.o iavf_client.o

View File

@ -1,21 +1,11 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "i40e_status.h"
#include "i40e_type.h"
#include "i40e_register.h"
#include "iavf_status.h"
#include "iavf_type.h"
#include "iavf_register.h"
#include "i40e_adminq.h"
#include "i40e_prototype.h"
/**
* i40e_is_nvm_update_op - return true if this is an NVM update operation
* @desc: API request descriptor
**/
static inline bool i40e_is_nvm_update_op(struct i40e_aq_desc *desc)
{
return (desc->opcode == i40e_aqc_opc_nvm_erase) ||
(desc->opcode == i40e_aqc_opc_nvm_update);
}
#include "iavf_prototype.h"
/**
* i40e_adminq_init_regs - Initialize AdminQ registers
@ -23,44 +13,42 @@ static inline bool i40e_is_nvm_update_op(struct i40e_aq_desc *desc)
*
* This assumes the alloc_asq and alloc_arq functions have already been called
**/
static void i40e_adminq_init_regs(struct i40e_hw *hw)
static void i40e_adminq_init_regs(struct iavf_hw *hw)
{
/* set head and tail registers in our local struct */
if (i40e_is_vf(hw)) {
hw->aq.asq.tail = I40E_VF_ATQT1;
hw->aq.asq.head = I40E_VF_ATQH1;
hw->aq.asq.len = I40E_VF_ATQLEN1;
hw->aq.asq.bal = I40E_VF_ATQBAL1;
hw->aq.asq.bah = I40E_VF_ATQBAH1;
hw->aq.arq.tail = I40E_VF_ARQT1;
hw->aq.arq.head = I40E_VF_ARQH1;
hw->aq.arq.len = I40E_VF_ARQLEN1;
hw->aq.arq.bal = I40E_VF_ARQBAL1;
hw->aq.arq.bah = I40E_VF_ARQBAH1;
}
hw->aq.asq.tail = IAVF_VF_ATQT1;
hw->aq.asq.head = IAVF_VF_ATQH1;
hw->aq.asq.len = IAVF_VF_ATQLEN1;
hw->aq.asq.bal = IAVF_VF_ATQBAL1;
hw->aq.asq.bah = IAVF_VF_ATQBAH1;
hw->aq.arq.tail = IAVF_VF_ARQT1;
hw->aq.arq.head = IAVF_VF_ARQH1;
hw->aq.arq.len = IAVF_VF_ARQLEN1;
hw->aq.arq.bal = IAVF_VF_ARQBAL1;
hw->aq.arq.bah = IAVF_VF_ARQBAH1;
}
/**
* i40e_alloc_adminq_asq_ring - Allocate Admin Queue send rings
* @hw: pointer to the hardware structure
**/
static i40e_status i40e_alloc_adminq_asq_ring(struct i40e_hw *hw)
static iavf_status i40e_alloc_adminq_asq_ring(struct iavf_hw *hw)
{
i40e_status ret_code;
iavf_status ret_code;
ret_code = i40e_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
ret_code = iavf_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
i40e_mem_atq_ring,
(hw->aq.num_asq_entries *
sizeof(struct i40e_aq_desc)),
I40E_ADMINQ_DESC_ALIGNMENT);
IAVF_ADMINQ_DESC_ALIGNMENT);
if (ret_code)
return ret_code;
ret_code = i40e_allocate_virt_mem(hw, &hw->aq.asq.cmd_buf,
ret_code = iavf_allocate_virt_mem(hw, &hw->aq.asq.cmd_buf,
(hw->aq.num_asq_entries *
sizeof(struct i40e_asq_cmd_details)));
if (ret_code) {
i40e_free_dma_mem(hw, &hw->aq.asq.desc_buf);
iavf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
return ret_code;
}
@ -71,15 +59,15 @@ static i40e_status i40e_alloc_adminq_asq_ring(struct i40e_hw *hw)
* i40e_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
* @hw: pointer to the hardware structure
**/
static i40e_status i40e_alloc_adminq_arq_ring(struct i40e_hw *hw)
static iavf_status i40e_alloc_adminq_arq_ring(struct iavf_hw *hw)
{
i40e_status ret_code;
iavf_status ret_code;
ret_code = i40e_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
ret_code = iavf_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
i40e_mem_arq_ring,
(hw->aq.num_arq_entries *
sizeof(struct i40e_aq_desc)),
I40E_ADMINQ_DESC_ALIGNMENT);
IAVF_ADMINQ_DESC_ALIGNMENT);
return ret_code;
}
@ -91,9 +79,9 @@ static i40e_status i40e_alloc_adminq_arq_ring(struct i40e_hw *hw)
* This assumes the posted send buffers have already been cleaned
* and de-allocated
**/
static void i40e_free_adminq_asq(struct i40e_hw *hw)
static void i40e_free_adminq_asq(struct iavf_hw *hw)
{
i40e_free_dma_mem(hw, &hw->aq.asq.desc_buf);
iavf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
}
/**
@ -103,20 +91,20 @@ static void i40e_free_adminq_asq(struct i40e_hw *hw)
* This assumes the posted receive buffers have already been cleaned
* and de-allocated
**/
static void i40e_free_adminq_arq(struct i40e_hw *hw)
static void i40e_free_adminq_arq(struct iavf_hw *hw)
{
i40e_free_dma_mem(hw, &hw->aq.arq.desc_buf);
iavf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
}
/**
* i40e_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
* @hw: pointer to the hardware structure
**/
static i40e_status i40e_alloc_arq_bufs(struct i40e_hw *hw)
static iavf_status i40e_alloc_arq_bufs(struct iavf_hw *hw)
{
i40e_status ret_code;
struct i40e_aq_desc *desc;
struct i40e_dma_mem *bi;
struct iavf_dma_mem *bi;
iavf_status ret_code;
int i;
/* We'll be allocating the buffer info memory first, then we can
@ -124,24 +112,25 @@ static i40e_status i40e_alloc_arq_bufs(struct i40e_hw *hw)
*/
/* buffer_info structures do not need alignment */
ret_code = i40e_allocate_virt_mem(hw, &hw->aq.arq.dma_head,
(hw->aq.num_arq_entries * sizeof(struct i40e_dma_mem)));
ret_code = iavf_allocate_virt_mem(hw, &hw->aq.arq.dma_head,
(hw->aq.num_arq_entries *
sizeof(struct iavf_dma_mem)));
if (ret_code)
goto alloc_arq_bufs;
hw->aq.arq.r.arq_bi = (struct i40e_dma_mem *)hw->aq.arq.dma_head.va;
hw->aq.arq.r.arq_bi = (struct iavf_dma_mem *)hw->aq.arq.dma_head.va;
/* allocate the mapped buffers */
for (i = 0; i < hw->aq.num_arq_entries; i++) {
bi = &hw->aq.arq.r.arq_bi[i];
ret_code = i40e_allocate_dma_mem(hw, bi,
ret_code = iavf_allocate_dma_mem(hw, bi,
i40e_mem_arq_buf,
hw->aq.arq_buf_size,
I40E_ADMINQ_DESC_ALIGNMENT);
IAVF_ADMINQ_DESC_ALIGNMENT);
if (ret_code)
goto unwind_alloc_arq_bufs;
/* now configure the descriptors for use */
desc = I40E_ADMINQ_DESC(hw->aq.arq, i);
desc = IAVF_ADMINQ_DESC(hw->aq.arq, i);
desc->flags = cpu_to_le16(I40E_AQ_FLAG_BUF);
if (hw->aq.arq_buf_size > I40E_AQ_LARGE_BUF)
@ -169,8 +158,8 @@ unwind_alloc_arq_bufs:
/* don't try to free the one that failed... */
i--;
for (; i >= 0; i--)
i40e_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
i40e_free_virt_mem(hw, &hw->aq.arq.dma_head);
iavf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
iavf_free_virt_mem(hw, &hw->aq.arq.dma_head);
return ret_code;
}
@ -179,26 +168,27 @@ unwind_alloc_arq_bufs:
* i40e_alloc_asq_bufs - Allocate empty buffer structs for the send queue
* @hw: pointer to the hardware structure
**/
static i40e_status i40e_alloc_asq_bufs(struct i40e_hw *hw)
static iavf_status i40e_alloc_asq_bufs(struct iavf_hw *hw)
{
i40e_status ret_code;
struct i40e_dma_mem *bi;
struct iavf_dma_mem *bi;
iavf_status ret_code;
int i;
/* No mapped memory needed yet, just the buffer info structures */
ret_code = i40e_allocate_virt_mem(hw, &hw->aq.asq.dma_head,
(hw->aq.num_asq_entries * sizeof(struct i40e_dma_mem)));
ret_code = iavf_allocate_virt_mem(hw, &hw->aq.asq.dma_head,
(hw->aq.num_asq_entries *
sizeof(struct iavf_dma_mem)));
if (ret_code)
goto alloc_asq_bufs;
hw->aq.asq.r.asq_bi = (struct i40e_dma_mem *)hw->aq.asq.dma_head.va;
hw->aq.asq.r.asq_bi = (struct iavf_dma_mem *)hw->aq.asq.dma_head.va;
/* allocate the mapped buffers */
for (i = 0; i < hw->aq.num_asq_entries; i++) {
bi = &hw->aq.asq.r.asq_bi[i];
ret_code = i40e_allocate_dma_mem(hw, bi,
ret_code = iavf_allocate_dma_mem(hw, bi,
i40e_mem_asq_buf,
hw->aq.asq_buf_size,
I40E_ADMINQ_DESC_ALIGNMENT);
IAVF_ADMINQ_DESC_ALIGNMENT);
if (ret_code)
goto unwind_alloc_asq_bufs;
}
@ -209,8 +199,8 @@ unwind_alloc_asq_bufs:
/* don't try to free the one that failed... */
i--;
for (; i >= 0; i--)
i40e_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
i40e_free_virt_mem(hw, &hw->aq.asq.dma_head);
iavf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
iavf_free_virt_mem(hw, &hw->aq.asq.dma_head);
return ret_code;
}
@ -219,42 +209,42 @@ unwind_alloc_asq_bufs:
* i40e_free_arq_bufs - Free receive queue buffer info elements
* @hw: pointer to the hardware structure
**/
static void i40e_free_arq_bufs(struct i40e_hw *hw)
static void i40e_free_arq_bufs(struct iavf_hw *hw)
{
int i;
/* free descriptors */
for (i = 0; i < hw->aq.num_arq_entries; i++)
i40e_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
iavf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
/* free the descriptor memory */
i40e_free_dma_mem(hw, &hw->aq.arq.desc_buf);
iavf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
/* free the dma header */
i40e_free_virt_mem(hw, &hw->aq.arq.dma_head);
iavf_free_virt_mem(hw, &hw->aq.arq.dma_head);
}
/**
* i40e_free_asq_bufs - Free send queue buffer info elements
* @hw: pointer to the hardware structure
**/
static void i40e_free_asq_bufs(struct i40e_hw *hw)
static void i40e_free_asq_bufs(struct iavf_hw *hw)
{
int i;
/* only unmap if the address is non-NULL */
for (i = 0; i < hw->aq.num_asq_entries; i++)
if (hw->aq.asq.r.asq_bi[i].pa)
i40e_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
iavf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
/* free the buffer info list */
i40e_free_virt_mem(hw, &hw->aq.asq.cmd_buf);
iavf_free_virt_mem(hw, &hw->aq.asq.cmd_buf);
/* free the descriptor memory */
i40e_free_dma_mem(hw, &hw->aq.asq.desc_buf);
iavf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
/* free the dma header */
i40e_free_virt_mem(hw, &hw->aq.asq.dma_head);
iavf_free_virt_mem(hw, &hw->aq.asq.dma_head);
}
/**
@ -263,9 +253,9 @@ static void i40e_free_asq_bufs(struct i40e_hw *hw)
*
* Configure base address and length registers for the transmit queue
**/
static i40e_status i40e_config_asq_regs(struct i40e_hw *hw)
static iavf_status i40e_config_asq_regs(struct iavf_hw *hw)
{
i40e_status ret_code = 0;
iavf_status ret_code = 0;
u32 reg = 0;
/* Clear Head and Tail */
@ -274,7 +264,7 @@ static i40e_status i40e_config_asq_regs(struct i40e_hw *hw)
/* set starting point */
wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
I40E_VF_ATQLEN1_ATQENABLE_MASK));
IAVF_VF_ATQLEN1_ATQENABLE_MASK));
wr32(hw, hw->aq.asq.bal, lower_32_bits(hw->aq.asq.desc_buf.pa));
wr32(hw, hw->aq.asq.bah, upper_32_bits(hw->aq.asq.desc_buf.pa));
@ -292,9 +282,9 @@ static i40e_status i40e_config_asq_regs(struct i40e_hw *hw)
*
* Configure base address and length registers for the receive (event) queue
**/
static i40e_status i40e_config_arq_regs(struct i40e_hw *hw)
static iavf_status i40e_config_arq_regs(struct iavf_hw *hw)
{
i40e_status ret_code = 0;
iavf_status ret_code = 0;
u32 reg = 0;
/* Clear Head and Tail */
@ -303,7 +293,7 @@ static i40e_status i40e_config_arq_regs(struct i40e_hw *hw)
/* set starting point */
wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
I40E_VF_ARQLEN1_ARQENABLE_MASK));
IAVF_VF_ARQLEN1_ARQENABLE_MASK));
wr32(hw, hw->aq.arq.bal, lower_32_bits(hw->aq.arq.desc_buf.pa));
wr32(hw, hw->aq.arq.bah, upper_32_bits(hw->aq.arq.desc_buf.pa));
@ -331,9 +321,9 @@ static i40e_status i40e_config_arq_regs(struct i40e_hw *hw)
* Do *NOT* hold the lock when calling this as the memory allocation routines
* called are not going to be atomic context safe
**/
static i40e_status i40e_init_asq(struct i40e_hw *hw)
static iavf_status i40e_init_asq(struct iavf_hw *hw)
{
i40e_status ret_code = 0;
iavf_status ret_code = 0;
if (hw->aq.asq.count > 0) {
/* queue already initialized */
@ -390,9 +380,9 @@ init_adminq_exit:
* Do *NOT* hold the lock when calling this as the memory allocation routines
* called are not going to be atomic context safe
**/
static i40e_status i40e_init_arq(struct i40e_hw *hw)
static iavf_status i40e_init_arq(struct iavf_hw *hw)
{
i40e_status ret_code = 0;
iavf_status ret_code = 0;
if (hw->aq.arq.count > 0) {
/* queue already initialized */
@ -442,9 +432,9 @@ init_adminq_exit:
*
* The main shutdown routine for the Admin Send Queue
**/
static i40e_status i40e_shutdown_asq(struct i40e_hw *hw)
static iavf_status i40e_shutdown_asq(struct iavf_hw *hw)
{
i40e_status ret_code = 0;
iavf_status ret_code = 0;
mutex_lock(&hw->aq.asq_mutex);
@ -476,9 +466,9 @@ shutdown_asq_out:
*
* The main shutdown routine for the Admin Receive Queue
**/
static i40e_status i40e_shutdown_arq(struct i40e_hw *hw)
static iavf_status i40e_shutdown_arq(struct iavf_hw *hw)
{
i40e_status ret_code = 0;
iavf_status ret_code = 0;
mutex_lock(&hw->aq.arq_mutex);
@ -505,7 +495,7 @@ shutdown_arq_out:
}
/**
* i40evf_init_adminq - main initialization routine for Admin Queue
* iavf_init_adminq - main initialization routine for Admin Queue
* @hw: pointer to the hardware structure
*
* Prior to calling this function, drivers *MUST* set the following fields
@ -515,9 +505,9 @@ shutdown_arq_out:
* - hw->aq.arq_buf_size
* - hw->aq.asq_buf_size
**/
i40e_status i40evf_init_adminq(struct i40e_hw *hw)
iavf_status iavf_init_adminq(struct iavf_hw *hw)
{
i40e_status ret_code;
iavf_status ret_code;
/* verify input for valid configuration */
if ((hw->aq.num_arq_entries == 0) ||
@ -556,22 +546,19 @@ init_adminq_exit:
}
/**
* i40evf_shutdown_adminq - shutdown routine for the Admin Queue
* iavf_shutdown_adminq - shutdown routine for the Admin Queue
* @hw: pointer to the hardware structure
**/
i40e_status i40evf_shutdown_adminq(struct i40e_hw *hw)
iavf_status iavf_shutdown_adminq(struct iavf_hw *hw)
{
i40e_status ret_code = 0;
iavf_status ret_code = 0;
if (i40evf_check_asq_alive(hw))
i40evf_aq_queue_shutdown(hw, true);
if (iavf_check_asq_alive(hw))
iavf_aq_queue_shutdown(hw, true);
i40e_shutdown_asq(hw);
i40e_shutdown_arq(hw);
if (hw->nvm_buff.va)
i40e_free_virt_mem(hw, &hw->nvm_buff);
return ret_code;
}
@ -581,18 +568,18 @@ i40e_status i40evf_shutdown_adminq(struct i40e_hw *hw)
*
* returns the number of free desc
**/
static u16 i40e_clean_asq(struct i40e_hw *hw)
static u16 i40e_clean_asq(struct iavf_hw *hw)
{
struct i40e_adminq_ring *asq = &(hw->aq.asq);
struct iavf_adminq_ring *asq = &hw->aq.asq;
struct i40e_asq_cmd_details *details;
u16 ntc = asq->next_to_clean;
struct i40e_aq_desc desc_cb;
struct i40e_aq_desc *desc;
desc = I40E_ADMINQ_DESC(*asq, ntc);
desc = IAVF_ADMINQ_DESC(*asq, ntc);
details = I40E_ADMINQ_DETAILS(*asq, ntc);
while (rd32(hw, hw->aq.asq.head) != ntc) {
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
"ntc %d head %d.\n", ntc, rd32(hw, hw->aq.asq.head));
if (details->callback) {
@ -607,33 +594,32 @@ static u16 i40e_clean_asq(struct i40e_hw *hw)
ntc++;
if (ntc == asq->count)
ntc = 0;
desc = I40E_ADMINQ_DESC(*asq, ntc);
desc = IAVF_ADMINQ_DESC(*asq, ntc);
details = I40E_ADMINQ_DETAILS(*asq, ntc);
}
asq->next_to_clean = ntc;
return I40E_DESC_UNUSED(asq);
return IAVF_DESC_UNUSED(asq);
}
/**
* i40evf_asq_done - check if FW has processed the Admin Send Queue
* iavf_asq_done - check if FW has processed the Admin Send Queue
* @hw: pointer to the hw struct
*
* Returns true if the firmware has processed all descriptors on the
* admin send queue. Returns false if there are still requests pending.
**/
bool i40evf_asq_done(struct i40e_hw *hw)
bool iavf_asq_done(struct iavf_hw *hw)
{
/* AQ designers suggest use of head for better
* timing reliability than DD bit
*/
return rd32(hw, hw->aq.asq.head) == hw->aq.asq.next_to_use;
}
/**
* i40evf_asq_send_command - send command to Admin Queue
* iavf_asq_send_command - send command to Admin Queue
* @hw: pointer to the hw struct
* @desc: prefilled descriptor describing the command (non DMA mem)
* @buff: buffer to use for indirect commands
@ -643,24 +629,23 @@ bool i40evf_asq_done(struct i40e_hw *hw)
* This is the main send command driver routine for the Admin Queue send
* queue. It runs the queue, cleans the queue, etc
**/
i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
struct i40e_aq_desc *desc,
void *buff, /* can be NULL */
u16 buff_size,
struct i40e_asq_cmd_details *cmd_details)
iavf_status iavf_asq_send_command(struct iavf_hw *hw, struct i40e_aq_desc *desc,
void *buff, /* can be NULL */
u16 buff_size,
struct i40e_asq_cmd_details *cmd_details)
{
i40e_status status = 0;
struct i40e_dma_mem *dma_buff = NULL;
struct iavf_dma_mem *dma_buff = NULL;
struct i40e_asq_cmd_details *details;
struct i40e_aq_desc *desc_on_ring;
bool cmd_completed = false;
iavf_status status = 0;
u16 retval = 0;
u32 val = 0;
mutex_lock(&hw->aq.asq_mutex);
if (hw->aq.asq.count == 0) {
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
"AQTX: Admin queue not initialized.\n");
status = I40E_ERR_QUEUE_EMPTY;
goto asq_send_command_error;
@ -670,7 +655,7 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
val = rd32(hw, hw->aq.asq.head);
if (val >= hw->aq.num_asq_entries) {
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
"AQTX: head overrun at %d\n", val);
status = I40E_ERR_QUEUE_EMPTY;
goto asq_send_command_error;
@ -699,8 +684,8 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
desc->flags |= cpu_to_le16(details->flags_ena);
if (buff_size > hw->aq.asq_buf_size) {
i40e_debug(hw,
I40E_DEBUG_AQ_MESSAGE,
iavf_debug(hw,
IAVF_DEBUG_AQ_MESSAGE,
"AQTX: Invalid buffer size: %d.\n",
buff_size);
status = I40E_ERR_INVALID_SIZE;
@ -708,8 +693,8 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
}
if (details->postpone && !details->async) {
i40e_debug(hw,
I40E_DEBUG_AQ_MESSAGE,
iavf_debug(hw,
IAVF_DEBUG_AQ_MESSAGE,
"AQTX: Async flag not set along with postpone flag");
status = I40E_ERR_PARAM;
goto asq_send_command_error;
@ -723,22 +708,22 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
* in case of asynchronous completions
*/
if (i40e_clean_asq(hw) == 0) {
i40e_debug(hw,
I40E_DEBUG_AQ_MESSAGE,
iavf_debug(hw,
IAVF_DEBUG_AQ_MESSAGE,
"AQTX: Error queue is full.\n");
status = I40E_ERR_ADMIN_QUEUE_FULL;
goto asq_send_command_error;
}
/* initialize the temp desc pointer with the right desc */
desc_on_ring = I40E_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
desc_on_ring = IAVF_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
/* if the desc is available copy the temp desc to the right place */
*desc_on_ring = *desc;
/* if buff is not NULL assume indirect command */
if (buff != NULL) {
dma_buff = &(hw->aq.asq.r.asq_bi[hw->aq.asq.next_to_use]);
if (buff) {
dma_buff = &hw->aq.asq.r.asq_bi[hw->aq.asq.next_to_use];
/* copy the user buff into the respective DMA buff */
memcpy(dma_buff->va, buff, buff_size);
desc_on_ring->datalen = cpu_to_le16(buff_size);
@ -753,9 +738,9 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
}
/* bump the tail */
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
i40evf_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
buff, buff_size);
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
iavf_debug_aq(hw, IAVF_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
buff, buff_size);
(hw->aq.asq.next_to_use)++;
if (hw->aq.asq.next_to_use == hw->aq.asq.count)
hw->aq.asq.next_to_use = 0;
@ -772,7 +757,7 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
/* AQ designers suggest use of head for better
* timing reliability than DD bit
*/
if (i40evf_asq_done(hw))
if (iavf_asq_done(hw))
break;
udelay(50);
total_delay += 50;
@ -780,14 +765,14 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
}
/* if ready, copy the desc back to temp */
if (i40evf_asq_done(hw)) {
if (iavf_asq_done(hw)) {
*desc = *desc_on_ring;
if (buff != NULL)
if (buff)
memcpy(buff, dma_buff->va, buff_size);
retval = le16_to_cpu(desc->retval);
if (retval != 0) {
i40e_debug(hw,
I40E_DEBUG_AQ_MESSAGE,
iavf_debug(hw,
IAVF_DEBUG_AQ_MESSAGE,
"AQTX: Command completed with error 0x%X.\n",
retval);
@ -804,10 +789,9 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
hw->aq.asq_last_status = (enum i40e_admin_queue_err)retval;
}
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
"AQTX: desc and buffer writeback:\n");
i40evf_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, buff,
buff_size);
iavf_debug_aq(hw, IAVF_DEBUG_AQ_COMMAND, (void *)desc, buff, buff_size);
/* save writeback aq if requested */
if (details->wb_desc)
@ -816,12 +800,12 @@ i40e_status i40evf_asq_send_command(struct i40e_hw *hw,
/* update the error if time out occurred */
if ((!cmd_completed) &&
(!details->async && !details->postpone)) {
if (rd32(hw, hw->aq.asq.len) & I40E_VF_ATQLEN1_ATQCRIT_MASK) {
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
if (rd32(hw, hw->aq.asq.len) & IAVF_VF_ATQLEN1_ATQCRIT_MASK) {
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
"AQTX: AQ Critical error.\n");
status = I40E_ERR_ADMIN_QUEUE_CRITICAL_ERROR;
} else {
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
"AQTX: Writeback timeout.\n");
status = I40E_ERR_ADMIN_QUEUE_TIMEOUT;
}
@ -833,14 +817,13 @@ asq_send_command_error:
}
/**
* i40evf_fill_default_direct_cmd_desc - AQ descriptor helper function
* iavf_fill_default_direct_cmd_desc - AQ descriptor helper function
* @desc: pointer to the temp descriptor (non DMA mem)
* @opcode: the opcode can be used to decide which flags to turn off or on
*
* Fill the desc with default values
**/
void i40evf_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc,
u16 opcode)
void iavf_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc, u16 opcode)
{
/* zero out the desc */
memset((void *)desc, 0, sizeof(struct i40e_aq_desc));
@ -849,7 +832,7 @@ void i40evf_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc,
}
/**
* i40evf_clean_arq_element
* iavf_clean_arq_element
* @hw: pointer to the hw struct
* @e: event info from the receive descriptor, includes any buffers
* @pending: number of events that could be left to process
@ -858,14 +841,14 @@ void i40evf_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc,
* the contents through e. It can also return how many events are
* left to process through 'pending'
**/
i40e_status i40evf_clean_arq_element(struct i40e_hw *hw,
struct i40e_arq_event_info *e,
u16 *pending)
iavf_status iavf_clean_arq_element(struct iavf_hw *hw,
struct i40e_arq_event_info *e,
u16 *pending)
{
i40e_status ret_code = 0;
u16 ntc = hw->aq.arq.next_to_clean;
struct i40e_aq_desc *desc;
struct i40e_dma_mem *bi;
iavf_status ret_code = 0;
struct iavf_dma_mem *bi;
u16 desc_idx;
u16 datalen;
u16 flags;
@ -878,14 +861,14 @@ i40e_status i40evf_clean_arq_element(struct i40e_hw *hw,
mutex_lock(&hw->aq.arq_mutex);
if (hw->aq.arq.count == 0) {
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
"AQRX: Admin queue not initialized.\n");
ret_code = I40E_ERR_QUEUE_EMPTY;
goto clean_arq_element_err;
}
/* set next_to_use to head */
ntu = rd32(hw, hw->aq.arq.head) & I40E_VF_ARQH1_ARQH_MASK;
ntu = rd32(hw, hw->aq.arq.head) & IAVF_VF_ARQH1_ARQH_MASK;
if (ntu == ntc) {
/* nothing to do - shouldn't need to update ring's values */
ret_code = I40E_ERR_ADMIN_QUEUE_NO_WORK;
@ -893,7 +876,7 @@ i40e_status i40evf_clean_arq_element(struct i40e_hw *hw,
}
/* now clean the next descriptor */
desc = I40E_ADMINQ_DESC(hw->aq.arq, ntc);
desc = IAVF_ADMINQ_DESC(hw->aq.arq, ntc);
desc_idx = ntc;
hw->aq.arq_last_status =
@ -901,8 +884,8 @@ i40e_status i40evf_clean_arq_element(struct i40e_hw *hw,
flags = le16_to_cpu(desc->flags);
if (flags & I40E_AQ_FLAG_ERR) {
ret_code = I40E_ERR_ADMIN_QUEUE_ERROR;
i40e_debug(hw,
I40E_DEBUG_AQ_MESSAGE,
iavf_debug(hw,
IAVF_DEBUG_AQ_MESSAGE,
"AQRX: Event received with error 0x%X.\n",
hw->aq.arq_last_status);
}
@ -910,13 +893,13 @@ i40e_status i40evf_clean_arq_element(struct i40e_hw *hw,
e->desc = *desc;
datalen = le16_to_cpu(desc->datalen);
e->msg_len = min(datalen, e->buf_len);
if (e->msg_buf != NULL && (e->msg_len != 0))
if (e->msg_buf && (e->msg_len != 0))
memcpy(e->msg_buf, hw->aq.arq.r.arq_bi[desc_idx].va,
e->msg_len);
i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n");
i40evf_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf,
hw->aq.arq_buf_size);
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n");
iavf_debug_aq(hw, IAVF_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf,
hw->aq.arq_buf_size);
/* Restore the original datalen and buffer address in the desc,
* FW updates datalen to indicate the event message
@ -943,7 +926,7 @@ i40e_status i40evf_clean_arq_element(struct i40e_hw *hw,
clean_arq_element_out:
/* Set pending if needed, unlock and return */
if (pending != NULL)
if (pending)
*pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc);
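/* Illustrative arithmetic (hypothetical values, not in the original
 * source): with a 32-entry ARQ, ntc == 30 and a hardware head of
 * ntu == 2, the expression above yields 32 + (2 - 30) == 4 events
 * still left to process.
 */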
clean_arq_element_err:
@ -951,17 +934,3 @@ clean_arq_element_err:
return ret_code;
}
void i40evf_resume_aq(struct i40e_hw *hw)
{
/* Registers are reset after PF reset */
hw->aq.asq.next_to_use = 0;
hw->aq.asq.next_to_clean = 0;
i40e_config_asq_regs(hw);
hw->aq.arq.next_to_use = 0;
hw->aq.arq.next_to_clean = 0;
i40e_config_arq_regs(hw);
}

View File

@ -1,26 +1,26 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _I40E_ADMINQ_H_
#define _I40E_ADMINQ_H_
#ifndef _IAVF_ADMINQ_H_
#define _IAVF_ADMINQ_H_
#include "i40e_osdep.h"
#include "i40e_status.h"
#include "iavf_osdep.h"
#include "iavf_status.h"
#include "i40e_adminq_cmd.h"
#define I40E_ADMINQ_DESC(R, i) \
#define IAVF_ADMINQ_DESC(R, i) \
(&(((struct i40e_aq_desc *)((R).desc_buf.va))[i]))
#define I40E_ADMINQ_DESC_ALIGNMENT 4096
#define IAVF_ADMINQ_DESC_ALIGNMENT 4096
struct i40e_adminq_ring {
struct i40e_virt_mem dma_head; /* space for dma structures */
struct i40e_dma_mem desc_buf; /* descriptor ring memory */
struct i40e_virt_mem cmd_buf; /* command buffer memory */
struct iavf_adminq_ring {
struct iavf_virt_mem dma_head; /* space for dma structures */
struct iavf_dma_mem desc_buf; /* descriptor ring memory */
struct iavf_virt_mem cmd_buf; /* command buffer memory */
union {
struct i40e_dma_mem *asq_bi;
struct i40e_dma_mem *arq_bi;
struct iavf_dma_mem *asq_bi;
struct iavf_dma_mem *arq_bi;
} r;
u16 count; /* Number of descriptors */
@ -61,9 +61,9 @@ struct i40e_arq_event_info {
};
/* Admin Queue information */
struct i40e_adminq_info {
struct i40e_adminq_ring arq; /* receive queue */
struct i40e_adminq_ring asq; /* send queue */
struct iavf_adminq_info {
struct iavf_adminq_ring arq; /* receive queue */
struct iavf_adminq_ring asq; /* send queue */
u32 asq_cmd_timeout; /* send queue cmd write back timeout */
u16 num_arq_entries; /* receive queue depth */
u16 num_asq_entries; /* send queue depth */
@ -130,7 +130,6 @@ static inline int i40e_aq_rc_to_posix(int aq_ret, int aq_rc)
#define I40E_AQ_LARGE_BUF 512
#define I40E_ASQ_CMD_TIMEOUT 250000 /* usecs */
void i40evf_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc,
u16 opcode);
void iavf_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc, u16 opcode);
#endif /* _I40E_ADMINQ_H_ */
#endif /* _IAVF_ADMINQ_H_ */

View File

@ -0,0 +1,530 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _I40E_ADMINQ_CMD_H_
#define _I40E_ADMINQ_CMD_H_
/* This header file defines the i40e Admin Queue commands and is shared between
* i40e Firmware and Software. Do not change the names in this file to IAVF
* because this file should be diff-able against the i40e version, even
* though many parts have been removed in this VF version.
*
* This file needs to comply with the Linux Kernel coding style.
*/
#define I40E_FW_API_VERSION_MAJOR 0x0001
#define I40E_FW_API_VERSION_MINOR_X722 0x0005
#define I40E_FW_API_VERSION_MINOR_X710 0x0007
#define I40E_FW_MINOR_VERSION(_h) ((_h)->mac.type == I40E_MAC_XL710 ? \
I40E_FW_API_VERSION_MINOR_X710 : \
I40E_FW_API_VERSION_MINOR_X722)
/* API version 1.7 implements additional link and PHY-specific APIs */
#define I40E_MINOR_VER_GET_LINK_INFO_XL710 0x0007
struct i40e_aq_desc {
__le16 flags;
__le16 opcode;
__le16 datalen;
__le16 retval;
__le32 cookie_high;
__le32 cookie_low;
union {
struct {
__le32 param0;
__le32 param1;
__le32 param2;
__le32 param3;
} internal;
struct {
__le32 param0;
__le32 param1;
__le32 addr_high;
__le32 addr_low;
} external;
u8 raw[16];
} params;
};
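/* Size note (illustrative): the descriptor is 32 bytes in total: four
 * 16-bit header fields (8 bytes), two 32-bit cookies (8 bytes), and a
 * 16-byte params area that the command structures below overlay.
 */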
/* Flags sub-structure
* |0 |1 |2 |3 |4 |5 |6 |7 |8 |9 |10 |11 |12 |13 |14 |15 |
* |DD |CMP|ERR|VFE| * * RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
*/
/* command flags and offsets*/
#define I40E_AQ_FLAG_DD_SHIFT 0
#define I40E_AQ_FLAG_CMP_SHIFT 1
#define I40E_AQ_FLAG_ERR_SHIFT 2
#define I40E_AQ_FLAG_VFE_SHIFT 3
#define I40E_AQ_FLAG_LB_SHIFT 9
#define I40E_AQ_FLAG_RD_SHIFT 10
#define I40E_AQ_FLAG_VFC_SHIFT 11
#define I40E_AQ_FLAG_BUF_SHIFT 12
#define I40E_AQ_FLAG_SI_SHIFT 13
#define I40E_AQ_FLAG_EI_SHIFT 14
#define I40E_AQ_FLAG_FE_SHIFT 15
#define I40E_AQ_FLAG_DD BIT(I40E_AQ_FLAG_DD_SHIFT) /* 0x1 */
#define I40E_AQ_FLAG_CMP BIT(I40E_AQ_FLAG_CMP_SHIFT) /* 0x2 */
#define I40E_AQ_FLAG_ERR BIT(I40E_AQ_FLAG_ERR_SHIFT) /* 0x4 */
#define I40E_AQ_FLAG_VFE BIT(I40E_AQ_FLAG_VFE_SHIFT) /* 0x8 */
#define I40E_AQ_FLAG_LB BIT(I40E_AQ_FLAG_LB_SHIFT) /* 0x200 */
#define I40E_AQ_FLAG_RD BIT(I40E_AQ_FLAG_RD_SHIFT) /* 0x400 */
#define I40E_AQ_FLAG_VFC BIT(I40E_AQ_FLAG_VFC_SHIFT) /* 0x800 */
#define I40E_AQ_FLAG_BUF BIT(I40E_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
#define I40E_AQ_FLAG_SI BIT(I40E_AQ_FLAG_SI_SHIFT) /* 0x2000 */
#define I40E_AQ_FLAG_EI BIT(I40E_AQ_FLAG_EI_SHIFT) /* 0x4000 */
#define I40E_AQ_FLAG_FE BIT(I40E_AQ_FLAG_FE_SHIFT) /* 0x8000 */
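/* Worked example (illustrative): an indirect command that attaches a
 * buffer sets I40E_AQ_FLAG_BUF (0x1000); if the buffer is larger than
 * I40E_AQ_LARGE_BUF it also sets I40E_AQ_FLAG_LB (0x200), as done for
 * the ARQ buffers in i40e_alloc_arq_bufs() earlier in this series,
 * giving desc->flags == 0x1000 | 0x200 == 0x1200.
 */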
/* error codes */
enum i40e_admin_queue_err {
I40E_AQ_RC_OK = 0, /* success */
I40E_AQ_RC_EPERM = 1, /* Operation not permitted */
I40E_AQ_RC_ENOENT = 2, /* No such element */
I40E_AQ_RC_ESRCH = 3, /* Bad opcode */
I40E_AQ_RC_EINTR = 4, /* operation interrupted */
I40E_AQ_RC_EIO = 5, /* I/O error */
I40E_AQ_RC_ENXIO = 6, /* No such resource */
I40E_AQ_RC_E2BIG = 7, /* Arg too long */
I40E_AQ_RC_EAGAIN = 8, /* Try again */
I40E_AQ_RC_ENOMEM = 9, /* Out of memory */
I40E_AQ_RC_EACCES = 10, /* Permission denied */
I40E_AQ_RC_EFAULT = 11, /* Bad address */
I40E_AQ_RC_EBUSY = 12, /* Device or resource busy */
I40E_AQ_RC_EEXIST = 13, /* object already exists */
I40E_AQ_RC_EINVAL = 14, /* Invalid argument */
I40E_AQ_RC_ENOTTY = 15, /* Not a typewriter */
I40E_AQ_RC_ENOSPC = 16, /* No space left or alloc failure */
I40E_AQ_RC_ENOSYS = 17, /* Function not implemented */
I40E_AQ_RC_ERANGE = 18, /* Parameter out of range */
I40E_AQ_RC_EFLUSHED = 19, /* Cmd flushed due to prev cmd error */
I40E_AQ_RC_BAD_ADDR = 20, /* Descriptor contains a bad pointer */
I40E_AQ_RC_EMODE = 21, /* Op not allowed in current dev mode */
I40E_AQ_RC_EFBIG = 22, /* File too large */
};
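/* Usage note (hedged): callers generally do not propagate these codes
 * directly; the i40e_aq_rc_to_posix() helper in i40e_adminq.h translates
 * them to kernel errnos, e.g. I40E_AQ_RC_ENOMEM becomes -ENOMEM.
 */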
/* Admin Queue command opcodes */
enum i40e_admin_queue_opc {
/* aq commands */
i40e_aqc_opc_get_version = 0x0001,
i40e_aqc_opc_driver_version = 0x0002,
i40e_aqc_opc_queue_shutdown = 0x0003,
i40e_aqc_opc_set_pf_context = 0x0004,
/* resource ownership */
i40e_aqc_opc_request_resource = 0x0008,
i40e_aqc_opc_release_resource = 0x0009,
i40e_aqc_opc_list_func_capabilities = 0x000A,
i40e_aqc_opc_list_dev_capabilities = 0x000B,
/* Proxy commands */
i40e_aqc_opc_set_proxy_config = 0x0104,
i40e_aqc_opc_set_ns_proxy_table_entry = 0x0105,
/* LAA */
i40e_aqc_opc_mac_address_read = 0x0107,
i40e_aqc_opc_mac_address_write = 0x0108,
/* PXE */
i40e_aqc_opc_clear_pxe_mode = 0x0110,
/* WoL commands */
i40e_aqc_opc_set_wol_filter = 0x0120,
i40e_aqc_opc_get_wake_reason = 0x0121,
/* internal switch commands */
i40e_aqc_opc_get_switch_config = 0x0200,
i40e_aqc_opc_add_statistics = 0x0201,
i40e_aqc_opc_remove_statistics = 0x0202,
i40e_aqc_opc_set_port_parameters = 0x0203,
i40e_aqc_opc_get_switch_resource_alloc = 0x0204,
i40e_aqc_opc_set_switch_config = 0x0205,
i40e_aqc_opc_rx_ctl_reg_read = 0x0206,
i40e_aqc_opc_rx_ctl_reg_write = 0x0207,
i40e_aqc_opc_add_vsi = 0x0210,
i40e_aqc_opc_update_vsi_parameters = 0x0211,
i40e_aqc_opc_get_vsi_parameters = 0x0212,
i40e_aqc_opc_add_pv = 0x0220,
i40e_aqc_opc_update_pv_parameters = 0x0221,
i40e_aqc_opc_get_pv_parameters = 0x0222,
i40e_aqc_opc_add_veb = 0x0230,
i40e_aqc_opc_update_veb_parameters = 0x0231,
i40e_aqc_opc_get_veb_parameters = 0x0232,
i40e_aqc_opc_delete_element = 0x0243,
i40e_aqc_opc_add_macvlan = 0x0250,
i40e_aqc_opc_remove_macvlan = 0x0251,
i40e_aqc_opc_add_vlan = 0x0252,
i40e_aqc_opc_remove_vlan = 0x0253,
i40e_aqc_opc_set_vsi_promiscuous_modes = 0x0254,
i40e_aqc_opc_add_tag = 0x0255,
i40e_aqc_opc_remove_tag = 0x0256,
i40e_aqc_opc_add_multicast_etag = 0x0257,
i40e_aqc_opc_remove_multicast_etag = 0x0258,
i40e_aqc_opc_update_tag = 0x0259,
i40e_aqc_opc_add_control_packet_filter = 0x025A,
i40e_aqc_opc_remove_control_packet_filter = 0x025B,
i40e_aqc_opc_add_cloud_filters = 0x025C,
i40e_aqc_opc_remove_cloud_filters = 0x025D,
i40e_aqc_opc_clear_wol_switch_filters = 0x025E,
i40e_aqc_opc_add_mirror_rule = 0x0260,
i40e_aqc_opc_delete_mirror_rule = 0x0261,
/* Dynamic Device Personalization */
i40e_aqc_opc_write_personalization_profile = 0x0270,
i40e_aqc_opc_get_personalization_profile_list = 0x0271,
/* DCB commands */
i40e_aqc_opc_dcb_ignore_pfc = 0x0301,
i40e_aqc_opc_dcb_updated = 0x0302,
i40e_aqc_opc_set_dcb_parameters = 0x0303,
/* TX scheduler */
i40e_aqc_opc_configure_vsi_bw_limit = 0x0400,
i40e_aqc_opc_configure_vsi_ets_sla_bw_limit = 0x0406,
i40e_aqc_opc_configure_vsi_tc_bw = 0x0407,
i40e_aqc_opc_query_vsi_bw_config = 0x0408,
i40e_aqc_opc_query_vsi_ets_sla_config = 0x040A,
i40e_aqc_opc_configure_switching_comp_bw_limit = 0x0410,
i40e_aqc_opc_enable_switching_comp_ets = 0x0413,
i40e_aqc_opc_modify_switching_comp_ets = 0x0414,
i40e_aqc_opc_disable_switching_comp_ets = 0x0415,
i40e_aqc_opc_configure_switching_comp_ets_bw_limit = 0x0416,
i40e_aqc_opc_configure_switching_comp_bw_config = 0x0417,
i40e_aqc_opc_query_switching_comp_ets_config = 0x0418,
i40e_aqc_opc_query_port_ets_config = 0x0419,
i40e_aqc_opc_query_switching_comp_bw_config = 0x041A,
i40e_aqc_opc_suspend_port_tx = 0x041B,
i40e_aqc_opc_resume_port_tx = 0x041C,
i40e_aqc_opc_configure_partition_bw = 0x041D,
/* hmc */
i40e_aqc_opc_query_hmc_resource_profile = 0x0500,
i40e_aqc_opc_set_hmc_resource_profile = 0x0501,
/* phy commands*/
i40e_aqc_opc_get_phy_abilities = 0x0600,
i40e_aqc_opc_set_phy_config = 0x0601,
i40e_aqc_opc_set_mac_config = 0x0603,
i40e_aqc_opc_set_link_restart_an = 0x0605,
i40e_aqc_opc_get_link_status = 0x0607,
i40e_aqc_opc_set_phy_int_mask = 0x0613,
i40e_aqc_opc_get_local_advt_reg = 0x0614,
i40e_aqc_opc_set_local_advt_reg = 0x0615,
i40e_aqc_opc_get_partner_advt = 0x0616,
i40e_aqc_opc_set_lb_modes = 0x0618,
i40e_aqc_opc_get_phy_wol_caps = 0x0621,
i40e_aqc_opc_set_phy_debug = 0x0622,
i40e_aqc_opc_upload_ext_phy_fm = 0x0625,
i40e_aqc_opc_run_phy_activity = 0x0626,
i40e_aqc_opc_set_phy_register = 0x0628,
i40e_aqc_opc_get_phy_register = 0x0629,
/* NVM commands */
i40e_aqc_opc_nvm_read = 0x0701,
i40e_aqc_opc_nvm_erase = 0x0702,
i40e_aqc_opc_nvm_update = 0x0703,
i40e_aqc_opc_nvm_config_read = 0x0704,
i40e_aqc_opc_nvm_config_write = 0x0705,
i40e_aqc_opc_oem_post_update = 0x0720,
i40e_aqc_opc_thermal_sensor = 0x0721,
/* virtualization commands */
i40e_aqc_opc_send_msg_to_pf = 0x0801,
i40e_aqc_opc_send_msg_to_vf = 0x0802,
i40e_aqc_opc_send_msg_to_peer = 0x0803,
/* alternate structure */
i40e_aqc_opc_alternate_write = 0x0900,
i40e_aqc_opc_alternate_write_indirect = 0x0901,
i40e_aqc_opc_alternate_read = 0x0902,
i40e_aqc_opc_alternate_read_indirect = 0x0903,
i40e_aqc_opc_alternate_write_done = 0x0904,
i40e_aqc_opc_alternate_set_mode = 0x0905,
i40e_aqc_opc_alternate_clear_port = 0x0906,
/* LLDP commands */
i40e_aqc_opc_lldp_get_mib = 0x0A00,
i40e_aqc_opc_lldp_update_mib = 0x0A01,
i40e_aqc_opc_lldp_add_tlv = 0x0A02,
i40e_aqc_opc_lldp_update_tlv = 0x0A03,
i40e_aqc_opc_lldp_delete_tlv = 0x0A04,
i40e_aqc_opc_lldp_stop = 0x0A05,
i40e_aqc_opc_lldp_start = 0x0A06,
/* Tunnel commands */
i40e_aqc_opc_add_udp_tunnel = 0x0B00,
i40e_aqc_opc_del_udp_tunnel = 0x0B01,
i40e_aqc_opc_set_rss_key = 0x0B02,
i40e_aqc_opc_set_rss_lut = 0x0B03,
i40e_aqc_opc_get_rss_key = 0x0B04,
i40e_aqc_opc_get_rss_lut = 0x0B05,
/* Async Events */
i40e_aqc_opc_event_lan_overflow = 0x1001,
/* OEM commands */
i40e_aqc_opc_oem_parameter_change = 0xFE00,
i40e_aqc_opc_oem_device_status_change = 0xFE01,
i40e_aqc_opc_oem_ocsd_initialize = 0xFE02,
i40e_aqc_opc_oem_ocbb_initialize = 0xFE03,
/* debug commands */
i40e_aqc_opc_debug_read_reg = 0xFF03,
i40e_aqc_opc_debug_write_reg = 0xFF04,
i40e_aqc_opc_debug_modify_reg = 0xFF07,
i40e_aqc_opc_debug_dump_internals = 0xFF08,
};
/* command structures and indirect data structures */
/* Structure naming conventions:
* - no suffix for direct command descriptor structures
* - _data for indirect sent data
* - _resp for indirect return data (data which is both will use _data)
* - _completion for direct return data
* - _element_ for repeated elements (may also be _data or _resp)
*
* Command structures are expected to overlay the params.raw member of the basic
* descriptor, and as such cannot exceed 16 bytes in length.
*/
/* This macro is used to generate a compilation error if a structure
* is not exactly the correct length. It gives a divide by zero error if the
* structure is not of the correct size, otherwise it creates an enum that is
* never used.
*/
#define I40E_CHECK_STRUCT_LEN(n, X) enum i40e_static_assert_enum_##X \
{ i40e_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
/* This macro is used extensively to ensure that command structures are 16
* bytes in length as they have to map to the raw array of that size.
*/
#define I40E_CHECK_CMD_LENGTH(X) I40E_CHECK_STRUCT_LEN(16, X)
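/* Illustrative expansion: for the 16-byte i40e_aqc_queue_shutdown below,
 * I40E_CHECK_CMD_LENGTH produces
 *
 *   enum i40e_static_assert_enum_i40e_aqc_queue_shutdown {
 *           i40e_static_assert_i40e_aqc_queue_shutdown = (16) / 1
 *   };
 *
 * and if the structure's size ever drifted from 16 bytes the divisor
 * would become 0, failing the build with a division-by-zero error.
 */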
/* Queue Shutdown (direct 0x0003) */
struct i40e_aqc_queue_shutdown {
__le32 driver_unloading;
#define I40E_AQ_DRIVER_UNLOADING 0x1
u8 reserved[12];
};
I40E_CHECK_CMD_LENGTH(i40e_aqc_queue_shutdown);
struct i40e_aqc_vsi_properties_data {
/* first 96 bytes are written by SW */
__le16 valid_sections;
#define I40E_AQ_VSI_PROP_SWITCH_VALID 0x0001
#define I40E_AQ_VSI_PROP_SECURITY_VALID 0x0002
#define I40E_AQ_VSI_PROP_VLAN_VALID 0x0004
#define I40E_AQ_VSI_PROP_CAS_PV_VALID 0x0008
#define I40E_AQ_VSI_PROP_INGRESS_UP_VALID 0x0010
#define I40E_AQ_VSI_PROP_EGRESS_UP_VALID 0x0020
#define I40E_AQ_VSI_PROP_QUEUE_MAP_VALID 0x0040
#define I40E_AQ_VSI_PROP_QUEUE_OPT_VALID 0x0080
#define I40E_AQ_VSI_PROP_OUTER_UP_VALID 0x0100
#define I40E_AQ_VSI_PROP_SCHED_VALID 0x0200
/* switch section */
__le16 switch_id; /* 12bit id combined with flags below */
#define I40E_AQ_VSI_SW_ID_SHIFT 0x0000
#define I40E_AQ_VSI_SW_ID_MASK (0xFFF << I40E_AQ_VSI_SW_ID_SHIFT)
#define I40E_AQ_VSI_SW_ID_FLAG_NOT_STAG 0x1000
#define I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB 0x2000
#define I40E_AQ_VSI_SW_ID_FLAG_LOCAL_LB 0x4000
u8 sw_reserved[2];
/* security section */
u8 sec_flags;
#define I40E_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD 0x01
#define I40E_AQ_VSI_SEC_FLAG_ENABLE_VLAN_CHK 0x02
#define I40E_AQ_VSI_SEC_FLAG_ENABLE_MAC_CHK 0x04
u8 sec_reserved;
/* VLAN section */
__le16 pvid; /* VLANs include priority bits */
__le16 fcoe_pvid;
u8 port_vlan_flags;
#define I40E_AQ_VSI_PVLAN_MODE_SHIFT 0x00
#define I40E_AQ_VSI_PVLAN_MODE_MASK (0x03 << \
I40E_AQ_VSI_PVLAN_MODE_SHIFT)
#define I40E_AQ_VSI_PVLAN_MODE_TAGGED 0x01
#define I40E_AQ_VSI_PVLAN_MODE_UNTAGGED 0x02
#define I40E_AQ_VSI_PVLAN_MODE_ALL 0x03
#define I40E_AQ_VSI_PVLAN_INSERT_PVID 0x04
#define I40E_AQ_VSI_PVLAN_EMOD_SHIFT 0x03
#define I40E_AQ_VSI_PVLAN_EMOD_MASK (0x3 << \
I40E_AQ_VSI_PVLAN_EMOD_SHIFT)
#define I40E_AQ_VSI_PVLAN_EMOD_STR_BOTH 0x0
#define I40E_AQ_VSI_PVLAN_EMOD_STR_UP 0x08
#define I40E_AQ_VSI_PVLAN_EMOD_STR 0x10
#define I40E_AQ_VSI_PVLAN_EMOD_NOTHING 0x18
u8 pvlan_reserved[3];
/* ingress egress up sections */
__le32 ingress_table; /* bitmap, 3 bits per up */
#define I40E_AQ_VSI_UP_TABLE_UP0_SHIFT 0
#define I40E_AQ_VSI_UP_TABLE_UP0_MASK (0x7 << \
I40E_AQ_VSI_UP_TABLE_UP0_SHIFT)
#define I40E_AQ_VSI_UP_TABLE_UP1_SHIFT 3
#define I40E_AQ_VSI_UP_TABLE_UP1_MASK (0x7 << \
I40E_AQ_VSI_UP_TABLE_UP1_SHIFT)
#define I40E_AQ_VSI_UP_TABLE_UP2_SHIFT 6
#define I40E_AQ_VSI_UP_TABLE_UP2_MASK (0x7 << \
I40E_AQ_VSI_UP_TABLE_UP2_SHIFT)
#define I40E_AQ_VSI_UP_TABLE_UP3_SHIFT 9
#define I40E_AQ_VSI_UP_TABLE_UP3_MASK (0x7 << \
I40E_AQ_VSI_UP_TABLE_UP3_SHIFT)
#define I40E_AQ_VSI_UP_TABLE_UP4_SHIFT 12
#define I40E_AQ_VSI_UP_TABLE_UP4_MASK (0x7 << \
I40E_AQ_VSI_UP_TABLE_UP4_SHIFT)
#define I40E_AQ_VSI_UP_TABLE_UP5_SHIFT 15
#define I40E_AQ_VSI_UP_TABLE_UP5_MASK (0x7 << \
I40E_AQ_VSI_UP_TABLE_UP5_SHIFT)
#define I40E_AQ_VSI_UP_TABLE_UP6_SHIFT 18
#define I40E_AQ_VSI_UP_TABLE_UP6_MASK (0x7 << \
I40E_AQ_VSI_UP_TABLE_UP6_SHIFT)
#define I40E_AQ_VSI_UP_TABLE_UP7_SHIFT 21
#define I40E_AQ_VSI_UP_TABLE_UP7_MASK (0x7 << \
I40E_AQ_VSI_UP_TABLE_UP7_SHIFT)
__le32 egress_table; /* same defines as for ingress table */
/* cascaded PV section */
__le16 cas_pv_tag;
u8 cas_pv_flags;
#define I40E_AQ_VSI_CAS_PV_TAGX_SHIFT 0x00
#define I40E_AQ_VSI_CAS_PV_TAGX_MASK (0x03 << \
I40E_AQ_VSI_CAS_PV_TAGX_SHIFT)
#define I40E_AQ_VSI_CAS_PV_TAGX_LEAVE 0x00
#define I40E_AQ_VSI_CAS_PV_TAGX_REMOVE 0x01
#define I40E_AQ_VSI_CAS_PV_TAGX_COPY 0x02
#define I40E_AQ_VSI_CAS_PV_INSERT_TAG 0x10
#define I40E_AQ_VSI_CAS_PV_ETAG_PRUNE 0x20
#define I40E_AQ_VSI_CAS_PV_ACCEPT_HOST_TAG 0x40
u8 cas_pv_reserved;
/* queue mapping section */
__le16 mapping_flags;
#define I40E_AQ_VSI_QUE_MAP_CONTIG 0x0
#define I40E_AQ_VSI_QUE_MAP_NONCONTIG 0x1
__le16 queue_mapping[16];
#define I40E_AQ_VSI_QUEUE_SHIFT 0x0
#define I40E_AQ_VSI_QUEUE_MASK (0x7FF << I40E_AQ_VSI_QUEUE_SHIFT)
__le16 tc_mapping[8];
#define I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT 0
#define I40E_AQ_VSI_TC_QUE_OFFSET_MASK (0x1FF << \
I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT)
#define I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT 9
#define I40E_AQ_VSI_TC_QUE_NUMBER_MASK (0x7 << \
I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT)
/* queueing option section */
u8 queueing_opt_flags;
#define I40E_AQ_VSI_QUE_OPT_MULTICAST_UDP_ENA 0x04
#define I40E_AQ_VSI_QUE_OPT_UNICAST_UDP_ENA 0x08
#define I40E_AQ_VSI_QUE_OPT_TCP_ENA 0x10
#define I40E_AQ_VSI_QUE_OPT_FCOE_ENA 0x20
#define I40E_AQ_VSI_QUE_OPT_RSS_LUT_PF 0x00
#define I40E_AQ_VSI_QUE_OPT_RSS_LUT_VSI 0x40
u8 queueing_opt_reserved[3];
/* scheduler section */
u8 up_enable_bits;
u8 sched_reserved;
/* outer up section */
__le32 outer_up_table; /* same structure and defines as ingress tbl */
u8 cmd_reserved[8];
/* last 32 bytes are written by FW */
__le16 qs_handle[8];
#define I40E_AQ_VSI_QS_HANDLE_INVALID 0xFFFF
__le16 stat_counter_idx;
__le16 sched_id;
u8 resp_reserved[12];
};
I40E_CHECK_STRUCT_LEN(128, i40e_aqc_vsi_properties_data);
/* Get VEB Parameters (direct 0x0232)
* uses i40e_aqc_switch_seid for the descriptor
*/
struct i40e_aqc_get_veb_parameters_completion {
__le16 seid;
__le16 switch_id;
__le16 veb_flags; /* only the first/last flags from 0x0230 are valid */
__le16 statistic_index;
__le16 vebs_used;
__le16 vebs_free;
u8 reserved[4];
};
I40E_CHECK_CMD_LENGTH(i40e_aqc_get_veb_parameters_completion);
#define I40E_LINK_SPEED_100MB_SHIFT 0x1
#define I40E_LINK_SPEED_1000MB_SHIFT 0x2
#define I40E_LINK_SPEED_10GB_SHIFT 0x3
#define I40E_LINK_SPEED_40GB_SHIFT 0x4
#define I40E_LINK_SPEED_20GB_SHIFT 0x5
#define I40E_LINK_SPEED_25GB_SHIFT 0x6
enum i40e_aq_link_speed {
I40E_LINK_SPEED_UNKNOWN = 0,
I40E_LINK_SPEED_100MB = BIT(I40E_LINK_SPEED_100MB_SHIFT),
I40E_LINK_SPEED_1GB = BIT(I40E_LINK_SPEED_1000MB_SHIFT),
I40E_LINK_SPEED_10GB = BIT(I40E_LINK_SPEED_10GB_SHIFT),
I40E_LINK_SPEED_40GB = BIT(I40E_LINK_SPEED_40GB_SHIFT),
I40E_LINK_SPEED_20GB = BIT(I40E_LINK_SPEED_20GB_SHIFT),
I40E_LINK_SPEED_25GB = BIT(I40E_LINK_SPEED_25GB_SHIFT),
};
/* Send to PF command (indirect 0x0801); id is only used by PF
* Send to VF command (indirect 0x0802); id is only used by PF
* Send to Peer PF command (indirect 0x0803)
*/
struct i40e_aqc_pf_vf_message {
__le32 id;
u8 reserved[4];
__le32 addr_high;
__le32 addr_low;
};
I40E_CHECK_CMD_LENGTH(i40e_aqc_pf_vf_message);
struct i40e_aqc_get_set_rss_key {
#define I40E_AQC_SET_RSS_KEY_VSI_VALID BIT(15)
#define I40E_AQC_SET_RSS_KEY_VSI_ID_SHIFT 0
#define I40E_AQC_SET_RSS_KEY_VSI_ID_MASK (0x3FF << \
I40E_AQC_SET_RSS_KEY_VSI_ID_SHIFT)
__le16 vsi_id;
u8 reserved[6];
__le32 addr_high;
__le32 addr_low;
};
I40E_CHECK_CMD_LENGTH(i40e_aqc_get_set_rss_key);
struct i40e_aqc_get_set_rss_key_data {
u8 standard_rss_key[0x28];
u8 extended_hash_key[0xc];
};
I40E_CHECK_STRUCT_LEN(0x34, i40e_aqc_get_set_rss_key_data);
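/* Size check (illustrative): 0x28 (40) bytes of standard key plus
 * 0xc (12) bytes of extended hash key give exactly the 0x34 (52)
 * bytes asserted above.
 */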
struct i40e_aqc_get_set_rss_lut {
#define I40E_AQC_SET_RSS_LUT_VSI_VALID BIT(15)
#define I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT 0
#define I40E_AQC_SET_RSS_LUT_VSI_ID_MASK (0x3FF << \
I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT)
__le16 vsi_id;
#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT 0
#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK \
BIT(I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_VSI 0
#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_PF 1
__le16 flags;
u8 reserved[4];
__le32 addr_high;
__le32 addr_low;
};
I40E_CHECK_CMD_LENGTH(i40e_aqc_get_set_rss_lut);
#endif /* _I40E_ADMINQ_CMD_H_ */

View File

@ -0,0 +1,418 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _IAVF_H_
#define _IAVF_H_
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/aer.h>
#include <linux/netdevice.h>
#include <linux/vmalloc.h>
#include <linux/interrupt.h>
#include <linux/ethtool.h>
#include <linux/if_vlan.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/sctp.h>
#include <linux/ipv6.h>
#include <linux/kernel.h>
#include <linux/bitops.h>
#include <linux/timer.h>
#include <linux/workqueue.h>
#include <linux/wait.h>
#include <linux/delay.h>
#include <linux/gfp.h>
#include <linux/skbuff.h>
#include <linux/dma-mapping.h>
#include <linux/etherdevice.h>
#include <linux/socket.h>
#include <linux/jiffies.h>
#include <net/ip6_checksum.h>
#include <net/pkt_cls.h>
#include <net/udp.h>
#include <net/tc_act/tc_gact.h>
#include <net/tc_act/tc_mirred.h>
#include "iavf_type.h"
#include <linux/avf/virtchnl.h>
#include "iavf_txrx.h"
#define DEFAULT_DEBUG_LEVEL_SHIFT 3
#define PFX "iavf: "
/* VSI state flags shared with common code */
enum iavf_vsi_state_t {
__IAVF_VSI_DOWN,
/* This must be last as it determines the size of the BITMAP */
__IAVF_VSI_STATE_SIZE__,
};
/* dummy struct to make common code less painful */
struct iavf_vsi {
struct iavf_adapter *back;
struct net_device *netdev;
unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
u16 seid;
u16 id;
DECLARE_BITMAP(state, __IAVF_VSI_STATE_SIZE__);
int base_vector;
u16 work_limit;
u16 qs_handle;
void *priv; /* client driver data reference. */
};
/* How many Rx Buffers do we bundle into one write to the hardware? */
#define IAVF_RX_BUFFER_WRITE 16 /* Must be power of 2 */
#define IAVF_DEFAULT_TXD 512
#define IAVF_DEFAULT_RXD 512
#define IAVF_MAX_TXD 4096
#define IAVF_MIN_TXD 64
#define IAVF_MAX_RXD 4096
#define IAVF_MIN_RXD 64
#define IAVF_REQ_DESCRIPTOR_MULTIPLE 32
#define IAVF_MAX_AQ_BUF_SIZE 4096
#define IAVF_AQ_LEN 32
#define IAVF_AQ_MAX_ERR 20 /* times to try before resetting AQ */
#define MAXIMUM_ETHERNET_VLAN_SIZE (VLAN_ETH_FRAME_LEN + ETH_FCS_LEN)
#define IAVF_RX_DESC(R, i) (&(((union iavf_32byte_rx_desc *)((R)->desc))[i]))
#define IAVF_TX_DESC(R, i) (&(((struct iavf_tx_desc *)((R)->desc))[i]))
#define IAVF_TX_CTXTDESC(R, i) \
(&(((struct iavf_tx_context_desc *)((R)->desc))[i]))
#define IAVF_MAX_REQ_QUEUES 4
#define IAVF_HKEY_ARRAY_SIZE ((IAVF_VFQF_HKEY_MAX_INDEX + 1) * 4)
#define IAVF_HLUT_ARRAY_SIZE ((IAVF_VFQF_HLUT_MAX_INDEX + 1) * 4)
#define IAVF_MBPS_DIVISOR 125000 /* divisor to convert to Mbps */
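/* Sizing notes (illustrative): the VFQF_HKEY/VFQF_HLUT registers are
 * 32 bits wide, hence the (max index + 1) * 4 byte counts above; and
 * since 1 Mbps is 1000000 / 8 == 125000 bytes per second, dividing a
 * rate in bytes/sec by IAVF_MBPS_DIVISOR converts it to Mbps.
 */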
/* MAX_MSIX_Q_VECTORS of these are allocated,
* but we only use one per queue-specific vector.
*/
struct iavf_q_vector {
struct iavf_adapter *adapter;
struct iavf_vsi *vsi;
struct napi_struct napi;
struct iavf_ring_container rx;
struct iavf_ring_container tx;
u32 ring_mask;
u8 itr_countdown; /* when 0 should adjust adaptive ITR */
u8 num_ringpairs; /* total number of ring pairs in vector */
u16 v_idx; /* index in the vsi->q_vector array. */
u16 reg_idx; /* register index of the interrupt */
char name[IFNAMSIZ + 15];
bool arm_wb_state;
cpumask_t affinity_mask;
struct irq_affinity_notify affinity_notify;
};
/* Helper macros to switch between ints/sec and what the register uses.
* And yes, it's the same math going both ways. The lowest value
* supported by all of the i40e hardware is 8.
*/
#define EITR_INTS_PER_SEC_TO_REG(_eitr) \
((_eitr) ? (1000000000 / ((_eitr) * 256)) : 8)
#define EITR_REG_TO_INTS_PER_SEC EITR_INTS_PER_SEC_TO_REG
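/* Worked example (illustrative): requesting 20000 interrupts/sec gives
 * EITR_INTS_PER_SEC_TO_REG(20000) == 1000000000 / (20000 * 256) == 195,
 * and converting back, 1000000000 / (195 * 256) == 20032, shows the
 * same formula (approximately) inverts itself.
 */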
#define IAVF_DESC_UNUSED(R) \
((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
(R)->next_to_clean - (R)->next_to_use - 1)
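/* Worked example (illustrative): on a 512-entry ring with
 * next_to_clean == 10 and next_to_use == 500, the in-use span runs
 * from next_to_clean to next_to_use, so the macro evaluates to
 * 512 + 10 - 500 - 1 == 21 unused descriptors.
 */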
#define OTHER_VECTOR 1
#define NONQ_VECS (OTHER_VECTOR)
#define MIN_MSIX_Q_VECTORS 1
#define MIN_MSIX_COUNT (MIN_MSIX_Q_VECTORS + NONQ_VECS)
#define IAVF_QUEUE_END_OF_LIST 0x7FF
#define IAVF_FREE_VECTOR 0x7FFF
struct iavf_mac_filter {
struct list_head list;
u8 macaddr[ETH_ALEN];
bool remove; /* filter needs to be removed */
bool add; /* filter needs to be added */
};
struct iavf_vlan_filter {
struct list_head list;
u16 vlan;
bool remove; /* filter needs to be removed */
bool add; /* filter needs to be added */
};
#define IAVF_MAX_TRAFFIC_CLASS 4
/* State of traffic class creation */
enum iavf_tc_state_t {
__IAVF_TC_INVALID, /* no traffic class, default state */
__IAVF_TC_RUNNING, /* traffic classes have been created */
};
/* channel info */
struct iavf_channel_config {
struct virtchnl_channel_info ch_info[IAVF_MAX_TRAFFIC_CLASS];
enum iavf_tc_state_t state;
u8 total_qps;
};
/* State of cloud filter */
enum iavf_cloud_filter_state_t {
__IAVF_CF_INVALID, /* cloud filter not added */
__IAVF_CF_ADD_PENDING, /* cloud filter pending add by the PF */
__IAVF_CF_DEL_PENDING, /* cloud filter pending del by the PF */
__IAVF_CF_ACTIVE, /* cloud filter is active */
};
/* Driver state. The order of these is important! */
enum iavf_state_t {
__IAVF_STARTUP, /* driver loaded, probe complete */
__IAVF_REMOVE, /* driver is being unloaded */
__IAVF_INIT_VERSION_CHECK, /* aq msg sent, awaiting reply */
__IAVF_INIT_GET_RESOURCES, /* aq msg sent, awaiting reply */
__IAVF_INIT_SW, /* got resources, setting up structs */
__IAVF_RESETTING, /* in reset */
/* Below here, watchdog is running */
__IAVF_DOWN, /* ready, can be opened */
__IAVF_DOWN_PENDING, /* descending, waiting for watchdog */
__IAVF_TESTING, /* in ethtool self-test */
__IAVF_RUNNING, /* opened, working */
};
enum iavf_critical_section_t {
__IAVF_IN_CRITICAL_TASK, /* cannot be interrupted */
__IAVF_IN_CLIENT_TASK,
__IAVF_IN_REMOVE_TASK, /* device being removed */
};
#define IAVF_CLOUD_FIELD_OMAC 0x01
#define IAVF_CLOUD_FIELD_IMAC 0x02
#define IAVF_CLOUD_FIELD_IVLAN 0x04
#define IAVF_CLOUD_FIELD_TEN_ID 0x08
#define IAVF_CLOUD_FIELD_IIP 0x10
#define IAVF_CF_FLAGS_OMAC IAVF_CLOUD_FIELD_OMAC
#define IAVF_CF_FLAGS_IMAC IAVF_CLOUD_FIELD_IMAC
#define IAVF_CF_FLAGS_IMAC_IVLAN (IAVF_CLOUD_FIELD_IMAC |\
IAVF_CLOUD_FIELD_IVLAN)
#define IAVF_CF_FLAGS_IMAC_TEN_ID (IAVF_CLOUD_FIELD_IMAC |\
IAVF_CLOUD_FIELD_TEN_ID)
#define IAVF_CF_FLAGS_OMAC_TEN_ID_IMAC (IAVF_CLOUD_FIELD_OMAC |\
IAVF_CLOUD_FIELD_IMAC |\
IAVF_CLOUD_FIELD_TEN_ID)
#define IAVF_CF_FLAGS_IMAC_IVLAN_TEN_ID (IAVF_CLOUD_FIELD_IMAC |\
IAVF_CLOUD_FIELD_IVLAN |\
IAVF_CLOUD_FIELD_TEN_ID)
#define IAVF_CF_FLAGS_IIP IAVF_CLOUD_FIELD_IIP
/* bookkeeping of cloud filters */
struct iavf_cloud_filter {
enum iavf_cloud_filter_state_t state;
struct list_head list;
struct virtchnl_filter f;
unsigned long cookie;
bool del; /* filter needs to be deleted */
bool add; /* filter needs to be added */
};
/* board specific private data structure */
struct iavf_adapter {
struct timer_list watchdog_timer;
struct work_struct reset_task;
struct work_struct adminq_task;
struct delayed_work client_task;
struct delayed_work init_task;
wait_queue_head_t down_waitqueue;
struct iavf_q_vector *q_vectors;
struct list_head vlan_filter_list;
struct list_head mac_filter_list;
/* Lock to protect accesses to MAC and VLAN lists */
spinlock_t mac_vlan_list_lock;
char misc_vector_name[IFNAMSIZ + 9];
int num_active_queues;
int num_req_queues;
/* TX */
struct iavf_ring *tx_rings;
u32 tx_timeout_count;
u32 tx_desc_count;
/* RX */
struct iavf_ring *rx_rings;
u64 hw_csum_rx_error;
u32 rx_desc_count;
int num_msix_vectors;
int num_iwarp_msix;
int iwarp_base_vector;
u32 client_pending;
struct i40e_client_instance *cinst;
struct msix_entry *msix_entries;
u32 flags;
#define IAVF_FLAG_RX_CSUM_ENABLED BIT(0)
#define IAVF_FLAG_PF_COMMS_FAILED BIT(3)
#define IAVF_FLAG_RESET_PENDING BIT(4)
#define IAVF_FLAG_RESET_NEEDED BIT(5)
#define IAVF_FLAG_WB_ON_ITR_CAPABLE BIT(6)
#define IAVF_FLAG_ADDR_SET_BY_PF BIT(8)
#define IAVF_FLAG_SERVICE_CLIENT_REQUESTED BIT(9)
#define IAVF_FLAG_CLIENT_NEEDS_OPEN BIT(10)
#define IAVF_FLAG_CLIENT_NEEDS_CLOSE BIT(11)
#define IAVF_FLAG_CLIENT_NEEDS_L2_PARAMS BIT(12)
#define IAVF_FLAG_PROMISC_ON BIT(13)
#define IAVF_FLAG_ALLMULTI_ON BIT(14)
#define IAVF_FLAG_LEGACY_RX BIT(15)
#define IAVF_FLAG_REINIT_ITR_NEEDED BIT(16)
#define IAVF_FLAG_QUEUES_DISABLED BIT(17)
/* duplicates for common code */
#define IAVF_FLAG_DCB_ENABLED 0
/* flags for admin queue service task */
u32 aq_required;
#define IAVF_FLAG_AQ_ENABLE_QUEUES BIT(0)
#define IAVF_FLAG_AQ_DISABLE_QUEUES BIT(1)
#define IAVF_FLAG_AQ_ADD_MAC_FILTER BIT(2)
#define IAVF_FLAG_AQ_ADD_VLAN_FILTER BIT(3)
#define IAVF_FLAG_AQ_DEL_MAC_FILTER BIT(4)
#define IAVF_FLAG_AQ_DEL_VLAN_FILTER BIT(5)
#define IAVF_FLAG_AQ_CONFIGURE_QUEUES BIT(6)
#define IAVF_FLAG_AQ_MAP_VECTORS BIT(7)
#define IAVF_FLAG_AQ_HANDLE_RESET BIT(8)
#define IAVF_FLAG_AQ_CONFIGURE_RSS BIT(9) /* direct AQ config */
#define IAVF_FLAG_AQ_GET_CONFIG BIT(10)
/* Newer style, RSS done by the PF so we can ignore hardware vagaries. */
#define IAVF_FLAG_AQ_GET_HENA BIT(11)
#define IAVF_FLAG_AQ_SET_HENA BIT(12)
#define IAVF_FLAG_AQ_SET_RSS_KEY BIT(13)
#define IAVF_FLAG_AQ_SET_RSS_LUT BIT(14)
#define IAVF_FLAG_AQ_REQUEST_PROMISC BIT(15)
#define IAVF_FLAG_AQ_RELEASE_PROMISC BIT(16)
#define IAVF_FLAG_AQ_REQUEST_ALLMULTI BIT(17)
#define IAVF_FLAG_AQ_RELEASE_ALLMULTI BIT(18)
#define IAVF_FLAG_AQ_ENABLE_VLAN_STRIPPING BIT(19)
#define IAVF_FLAG_AQ_DISABLE_VLAN_STRIPPING BIT(20)
#define IAVF_FLAG_AQ_ENABLE_CHANNELS BIT(21)
#define IAVF_FLAG_AQ_DISABLE_CHANNELS BIT(22)
#define IAVF_FLAG_AQ_ADD_CLOUD_FILTER BIT(23)
#define IAVF_FLAG_AQ_DEL_CLOUD_FILTER BIT(24)
/* OS defined structs */
struct net_device *netdev;
struct pci_dev *pdev;
struct iavf_hw hw; /* defined in iavf_type.h */
enum iavf_state_t state;
unsigned long crit_section;
struct work_struct watchdog_task;
bool netdev_registered;
bool link_up;
enum virtchnl_link_speed link_speed;
enum virtchnl_ops current_op;
#define CLIENT_ALLOWED(_a) ((_a)->vf_res ? \
(_a)->vf_res->vf_cap_flags & \
VIRTCHNL_VF_OFFLOAD_IWARP : \
0)
#define CLIENT_ENABLED(_a) ((_a)->cinst)
/* RSS by the PF should be preferred over RSS via other methods. */
#define RSS_PF(_a) ((_a)->vf_res->vf_cap_flags & \
VIRTCHNL_VF_OFFLOAD_RSS_PF)
#define RSS_AQ(_a) ((_a)->vf_res->vf_cap_flags & \
VIRTCHNL_VF_OFFLOAD_RSS_AQ)
#define RSS_REG(_a) (!((_a)->vf_res->vf_cap_flags & \
(VIRTCHNL_VF_OFFLOAD_RSS_AQ | \
VIRTCHNL_VF_OFFLOAD_RSS_PF)))
#define VLAN_ALLOWED(_a) ((_a)->vf_res->vf_cap_flags & \
VIRTCHNL_VF_OFFLOAD_VLAN)
struct virtchnl_vf_resource *vf_res; /* incl. all VSIs */
struct virtchnl_vsi_resource *vsi_res; /* our LAN VSI */
struct virtchnl_version_info pf_version;
#define PF_IS_V11(_a) (((_a)->pf_version.major == 1) && \
((_a)->pf_version.minor == 1))
u16 msg_enable;
struct iavf_eth_stats current_stats;
struct iavf_vsi vsi;
u32 aq_wait_count;
/* RSS stuff */
u64 hena;
u16 rss_key_size;
u16 rss_lut_size;
u8 *rss_key;
u8 *rss_lut;
/* ADQ related members */
struct iavf_channel_config ch_config;
u8 num_tc;
struct list_head cloud_filter_list;
/* lock to protect access to the cloud filter list */
spinlock_t cloud_filter_list_lock;
u16 num_cloud_filters;
};
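
The aq_required bits above act as a simple producer/consumer work list: any context that needs the PF to do something sets a bit, and the driver's service task later issues the matching virtchnl message and clears it. A minimal sketch of the pattern, assuming hypothetical example_* helpers (exactly where the real driver clears each bit varies per operation):

/* Illustrative only: producer/consumer use of adapter->aq_required.
 * The example_* helpers are hypothetical.
 */
static void example_request_mac_add(struct iavf_adapter *adapter)
{
        /* producer: record that a MAC filter add is pending */
        adapter->aq_required |= IAVF_FLAG_AQ_ADD_MAC_FILTER;
}

static void example_service_aq(struct iavf_adapter *adapter)
{
        /* consumer: the service task issues the pending operation */
        if (adapter->aq_required & IAVF_FLAG_AQ_ADD_MAC_FILTER) {
                iavf_add_ether_addrs(adapter);
                adapter->aq_required &= ~IAVF_FLAG_AQ_ADD_MAC_FILTER;
        }
}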
/* Ethtool Private Flags */
/* lan device, used by client interface */
struct i40e_device {
struct list_head list;
struct iavf_adapter *vf;
};
/* needed by iavf_ethtool.c */
extern char iavf_driver_name[];
extern const char iavf_driver_version[];
int iavf_up(struct iavf_adapter *adapter);
void iavf_down(struct iavf_adapter *adapter);
int iavf_process_config(struct iavf_adapter *adapter);
void iavf_schedule_reset(struct iavf_adapter *adapter);
void iavf_reset(struct iavf_adapter *adapter);
void iavf_set_ethtool_ops(struct net_device *netdev);
void iavf_update_stats(struct iavf_adapter *adapter);
void iavf_reset_interrupt_capability(struct iavf_adapter *adapter);
int iavf_init_interrupt_scheme(struct iavf_adapter *adapter);
void iavf_irq_enable_queues(struct iavf_adapter *adapter, u32 mask);
void iavf_free_all_tx_resources(struct iavf_adapter *adapter);
void iavf_free_all_rx_resources(struct iavf_adapter *adapter);
void iavf_napi_add_all(struct iavf_adapter *adapter);
void iavf_napi_del_all(struct iavf_adapter *adapter);
int iavf_send_api_ver(struct iavf_adapter *adapter);
int iavf_verify_api_ver(struct iavf_adapter *adapter);
int iavf_send_vf_config_msg(struct iavf_adapter *adapter);
int iavf_get_vf_config(struct iavf_adapter *adapter);
void iavf_irq_enable(struct iavf_adapter *adapter, bool flush);
void iavf_configure_queues(struct iavf_adapter *adapter);
void iavf_deconfigure_queues(struct iavf_adapter *adapter);
void iavf_enable_queues(struct iavf_adapter *adapter);
void iavf_disable_queues(struct iavf_adapter *adapter);
void iavf_map_queues(struct iavf_adapter *adapter);
int iavf_request_queues(struct iavf_adapter *adapter, int num);
void iavf_add_ether_addrs(struct iavf_adapter *adapter);
void iavf_del_ether_addrs(struct iavf_adapter *adapter);
void iavf_add_vlans(struct iavf_adapter *adapter);
void iavf_del_vlans(struct iavf_adapter *adapter);
void iavf_set_promiscuous(struct iavf_adapter *adapter, int flags);
void iavf_request_stats(struct iavf_adapter *adapter);
void iavf_request_reset(struct iavf_adapter *adapter);
void iavf_get_hena(struct iavf_adapter *adapter);
void iavf_set_hena(struct iavf_adapter *adapter);
void iavf_set_rss_key(struct iavf_adapter *adapter);
void iavf_set_rss_lut(struct iavf_adapter *adapter);
void iavf_enable_vlan_stripping(struct iavf_adapter *adapter);
void iavf_disable_vlan_stripping(struct iavf_adapter *adapter);
void iavf_virtchnl_completion(struct iavf_adapter *adapter,
enum virtchnl_ops v_opcode,
iavf_status v_retval, u8 *msg, u16 msglen);
int iavf_config_rss(struct iavf_adapter *adapter);
int iavf_lan_add_device(struct iavf_adapter *adapter);
int iavf_lan_del_device(struct iavf_adapter *adapter);
void iavf_client_subtask(struct iavf_adapter *adapter);
void iavf_notify_client_message(struct iavf_vsi *vsi, u8 *msg, u16 len);
void iavf_notify_client_l2_params(struct iavf_vsi *vsi);
void iavf_notify_client_open(struct iavf_vsi *vsi);
void iavf_notify_client_close(struct iavf_vsi *vsi, bool reset);
void iavf_enable_channels(struct iavf_adapter *adapter);
void iavf_disable_channels(struct iavf_adapter *adapter);
void iavf_add_cloud_filter(struct iavf_adapter *adapter);
void iavf_del_cloud_filter(struct iavf_adapter *adapter);
#endif /* _IAVF_H_ */

View File

@ -0,0 +1,31 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _IAVF_ALLOC_H_
#define _IAVF_ALLOC_H_
struct iavf_hw;
/* Memory allocation types */
enum iavf_memory_type {
iavf_mem_arq_buf = 0, /* ARQ indirect command buffer */
iavf_mem_asq_buf = 1,
iavf_mem_atq_buf = 2, /* ATQ indirect command buffer */
iavf_mem_arq_ring = 3, /* ARQ descriptor ring */
iavf_mem_atq_ring = 4, /* ATQ descriptor ring */
iavf_mem_pd = 5, /* Page Descriptor */
iavf_mem_bp = 6, /* Backing Page - 4KB */
iavf_mem_bp_jumbo = 7, /* Backing Page - > 4KB */
iavf_mem_reserved
};
/* prototype for functions used for dynamic memory allocation */
iavf_status iavf_allocate_dma_mem(struct iavf_hw *hw, struct iavf_dma_mem *mem,
enum iavf_memory_type type,
u64 size, u32 alignment);
iavf_status iavf_free_dma_mem(struct iavf_hw *hw, struct iavf_dma_mem *mem);
iavf_status iavf_allocate_virt_mem(struct iavf_hw *hw,
struct iavf_virt_mem *mem, u32 size);
iavf_status iavf_free_virt_mem(struct iavf_hw *hw, struct iavf_virt_mem *mem);
#endif /* _IAVF_ALLOC_H_ */
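
These prototypes are satisfied by the driver's OS layer. As a rough sketch, a Linux implementation of the DMA allocator would wrap dma_alloc_coherent along these lines (assuming, as is conventional in this driver family, that hw->back points at the adapter; the real implementation lives in the main driver file and may differ):

/* Hedged sketch, not the driver's actual code. */
iavf_status iavf_allocate_dma_mem_sketch(struct iavf_hw *hw,
                                         struct iavf_dma_mem *mem,
                                         enum iavf_memory_type type,
                                         u64 size, u32 alignment)
{
        struct iavf_adapter *adapter = hw->back; /* assumed back-pointer */

        if (!mem)
                return I40E_ERR_PARAM;

        mem->size = ALIGN(size, alignment);
        mem->va = dma_alloc_coherent(&adapter->pdev->dev, mem->size,
                                     &mem->pa, GFP_KERNEL);
        return mem->va ? 0 : I40E_ERR_NO_MEMORY;
}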

View File

@ -4,36 +4,36 @@
#include <linux/list.h>
#include <linux/errno.h>
#include "i40evf.h"
#include "i40e_prototype.h"
#include "i40evf_client.h"
#include "iavf.h"
#include "iavf_prototype.h"
#include "iavf_client.h"
static
const char i40evf_client_interface_version_str[] = I40EVF_CLIENT_VERSION_STR;
const char iavf_client_interface_version_str[] = IAVF_CLIENT_VERSION_STR;
static struct i40e_client *vf_registered_client;
static LIST_HEAD(i40evf_devices);
static DEFINE_MUTEX(i40evf_device_mutex);
static LIST_HEAD(i40e_devices);
static DEFINE_MUTEX(iavf_device_mutex);
static u32 i40evf_client_virtchnl_send(struct i40e_info *ldev,
struct i40e_client *client,
u8 *msg, u16 len);
static u32 iavf_client_virtchnl_send(struct i40e_info *ldev,
struct i40e_client *client,
u8 *msg, u16 len);
static int i40evf_client_setup_qvlist(struct i40e_info *ldev,
struct i40e_client *client,
struct i40e_qvlist_info *qvlist_info);
static int iavf_client_setup_qvlist(struct i40e_info *ldev,
struct i40e_client *client,
struct i40e_qvlist_info *qvlist_info);
static struct i40e_ops i40evf_lan_ops = {
.virtchnl_send = i40evf_client_virtchnl_send,
.setup_qvlist = i40evf_client_setup_qvlist,
static struct i40e_ops iavf_lan_ops = {
.virtchnl_send = iavf_client_virtchnl_send,
.setup_qvlist = iavf_client_setup_qvlist,
};
/**
* i40evf_client_get_params - retrieve relevant client parameters
* iavf_client_get_params - retrieve relevant client parameters
* @vsi: VSI with parameters
* @params: client param struct
**/
static
void i40evf_client_get_params(struct i40e_vsi *vsi, struct i40e_params *params)
void iavf_client_get_params(struct iavf_vsi *vsi, struct i40e_params *params)
{
int i;
@ -41,21 +41,21 @@ void i40evf_client_get_params(struct i40e_vsi *vsi, struct i40e_params *params)
params->mtu = vsi->netdev->mtu;
params->link_up = vsi->back->link_up;
for (i = 0; i < I40E_MAX_USER_PRIORITY; i++) {
for (i = 0; i < IAVF_MAX_USER_PRIORITY; i++) {
params->qos.prio_qos[i].tc = 0;
params->qos.prio_qos[i].qs_handle = vsi->qs_handle;
}
}
/**
* i40evf_notify_client_message - call the client message receive callback
* iavf_notify_client_message - call the client message receive callback
* @vsi: the VSI associated with this client
* @msg: message buffer
* @len: length of message
*
* If there is a client to this VSI, call the client
**/
void i40evf_notify_client_message(struct i40e_vsi *vsi, u8 *msg, u16 len)
void iavf_notify_client_message(struct iavf_vsi *vsi, u8 *msg, u16 len)
{
struct i40e_client_instance *cinst;
@ -74,12 +74,12 @@ void i40evf_notify_client_message(struct i40e_vsi *vsi, u8 *msg, u16 len)
}
/**
* i40evf_notify_client_l2_params - call the client notify callback
* iavf_notify_client_l2_params - call the client notify callback
* @vsi: the VSI with l2 param changes
*
* If there is a client to this VSI, call the client
**/
void i40evf_notify_client_l2_params(struct i40e_vsi *vsi)
void iavf_notify_client_l2_params(struct iavf_vsi *vsi)
{
struct i40e_client_instance *cinst;
struct i40e_params params;
@ -95,21 +95,21 @@ void i40evf_notify_client_l2_params(struct i40e_vsi *vsi)
"Cannot locate client instance l2_param_change function\n");
return;
}
i40evf_client_get_params(vsi, &params);
iavf_client_get_params(vsi, &params);
cinst->lan_info.params = params;
cinst->client->ops->l2_param_change(&cinst->lan_info, cinst->client,
&params);
}
/**
* i40evf_notify_client_open - call the client open callback
* iavf_notify_client_open - call the client open callback
* @vsi: the VSI with netdev opened
*
* If there is a client to this netdev, call the client with open
**/
void i40evf_notify_client_open(struct i40e_vsi *vsi)
void iavf_notify_client_open(struct iavf_vsi *vsi)
{
struct i40evf_adapter *adapter = vsi->back;
struct iavf_adapter *adapter = vsi->back;
struct i40e_client_instance *cinst = adapter->cinst;
int ret;
@ -127,22 +127,22 @@ void i40evf_notify_client_open(struct i40e_vsi *vsi)
}
/**
* i40evf_client_release_qvlist - send a message to the PF to release iwarp qv map
* iavf_client_release_qvlist - send a message to the PF to release iwarp qv map
* @ldev: pointer to L2 context.
*
* Return 0 on success or < 0 on error
**/
static int i40evf_client_release_qvlist(struct i40e_info *ldev)
static int iavf_client_release_qvlist(struct i40e_info *ldev)
{
struct i40evf_adapter *adapter = ldev->vf;
i40e_status err;
struct iavf_adapter *adapter = ldev->vf;
iavf_status err;
if (adapter->aq_required)
return -EAGAIN;
err = i40e_aq_send_msg_to_pf(&adapter->hw,
VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP,
I40E_SUCCESS, NULL, 0, NULL);
err = iavf_aq_send_msg_to_pf(&adapter->hw,
VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP,
I40E_SUCCESS, NULL, 0, NULL);
if (err)
dev_err(&adapter->pdev->dev,
@ -153,15 +153,15 @@ static int i40evf_client_release_qvlist(struct i40e_info *ldev)
}
/**
* i40evf_notify_client_close - call the client close callback
* iavf_notify_client_close - call the client close callback
* @vsi: the VSI with netdev closed
* @reset: true when close called due to reset pending
*
* If there is a client to this netdev, call the client with close
**/
void i40evf_notify_client_close(struct i40e_vsi *vsi, bool reset)
void iavf_notify_client_close(struct iavf_vsi *vsi, bool reset)
{
struct i40evf_adapter *adapter = vsi->back;
struct iavf_adapter *adapter = vsi->back;
struct i40e_client_instance *cinst = adapter->cinst;
if (!cinst || !cinst->client || !cinst->client->ops ||
@ -171,21 +171,21 @@ void i40evf_notify_client_close(struct i40e_vsi *vsi, bool reset)
return;
}
cinst->client->ops->close(&cinst->lan_info, cinst->client, reset);
i40evf_client_release_qvlist(&cinst->lan_info);
iavf_client_release_qvlist(&cinst->lan_info);
clear_bit(__I40E_CLIENT_INSTANCE_OPENED, &cinst->state);
}
/**
* i40evf_client_add_instance - add a client instance to the instance list
* iavf_client_add_instance - add a client instance to the instance list
* @adapter: pointer to the board struct
*
* Returns cinst ptr on success, NULL on failure
**/
static struct i40e_client_instance *
i40evf_client_add_instance(struct i40evf_adapter *adapter)
iavf_client_add_instance(struct iavf_adapter *adapter)
{
struct i40e_client_instance *cinst = NULL;
struct i40e_vsi *vsi = &adapter->vsi;
struct iavf_vsi *vsi = &adapter->vsi;
struct netdev_hw_addr *mac = NULL;
struct i40e_params params;
@ -207,11 +207,11 @@ i40evf_client_add_instance(struct i40evf_adapter *adapter)
cinst->lan_info.fid = 0;
cinst->lan_info.ftype = I40E_CLIENT_FTYPE_VF;
cinst->lan_info.hw_addr = adapter->hw.hw_addr;
cinst->lan_info.ops = &i40evf_lan_ops;
cinst->lan_info.version.major = I40EVF_CLIENT_VERSION_MAJOR;
cinst->lan_info.version.minor = I40EVF_CLIENT_VERSION_MINOR;
cinst->lan_info.version.build = I40EVF_CLIENT_VERSION_BUILD;
i40evf_client_get_params(vsi, &params);
cinst->lan_info.ops = &iavf_lan_ops;
cinst->lan_info.version.major = IAVF_CLIENT_VERSION_MAJOR;
cinst->lan_info.version.minor = IAVF_CLIENT_VERSION_MINOR;
cinst->lan_info.version.build = IAVF_CLIENT_VERSION_BUILD;
iavf_client_get_params(vsi, &params);
cinst->lan_info.params = params;
set_bit(__I40E_CLIENT_INSTANCE_NONE, &cinst->state);
@ -233,28 +233,28 @@ out:
}
/**
* i40evf_client_del_instance - removes a client instance from the list
* iavf_client_del_instance - removes a client instance from the list
* @adapter: pointer to the board struct
*
**/
static
void i40evf_client_del_instance(struct i40evf_adapter *adapter)
void iavf_client_del_instance(struct iavf_adapter *adapter)
{
kfree(adapter->cinst);
adapter->cinst = NULL;
}
/**
* i40evf_client_subtask - client maintenance work
* iavf_client_subtask - client maintenance work
* @adapter: board private structure
**/
void i40evf_client_subtask(struct i40evf_adapter *adapter)
void iavf_client_subtask(struct iavf_adapter *adapter)
{
struct i40e_client *client = vf_registered_client;
struct i40e_client_instance *cinst;
int ret = 0;
if (adapter->state < __I40EVF_DOWN)
if (adapter->state < __IAVF_DOWN)
return;
/* first check client is registered */
@ -262,7 +262,7 @@ void i40evf_client_subtask(struct i40evf_adapter *adapter)
return;
/* Add the client instance to the instance list */
cinst = i40evf_client_add_instance(adapter);
cinst = iavf_client_add_instance(adapter);
if (!cinst)
return;
@ -279,23 +279,23 @@ void i40evf_client_subtask(struct i40evf_adapter *adapter)
&cinst->state);
else
/* remove client instance */
i40evf_client_del_instance(adapter);
iavf_client_del_instance(adapter);
}
}
/**
* i40evf_lan_add_device - add a lan device struct to the list of lan devices
* iavf_lan_add_device - add a lan device struct to the list of lan devices
* @adapter: pointer to the board struct
*
* Returns 0 on success or non-zero on error
**/
int i40evf_lan_add_device(struct i40evf_adapter *adapter)
int iavf_lan_add_device(struct iavf_adapter *adapter)
{
struct i40e_device *ldev;
int ret = 0;
mutex_lock(&i40evf_device_mutex);
list_for_each_entry(ldev, &i40evf_devices, list) {
mutex_lock(&iavf_device_mutex);
list_for_each_entry(ldev, &i40e_devices, list) {
if (ldev->vf == adapter) {
ret = -EEXIST;
goto out;
@ -308,7 +308,7 @@ int i40evf_lan_add_device(struct i40evf_adapter *adapter)
}
ldev->vf = adapter;
INIT_LIST_HEAD(&ldev->list);
list_add(&ldev->list, &i40evf_devices);
list_add(&ldev->list, &i40e_devices);
dev_info(&adapter->pdev->dev, "Added LAN device bus=0x%02x dev=0x%02x func=0x%02x\n",
adapter->hw.bus.bus_id, adapter->hw.bus.device,
adapter->hw.bus.func);
@ -316,26 +316,26 @@ int i40evf_lan_add_device(struct i40evf_adapter *adapter)
/* Since in some cases registration may have happened before a device
* gets added, we can schedule a subtask to initialize the clients.
*/
adapter->flags |= I40EVF_FLAG_SERVICE_CLIENT_REQUESTED;
adapter->flags |= IAVF_FLAG_SERVICE_CLIENT_REQUESTED;
out:
mutex_unlock(&i40evf_device_mutex);
mutex_unlock(&iavf_device_mutex);
return ret;
}
/**
* i40evf_lan_del_device - removes a lan device from the device list
* iavf_lan_del_device - removes a lan device from the device list
* @adapter: pointer to the board struct
*
* Returns 0 on success or non-0 on error
**/
int i40evf_lan_del_device(struct i40evf_adapter *adapter)
int iavf_lan_del_device(struct iavf_adapter *adapter)
{
struct i40e_device *ldev, *tmp;
int ret = -ENODEV;
mutex_lock(&i40evf_device_mutex);
list_for_each_entry_safe(ldev, tmp, &i40evf_devices, list) {
mutex_lock(&iavf_device_mutex);
list_for_each_entry_safe(ldev, tmp, &i40e_devices, list) {
if (ldev->vf == adapter) {
dev_info(&adapter->pdev->dev,
"Deleted LAN device bus=0x%02x dev=0x%02x func=0x%02x\n",
@ -348,23 +348,23 @@ int i40evf_lan_del_device(struct i40evf_adapter *adapter)
}
}
mutex_unlock(&i40evf_device_mutex);
mutex_unlock(&iavf_device_mutex);
return ret;
}
/**
* i40evf_client_release - release client specific resources
* iavf_client_release - release client specific resources
* @client: pointer to the registered client
*
**/
static void i40evf_client_release(struct i40e_client *client)
static void iavf_client_release(struct i40e_client *client)
{
struct i40e_client_instance *cinst;
struct i40e_device *ldev;
struct i40evf_adapter *adapter;
struct iavf_adapter *adapter;
mutex_lock(&i40evf_device_mutex);
list_for_each_entry(ldev, &i40evf_devices, list) {
mutex_lock(&iavf_device_mutex);
list_for_each_entry(ldev, &i40e_devices, list) {
adapter = ldev->vf;
cinst = adapter->cinst;
if (!cinst)
@ -373,41 +373,41 @@ static void i40evf_client_release(struct i40e_client *client)
if (client->ops && client->ops->close)
client->ops->close(&cinst->lan_info, client,
false);
i40evf_client_release_qvlist(&cinst->lan_info);
iavf_client_release_qvlist(&cinst->lan_info);
clear_bit(__I40E_CLIENT_INSTANCE_OPENED, &cinst->state);
dev_warn(&adapter->pdev->dev,
"Client %s instance closed\n", client->name);
}
/* delete the client instance */
i40evf_client_del_instance(adapter);
iavf_client_del_instance(adapter);
dev_info(&adapter->pdev->dev, "Deleted client instance of Client %s\n",
client->name);
}
mutex_unlock(&i40evf_device_mutex);
mutex_unlock(&iavf_device_mutex);
}
/**
* i40evf_client_prepare - prepare client specific resources
* iavf_client_prepare - prepare client specific resources
* @client: pointer to the registered client
*
**/
static void i40evf_client_prepare(struct i40e_client *client)
static void iavf_client_prepare(struct i40e_client *client)
{
struct i40e_device *ldev;
struct i40evf_adapter *adapter;
struct iavf_adapter *adapter;
mutex_lock(&i40evf_device_mutex);
list_for_each_entry(ldev, &i40evf_devices, list) {
mutex_lock(&iavf_device_mutex);
list_for_each_entry(ldev, &i40e_devices, list) {
adapter = ldev->vf;
/* Signal the watchdog to service the client */
adapter->flags |= I40EVF_FLAG_SERVICE_CLIENT_REQUESTED;
adapter->flags |= IAVF_FLAG_SERVICE_CLIENT_REQUESTED;
}
mutex_unlock(&i40evf_device_mutex);
mutex_unlock(&iavf_device_mutex);
}
/**
* i40evf_client_virtchnl_send - send a message to the PF instance
* iavf_client_virtchnl_send - send a message to the PF instance
* @ldev: pointer to L2 context.
* @client: Client pointer.
* @msg: pointer to message buffer
@ -415,17 +415,17 @@ static void i40evf_client_prepare(struct i40e_client *client)
*
* Return 0 on success or < 0 on error
**/
static u32 i40evf_client_virtchnl_send(struct i40e_info *ldev,
struct i40e_client *client,
u8 *msg, u16 len)
static u32 iavf_client_virtchnl_send(struct i40e_info *ldev,
struct i40e_client *client,
u8 *msg, u16 len)
{
struct i40evf_adapter *adapter = ldev->vf;
i40e_status err;
struct iavf_adapter *adapter = ldev->vf;
iavf_status err;
if (adapter->aq_required)
return -EAGAIN;
err = i40e_aq_send_msg_to_pf(&adapter->hw, VIRTCHNL_OP_IWARP,
err = iavf_aq_send_msg_to_pf(&adapter->hw, VIRTCHNL_OP_IWARP,
I40E_SUCCESS, msg, len, NULL);
if (err)
dev_err(&adapter->pdev->dev, "Unable to send iWarp message to PF, error %d, aq status %d\n",
@ -435,21 +435,21 @@ static u32 i40evf_client_virtchnl_send(struct i40e_info *ldev,
}
/**
* i40evf_client_setup_qvlist - send a message to the PF to setup iwarp qv map
* iavf_client_setup_qvlist - send a message to the PF to setup iwarp qv map
* @ldev: pointer to L2 context.
* @client: Client pointer.
* @qvlist_info: queue and vector list
*
* Return 0 on success or < 0 on error
**/
static int i40evf_client_setup_qvlist(struct i40e_info *ldev,
struct i40e_client *client,
struct i40e_qvlist_info *qvlist_info)
static int iavf_client_setup_qvlist(struct i40e_info *ldev,
struct i40e_client *client,
struct i40e_qvlist_info *qvlist_info)
{
struct virtchnl_iwarp_qvlist_info *v_qvlist_info;
struct i40evf_adapter *adapter = ldev->vf;
struct iavf_adapter *adapter = ldev->vf;
struct i40e_qv_info *qv_info;
i40e_status err;
iavf_status err;
u32 v_idx, i;
u32 msg_size;
@ -474,9 +474,9 @@ static int i40evf_client_setup_qvlist(struct i40e_info *ldev,
(v_qvlist_info->num_vectors - 1));
adapter->client_pending |= BIT(VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP);
err = i40e_aq_send_msg_to_pf(&adapter->hw,
VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP,
I40E_SUCCESS, (u8 *)v_qvlist_info, msg_size, NULL);
err = iavf_aq_send_msg_to_pf(&adapter->hw,
VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP, I40E_SUCCESS,
(u8 *)v_qvlist_info, msg_size, NULL);
if (err) {
dev_err(&adapter->pdev->dev,
@ -499,12 +499,12 @@ out:
}
/**
* i40evf_register_client - Register a i40e client driver with the L2 driver
* iavf_register_client - Register a i40e client driver with the L2 driver
* @client: pointer to the i40e_client struct
*
* Returns 0 on success or non-0 on error
**/
int i40evf_register_client(struct i40e_client *client)
int iavf_register_client(struct i40e_client *client)
{
int ret = 0;
@ -514,48 +514,48 @@ int i40evf_register_client(struct i40e_client *client)
}
if (strlen(client->name) == 0) {
pr_info("i40evf: Failed to register client with no name\n");
pr_info("iavf: Failed to register client with no name\n");
ret = -EIO;
goto out;
}
if (vf_registered_client) {
pr_info("i40evf: Client %s has already been registered!\n",
pr_info("iavf: Client %s has already been registered!\n",
client->name);
ret = -EEXIST;
goto out;
}
if ((client->version.major != I40EVF_CLIENT_VERSION_MAJOR) ||
(client->version.minor != I40EVF_CLIENT_VERSION_MINOR)) {
pr_info("i40evf: Failed to register client %s due to mismatched client interface version\n",
if ((client->version.major != IAVF_CLIENT_VERSION_MAJOR) ||
(client->version.minor != IAVF_CLIENT_VERSION_MINOR)) {
pr_info("iavf: Failed to register client %s due to mismatched client interface version\n",
client->name);
pr_info("Client is using version: %02d.%02d.%02d while LAN driver supports %s\n",
client->version.major, client->version.minor,
client->version.build,
i40evf_client_interface_version_str);
iavf_client_interface_version_str);
ret = -EIO;
goto out;
}
vf_registered_client = client;
i40evf_client_prepare(client);
iavf_client_prepare(client);
pr_info("i40evf: Registered client %s with return code %d\n",
pr_info("iavf: Registered client %s with return code %d\n",
client->name, ret);
out:
return ret;
}
EXPORT_SYMBOL(i40evf_register_client);
EXPORT_SYMBOL(iavf_register_client);
/**
* i40evf_unregister_client - Unregister a i40e client driver with the L2 driver
* iavf_unregister_client - Unregister a i40e client driver with the L2 driver
* @client: pointer to the i40e_client struct
*
* Returns 0 on success or non-0 on error
**/
int i40evf_unregister_client(struct i40e_client *client)
int iavf_unregister_client(struct i40e_client *client)
{
int ret = 0;
@ -563,17 +563,17 @@ int i40evf_unregister_client(struct i40e_client *client)
* a close for each of the client instances that were opened.
* client_release function is called to handle this.
*/
i40evf_client_release(client);
iavf_client_release(client);
if (vf_registered_client != client) {
pr_info("i40evf: Client %s has not been registered\n",
pr_info("iavf: Client %s has not been registered\n",
client->name);
ret = -ENODEV;
goto out;
}
vf_registered_client = NULL;
pr_info("i40evf: Unregistered client %s\n", client->name);
pr_info("iavf: Unregistered client %s\n", client->name);
out:
return ret;
}
EXPORT_SYMBOL(i40evf_unregister_client);
EXPORT_SYMBOL(iavf_unregister_client);
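
For reference, a client driver (the iWarp/RDMA driver is the intended consumer) would use these exported entry points roughly as follows. The field names come from the client header shown next; the ops wiring is elided and all example_* names are hypothetical:

/* Hedged usage sketch of the client registration API. */
static struct i40e_client example_client = {
        .name          = "example",
        .version.major = IAVF_CLIENT_VERSION_MAJOR,
        .version.minor = IAVF_CLIENT_VERSION_MINOR,
        .version.build = IAVF_CLIENT_VERSION_BUILD,
        /* .ops = &example_client_ops, supplied by the client driver */
};

static int example_client_init(void)
{
        /* fails with -EIO on a version mismatch and -EEXIST if
         * another client is already registered
         */
        return iavf_register_client(&example_client);
}

static void example_client_exit(void)
{
        iavf_unregister_client(&example_client);
}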

View File

@ -1,21 +1,21 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _I40EVF_CLIENT_H_
#define _I40EVF_CLIENT_H_
#ifndef _IAVF_CLIENT_H_
#define _IAVF_CLIENT_H_
#define I40EVF_CLIENT_STR_LENGTH 10
#define IAVF_CLIENT_STR_LENGTH 10
/* Client interface version should be updated anytime there is a change in the
* existing APIs or data structures.
*/
#define I40EVF_CLIENT_VERSION_MAJOR 0
#define I40EVF_CLIENT_VERSION_MINOR 01
#define I40EVF_CLIENT_VERSION_BUILD 00
#define I40EVF_CLIENT_VERSION_STR \
__stringify(I40EVF_CLIENT_VERSION_MAJOR) "." \
__stringify(I40EVF_CLIENT_VERSION_MINOR) "." \
__stringify(I40EVF_CLIENT_VERSION_BUILD)
#define IAVF_CLIENT_VERSION_MAJOR 0
#define IAVF_CLIENT_VERSION_MINOR 01
#define IAVF_CLIENT_VERSION_BUILD 00
#define IAVF_CLIENT_VERSION_STR \
__stringify(IAVF_CLIENT_VERSION_MAJOR) "." \
__stringify(IAVF_CLIENT_VERSION_MINOR) "." \
__stringify(IAVF_CLIENT_VERSION_BUILD)
struct i40e_client_version {
u8 major;
@ -90,7 +90,7 @@ struct i40e_info {
#define I40E_CLIENT_FTYPE_PF 0
#define I40E_CLIENT_FTYPE_VF 1
u8 ftype; /* function type, PF or VF */
void *vf; /* cast to i40evf_adapter */
void *vf; /* cast to iavf_adapter */
/* All L2 params that could change during the life span of the device
* and needs to be communicated to the client when they change
@ -151,7 +151,7 @@ struct i40e_client_instance {
struct i40e_client {
struct list_head list; /* list of registered clients */
char name[I40EVF_CLIENT_STR_LENGTH];
char name[IAVF_CLIENT_STR_LENGTH];
struct i40e_client_version version;
unsigned long state; /* client state */
atomic_t ref_cnt; /* Count of all the client devices of this kind */
@ -164,6 +164,6 @@ struct i40e_client {
};
/* used by clients */
int i40evf_register_client(struct i40e_client *client);
int i40evf_unregister_client(struct i40e_client *client);
#endif /* _I40EVF_CLIENT_H_ */
int iavf_register_client(struct i40e_client *client);
int iavf_unregister_client(struct i40e_client *client);
#endif /* _IAVF_CLIENT_H_ */

View File

@ -0,0 +1,955 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "iavf_type.h"
#include "i40e_adminq.h"
#include "iavf_prototype.h"
#include <linux/avf/virtchnl.h>
/**
* iavf_set_mac_type - Sets MAC type
* @hw: pointer to the HW structure
*
* This function sets the mac type of the adapter based on the
* vendor ID and device ID stored in the hw structure.
**/
iavf_status iavf_set_mac_type(struct iavf_hw *hw)
{
iavf_status status = 0;
if (hw->vendor_id == PCI_VENDOR_ID_INTEL) {
switch (hw->device_id) {
case IAVF_DEV_ID_X722_VF:
hw->mac.type = IAVF_MAC_X722_VF;
break;
case IAVF_DEV_ID_VF:
case IAVF_DEV_ID_VF_HV:
case IAVF_DEV_ID_ADAPTIVE_VF:
hw->mac.type = IAVF_MAC_VF;
break;
default:
hw->mac.type = IAVF_MAC_GENERIC;
break;
}
} else {
status = I40E_ERR_DEVICE_NOT_SUPPORTED;
}
hw_dbg(hw, "found mac: %d, returns: %d\n", hw->mac.type, status);
return status;
}
/**
* iavf_aq_str - convert AQ err code to a string
* @hw: pointer to the HW structure
* @aq_err: the AQ error code to convert
**/
const char *iavf_aq_str(struct iavf_hw *hw, enum i40e_admin_queue_err aq_err)
{
switch (aq_err) {
case I40E_AQ_RC_OK:
return "OK";
case I40E_AQ_RC_EPERM:
return "I40E_AQ_RC_EPERM";
case I40E_AQ_RC_ENOENT:
return "I40E_AQ_RC_ENOENT";
case I40E_AQ_RC_ESRCH:
return "I40E_AQ_RC_ESRCH";
case I40E_AQ_RC_EINTR:
return "I40E_AQ_RC_EINTR";
case I40E_AQ_RC_EIO:
return "I40E_AQ_RC_EIO";
case I40E_AQ_RC_ENXIO:
return "I40E_AQ_RC_ENXIO";
case I40E_AQ_RC_E2BIG:
return "I40E_AQ_RC_E2BIG";
case I40E_AQ_RC_EAGAIN:
return "I40E_AQ_RC_EAGAIN";
case I40E_AQ_RC_ENOMEM:
return "I40E_AQ_RC_ENOMEM";
case I40E_AQ_RC_EACCES:
return "I40E_AQ_RC_EACCES";
case I40E_AQ_RC_EFAULT:
return "I40E_AQ_RC_EFAULT";
case I40E_AQ_RC_EBUSY:
return "I40E_AQ_RC_EBUSY";
case I40E_AQ_RC_EEXIST:
return "I40E_AQ_RC_EEXIST";
case I40E_AQ_RC_EINVAL:
return "I40E_AQ_RC_EINVAL";
case I40E_AQ_RC_ENOTTY:
return "I40E_AQ_RC_ENOTTY";
case I40E_AQ_RC_ENOSPC:
return "I40E_AQ_RC_ENOSPC";
case I40E_AQ_RC_ENOSYS:
return "I40E_AQ_RC_ENOSYS";
case I40E_AQ_RC_ERANGE:
return "I40E_AQ_RC_ERANGE";
case I40E_AQ_RC_EFLUSHED:
return "I40E_AQ_RC_EFLUSHED";
case I40E_AQ_RC_BAD_ADDR:
return "I40E_AQ_RC_BAD_ADDR";
case I40E_AQ_RC_EMODE:
return "I40E_AQ_RC_EMODE";
case I40E_AQ_RC_EFBIG:
return "I40E_AQ_RC_EFBIG";
}
snprintf(hw->err_str, sizeof(hw->err_str), "%d", aq_err);
return hw->err_str;
}
/**
* iavf_stat_str - convert status err code to a string
* @hw: pointer to the HW structure
* @stat_err: the status error code to convert
**/
const char *iavf_stat_str(struct iavf_hw *hw, iavf_status stat_err)
{
switch (stat_err) {
case 0:
return "OK";
case I40E_ERR_NVM:
return "I40E_ERR_NVM";
case I40E_ERR_NVM_CHECKSUM:
return "I40E_ERR_NVM_CHECKSUM";
case I40E_ERR_PHY:
return "I40E_ERR_PHY";
case I40E_ERR_CONFIG:
return "I40E_ERR_CONFIG";
case I40E_ERR_PARAM:
return "I40E_ERR_PARAM";
case I40E_ERR_MAC_TYPE:
return "I40E_ERR_MAC_TYPE";
case I40E_ERR_UNKNOWN_PHY:
return "I40E_ERR_UNKNOWN_PHY";
case I40E_ERR_LINK_SETUP:
return "I40E_ERR_LINK_SETUP";
case I40E_ERR_ADAPTER_STOPPED:
return "I40E_ERR_ADAPTER_STOPPED";
case I40E_ERR_INVALID_MAC_ADDR:
return "I40E_ERR_INVALID_MAC_ADDR";
case I40E_ERR_DEVICE_NOT_SUPPORTED:
return "I40E_ERR_DEVICE_NOT_SUPPORTED";
case I40E_ERR_MASTER_REQUESTS_PENDING:
return "I40E_ERR_MASTER_REQUESTS_PENDING";
case I40E_ERR_INVALID_LINK_SETTINGS:
return "I40E_ERR_INVALID_LINK_SETTINGS";
case I40E_ERR_AUTONEG_NOT_COMPLETE:
return "I40E_ERR_AUTONEG_NOT_COMPLETE";
case I40E_ERR_RESET_FAILED:
return "I40E_ERR_RESET_FAILED";
case I40E_ERR_SWFW_SYNC:
return "I40E_ERR_SWFW_SYNC";
case I40E_ERR_NO_AVAILABLE_VSI:
return "I40E_ERR_NO_AVAILABLE_VSI";
case I40E_ERR_NO_MEMORY:
return "I40E_ERR_NO_MEMORY";
case I40E_ERR_BAD_PTR:
return "I40E_ERR_BAD_PTR";
case I40E_ERR_RING_FULL:
return "I40E_ERR_RING_FULL";
case I40E_ERR_INVALID_PD_ID:
return "I40E_ERR_INVALID_PD_ID";
case I40E_ERR_INVALID_QP_ID:
return "I40E_ERR_INVALID_QP_ID";
case I40E_ERR_INVALID_CQ_ID:
return "I40E_ERR_INVALID_CQ_ID";
case I40E_ERR_INVALID_CEQ_ID:
return "I40E_ERR_INVALID_CEQ_ID";
case I40E_ERR_INVALID_AEQ_ID:
return "I40E_ERR_INVALID_AEQ_ID";
case I40E_ERR_INVALID_SIZE:
return "I40E_ERR_INVALID_SIZE";
case I40E_ERR_INVALID_ARP_INDEX:
return "I40E_ERR_INVALID_ARP_INDEX";
case I40E_ERR_INVALID_FPM_FUNC_ID:
return "I40E_ERR_INVALID_FPM_FUNC_ID";
case I40E_ERR_QP_INVALID_MSG_SIZE:
return "I40E_ERR_QP_INVALID_MSG_SIZE";
case I40E_ERR_QP_TOOMANY_WRS_POSTED:
return "I40E_ERR_QP_TOOMANY_WRS_POSTED";
case I40E_ERR_INVALID_FRAG_COUNT:
return "I40E_ERR_INVALID_FRAG_COUNT";
case I40E_ERR_QUEUE_EMPTY:
return "I40E_ERR_QUEUE_EMPTY";
case I40E_ERR_INVALID_ALIGNMENT:
return "I40E_ERR_INVALID_ALIGNMENT";
case I40E_ERR_FLUSHED_QUEUE:
return "I40E_ERR_FLUSHED_QUEUE";
case I40E_ERR_INVALID_PUSH_PAGE_INDEX:
return "I40E_ERR_INVALID_PUSH_PAGE_INDEX";
case I40E_ERR_INVALID_IMM_DATA_SIZE:
return "I40E_ERR_INVALID_IMM_DATA_SIZE";
case I40E_ERR_TIMEOUT:
return "I40E_ERR_TIMEOUT";
case I40E_ERR_OPCODE_MISMATCH:
return "I40E_ERR_OPCODE_MISMATCH";
case I40E_ERR_CQP_COMPL_ERROR:
return "I40E_ERR_CQP_COMPL_ERROR";
case I40E_ERR_INVALID_VF_ID:
return "I40E_ERR_INVALID_VF_ID";
case I40E_ERR_INVALID_HMCFN_ID:
return "I40E_ERR_INVALID_HMCFN_ID";
case I40E_ERR_BACKING_PAGE_ERROR:
return "I40E_ERR_BACKING_PAGE_ERROR";
case I40E_ERR_NO_PBLCHUNKS_AVAILABLE:
return "I40E_ERR_NO_PBLCHUNKS_AVAILABLE";
case I40E_ERR_INVALID_PBLE_INDEX:
return "I40E_ERR_INVALID_PBLE_INDEX";
case I40E_ERR_INVALID_SD_INDEX:
return "I40E_ERR_INVALID_SD_INDEX";
case I40E_ERR_INVALID_PAGE_DESC_INDEX:
return "I40E_ERR_INVALID_PAGE_DESC_INDEX";
case I40E_ERR_INVALID_SD_TYPE:
return "I40E_ERR_INVALID_SD_TYPE";
case I40E_ERR_MEMCPY_FAILED:
return "I40E_ERR_MEMCPY_FAILED";
case I40E_ERR_INVALID_HMC_OBJ_INDEX:
return "I40E_ERR_INVALID_HMC_OBJ_INDEX";
case I40E_ERR_INVALID_HMC_OBJ_COUNT:
return "I40E_ERR_INVALID_HMC_OBJ_COUNT";
case I40E_ERR_INVALID_SRQ_ARM_LIMIT:
return "I40E_ERR_INVALID_SRQ_ARM_LIMIT";
case I40E_ERR_SRQ_ENABLED:
return "I40E_ERR_SRQ_ENABLED";
case I40E_ERR_ADMIN_QUEUE_ERROR:
return "I40E_ERR_ADMIN_QUEUE_ERROR";
case I40E_ERR_ADMIN_QUEUE_TIMEOUT:
return "I40E_ERR_ADMIN_QUEUE_TIMEOUT";
case I40E_ERR_BUF_TOO_SHORT:
return "I40E_ERR_BUF_TOO_SHORT";
case I40E_ERR_ADMIN_QUEUE_FULL:
return "I40E_ERR_ADMIN_QUEUE_FULL";
case I40E_ERR_ADMIN_QUEUE_NO_WORK:
return "I40E_ERR_ADMIN_QUEUE_NO_WORK";
case I40E_ERR_BAD_IWARP_CQE:
return "I40E_ERR_BAD_IWARP_CQE";
case I40E_ERR_NVM_BLANK_MODE:
return "I40E_ERR_NVM_BLANK_MODE";
case I40E_ERR_NOT_IMPLEMENTED:
return "I40E_ERR_NOT_IMPLEMENTED";
case I40E_ERR_PE_DOORBELL_NOT_ENABLED:
return "I40E_ERR_PE_DOORBELL_NOT_ENABLED";
case I40E_ERR_DIAG_TEST_FAILED:
return "I40E_ERR_DIAG_TEST_FAILED";
case I40E_ERR_NOT_READY:
return "I40E_ERR_NOT_READY";
case I40E_NOT_SUPPORTED:
return "I40E_NOT_SUPPORTED";
case I40E_ERR_FIRMWARE_API_VERSION:
return "I40E_ERR_FIRMWARE_API_VERSION";
case I40E_ERR_ADMIN_QUEUE_CRITICAL_ERROR:
return "I40E_ERR_ADMIN_QUEUE_CRITICAL_ERROR";
}
snprintf(hw->err_str, sizeof(hw->err_str), "%d", stat_err);
return hw->err_str;
}
/**
* iavf_debug_aq
* @hw: pointer to the hw struct
* @mask: debug mask
* @desc: pointer to admin queue descriptor
* @buffer: pointer to command buffer
* @buf_len: max length of buffer
*
* Dumps debug log about adminq command with descriptor contents.
**/
void iavf_debug_aq(struct iavf_hw *hw, enum iavf_debug_mask mask, void *desc,
void *buffer, u16 buf_len)
{
struct i40e_aq_desc *aq_desc = (struct i40e_aq_desc *)desc;
u8 *buf = (u8 *)buffer;
if ((!(mask & hw->debug_mask)) || !desc)
return;
iavf_debug(hw, mask,
"AQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
le16_to_cpu(aq_desc->opcode),
le16_to_cpu(aq_desc->flags),
le16_to_cpu(aq_desc->datalen),
le16_to_cpu(aq_desc->retval));
iavf_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
le32_to_cpu(aq_desc->cookie_high),
le32_to_cpu(aq_desc->cookie_low));
iavf_debug(hw, mask, "\tparam (0,1) 0x%08X 0x%08X\n",
le32_to_cpu(aq_desc->params.internal.param0),
le32_to_cpu(aq_desc->params.internal.param1));
iavf_debug(hw, mask, "\taddr (h,l) 0x%08X 0x%08X\n",
le32_to_cpu(aq_desc->params.external.addr_high),
le32_to_cpu(aq_desc->params.external.addr_low));
if (buffer && aq_desc->datalen) {
u16 len = le16_to_cpu(aq_desc->datalen);
iavf_debug(hw, mask, "AQ CMD Buffer:\n");
if (buf_len < len)
len = buf_len;
/* write the full 16-byte chunks */
if (hw->debug_mask & mask) {
char prefix[27];
snprintf(prefix, sizeof(prefix),
"iavf %02x:%02x.%x: \t0x",
hw->bus.bus_id,
hw->bus.device,
hw->bus.func);
print_hex_dump(KERN_INFO, prefix, DUMP_PREFIX_OFFSET,
16, 1, buf, len, false);
}
}
}
/**
* iavf_check_asq_alive
* @hw: pointer to the hw struct
*
* Returns true if the queue is enabled, else false.
**/
bool iavf_check_asq_alive(struct iavf_hw *hw)
{
if (hw->aq.asq.len)
return !!(rd32(hw, hw->aq.asq.len) &
IAVF_VF_ATQLEN1_ATQENABLE_MASK);
else
return false;
}
/**
* iavf_aq_queue_shutdown
* @hw: pointer to the hw struct
* @unloading: is the driver unloading itself
*
* Tell the Firmware that we're shutting down the AdminQ and whether
* or not the driver is unloading as well.
**/
iavf_status iavf_aq_queue_shutdown(struct iavf_hw *hw, bool unloading)
{
struct i40e_aq_desc desc;
struct i40e_aqc_queue_shutdown *cmd =
(struct i40e_aqc_queue_shutdown *)&desc.params.raw;
iavf_status status;
iavf_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_queue_shutdown);
if (unloading)
cmd->driver_unloading = cpu_to_le32(I40E_AQ_DRIVER_UNLOADING);
status = iavf_asq_send_command(hw, &desc, NULL, 0, NULL);
return status;
}
/**
* iavf_aq_get_set_rss_lut
* @hw: pointer to the hardware structure
* @vsi_id: vsi fw index
* @pf_lut: for PF table set true, for VSI table set false
* @lut: pointer to the lut buffer provided by the caller
* @lut_size: size of the lut buffer
* @set: set true to set the table, false to get the table
*
* Internal function to get or set the RSS lookup table
**/
static iavf_status iavf_aq_get_set_rss_lut(struct iavf_hw *hw,
u16 vsi_id, bool pf_lut,
u8 *lut, u16 lut_size,
bool set)
{
iavf_status status;
struct i40e_aq_desc desc;
struct i40e_aqc_get_set_rss_lut *cmd_resp =
(struct i40e_aqc_get_set_rss_lut *)&desc.params.raw;
if (set)
iavf_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_rss_lut);
else
iavf_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_get_rss_lut);
/* Indirect command */
desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_BUF);
desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_RD);
cmd_resp->vsi_id =
cpu_to_le16((u16)((vsi_id <<
I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT) &
I40E_AQC_SET_RSS_LUT_VSI_ID_MASK));
cmd_resp->vsi_id |= cpu_to_le16((u16)I40E_AQC_SET_RSS_LUT_VSI_VALID);
if (pf_lut)
cmd_resp->flags |= cpu_to_le16((u16)
((I40E_AQC_SET_RSS_LUT_TABLE_TYPE_PF <<
I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
else
cmd_resp->flags |= cpu_to_le16((u16)
((I40E_AQC_SET_RSS_LUT_TABLE_TYPE_VSI <<
I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
status = iavf_asq_send_command(hw, &desc, lut, lut_size, NULL);
return status;
}
/**
* iavf_aq_get_rss_lut
* @hw: pointer to the hardware structure
* @vsi_id: vsi fw index
* @pf_lut: for PF table set true, for VSI table set false
* @lut: pointer to the lut buffer provided by the caller
* @lut_size: size of the lut buffer
*
* get the RSS lookup table, PF or VSI type
**/
iavf_status iavf_aq_get_rss_lut(struct iavf_hw *hw, u16 vsi_id,
bool pf_lut, u8 *lut, u16 lut_size)
{
return iavf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size,
false);
}
/**
* iavf_aq_set_rss_lut
* @hw: pointer to the hardware structure
* @vsi_id: vsi fw index
* @pf_lut: for PF table set true, for VSI table set false
* @lut: pointer to the lut buffer provided by the caller
* @lut_size: size of the lut buffer
*
* set the RSS lookup table, PF or VSI type
**/
iavf_status iavf_aq_set_rss_lut(struct iavf_hw *hw, u16 vsi_id,
bool pf_lut, u8 *lut, u16 lut_size)
{
return iavf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
}
/**
* iavf_aq_get_set_rss_key
* @hw: pointer to the hw struct
* @vsi_id: vsi fw index
* @key: pointer to key info struct
* @set: set true to set the key, false to get the key
*
* Internal function to get or set the RSS key per VSI
**/
static
iavf_status iavf_aq_get_set_rss_key(struct iavf_hw *hw, u16 vsi_id,
struct i40e_aqc_get_set_rss_key_data *key,
bool set)
{
iavf_status status;
struct i40e_aq_desc desc;
struct i40e_aqc_get_set_rss_key *cmd_resp =
(struct i40e_aqc_get_set_rss_key *)&desc.params.raw;
u16 key_size = sizeof(struct i40e_aqc_get_set_rss_key_data);
if (set)
iavf_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_rss_key);
else
iavf_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_get_rss_key);
/* Indirect command */
desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_BUF);
desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_RD);
cmd_resp->vsi_id =
cpu_to_le16((u16)((vsi_id <<
I40E_AQC_SET_RSS_KEY_VSI_ID_SHIFT) &
I40E_AQC_SET_RSS_KEY_VSI_ID_MASK));
cmd_resp->vsi_id |= cpu_to_le16((u16)I40E_AQC_SET_RSS_KEY_VSI_VALID);
status = iavf_asq_send_command(hw, &desc, key, key_size, NULL);
return status;
}
/**
* iavf_aq_get_rss_key
* @hw: pointer to the hw struct
* @vsi_id: vsi fw index
* @key: pointer to key info struct
*
* get the RSS key per VSI
**/
iavf_status iavf_aq_get_rss_key(struct iavf_hw *hw, u16 vsi_id,
struct i40e_aqc_get_set_rss_key_data *key)
{
return iavf_aq_get_set_rss_key(hw, vsi_id, key, false);
}
/**
* iavf_aq_set_rss_key
* @hw: pointer to the hw struct
* @vsi_id: vsi fw index
* @key: pointer to key info struct
*
* set the RSS key per VSI
**/
iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw, u16 vsi_id,
struct i40e_aqc_get_set_rss_key_data *key)
{
return iavf_aq_get_set_rss_key(hw, vsi_id, key, true);
}
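
Taken together, a caller that owns its RSS configuration would program the key and then the per-VSI table through these wrappers roughly as follows (illustrative sketch; vsi_id is the VSI's firmware index as used throughout this file):

/* Hedged sketch: program the RSS key, then the per-VSI LUT. */
static iavf_status example_config_rss_aq(struct iavf_hw *hw, u16 vsi_id,
                                         struct i40e_aqc_get_set_rss_key_data *key,
                                         u8 *lut, u16 lut_size)
{
        iavf_status status;

        status = iavf_aq_set_rss_key(hw, vsi_id, key);
        if (status)
                return status;

        /* pf_lut == false selects the per-VSI table */
        return iavf_aq_set_rss_lut(hw, vsi_id, false, lut, lut_size);
}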
/* The iavf_ptype_lookup table is used to convert from the 8-bit ptype in the
* hardware to a bit-field that can be used by SW to more easily determine the
* packet type.
*
* Macros are used to shorten the table lines and make this table human
* readable.
*
* We store the PTYPE in the top byte of the bit field - this is just so that
* we can check that the table doesn't have a row missing, as the index into
* the table should be the PTYPE.
*
* Typical work flow:
*
* IF NOT iavf_ptype_lookup[ptype].known
* THEN
* Packet is unknown
* ELSE IF iavf_ptype_lookup[ptype].outer_ip == IAVF_RX_PTYPE_OUTER_IP
* Use the rest of the fields to look at the tunnels, inner protocols, etc
* ELSE
* Use the enum iavf_rx_l2_ptype to decode the packet type
* ENDIF
*/
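
Rendered as C, that workflow looks roughly like the sketch below. The decoded field names (known, outer_ip, tunnel_type, inner_prot) are assumed from the macro layout that follows:

/* Illustrative sketch of the decode flow described above. */
static void example_decode_ptype(u8 ptype)
{
        struct iavf_rx_ptype_decoded decoded = iavf_ptype_lookup[ptype];

        if (!decoded.known)
                return; /* packet type is unknown */

        if (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP) {
                /* IP packet: walk the tunnel and inner protocol
                 * fields, e.g. decoded.tunnel_type and
                 * decoded.inner_prot
                 */
        } else {
                /* non-IP: classify via the L2 ptype instead */
        }
}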
/* macro to make the table lines short */
#define IAVF_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
{ PTYPE, \
1, \
IAVF_RX_PTYPE_OUTER_##OUTER_IP, \
IAVF_RX_PTYPE_OUTER_##OUTER_IP_VER, \
IAVF_RX_PTYPE_##OUTER_FRAG, \
IAVF_RX_PTYPE_TUNNEL_##T, \
IAVF_RX_PTYPE_TUNNEL_END_##TE, \
IAVF_RX_PTYPE_##TEF, \
IAVF_RX_PTYPE_INNER_PROT_##I, \
IAVF_RX_PTYPE_PAYLOAD_LAYER_##PL }
#define IAVF_PTT_UNUSED_ENTRY(PTYPE) \
{ PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
/* shorter macros makes the table fit but are terse */
#define IAVF_RX_PTYPE_NOF IAVF_RX_PTYPE_NOT_FRAG
#define IAVF_RX_PTYPE_FRG IAVF_RX_PTYPE_FRAG
#define IAVF_RX_PTYPE_INNER_PROT_TS IAVF_RX_PTYPE_INNER_PROT_TIMESYNC
/* Lookup table mapping the HW PTYPE to the bit field for decoding */
struct iavf_rx_ptype_decoded iavf_ptype_lookup[] = {
/* L2 Packet types */
IAVF_PTT_UNUSED_ENTRY(0),
IAVF_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
IAVF_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, TS, PAY2),
IAVF_PTT(3, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
IAVF_PTT_UNUSED_ENTRY(4),
IAVF_PTT_UNUSED_ENTRY(5),
IAVF_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
IAVF_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
IAVF_PTT_UNUSED_ENTRY(8),
IAVF_PTT_UNUSED_ENTRY(9),
IAVF_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
IAVF_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
IAVF_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
IAVF_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
IAVF_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
IAVF_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
IAVF_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
IAVF_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
IAVF_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
IAVF_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
IAVF_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
IAVF_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
/* Non Tunneled IPv4 */
IAVF_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
IAVF_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
IAVF_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(25),
IAVF_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4),
IAVF_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
IAVF_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
/* IPv4 --> IPv4 */
IAVF_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
IAVF_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
IAVF_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(32),
IAVF_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4),
IAVF_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
IAVF_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
/* IPv4 --> IPv6 */
IAVF_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
IAVF_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
IAVF_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(39),
IAVF_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4),
IAVF_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
IAVF_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
/* IPv4 --> GRE/NAT */
IAVF_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
/* IPv4 --> GRE/NAT --> IPv4 */
IAVF_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
IAVF_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
IAVF_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(47),
IAVF_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4),
IAVF_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
IAVF_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
/* IPv4 --> GRE/NAT --> IPv6 */
IAVF_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
IAVF_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
IAVF_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(54),
IAVF_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4),
IAVF_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
IAVF_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
/* IPv4 --> GRE/NAT --> MAC */
IAVF_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
IAVF_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
IAVF_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
IAVF_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(62),
IAVF_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4),
IAVF_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
IAVF_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
/* IPv4 --> GRE/NAT -> MAC --> IPv6 */
IAVF_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
IAVF_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
IAVF_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(69),
IAVF_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4),
IAVF_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
IAVF_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
/* IPv4 --> GRE/NAT --> MAC/VLAN */
IAVF_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
/* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
IAVF_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
IAVF_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
IAVF_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(77),
IAVF_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4),
IAVF_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
IAVF_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
/* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
IAVF_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
IAVF_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
IAVF_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(84),
IAVF_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4),
IAVF_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
IAVF_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
/* Non Tunneled IPv6 */
IAVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
IAVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
IAVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY3),
IAVF_PTT_UNUSED_ENTRY(91),
IAVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4),
IAVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
IAVF_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
/* IPv6 --> IPv4 */
IAVF_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
IAVF_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
IAVF_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(98),
IAVF_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4),
IAVF_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
IAVF_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
/* IPv6 --> IPv6 */
IAVF_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
IAVF_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
IAVF_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(105),
IAVF_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4),
IAVF_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
IAVF_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
/* IPv6 --> GRE/NAT */
IAVF_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
/* IPv6 --> GRE/NAT -> IPv4 */
IAVF_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
IAVF_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
IAVF_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(113),
IAVF_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4),
IAVF_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
IAVF_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
/* IPv6 --> GRE/NAT -> IPv6 */
IAVF_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
IAVF_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
IAVF_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(120),
IAVF_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4),
IAVF_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
IAVF_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
/* IPv6 --> GRE/NAT -> MAC */
IAVF_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
/* IPv6 --> GRE/NAT -> MAC -> IPv4 */
IAVF_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
IAVF_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
IAVF_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(128),
IAVF_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4),
IAVF_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
IAVF_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
/* IPv6 --> GRE/NAT -> MAC -> IPv6 */
IAVF_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
IAVF_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
IAVF_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(135),
IAVF_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4),
IAVF_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
IAVF_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
/* IPv6 --> GRE/NAT -> MAC/VLAN */
IAVF_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
IAVF_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
IAVF_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
IAVF_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(143),
IAVF_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4),
IAVF_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
IAVF_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
IAVF_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
IAVF_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
IAVF_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(150),
IAVF_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4),
IAVF_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
IAVF_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
/* unused entries */
IAVF_PTT_UNUSED_ENTRY(154),
IAVF_PTT_UNUSED_ENTRY(155),
IAVF_PTT_UNUSED_ENTRY(156),
IAVF_PTT_UNUSED_ENTRY(157),
IAVF_PTT_UNUSED_ENTRY(158),
IAVF_PTT_UNUSED_ENTRY(159),
IAVF_PTT_UNUSED_ENTRY(160),
IAVF_PTT_UNUSED_ENTRY(161),
IAVF_PTT_UNUSED_ENTRY(162),
IAVF_PTT_UNUSED_ENTRY(163),
IAVF_PTT_UNUSED_ENTRY(164),
IAVF_PTT_UNUSED_ENTRY(165),
IAVF_PTT_UNUSED_ENTRY(166),
IAVF_PTT_UNUSED_ENTRY(167),
IAVF_PTT_UNUSED_ENTRY(168),
IAVF_PTT_UNUSED_ENTRY(169),
IAVF_PTT_UNUSED_ENTRY(170),
IAVF_PTT_UNUSED_ENTRY(171),
IAVF_PTT_UNUSED_ENTRY(172),
IAVF_PTT_UNUSED_ENTRY(173),
IAVF_PTT_UNUSED_ENTRY(174),
IAVF_PTT_UNUSED_ENTRY(175),
IAVF_PTT_UNUSED_ENTRY(176),
IAVF_PTT_UNUSED_ENTRY(177),
IAVF_PTT_UNUSED_ENTRY(178),
IAVF_PTT_UNUSED_ENTRY(179),
IAVF_PTT_UNUSED_ENTRY(180),
IAVF_PTT_UNUSED_ENTRY(181),
IAVF_PTT_UNUSED_ENTRY(182),
IAVF_PTT_UNUSED_ENTRY(183),
IAVF_PTT_UNUSED_ENTRY(184),
IAVF_PTT_UNUSED_ENTRY(185),
IAVF_PTT_UNUSED_ENTRY(186),
IAVF_PTT_UNUSED_ENTRY(187),
IAVF_PTT_UNUSED_ENTRY(188),
IAVF_PTT_UNUSED_ENTRY(189),
IAVF_PTT_UNUSED_ENTRY(190),
IAVF_PTT_UNUSED_ENTRY(191),
IAVF_PTT_UNUSED_ENTRY(192),
IAVF_PTT_UNUSED_ENTRY(193),
IAVF_PTT_UNUSED_ENTRY(194),
IAVF_PTT_UNUSED_ENTRY(195),
IAVF_PTT_UNUSED_ENTRY(196),
IAVF_PTT_UNUSED_ENTRY(197),
IAVF_PTT_UNUSED_ENTRY(198),
IAVF_PTT_UNUSED_ENTRY(199),
IAVF_PTT_UNUSED_ENTRY(200),
IAVF_PTT_UNUSED_ENTRY(201),
IAVF_PTT_UNUSED_ENTRY(202),
IAVF_PTT_UNUSED_ENTRY(203),
IAVF_PTT_UNUSED_ENTRY(204),
IAVF_PTT_UNUSED_ENTRY(205),
IAVF_PTT_UNUSED_ENTRY(206),
IAVF_PTT_UNUSED_ENTRY(207),
IAVF_PTT_UNUSED_ENTRY(208),
IAVF_PTT_UNUSED_ENTRY(209),
IAVF_PTT_UNUSED_ENTRY(210),
IAVF_PTT_UNUSED_ENTRY(211),
IAVF_PTT_UNUSED_ENTRY(212),
IAVF_PTT_UNUSED_ENTRY(213),
IAVF_PTT_UNUSED_ENTRY(214),
IAVF_PTT_UNUSED_ENTRY(215),
IAVF_PTT_UNUSED_ENTRY(216),
IAVF_PTT_UNUSED_ENTRY(217),
IAVF_PTT_UNUSED_ENTRY(218),
IAVF_PTT_UNUSED_ENTRY(219),
IAVF_PTT_UNUSED_ENTRY(220),
IAVF_PTT_UNUSED_ENTRY(221),
IAVF_PTT_UNUSED_ENTRY(222),
IAVF_PTT_UNUSED_ENTRY(223),
IAVF_PTT_UNUSED_ENTRY(224),
IAVF_PTT_UNUSED_ENTRY(225),
IAVF_PTT_UNUSED_ENTRY(226),
IAVF_PTT_UNUSED_ENTRY(227),
IAVF_PTT_UNUSED_ENTRY(228),
IAVF_PTT_UNUSED_ENTRY(229),
IAVF_PTT_UNUSED_ENTRY(230),
IAVF_PTT_UNUSED_ENTRY(231),
IAVF_PTT_UNUSED_ENTRY(232),
IAVF_PTT_UNUSED_ENTRY(233),
IAVF_PTT_UNUSED_ENTRY(234),
IAVF_PTT_UNUSED_ENTRY(235),
IAVF_PTT_UNUSED_ENTRY(236),
IAVF_PTT_UNUSED_ENTRY(237),
IAVF_PTT_UNUSED_ENTRY(238),
IAVF_PTT_UNUSED_ENTRY(239),
IAVF_PTT_UNUSED_ENTRY(240),
IAVF_PTT_UNUSED_ENTRY(241),
IAVF_PTT_UNUSED_ENTRY(242),
IAVF_PTT_UNUSED_ENTRY(243),
IAVF_PTT_UNUSED_ENTRY(244),
IAVF_PTT_UNUSED_ENTRY(245),
IAVF_PTT_UNUSED_ENTRY(246),
IAVF_PTT_UNUSED_ENTRY(247),
IAVF_PTT_UNUSED_ENTRY(248),
IAVF_PTT_UNUSED_ENTRY(249),
IAVF_PTT_UNUSED_ENTRY(250),
IAVF_PTT_UNUSED_ENTRY(251),
IAVF_PTT_UNUSED_ENTRY(252),
IAVF_PTT_UNUSED_ENTRY(253),
IAVF_PTT_UNUSED_ENTRY(254),
IAVF_PTT_UNUSED_ENTRY(255)
};
/**
* iavf_aq_send_msg_to_pf
* @hw: pointer to the hardware structure
* @v_opcode: opcodes for VF-PF communication
* @v_retval: return error code
* @msg: pointer to the msg buffer
* @msglen: msg length
* @cmd_details: pointer to command details
*
* Send message to PF driver using admin queue. By default, this message
* is sent asynchronously, i.e. iavf_asq_send_command() does not wait for
* completion before returning.
**/
iavf_status iavf_aq_send_msg_to_pf(struct iavf_hw *hw,
enum virtchnl_ops v_opcode,
iavf_status v_retval, u8 *msg, u16 msglen,
struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_asq_cmd_details details;
struct i40e_aq_desc desc;
iavf_status status;
iavf_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_send_msg_to_pf);
desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_SI);
desc.cookie_high = cpu_to_le32(v_opcode);
desc.cookie_low = cpu_to_le32(v_retval);
if (msglen) {
desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF
| I40E_AQ_FLAG_RD));
if (msglen > I40E_AQ_LARGE_BUF)
desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_LB);
desc.datalen = cpu_to_le16(msglen);
}
if (!cmd_details) {
memset(&details, 0, sizeof(details));
details.async = true;
cmd_details = &details;
}
status = iavf_asq_send_command(hw, &desc, msg, msglen, cmd_details);
return status;
}
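
Callers treat this as a fire-and-forget virtchnl transport. A usage sketch mirroring the client code earlier in this series:

/* Usage sketch, modeled on the calls in iavf_client.c above. */
static iavf_status example_send_iwarp_msg(struct iavf_adapter *adapter,
                                          u8 *msg, u16 len)
{
        /* NULL cmd_details: sent asynchronously, no wait for completion */
        return iavf_aq_send_msg_to_pf(&adapter->hw, VIRTCHNL_OP_IWARP,
                                      I40E_SUCCESS, msg, len, NULL);
}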
/**
* iavf_vf_parse_hw_config
* @hw: pointer to the hardware structure
* @msg: pointer to the virtual channel VF resource structure
*
* Given a VF resource message from the PF, populate the hw struct
* with appropriate information.
**/
void iavf_vf_parse_hw_config(struct iavf_hw *hw,
struct virtchnl_vf_resource *msg)
{
struct virtchnl_vsi_resource *vsi_res;
int i;
vsi_res = &msg->vsi_res[0];
hw->dev_caps.num_vsis = msg->num_vsis;
hw->dev_caps.num_rx_qp = msg->num_queue_pairs;
hw->dev_caps.num_tx_qp = msg->num_queue_pairs;
hw->dev_caps.num_msix_vectors_vf = msg->max_vectors;
hw->dev_caps.dcb = msg->vf_cap_flags &
VIRTCHNL_VF_OFFLOAD_L2;
hw->dev_caps.fcoe = 0;
for (i = 0; i < msg->num_vsis; i++) {
if (vsi_res->vsi_type == VIRTCHNL_VSI_SRIOV) {
ether_addr_copy(hw->mac.perm_addr,
vsi_res->default_mac_addr);
ether_addr_copy(hw->mac.addr,
vsi_res->default_mac_addr);
}
vsi_res++;
}
}
/**
* iavf_vf_reset
* @hw: pointer to the hardware structure
*
* Send a VF_RESET message to the PF. Does not wait for response from PF
* as none will be forthcoming. Immediately after calling this function,
* the admin queue should be shut down and (optionally) reinitialized.
**/
iavf_status iavf_vf_reset(struct iavf_hw *hw)
{
return iavf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_RESET_VF,
0, NULL, 0, NULL);
}
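
Given the constraint spelled out in the comment above, a caller is expected to
tear the admin queue down immediately after requesting the reset. A minimal
sketch, assuming the adminq prototypes from iavf_prototype.h (the wrapper name
is illustrative):

/* Illustrative reset flow: request the reset, then shut down the admin
 * queue right away, since no response will arrive and the AQ registers
 * are about to be cleared.  Re-initialization happens later, once the
 * PF signals that the VF reset has completed.
 */
static void iavf_example_request_reset(struct iavf_hw *hw)
{
	iavf_vf_reset(hw);
	iavf_shutdown_adminq(hw);
}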

View File

@ -0,0 +1,12 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _IAVF_DEVIDS_H_
#define _IAVF_DEVIDS_H_
/* Device IDs for the VF driver */
#define IAVF_DEV_ID_VF 0x154C
#define IAVF_DEV_ID_VF_HV 0x1571
#define IAVF_DEV_ID_ADAPTIVE_VF 0x1889
#define IAVF_DEV_ID_X722_VF 0x37CD
#endif /* _IAVF_DEVIDS_H_ */
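
For context, these IDs feed the driver's PCI device table. A hedged sketch of
how such a table is typically declared (only the IAVF_DEV_ID_* macros come
from this header; the table name is illustrative):

/* Illustrative pci_device_id table built from the IDs above; the real
 * table lives in the driver's main source file.
 */
static const struct pci_device_id iavf_example_pci_tbl[] = {
	{ PCI_VDEVICE(INTEL, IAVF_DEV_ID_VF), 0 },
	{ PCI_VDEVICE(INTEL, IAVF_DEV_ID_VF_HV), 0 },
	{ PCI_VDEVICE(INTEL, IAVF_DEV_ID_ADAPTIVE_VF), 0 },
	{ PCI_VDEVICE(INTEL, IAVF_DEV_ID_X722_VF), 0 },
	{ /* sentinel */ }
};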

View File

@ -1,8 +1,8 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _I40E_OSDEP_H_
#define _I40E_OSDEP_H_
#ifndef _IAVF_OSDEP_H_
#define _IAVF_OSDEP_H_
#include <linux/types.h>
#include <linux/if_ether.h>
@ -24,29 +24,29 @@
#define wr64(a, reg, value) writeq((value), ((a)->hw_addr + (reg)))
#define rd64(a, reg) readq((a)->hw_addr + (reg))
#define i40e_flush(a) readl((a)->hw_addr + I40E_VFGEN_RSTAT)
#define iavf_flush(a) readl((a)->hw_addr + IAVF_VFGEN_RSTAT)
/* memory allocation tracking */
struct i40e_dma_mem {
struct iavf_dma_mem {
void *va;
dma_addr_t pa;
u32 size;
};
#define i40e_allocate_dma_mem(h, m, unused, s, a) \
i40evf_allocate_dma_mem_d(h, m, s, a)
#define i40e_free_dma_mem(h, m) i40evf_free_dma_mem_d(h, m)
#define iavf_allocate_dma_mem(h, m, unused, s, a) \
iavf_allocate_dma_mem_d(h, m, s, a)
#define iavf_free_dma_mem(h, m) iavf_free_dma_mem_d(h, m)
struct i40e_virt_mem {
struct iavf_virt_mem {
void *va;
u32 size;
};
#define i40e_allocate_virt_mem(h, m, s) i40evf_allocate_virt_mem_d(h, m, s)
#define i40e_free_virt_mem(h, m) i40evf_free_virt_mem_d(h, m)
#define iavf_allocate_virt_mem(h, m, s) iavf_allocate_virt_mem_d(h, m, s)
#define iavf_free_virt_mem(h, m) iavf_free_virt_mem_d(h, m)
#define i40e_debug(h, m, s, ...) i40evf_debug_d(h, m, s, ##__VA_ARGS__)
extern void i40evf_debug_d(void *hw, u32 mask, char *fmt_str, ...)
#define iavf_debug(h, m, s, ...) iavf_debug_d(h, m, s, ##__VA_ARGS__)
extern void iavf_debug_d(void *hw, u32 mask, char *fmt_str, ...)
__attribute__ ((format(gnu_printf, 3, 4)));
typedef enum i40e_status_code i40e_status;
#endif /* _I40E_OSDEP_H_ */
typedef enum iavf_status_code iavf_status;
#endif /* _IAVF_OSDEP_H_ */

View File

@ -0,0 +1,67 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _IAVF_PROTOTYPE_H_
#define _IAVF_PROTOTYPE_H_
#include "iavf_type.h"
#include "iavf_alloc.h"
#include <linux/avf/virtchnl.h>
/* Prototypes for shared code functions that are not in
* the standard function pointer structures. These exist
* mostly because they are needed even before init has
* happened, and they assist in the early SW and FW setup.
*/
/* adminq functions */
iavf_status iavf_init_adminq(struct iavf_hw *hw);
iavf_status iavf_shutdown_adminq(struct iavf_hw *hw);
void i40e_adminq_init_ring_data(struct iavf_hw *hw);
iavf_status iavf_clean_arq_element(struct iavf_hw *hw,
struct i40e_arq_event_info *e,
u16 *events_pending);
iavf_status iavf_asq_send_command(struct iavf_hw *hw, struct i40e_aq_desc *desc,
void *buff, /* can be NULL */
u16 buff_size,
struct i40e_asq_cmd_details *cmd_details);
bool iavf_asq_done(struct iavf_hw *hw);
/* debug function for adminq */
void iavf_debug_aq(struct iavf_hw *hw, enum iavf_debug_mask mask,
void *desc, void *buffer, u16 buf_len);
void i40e_idle_aq(struct iavf_hw *hw);
void iavf_resume_aq(struct iavf_hw *hw);
bool iavf_check_asq_alive(struct iavf_hw *hw);
iavf_status iavf_aq_queue_shutdown(struct iavf_hw *hw, bool unloading);
const char *iavf_aq_str(struct iavf_hw *hw, enum i40e_admin_queue_err aq_err);
const char *iavf_stat_str(struct iavf_hw *hw, iavf_status stat_err);
iavf_status iavf_aq_get_rss_lut(struct iavf_hw *hw, u16 seid,
bool pf_lut, u8 *lut, u16 lut_size);
iavf_status iavf_aq_set_rss_lut(struct iavf_hw *hw, u16 seid,
bool pf_lut, u8 *lut, u16 lut_size);
iavf_status iavf_aq_get_rss_key(struct iavf_hw *hw, u16 seid,
struct i40e_aqc_get_set_rss_key_data *key);
iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw, u16 seid,
struct i40e_aqc_get_set_rss_key_data *key);
iavf_status iavf_set_mac_type(struct iavf_hw *hw);
extern struct iavf_rx_ptype_decoded iavf_ptype_lookup[];
static inline struct iavf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
{
return iavf_ptype_lookup[ptype];
}
void iavf_vf_parse_hw_config(struct iavf_hw *hw,
struct virtchnl_vf_resource *msg);
iavf_status iavf_vf_reset(struct iavf_hw *hw);
iavf_status iavf_aq_send_msg_to_pf(struct iavf_hw *hw,
enum virtchnl_ops v_opcode,
iavf_status v_retval, u8 *msg, u16 msglen,
struct i40e_asq_cmd_details *cmd_details);
#endif /* _IAVF_PROTOTYPE_H_ */

View File

@ -0,0 +1,68 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _IAVF_REGISTER_H_
#define _IAVF_REGISTER_H_
#define IAVF_VF_ARQBAH1 0x00006000 /* Reset: EMPR */
#define IAVF_VF_ARQBAL1 0x00006C00 /* Reset: EMPR */
#define IAVF_VF_ARQH1 0x00007400 /* Reset: EMPR */
#define IAVF_VF_ARQH1_ARQH_SHIFT 0
#define IAVF_VF_ARQH1_ARQH_MASK IAVF_MASK(0x3FF, IAVF_VF_ARQH1_ARQH_SHIFT)
#define IAVF_VF_ARQLEN1 0x00008000 /* Reset: EMPR */
#define IAVF_VF_ARQLEN1_ARQVFE_SHIFT 28
#define IAVF_VF_ARQLEN1_ARQVFE_MASK IAVF_MASK(0x1, IAVF_VF_ARQLEN1_ARQVFE_SHIFT)
#define IAVF_VF_ARQLEN1_ARQOVFL_SHIFT 29
#define IAVF_VF_ARQLEN1_ARQOVFL_MASK IAVF_MASK(0x1, IAVF_VF_ARQLEN1_ARQOVFL_SHIFT)
#define IAVF_VF_ARQLEN1_ARQCRIT_SHIFT 30
#define IAVF_VF_ARQLEN1_ARQCRIT_MASK IAVF_MASK(0x1, IAVF_VF_ARQLEN1_ARQCRIT_SHIFT)
#define IAVF_VF_ARQLEN1_ARQENABLE_SHIFT 31
#define IAVF_VF_ARQLEN1_ARQENABLE_MASK IAVF_MASK(0x1, IAVF_VF_ARQLEN1_ARQENABLE_SHIFT)
#define IAVF_VF_ARQT1 0x00007000 /* Reset: EMPR */
#define IAVF_VF_ATQBAH1 0x00007800 /* Reset: EMPR */
#define IAVF_VF_ATQBAL1 0x00007C00 /* Reset: EMPR */
#define IAVF_VF_ATQH1 0x00006400 /* Reset: EMPR */
#define IAVF_VF_ATQLEN1 0x00006800 /* Reset: EMPR */
#define IAVF_VF_ATQLEN1_ATQVFE_SHIFT 28
#define IAVF_VF_ATQLEN1_ATQVFE_MASK IAVF_MASK(0x1, IAVF_VF_ATQLEN1_ATQVFE_SHIFT)
#define IAVF_VF_ATQLEN1_ATQOVFL_SHIFT 29
#define IAVF_VF_ATQLEN1_ATQOVFL_MASK IAVF_MASK(0x1, IAVF_VF_ATQLEN1_ATQOVFL_SHIFT)
#define IAVF_VF_ATQLEN1_ATQCRIT_SHIFT 30
#define IAVF_VF_ATQLEN1_ATQCRIT_MASK IAVF_MASK(0x1, IAVF_VF_ATQLEN1_ATQCRIT_SHIFT)
#define IAVF_VF_ATQLEN1_ATQENABLE_SHIFT 31
#define IAVF_VF_ATQLEN1_ATQENABLE_MASK IAVF_MASK(0x1, IAVF_VF_ATQLEN1_ATQENABLE_SHIFT)
#define IAVF_VF_ATQT1 0x00008400 /* Reset: EMPR */
#define IAVF_VFGEN_RSTAT 0x00008800 /* Reset: VFR */
#define IAVF_VFGEN_RSTAT_VFR_STATE_SHIFT 0
#define IAVF_VFGEN_RSTAT_VFR_STATE_MASK IAVF_MASK(0x3, IAVF_VFGEN_RSTAT_VFR_STATE_SHIFT)
#define IAVF_VFINT_DYN_CTL01 0x00005C00 /* Reset: VFR */
#define IAVF_VFINT_DYN_CTL01_INTENA_SHIFT 0
#define IAVF_VFINT_DYN_CTL01_INTENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTL01_INTENA_SHIFT)
#define IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT 3
#define IAVF_VFINT_DYN_CTL01_ITR_INDX_MASK IAVF_MASK(0x3, IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT)
#define IAVF_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */
#define IAVF_VFINT_DYN_CTLN1_INTENA_SHIFT 0
#define IAVF_VFINT_DYN_CTLN1_INTENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTLN1_INTENA_SHIFT)
#define IAVF_VFINT_DYN_CTLN1_SWINT_TRIG_SHIFT 2
#define IAVF_VFINT_DYN_CTLN1_SWINT_TRIG_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTLN1_SWINT_TRIG_SHIFT)
#define IAVF_VFINT_DYN_CTLN1_ITR_INDX_SHIFT 3
#define IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK IAVF_MASK(0x3, IAVF_VFINT_DYN_CTLN1_ITR_INDX_SHIFT)
#define IAVF_VFINT_DYN_CTLN1_INTERVAL_SHIFT 5
#define IAVF_VFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT 24
#define IAVF_VFINT_DYN_CTLN1_SW_ITR_INDX_ENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT)
#define IAVF_VFINT_ICR0_ENA1 0x00005000 /* Reset: CORER */
#define IAVF_VFINT_ICR0_ENA1_ADMINQ_SHIFT 30
#define IAVF_VFINT_ICR0_ENA1_ADMINQ_MASK IAVF_MASK(0x1, IAVF_VFINT_ICR0_ENA1_ADMINQ_SHIFT)
#define IAVF_VFINT_ICR0_ENA1_RSVD_SHIFT 31
#define IAVF_VFINT_ICR01 0x00004800 /* Reset: CORER */
#define IAVF_VFINT_ITRN1(_i, _INTVF) (0x00002800 + ((_i) * 64 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...15 */ /* Reset: VFR */
#define IAVF_QRX_TAIL1(_Q) (0x00002000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: CORER */
#define IAVF_QTX_TAIL1(_Q) (0x00000000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: PFR */
#define IAVF_VFQF_HENA(_i) (0x0000C400 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */
#define IAVF_VFQF_HKEY(_i) (0x0000CC00 + ((_i) * 4)) /* _i=0...12 */ /* Reset: CORER */
#define IAVF_VFQF_HKEY_MAX_INDEX 12
#define IAVF_VFQF_HLUT(_i) (0x0000D000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
#define IAVF_VFQF_HLUT_MAX_INDEX 15
#define IAVF_VFINT_DYN_CTLN1_WB_ON_ITR_SHIFT 30
#define IAVF_VFINT_DYN_CTLN1_WB_ON_ITR_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTLN1_WB_ON_ITR_SHIFT)
#endif /* _IAVF_REGISTER_H_ */
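
To show how these shift/mask pairs are meant to be combined, here is a sketch
of checking the VF reset state; rd32() is assumed from the osdep layer and the
VIRTCHNL_VFR_* states come from <linux/avf/virtchnl.h> (the helper name is
illustrative):

/* Illustrative: extract the 2-bit VFR state from VFGEN_RSTAT and
 * compare it against the "VF active" state.
 */
static bool iavf_example_reset_complete(struct iavf_hw *hw)
{
	u32 rstat = (rd32(hw, IAVF_VFGEN_RSTAT) &
		     IAVF_VFGEN_RSTAT_VFR_STATE_MASK) >>
		    IAVF_VFGEN_RSTAT_VFR_STATE_SHIFT;

	return rstat == VIRTCHNL_VFR_VFACTIVE;
}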

View File

@ -1,11 +1,11 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _I40E_STATUS_H_
#define _I40E_STATUS_H_
#ifndef _IAVF_STATUS_H_
#define _IAVF_STATUS_H_
/* Error Codes */
enum i40e_status_code {
enum iavf_status_code {
I40E_SUCCESS = 0,
I40E_ERR_NVM = -1,
I40E_ERR_NVM_CHECKSUM = -2,
@ -75,4 +75,4 @@ enum i40e_status_code {
I40E_ERR_ADMIN_QUEUE_CRITICAL_ERROR = -66,
};
#endif /* _I40E_STATUS_H_ */
#endif /* _IAVF_STATUS_H_ */

View File

@ -3,16 +3,16 @@
/* Modeled on trace-events-sample.h */
/* The trace subsystem name for i40evf will be "i40evf".
/* The trace subsystem name for iavf will be "iavf".
*
* This file is named i40e_trace.h.
* This file is named iavf_trace.h.
*
* Since this include file's name is different from the trace
* subsystem name, we'll have to define TRACE_INCLUDE_FILE at the end
* of this file.
*/
#undef TRACE_SYSTEM
#define TRACE_SYSTEM i40evf
#define TRACE_SYSTEM iavf
/* See trace-events-sample.h for a detailed description of why this
* guard clause is different from most normal include files.
@ -23,14 +23,14 @@
#include <linux/tracepoint.h>
/**
* i40e_trace() macro enables shared code to refer to trace points
* iavf_trace() macro enables shared code to refer to trace points
* like:
*
* trace_i40e{,vf}_example(args...)
* trace_iavf{,vf}_example(args...)
*
* ... as:
*
* i40e_trace(example, args...)
* iavf_trace(example, args...)
*
* ... to resolve to the PF or VF version of the tracepoint without
* ifdefs, and to allow tracepoints to be disabled entirely at build
@ -39,29 +39,29 @@
* Trace points should always be referred to in the driver via this
* macro.
*
* Similarly, i40e_trace_enabled(trace_name) wraps references to
* trace_i40e{,vf}_<trace_name>_enabled() functions.
* Similarly, iavf_trace_enabled(trace_name) wraps references to
* trace_iavf{,vf}_<trace_name>_enabled() functions.
*/
#define _I40E_TRACE_NAME(trace_name) (trace_ ## i40evf ## _ ## trace_name)
#define I40E_TRACE_NAME(trace_name) _I40E_TRACE_NAME(trace_name)
#define _IAVF_TRACE_NAME(trace_name) (trace_ ## iavf ## _ ## trace_name)
#define IAVF_TRACE_NAME(trace_name) _IAVF_TRACE_NAME(trace_name)
#define i40e_trace(trace_name, args...) I40E_TRACE_NAME(trace_name)(args)
#define iavf_trace(trace_name, args...) IAVF_TRACE_NAME(trace_name)(args)
#define i40e_trace_enabled(trace_name) I40E_TRACE_NAME(trace_name##_enabled)()
#define iavf_trace_enabled(trace_name) IAVF_TRACE_NAME(trace_name##_enabled)()
/* Events common to PF and VF. Corresponding versions will be defined
* for both, named trace_i40e_* and trace_i40evf_*. The i40e_trace()
* for both, named trace_i40e_* and trace_iavf_*. The iavf_trace()
* macro above will select the right trace point name for the driver
* being built from shared code.
*/
/* Events related to a vsi & ring */
DECLARE_EVENT_CLASS(
i40evf_tx_template,
iavf_tx_template,
TP_PROTO(struct i40e_ring *ring,
struct i40e_tx_desc *desc,
struct i40e_tx_buffer *buf),
TP_PROTO(struct iavf_ring *ring,
struct iavf_tx_desc *desc,
struct iavf_tx_buffer *buf),
TP_ARGS(ring, desc, buf),
@ -93,26 +93,26 @@ DECLARE_EVENT_CLASS(
);
DEFINE_EVENT(
i40evf_tx_template, i40evf_clean_tx_irq,
TP_PROTO(struct i40e_ring *ring,
struct i40e_tx_desc *desc,
struct i40e_tx_buffer *buf),
iavf_tx_template, iavf_clean_tx_irq,
TP_PROTO(struct iavf_ring *ring,
struct iavf_tx_desc *desc,
struct iavf_tx_buffer *buf),
TP_ARGS(ring, desc, buf));
DEFINE_EVENT(
i40evf_tx_template, i40evf_clean_tx_irq_unmap,
TP_PROTO(struct i40e_ring *ring,
struct i40e_tx_desc *desc,
struct i40e_tx_buffer *buf),
iavf_tx_template, iavf_clean_tx_irq_unmap,
TP_PROTO(struct iavf_ring *ring,
struct iavf_tx_desc *desc,
struct iavf_tx_buffer *buf),
TP_ARGS(ring, desc, buf));
DECLARE_EVENT_CLASS(
i40evf_rx_template,
iavf_rx_template,
TP_PROTO(struct i40e_ring *ring,
union i40e_32byte_rx_desc *desc,
TP_PROTO(struct iavf_ring *ring,
union iavf_32byte_rx_desc *desc,
struct sk_buff *skb),
TP_ARGS(ring, desc, skb),
@ -138,26 +138,26 @@ DECLARE_EVENT_CLASS(
);
DEFINE_EVENT(
i40evf_rx_template, i40evf_clean_rx_irq,
TP_PROTO(struct i40e_ring *ring,
union i40e_32byte_rx_desc *desc,
iavf_rx_template, iavf_clean_rx_irq,
TP_PROTO(struct iavf_ring *ring,
union iavf_32byte_rx_desc *desc,
struct sk_buff *skb),
TP_ARGS(ring, desc, skb));
DEFINE_EVENT(
i40evf_rx_template, i40evf_clean_rx_irq_rx,
TP_PROTO(struct i40e_ring *ring,
union i40e_32byte_rx_desc *desc,
iavf_rx_template, iavf_clean_rx_irq_rx,
TP_PROTO(struct iavf_ring *ring,
union iavf_32byte_rx_desc *desc,
struct sk_buff *skb),
TP_ARGS(ring, desc, skb));
DECLARE_EVENT_CLASS(
i40evf_xmit_template,
iavf_xmit_template,
TP_PROTO(struct sk_buff *skb,
struct i40e_ring *ring),
struct iavf_ring *ring),
TP_ARGS(skb, ring),
@ -180,23 +180,23 @@ DECLARE_EVENT_CLASS(
);
DEFINE_EVENT(
i40evf_xmit_template, i40evf_xmit_frame_ring,
iavf_xmit_template, iavf_xmit_frame_ring,
TP_PROTO(struct sk_buff *skb,
struct i40e_ring *ring),
struct iavf_ring *ring),
TP_ARGS(skb, ring));
DEFINE_EVENT(
i40evf_xmit_template, i40evf_xmit_frame_ring_drop,
iavf_xmit_template, iavf_xmit_frame_ring_drop,
TP_PROTO(struct sk_buff *skb,
struct i40e_ring *ring),
struct iavf_ring *ring),
TP_ARGS(skb, ring));
/* Events unique to the VF. */
#endif /* _I40E_TRACE_H_ */
/* This must be outside ifdef _I40E_TRACE_H */
#endif /* _IAVF_TRACE_H_ */
/* This must be outside ifdef _IAVF_TRACE_H */
/* This trace include file is not located in the .../include/trace
* with the kernel tracepoint definitions, because we're a loadable
@ -205,5 +205,5 @@ DEFINE_EVENT(
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_FILE i40e_trace
#define TRACE_INCLUDE_FILE iavf_trace
#include <trace/define_trace.h>

View File

@ -1,11 +1,11 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _I40E_TXRX_H_
#define _I40E_TXRX_H_
#ifndef _IAVF_TXRX_H_
#define _IAVF_TXRX_H_
/* Interrupt Throttling and Rate Limiting Goodies */
#define I40E_DEFAULT_IRQ_WORK 256
#define IAVF_DEFAULT_IRQ_WORK 256
/* The datasheet for the X710 and XL710 indicates that the maximum value for
* the ITR is 8160usec which is then called out as 0xFF0 with a 2usec
@ -13,80 +13,80 @@
* the register value, which is divided by 2, let's use the actual values and
* avoid an excessive amount of translation.
*/
#define I40E_ITR_DYNAMIC 0x8000 /* use top bit as a flag */
#define I40E_ITR_MASK 0x1FFE /* mask for ITR register value */
#define I40E_MIN_ITR 2 /* reg uses 2 usec resolution */
#define I40E_ITR_100K 10 /* all values below must be even */
#define I40E_ITR_50K 20
#define I40E_ITR_20K 50
#define I40E_ITR_18K 60
#define I40E_ITR_8K 122
#define I40E_MAX_ITR 8160 /* maximum value as per datasheet */
#define ITR_TO_REG(setting) ((setting) & ~I40E_ITR_DYNAMIC)
#define ITR_REG_ALIGN(setting) __ALIGN_MASK(setting, ~I40E_ITR_MASK)
#define ITR_IS_DYNAMIC(setting) (!!((setting) & I40E_ITR_DYNAMIC))
#define IAVF_ITR_DYNAMIC 0x8000 /* use top bit as a flag */
#define IAVF_ITR_MASK 0x1FFE /* mask for ITR register value */
#define IAVF_MIN_ITR 2 /* reg uses 2 usec resolution */
#define IAVF_ITR_100K 10 /* all values below must be even */
#define IAVF_ITR_50K 20
#define IAVF_ITR_20K 50
#define IAVF_ITR_18K 60
#define IAVF_ITR_8K 122
#define IAVF_MAX_ITR 8160 /* maximum value as per datasheet */
#define ITR_TO_REG(setting) ((setting) & ~IAVF_ITR_DYNAMIC)
#define ITR_REG_ALIGN(setting) __ALIGN_MASK(setting, ~IAVF_ITR_MASK)
#define ITR_IS_DYNAMIC(setting) (!!((setting) & IAVF_ITR_DYNAMIC))
#define I40E_ITR_RX_DEF (I40E_ITR_20K | I40E_ITR_DYNAMIC)
#define I40E_ITR_TX_DEF (I40E_ITR_20K | I40E_ITR_DYNAMIC)
#define IAVF_ITR_RX_DEF (IAVF_ITR_20K | IAVF_ITR_DYNAMIC)
#define IAVF_ITR_TX_DEF (IAVF_ITR_20K | IAVF_ITR_DYNAMIC)
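
A short illustration of how a stored setting relates to what the hot path
eventually programs, given the 2 usec register resolution noted above (a
sketch; the helper name is illustrative, and the halving is an inference from
the comment above, not a helper the driver defines):

/* Illustrative: IAVF_ITR_RX_DEF stores (0x8000 | 50), i.e. the dynamic
 * flag plus a 50 usec base interval (20K interrupts/sec).  ITR_TO_REG()
 * strips the flag; halving then yields the 2 usec register units.
 */
static u16 iavf_example_itr_reg_units(u16 setting)
{
	return ITR_TO_REG(setting) >> 1;	/* usecs -> 2 usec units */
}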
/* 0x40 is the enable bit for interrupt rate limiting, and must be set if
* the value of the rate limit is non-zero
*/
#define INTRL_ENA BIT(6)
#define I40E_MAX_INTRL 0x3B /* reg uses 4 usec resolution */
#define IAVF_MAX_INTRL 0x3B /* reg uses 4 usec resolution */
#define INTRL_REG_TO_USEC(intrl) ((intrl & ~INTRL_ENA) << 2)
#define INTRL_USEC_TO_REG(set) ((set) ? ((set) >> 2) | INTRL_ENA : 0)
#define I40E_INTRL_8K 125 /* 8000 ints/sec */
#define I40E_INTRL_62K 16 /* 62500 ints/sec */
#define I40E_INTRL_83K 12 /* 83333 ints/sec */
#define IAVF_INTRL_8K 125 /* 8000 ints/sec */
#define IAVF_INTRL_62K 16 /* 62500 ints/sec */
#define IAVF_INTRL_83K 12 /* 83333 ints/sec */
#define I40E_QUEUE_END_OF_LIST 0x7FF
#define IAVF_QUEUE_END_OF_LIST 0x7FF
/* this enum matches hardware bits and is meant to be used by DYN_CTLN
* registers and QINT registers, or more generally anywhere in the manual
* that mentions ITR_INDX. ITR_NONE cannot be used as an index 'n' into any
* register but instead is a special value meaning "don't update" ITR0/1/2.
*/
enum i40e_dyn_idx_t {
I40E_IDX_ITR0 = 0,
I40E_IDX_ITR1 = 1,
I40E_IDX_ITR2 = 2,
I40E_ITR_NONE = 3 /* ITR_NONE must not be used as an index */
enum iavf_dyn_idx_t {
IAVF_IDX_ITR0 = 0,
IAVF_IDX_ITR1 = 1,
IAVF_IDX_ITR2 = 2,
IAVF_ITR_NONE = 3 /* ITR_NONE must not be used as an index */
};
/* these are indexes into ITRN registers */
#define I40E_RX_ITR I40E_IDX_ITR0
#define I40E_TX_ITR I40E_IDX_ITR1
#define I40E_PE_ITR I40E_IDX_ITR2
#define IAVF_RX_ITR IAVF_IDX_ITR0
#define IAVF_TX_ITR IAVF_IDX_ITR1
#define IAVF_PE_ITR IAVF_IDX_ITR2
/* Supported RSS offloads */
#define I40E_DEFAULT_RSS_HENA ( \
BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_UDP) | \
BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_SCTP) | \
BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP) | \
BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_OTHER) | \
BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV4) | \
BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_UDP) | \
BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_TCP) | \
BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_SCTP) | \
BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_OTHER) | \
BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV6) | \
BIT_ULL(I40E_FILTER_PCTYPE_L2_PAYLOAD))
#define IAVF_DEFAULT_RSS_HENA ( \
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_IPV4_UDP) | \
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_IPV4_SCTP) | \
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_IPV4_TCP) | \
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_IPV4_OTHER) | \
BIT_ULL(IAVF_FILTER_PCTYPE_FRAG_IPV4) | \
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_IPV6_UDP) | \
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_IPV6_TCP) | \
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_IPV6_SCTP) | \
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_IPV6_OTHER) | \
BIT_ULL(IAVF_FILTER_PCTYPE_FRAG_IPV6) | \
BIT_ULL(IAVF_FILTER_PCTYPE_L2_PAYLOAD))
#define I40E_DEFAULT_RSS_HENA_EXPANDED (I40E_DEFAULT_RSS_HENA | \
BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK) | \
BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP) | \
BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP) | \
BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK) | \
BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP) | \
BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP))
#define IAVF_DEFAULT_RSS_HENA_EXPANDED (IAVF_DEFAULT_RSS_HENA | \
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK) | \
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP) | \
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP) | \
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK) | \
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP) | \
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP))
/* Supported Rx Buffer Sizes (a multiple of 128) */
#define I40E_RXBUFFER_256 256
#define I40E_RXBUFFER_1536 1536 /* 128B aligned standard Ethernet frame */
#define I40E_RXBUFFER_2048 2048
#define I40E_RXBUFFER_3072 3072 /* Used for large frames w/ padding */
#define I40E_MAX_RXBUFFER 9728 /* largest size for single descriptor */
#define IAVF_RXBUFFER_256 256
#define IAVF_RXBUFFER_1536 1536 /* 128B aligned standard Ethernet frame */
#define IAVF_RXBUFFER_2048 2048
#define IAVF_RXBUFFER_3072 3072 /* Used for large frames w/ padding */
#define IAVF_MAX_RXBUFFER 9728 /* largest size for single descriptor */
/* NOTE: netdev_alloc_skb reserves up to 64 bytes, NET_IP_ALIGN means we
* reserve 2 more, and skb_shared_info adds an additional 384 bytes more,
@ -95,11 +95,11 @@ enum i40e_dyn_idx_t {
* i.e. RXBUFFER_256 --> 960 byte skb (size-1024 slab)
* i.e. RXBUFFER_512 --> 1216 byte skb (size-2048 slab)
*/
#define I40E_RX_HDR_SIZE I40E_RXBUFFER_256
#define I40E_PACKET_HDR_PAD (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2))
#define i40e_rx_desc i40e_32byte_rx_desc
#define IAVF_RX_HDR_SIZE IAVF_RXBUFFER_256
#define IAVF_PACKET_HDR_PAD (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2))
#define iavf_rx_desc iavf_32byte_rx_desc
#define I40E_RX_DMA_ATTR \
#define IAVF_RX_DMA_ATTR \
(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
/* Attempt to maximize the headroom available for incoming frames. We
@ -113,10 +113,10 @@ enum i40e_dyn_idx_t {
* receive path.
*/
#if (PAGE_SIZE < 8192)
#define I40E_2K_TOO_SMALL_WITH_PADDING \
((NET_SKB_PAD + I40E_RXBUFFER_1536) > SKB_WITH_OVERHEAD(I40E_RXBUFFER_2048))
#define IAVF_2K_TOO_SMALL_WITH_PADDING \
((NET_SKB_PAD + IAVF_RXBUFFER_1536) > SKB_WITH_OVERHEAD(IAVF_RXBUFFER_2048))
static inline int i40e_compute_pad(int rx_buf_len)
static inline int iavf_compute_pad(int rx_buf_len)
{
int page_size, pad_size;
@ -126,7 +126,7 @@ static inline int i40e_compute_pad(int rx_buf_len)
return pad_size;
}
static inline int i40e_skb_pad(void)
static inline int iavf_skb_pad(void)
{
int rx_buf_len;
@ -137,25 +137,25 @@ static inline int i40e_skb_pad(void)
* tailroom due to NET_IP_ALIGN possibly shifting us out of
* cache-line alignment.
*/
if (I40E_2K_TOO_SMALL_WITH_PADDING)
rx_buf_len = I40E_RXBUFFER_3072 + SKB_DATA_ALIGN(NET_IP_ALIGN);
if (IAVF_2K_TOO_SMALL_WITH_PADDING)
rx_buf_len = IAVF_RXBUFFER_3072 + SKB_DATA_ALIGN(NET_IP_ALIGN);
else
rx_buf_len = I40E_RXBUFFER_1536;
rx_buf_len = IAVF_RXBUFFER_1536;
/* if needed make room for NET_IP_ALIGN */
rx_buf_len -= NET_IP_ALIGN;
return i40e_compute_pad(rx_buf_len);
return iavf_compute_pad(rx_buf_len);
}
#define I40E_SKB_PAD i40e_skb_pad()
#define IAVF_SKB_PAD iavf_skb_pad()
#else
#define I40E_2K_TOO_SMALL_WITH_PADDING false
#define I40E_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
#define IAVF_2K_TOO_SMALL_WITH_PADDING false
#define IAVF_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
#endif
/**
* i40e_test_staterr - tests bits in Rx descriptor status and error fields
* iavf_test_staterr - tests bits in Rx descriptor status and error fields
* @rx_desc: pointer to receive descriptor (in le64 format)
* @stat_err_bits: value to mask
*
@ -164,7 +164,7 @@ static inline int i40e_skb_pad(void)
* The status_error_len doesn't need to be shifted because it begins
* at offset zero.
*/
static inline bool i40e_test_staterr(union i40e_rx_desc *rx_desc,
static inline bool iavf_test_staterr(union iavf_rx_desc *rx_desc,
const u64 stat_err_bits)
{
return !!(rx_desc->wb.qword1.status_error_len &
@ -172,8 +172,7 @@ static inline bool i40e_test_staterr(union i40e_rx_desc *rx_desc,
}
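
For reference, the Rx clean-up path uses this helper roughly as below; the DD
bit position comes from iavf_rx_desc_status_bits in iavf_type.h (the wrapper
name is illustrative):

/* Illustrative: test the DD (descriptor done) bit before reading any
 * other writeback fields of the descriptor.
 */
static inline bool iavf_example_rx_desc_done(union iavf_rx_desc *rx_desc)
{
	return iavf_test_staterr(rx_desc,
				 BIT(IAVF_RX_DESC_STATUS_DD_SHIFT));
}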
/* How many Rx Buffers do we bundle into one write to the hardware? */
#define I40E_RX_BUFFER_WRITE 32 /* Must be power of 2 */
#define I40E_RX_INCREMENT(r, i) \
#define IAVF_RX_INCREMENT(r, i) \
do { \
(i)++; \
if ((i) == (r)->count) \
@ -181,34 +180,34 @@ static inline bool i40e_test_staterr(union i40e_rx_desc *rx_desc,
r->next_to_clean = i; \
} while (0)
#define I40E_RX_NEXT_DESC(r, i, n) \
#define IAVF_RX_NEXT_DESC(r, i, n) \
do { \
(i)++; \
if ((i) == (r)->count) \
i = 0; \
(n) = I40E_RX_DESC((r), (i)); \
(n) = IAVF_RX_DESC((r), (i)); \
} while (0)
#define I40E_RX_NEXT_DESC_PREFETCH(r, i, n) \
#define IAVF_RX_NEXT_DESC_PREFETCH(r, i, n) \
do { \
I40E_RX_NEXT_DESC((r), (i), (n)); \
IAVF_RX_NEXT_DESC((r), (i), (n)); \
prefetch((n)); \
} while (0)
#define I40E_MAX_BUFFER_TXD 8
#define I40E_MIN_TX_LEN 17
#define IAVF_MAX_BUFFER_TXD 8
#define IAVF_MIN_TX_LEN 17
/* The size limit for a transmit buffer in a descriptor is (16K - 1).
* In order to align with the read requests we will align the value to
* the nearest 4K which represents our maximum read request size.
*/
#define I40E_MAX_READ_REQ_SIZE 4096
#define I40E_MAX_DATA_PER_TXD (16 * 1024 - 1)
#define I40E_MAX_DATA_PER_TXD_ALIGNED \
(I40E_MAX_DATA_PER_TXD & ~(I40E_MAX_READ_REQ_SIZE - 1))
#define IAVF_MAX_READ_REQ_SIZE 4096
#define IAVF_MAX_DATA_PER_TXD (16 * 1024 - 1)
#define IAVF_MAX_DATA_PER_TXD_ALIGNED \
(IAVF_MAX_DATA_PER_TXD & ~(IAVF_MAX_READ_REQ_SIZE - 1))
/**
* i40e_txd_use_count - estimate the number of descriptors needed for Tx
* iavf_txd_use_count - estimate the number of descriptors needed for Tx
* @size: transmit request size in bytes
*
* Due to hardware alignment restrictions (4K alignment), we need to
@ -235,31 +234,31 @@ static inline bool i40e_test_staterr(union i40e_rx_desc *rx_desc,
* operations into:
* return ((size * 85) >> 20) + 1;
*/
static inline unsigned int i40e_txd_use_count(unsigned int size)
static inline unsigned int iavf_txd_use_count(unsigned int size)
{
return ((size * 85) >> 20) + 1;
}
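
As a quick sanity check of the approximation (a worked example, not from the
source): a 12288-byte fragment gives 12288 * 85 = 1044480, and
1044480 >> 20 = 0, so the helper returns 1 descriptor, matching the 12K-aligned
limit derived from (16K - 1) above; a 65536-byte fragment gives
65536 * 85 = 5570560, and 5570560 >> 20 = 5, hence 6 descriptors, i.e. one per
12K of data.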
/* Tx Descriptors needed, worst case */
#define DESC_NEEDED (MAX_SKB_FRAGS + 6)
#define I40E_MIN_DESC_PENDING 4
#define IAVF_MIN_DESC_PENDING 4
#define I40E_TX_FLAGS_HW_VLAN BIT(1)
#define I40E_TX_FLAGS_SW_VLAN BIT(2)
#define I40E_TX_FLAGS_TSO BIT(3)
#define I40E_TX_FLAGS_IPV4 BIT(4)
#define I40E_TX_FLAGS_IPV6 BIT(5)
#define I40E_TX_FLAGS_FCCRC BIT(6)
#define I40E_TX_FLAGS_FSO BIT(7)
#define I40E_TX_FLAGS_FD_SB BIT(9)
#define I40E_TX_FLAGS_VXLAN_TUNNEL BIT(10)
#define I40E_TX_FLAGS_VLAN_MASK 0xffff0000
#define I40E_TX_FLAGS_VLAN_PRIO_MASK 0xe0000000
#define I40E_TX_FLAGS_VLAN_PRIO_SHIFT 29
#define I40E_TX_FLAGS_VLAN_SHIFT 16
#define IAVF_TX_FLAGS_HW_VLAN BIT(1)
#define IAVF_TX_FLAGS_SW_VLAN BIT(2)
#define IAVF_TX_FLAGS_TSO BIT(3)
#define IAVF_TX_FLAGS_IPV4 BIT(4)
#define IAVF_TX_FLAGS_IPV6 BIT(5)
#define IAVF_TX_FLAGS_FCCRC BIT(6)
#define IAVF_TX_FLAGS_FSO BIT(7)
#define IAVF_TX_FLAGS_FD_SB BIT(9)
#define IAVF_TX_FLAGS_VXLAN_TUNNEL BIT(10)
#define IAVF_TX_FLAGS_VLAN_MASK 0xffff0000
#define IAVF_TX_FLAGS_VLAN_PRIO_MASK 0xe0000000
#define IAVF_TX_FLAGS_VLAN_PRIO_SHIFT 29
#define IAVF_TX_FLAGS_VLAN_SHIFT 16
struct i40e_tx_buffer {
struct i40e_tx_desc *next_to_watch;
struct iavf_tx_buffer {
struct iavf_tx_desc *next_to_watch;
union {
struct sk_buff *skb;
void *raw_buf;
@ -272,7 +271,7 @@ struct i40e_tx_buffer {
u32 tx_flags;
};
struct i40e_rx_buffer {
struct iavf_rx_buffer {
dma_addr_t dma;
struct page *page;
#if (BITS_PER_LONG > 32) || (PAGE_SIZE >= 65536)
@ -283,12 +282,12 @@ struct i40e_rx_buffer {
__u16 pagecnt_bias;
};
struct i40e_queue_stats {
struct iavf_queue_stats {
u64 packets;
u64 bytes;
};
struct i40e_tx_queue_stats {
struct iavf_tx_queue_stats {
u64 restart_queue;
u64 tx_busy;
u64 tx_done_old;
@ -298,7 +297,7 @@ struct i40e_tx_queue_stats {
u64 tx_lost_interrupt;
};
struct i40e_rx_queue_stats {
struct iavf_rx_queue_stats {
u64 non_eop_descs;
u64 alloc_page_failed;
u64 alloc_buff_failed;
@ -306,34 +305,34 @@ struct i40e_rx_queue_stats {
u64 realloc_count;
};
enum i40e_ring_state_t {
__I40E_TX_FDIR_INIT_DONE,
__I40E_TX_XPS_INIT_DONE,
__I40E_RING_STATE_NBITS /* must be last */
enum iavf_ring_state_t {
__IAVF_TX_FDIR_INIT_DONE,
__IAVF_TX_XPS_INIT_DONE,
__IAVF_RING_STATE_NBITS /* must be last */
};
/* some useful defines for virtchannel interface, which
* is the only remaining user of header split
*/
#define I40E_RX_DTYPE_NO_SPLIT 0
#define I40E_RX_DTYPE_HEADER_SPLIT 1
#define I40E_RX_DTYPE_SPLIT_ALWAYS 2
#define I40E_RX_SPLIT_L2 0x1
#define I40E_RX_SPLIT_IP 0x2
#define I40E_RX_SPLIT_TCP_UDP 0x4
#define I40E_RX_SPLIT_SCTP 0x8
#define IAVF_RX_DTYPE_NO_SPLIT 0
#define IAVF_RX_DTYPE_HEADER_SPLIT 1
#define IAVF_RX_DTYPE_SPLIT_ALWAYS 2
#define IAVF_RX_SPLIT_L2 0x1
#define IAVF_RX_SPLIT_IP 0x2
#define IAVF_RX_SPLIT_TCP_UDP 0x4
#define IAVF_RX_SPLIT_SCTP 0x8
/* struct that defines a descriptor ring, associated with a VSI */
struct i40e_ring {
struct i40e_ring *next; /* pointer to next ring in q_vector */
struct iavf_ring {
struct iavf_ring *next; /* pointer to next ring in q_vector */
void *desc; /* Descriptor ring memory */
struct device *dev; /* Used for DMA mapping */
struct net_device *netdev; /* netdev ring maps to */
union {
struct i40e_tx_buffer *tx_bi;
struct i40e_rx_buffer *rx_bi;
struct iavf_tx_buffer *tx_bi;
struct iavf_rx_buffer *rx_bi;
};
DECLARE_BITMAP(state, __I40E_RING_STATE_NBITS);
DECLARE_BITMAP(state, __IAVF_RING_STATE_NBITS);
u16 queue_index; /* Queue number of ring */
u8 dcb_tc; /* Traffic class of ring */
u8 __iomem *tail;
@ -361,59 +360,59 @@ struct i40e_ring {
u8 packet_stride;
u16 flags;
#define I40E_TXR_FLAGS_WB_ON_ITR BIT(0)
#define I40E_RXR_FLAGS_BUILD_SKB_ENABLED BIT(1)
#define IAVF_TXR_FLAGS_WB_ON_ITR BIT(0)
#define IAVF_RXR_FLAGS_BUILD_SKB_ENABLED BIT(1)
/* stats structs */
struct i40e_queue_stats stats;
struct iavf_queue_stats stats;
struct u64_stats_sync syncp;
union {
struct i40e_tx_queue_stats tx_stats;
struct i40e_rx_queue_stats rx_stats;
struct iavf_tx_queue_stats tx_stats;
struct iavf_rx_queue_stats rx_stats;
};
unsigned int size; /* length of descriptor ring in bytes */
dma_addr_t dma; /* physical address of ring */
struct i40e_vsi *vsi; /* Backreference to associated VSI */
struct i40e_q_vector *q_vector; /* Backreference to associated vector */
struct iavf_vsi *vsi; /* Backreference to associated VSI */
struct iavf_q_vector *q_vector; /* Backreference to associated vector */
struct rcu_head rcu; /* to avoid race on free */
u16 next_to_alloc;
struct sk_buff *skb; /* When i40evf_clean_rx_ring_irq() must
struct sk_buff *skb; /* When iavf_clean_rx_ring_irq() must
* return before it sees the EOP for
* the current packet, we save that skb
* here and resume receiving this
* packet the next time
* i40evf_clean_rx_ring_irq() is called
* iavf_clean_rx_ring_irq() is called
* for this ring.
*/
} ____cacheline_internodealigned_in_smp;
static inline bool ring_uses_build_skb(struct i40e_ring *ring)
static inline bool ring_uses_build_skb(struct iavf_ring *ring)
{
return !!(ring->flags & I40E_RXR_FLAGS_BUILD_SKB_ENABLED);
return !!(ring->flags & IAVF_RXR_FLAGS_BUILD_SKB_ENABLED);
}
static inline void set_ring_build_skb_enabled(struct i40e_ring *ring)
static inline void set_ring_build_skb_enabled(struct iavf_ring *ring)
{
ring->flags |= I40E_RXR_FLAGS_BUILD_SKB_ENABLED;
ring->flags |= IAVF_RXR_FLAGS_BUILD_SKB_ENABLED;
}
static inline void clear_ring_build_skb_enabled(struct i40e_ring *ring)
static inline void clear_ring_build_skb_enabled(struct iavf_ring *ring)
{
ring->flags &= ~I40E_RXR_FLAGS_BUILD_SKB_ENABLED;
ring->flags &= ~IAVF_RXR_FLAGS_BUILD_SKB_ENABLED;
}
#define I40E_ITR_ADAPTIVE_MIN_INC 0x0002
#define I40E_ITR_ADAPTIVE_MIN_USECS 0x0002
#define I40E_ITR_ADAPTIVE_MAX_USECS 0x007e
#define I40E_ITR_ADAPTIVE_LATENCY 0x8000
#define I40E_ITR_ADAPTIVE_BULK 0x0000
#define ITR_IS_BULK(x) (!((x) & I40E_ITR_ADAPTIVE_LATENCY))
#define IAVF_ITR_ADAPTIVE_MIN_INC 0x0002
#define IAVF_ITR_ADAPTIVE_MIN_USECS 0x0002
#define IAVF_ITR_ADAPTIVE_MAX_USECS 0x007e
#define IAVF_ITR_ADAPTIVE_LATENCY 0x8000
#define IAVF_ITR_ADAPTIVE_BULK 0x0000
#define ITR_IS_BULK(x) (!((x) & IAVF_ITR_ADAPTIVE_LATENCY))
struct i40e_ring_container {
struct i40e_ring *ring; /* pointer to linked list of ring(s) */
struct iavf_ring_container {
struct iavf_ring *ring; /* pointer to linked list of ring(s) */
unsigned long next_update; /* jiffies value of next update */
unsigned int total_bytes; /* total bytes processed this int */
unsigned int total_packets; /* total packets processed this int */
@ -423,10 +422,10 @@ struct i40e_ring_container {
};
/* iterator for handling rings in ring container */
#define i40e_for_each_ring(pos, head) \
#define iavf_for_each_ring(pos, head) \
for (pos = (head).ring; pos != NULL; pos = pos->next)
static inline unsigned int i40e_rx_pg_order(struct i40e_ring *ring)
static inline unsigned int iavf_rx_pg_order(struct iavf_ring *ring)
{
#if (PAGE_SIZE < 8192)
if (ring->rx_buf_len > (PAGE_SIZE / 2))
@ -435,25 +434,25 @@ static inline unsigned int i40e_rx_pg_order(struct i40e_ring *ring)
return 0;
}
#define i40e_rx_pg_size(_ring) (PAGE_SIZE << i40e_rx_pg_order(_ring))
#define iavf_rx_pg_size(_ring) (PAGE_SIZE << iavf_rx_pg_order(_ring))
bool i40evf_alloc_rx_buffers(struct i40e_ring *rxr, u16 cleaned_count);
netdev_tx_t i40evf_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
void i40evf_clean_tx_ring(struct i40e_ring *tx_ring);
void i40evf_clean_rx_ring(struct i40e_ring *rx_ring);
int i40evf_setup_tx_descriptors(struct i40e_ring *tx_ring);
int i40evf_setup_rx_descriptors(struct i40e_ring *rx_ring);
void i40evf_free_tx_resources(struct i40e_ring *tx_ring);
void i40evf_free_rx_resources(struct i40e_ring *rx_ring);
int i40evf_napi_poll(struct napi_struct *napi, int budget);
void i40evf_force_wb(struct i40e_vsi *vsi, struct i40e_q_vector *q_vector);
u32 i40evf_get_tx_pending(struct i40e_ring *ring, bool in_sw);
void i40evf_detect_recover_hung(struct i40e_vsi *vsi);
int __i40evf_maybe_stop_tx(struct i40e_ring *tx_ring, int size);
bool __i40evf_chk_linearize(struct sk_buff *skb);
bool iavf_alloc_rx_buffers(struct iavf_ring *rxr, u16 cleaned_count);
netdev_tx_t iavf_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
void iavf_clean_tx_ring(struct iavf_ring *tx_ring);
void iavf_clean_rx_ring(struct iavf_ring *rx_ring);
int iavf_setup_tx_descriptors(struct iavf_ring *tx_ring);
int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring);
void iavf_free_tx_resources(struct iavf_ring *tx_ring);
void iavf_free_rx_resources(struct iavf_ring *rx_ring);
int iavf_napi_poll(struct napi_struct *napi, int budget);
void iavf_force_wb(struct iavf_vsi *vsi, struct iavf_q_vector *q_vector);
u32 iavf_get_tx_pending(struct iavf_ring *ring, bool in_sw);
void iavf_detect_recover_hung(struct iavf_vsi *vsi);
int __iavf_maybe_stop_tx(struct iavf_ring *tx_ring, int size);
bool __iavf_chk_linearize(struct sk_buff *skb);
/**
* i40e_xmit_descriptor_count - calculate number of Tx descriptors needed
* iavf_xmit_descriptor_count - calculate number of Tx descriptors needed
* @skb: send buffer
* @tx_ring: ring to send buffer on
*
@ -461,14 +460,14 @@ bool __i40evf_chk_linearize(struct sk_buff *skb);
* there are not enough descriptors available in this ring since we need at least
* one descriptor.
**/
static inline int i40e_xmit_descriptor_count(struct sk_buff *skb)
static inline int iavf_xmit_descriptor_count(struct sk_buff *skb)
{
const struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0];
unsigned int nr_frags = skb_shinfo(skb)->nr_frags;
int count = 0, size = skb_headlen(skb);
for (;;) {
count += i40e_txd_use_count(size);
count += iavf_txd_use_count(size);
if (!nr_frags--)
break;
@ -480,21 +479,21 @@ static inline int i40e_xmit_descriptor_count(struct sk_buff *skb)
}
/**
* i40e_maybe_stop_tx - 1st level check for Tx stop conditions
* iavf_maybe_stop_tx - 1st level check for Tx stop conditions
* @tx_ring: the ring to be checked
* @size: the size buffer we want to assure is available
*
* Returns 0 if stop is not needed
**/
static inline int i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)
static inline int iavf_maybe_stop_tx(struct iavf_ring *tx_ring, int size)
{
if (likely(I40E_DESC_UNUSED(tx_ring) >= size))
if (likely(IAVF_DESC_UNUSED(tx_ring) >= size))
return 0;
return __i40evf_maybe_stop_tx(tx_ring, size);
return __iavf_maybe_stop_tx(tx_ring, size);
}
/**
* i40e_chk_linearize - Check if there are more than 8 fragments per packet
* iavf_chk_linearize - Check if there are more than 8 fragments per packet
* @skb: send buffer
* @count: number of buffers used
*
@ -502,23 +501,23 @@ static inline int i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size)
* a packet on the wire and so we need to figure out the cases where we
* need to linearize the skb.
**/
static inline bool i40e_chk_linearize(struct sk_buff *skb, int count)
static inline bool iavf_chk_linearize(struct sk_buff *skb, int count)
{
/* Both TSO and single send will work if count is less than 8 */
if (likely(count < I40E_MAX_BUFFER_TXD))
if (likely(count < IAVF_MAX_BUFFER_TXD))
return false;
if (skb_is_gso(skb))
return __i40evf_chk_linearize(skb);
return __iavf_chk_linearize(skb);
/* we can support up to 8 data buffers for a single send */
return count != I40E_MAX_BUFFER_TXD;
return count != IAVF_MAX_BUFFER_TXD;
}
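
Taken together with iavf_xmit_descriptor_count() and iavf_maybe_stop_tx()
above, the transmit hot path strings these checks together roughly as in the
sketch below (error paths trimmed; the function name is illustrative):

/* Illustrative ordering: count descriptors first, linearize if the skb
 * carries too many fragments for one packet, then make sure the ring
 * has room (the "+ 4 + 1" leaves slack for context descriptors and the
 * gap the ring always keeps).
 */
static netdev_tx_t iavf_example_xmit(struct sk_buff *skb,
				     struct iavf_ring *tx_ring)
{
	int count = iavf_xmit_descriptor_count(skb);

	if (iavf_chk_linearize(skb, count)) {
		if (__skb_linearize(skb)) {
			dev_kfree_skb_any(skb);
			return NETDEV_TX_OK;
		}
		count = iavf_txd_use_count(skb->len);
	}

	if (iavf_maybe_stop_tx(tx_ring, count + 4 + 1))
		return NETDEV_TX_BUSY;

	/* ... map buffers and fill descriptors ... */
	return NETDEV_TX_OK;
}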
/**
* txring_txq - Find the netdev queue that corresponds to a Tx ring
* @ring: Tx ring to find the netdev equivalent of
**/
static inline struct netdev_queue *txring_txq(const struct i40e_ring *ring)
static inline struct netdev_queue *txring_txq(const struct iavf_ring *ring)
{
return netdev_get_tx_queue(ring->netdev, ring->queue_index);
}
#endif /* _I40E_TXRX_H_ */
#endif /* _IAVF_TXRX_H_ */

View File

@ -0,0 +1,688 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#ifndef _IAVF_TYPE_H_
#define _IAVF_TYPE_H_
#include "iavf_status.h"
#include "iavf_osdep.h"
#include "iavf_register.h"
#include "i40e_adminq.h"
#include "iavf_devids.h"
#define IAVF_RXQ_CTX_DBUFF_SHIFT 7
/* IAVF_MASK is a macro used on 32 bit registers */
#define IAVF_MASK(mask, shift) ((u32)(mask) << (shift))
#define IAVF_MAX_VSI_QP 16
#define IAVF_MAX_VF_VSI 3
#define IAVF_MAX_CHAINED_RX_BUFFERS 5
/* forward declaration */
struct iavf_hw;
typedef void (*I40E_ADMINQ_CALLBACK)(struct iavf_hw *, struct i40e_aq_desc *);
/* Data type manipulation macros. */
#define IAVF_DESC_UNUSED(R) \
((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
(R)->next_to_clean - (R)->next_to_use - 1)
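(As a worked example, not from the source: on a 512-entry ring with
next_to_clean = 5 and next_to_use = 10, five descriptors are in flight, so the
macro yields 512 + 5 - 10 - 1 = 506 free slots; the "- 1" keeps one slot empty
so a full ring and an empty ring remain distinguishable.)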
/* bitfields for Tx queue mapping in QTX_CTL */
#define IAVF_QTX_CTL_VF_QUEUE 0x0
#define IAVF_QTX_CTL_VM_QUEUE 0x1
#define IAVF_QTX_CTL_PF_QUEUE 0x2
/* debug masks - set these bits in hw->debug_mask to control output */
enum iavf_debug_mask {
IAVF_DEBUG_INIT = 0x00000001,
IAVF_DEBUG_RELEASE = 0x00000002,
IAVF_DEBUG_LINK = 0x00000010,
IAVF_DEBUG_PHY = 0x00000020,
IAVF_DEBUG_HMC = 0x00000040,
IAVF_DEBUG_NVM = 0x00000080,
IAVF_DEBUG_LAN = 0x00000100,
IAVF_DEBUG_FLOW = 0x00000200,
IAVF_DEBUG_DCB = 0x00000400,
IAVF_DEBUG_DIAG = 0x00000800,
IAVF_DEBUG_FD = 0x00001000,
IAVF_DEBUG_PACKAGE = 0x00002000,
IAVF_DEBUG_AQ_MESSAGE = 0x01000000,
IAVF_DEBUG_AQ_DESCRIPTOR = 0x02000000,
IAVF_DEBUG_AQ_DESC_BUFFER = 0x04000000,
IAVF_DEBUG_AQ_COMMAND = 0x06000000,
IAVF_DEBUG_AQ = 0x0F000000,
IAVF_DEBUG_USER = 0xF0000000,
IAVF_DEBUG_ALL = 0xFFFFFFFF
};
/* These are structs for managing the hardware information and the operations.
* The structures of function pointers are filled out at init time when we
* know for sure exactly which hardware we're working with. This gives us the
* flexibility of using the same main driver code but adapting to slightly
* different hardware needs as new parts are developed. For this architecture,
* the Firmware and AdminQ are intended to insulate the driver from most of the
* future changes, but these structures will also do part of the job.
*/
enum iavf_mac_type {
IAVF_MAC_UNKNOWN = 0,
IAVF_MAC_XL710,
IAVF_MAC_VF,
IAVF_MAC_X722,
IAVF_MAC_X722_VF,
IAVF_MAC_GENERIC,
};
enum iavf_vsi_type {
IAVF_VSI_MAIN = 0,
IAVF_VSI_VMDQ1 = 1,
IAVF_VSI_VMDQ2 = 2,
IAVF_VSI_CTRL = 3,
IAVF_VSI_FCOE = 4,
IAVF_VSI_MIRROR = 5,
IAVF_VSI_SRIOV = 6,
IAVF_VSI_FDIR = 7,
IAVF_VSI_TYPE_UNKNOWN
};
enum iavf_queue_type {
IAVF_QUEUE_TYPE_RX = 0,
IAVF_QUEUE_TYPE_TX,
IAVF_QUEUE_TYPE_PE_CEQ,
IAVF_QUEUE_TYPE_UNKNOWN
};
#define IAVF_HW_CAP_MAX_GPIO 30
/* Capabilities of a PF or a VF or the whole device */
struct iavf_hw_capabilities {
bool dcb;
bool fcoe;
u32 num_vsis;
u32 num_rx_qp;
u32 num_tx_qp;
u32 base_queue;
u32 num_msix_vectors_vf;
};
struct iavf_mac_info {
enum iavf_mac_type type;
u8 addr[ETH_ALEN];
u8 perm_addr[ETH_ALEN];
u8 san_addr[ETH_ALEN];
u16 max_fcoeq;
};
/* PCI bus types */
enum iavf_bus_type {
iavf_bus_type_unknown = 0,
iavf_bus_type_pci,
iavf_bus_type_pcix,
iavf_bus_type_pci_express,
iavf_bus_type_reserved
};
/* PCI bus speeds */
enum iavf_bus_speed {
iavf_bus_speed_unknown = 0,
iavf_bus_speed_33 = 33,
iavf_bus_speed_66 = 66,
iavf_bus_speed_100 = 100,
iavf_bus_speed_120 = 120,
iavf_bus_speed_133 = 133,
iavf_bus_speed_2500 = 2500,
iavf_bus_speed_5000 = 5000,
iavf_bus_speed_8000 = 8000,
iavf_bus_speed_reserved
};
/* PCI bus widths */
enum iavf_bus_width {
iavf_bus_width_unknown = 0,
iavf_bus_width_pcie_x1 = 1,
iavf_bus_width_pcie_x2 = 2,
iavf_bus_width_pcie_x4 = 4,
iavf_bus_width_pcie_x8 = 8,
iavf_bus_width_32 = 32,
iavf_bus_width_64 = 64,
iavf_bus_width_reserved
};
/* Bus parameters */
struct iavf_bus_info {
enum iavf_bus_speed speed;
enum iavf_bus_width width;
enum iavf_bus_type type;
u16 func;
u16 device;
u16 lan_id;
u16 bus_id;
};
#define IAVF_MAX_USER_PRIORITY 8
/* Port hardware description */
struct iavf_hw {
u8 __iomem *hw_addr;
void *back;
/* subsystem structs */
struct iavf_mac_info mac;
struct iavf_bus_info bus;
/* pci info */
u16 device_id;
u16 vendor_id;
u16 subsystem_device_id;
u16 subsystem_vendor_id;
u8 revision_id;
/* capabilities for entire device and PCI func */
struct iavf_hw_capabilities dev_caps;
/* Admin Queue info */
struct iavf_adminq_info aq;
/* debug mask */
u32 debug_mask;
char err_str[16];
};
struct iavf_driver_version {
u8 major_version;
u8 minor_version;
u8 build_version;
u8 subbuild_version;
u8 driver_string[32];
};
/* RX Descriptors */
union iavf_16byte_rx_desc {
struct {
__le64 pkt_addr; /* Packet buffer address */
__le64 hdr_addr; /* Header buffer address */
} read;
struct {
struct {
struct {
union {
__le16 mirroring_status;
__le16 fcoe_ctx_id;
} mirr_fcoe;
__le16 l2tag1;
} lo_dword;
union {
__le32 rss; /* RSS Hash */
__le32 fd_id; /* Flow director filter id */
__le32 fcoe_param; /* FCoE DDP Context id */
} hi_dword;
} qword0;
struct {
/* ext status/error/pktype/length */
__le64 status_error_len;
} qword1;
} wb; /* writeback */
};
union iavf_32byte_rx_desc {
struct {
__le64 pkt_addr; /* Packet buffer address */
__le64 hdr_addr; /* Header buffer address */
/* bit 0 of hdr_buffer_addr is DD bit */
__le64 rsvd1;
__le64 rsvd2;
} read;
struct {
struct {
struct {
union {
__le16 mirroring_status;
__le16 fcoe_ctx_id;
} mirr_fcoe;
__le16 l2tag1;
} lo_dword;
union {
__le32 rss; /* RSS Hash */
__le32 fcoe_param; /* FCoE DDP Context id */
/* Flow director filter id in case of
* Programming status desc WB
*/
__le32 fd_id;
} hi_dword;
} qword0;
struct {
/* status/error/pktype/length */
__le64 status_error_len;
} qword1;
struct {
__le16 ext_status; /* extended status */
__le16 rsvd;
__le16 l2tag2_1;
__le16 l2tag2_2;
} qword2;
struct {
union {
__le32 flex_bytes_lo;
__le32 pe_status;
} lo_dword;
union {
__le32 flex_bytes_hi;
__le32 fd_id;
} hi_dword;
} qword3;
} wb; /* writeback */
};
enum iavf_rx_desc_status_bits {
/* Note: These are predefined bit offsets */
IAVF_RX_DESC_STATUS_DD_SHIFT = 0,
IAVF_RX_DESC_STATUS_EOF_SHIFT = 1,
IAVF_RX_DESC_STATUS_L2TAG1P_SHIFT = 2,
IAVF_RX_DESC_STATUS_L3L4P_SHIFT = 3,
IAVF_RX_DESC_STATUS_CRCP_SHIFT = 4,
IAVF_RX_DESC_STATUS_TSYNINDX_SHIFT = 5, /* 2 BITS */
IAVF_RX_DESC_STATUS_TSYNVALID_SHIFT = 7,
/* Note: Bit 8 is reserved in X710 and XL710 */
IAVF_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 8,
IAVF_RX_DESC_STATUS_UMBCAST_SHIFT = 9, /* 2 BITS */
IAVF_RX_DESC_STATUS_FLM_SHIFT = 11,
IAVF_RX_DESC_STATUS_FLTSTAT_SHIFT = 12, /* 2 BITS */
IAVF_RX_DESC_STATUS_LPBK_SHIFT = 14,
IAVF_RX_DESC_STATUS_IPV6EXADD_SHIFT = 15,
IAVF_RX_DESC_STATUS_RESERVED_SHIFT = 16, /* 2 BITS */
/* Note: For non-tunnel packets INT_UDP_0 is the right status for
* UDP header
*/
IAVF_RX_DESC_STATUS_INT_UDP_0_SHIFT = 18,
IAVF_RX_DESC_STATUS_LAST /* this entry must be last!!! */
};
#define IAVF_RXD_QW1_STATUS_SHIFT 0
#define IAVF_RXD_QW1_STATUS_MASK ((BIT(IAVF_RX_DESC_STATUS_LAST) - 1) \
<< IAVF_RXD_QW1_STATUS_SHIFT)
#define IAVF_RXD_QW1_STATUS_TSYNINDX_SHIFT IAVF_RX_DESC_STATUS_TSYNINDX_SHIFT
#define IAVF_RXD_QW1_STATUS_TSYNINDX_MASK (0x3UL << \
IAVF_RXD_QW1_STATUS_TSYNINDX_SHIFT)
#define IAVF_RXD_QW1_STATUS_TSYNVALID_SHIFT IAVF_RX_DESC_STATUS_TSYNVALID_SHIFT
#define IAVF_RXD_QW1_STATUS_TSYNVALID_MASK \
BIT_ULL(IAVF_RXD_QW1_STATUS_TSYNVALID_SHIFT)
enum iavf_rx_desc_fltstat_values {
IAVF_RX_DESC_FLTSTAT_NO_DATA = 0,
IAVF_RX_DESC_FLTSTAT_RSV_FD_ID = 1, /* 16byte desc? FD_ID : RSV */
IAVF_RX_DESC_FLTSTAT_RSV = 2,
IAVF_RX_DESC_FLTSTAT_RSS_HASH = 3,
};
#define IAVF_RXD_QW1_ERROR_SHIFT 19
#define IAVF_RXD_QW1_ERROR_MASK (0xFFUL << IAVF_RXD_QW1_ERROR_SHIFT)
enum iavf_rx_desc_error_bits {
/* Note: These are predefined bit offsets */
IAVF_RX_DESC_ERROR_RXE_SHIFT = 0,
IAVF_RX_DESC_ERROR_RECIPE_SHIFT = 1,
IAVF_RX_DESC_ERROR_HBO_SHIFT = 2,
IAVF_RX_DESC_ERROR_L3L4E_SHIFT = 3, /* 3 BITS */
IAVF_RX_DESC_ERROR_IPE_SHIFT = 3,
IAVF_RX_DESC_ERROR_L4E_SHIFT = 4,
IAVF_RX_DESC_ERROR_EIPE_SHIFT = 5,
IAVF_RX_DESC_ERROR_OVERSIZE_SHIFT = 6,
IAVF_RX_DESC_ERROR_PPRS_SHIFT = 7
};
enum iavf_rx_desc_error_l3l4e_fcoe_masks {
IAVF_RX_DESC_ERROR_L3L4E_NONE = 0,
IAVF_RX_DESC_ERROR_L3L4E_PROT = 1,
IAVF_RX_DESC_ERROR_L3L4E_FC = 2,
IAVF_RX_DESC_ERROR_L3L4E_DMAC_ERR = 3,
IAVF_RX_DESC_ERROR_L3L4E_DMAC_WARN = 4
};
#define IAVF_RXD_QW1_PTYPE_SHIFT 30
#define IAVF_RXD_QW1_PTYPE_MASK (0xFFULL << IAVF_RXD_QW1_PTYPE_SHIFT)
/* Packet type non-ip values */
enum iavf_rx_l2_ptype {
IAVF_RX_PTYPE_L2_RESERVED = 0,
IAVF_RX_PTYPE_L2_MAC_PAY2 = 1,
IAVF_RX_PTYPE_L2_TIMESYNC_PAY2 = 2,
IAVF_RX_PTYPE_L2_FIP_PAY2 = 3,
IAVF_RX_PTYPE_L2_OUI_PAY2 = 4,
IAVF_RX_PTYPE_L2_MACCNTRL_PAY2 = 5,
IAVF_RX_PTYPE_L2_LLDP_PAY2 = 6,
IAVF_RX_PTYPE_L2_ECP_PAY2 = 7,
IAVF_RX_PTYPE_L2_EVB_PAY2 = 8,
IAVF_RX_PTYPE_L2_QCN_PAY2 = 9,
IAVF_RX_PTYPE_L2_EAPOL_PAY2 = 10,
IAVF_RX_PTYPE_L2_ARP = 11,
IAVF_RX_PTYPE_L2_FCOE_PAY3 = 12,
IAVF_RX_PTYPE_L2_FCOE_FCDATA_PAY3 = 13,
IAVF_RX_PTYPE_L2_FCOE_FCRDY_PAY3 = 14,
IAVF_RX_PTYPE_L2_FCOE_FCRSP_PAY3 = 15,
IAVF_RX_PTYPE_L2_FCOE_FCOTHER_PA = 16,
IAVF_RX_PTYPE_L2_FCOE_VFT_PAY3 = 17,
IAVF_RX_PTYPE_L2_FCOE_VFT_FCDATA = 18,
IAVF_RX_PTYPE_L2_FCOE_VFT_FCRDY = 19,
IAVF_RX_PTYPE_L2_FCOE_VFT_FCRSP = 20,
IAVF_RX_PTYPE_L2_FCOE_VFT_FCOTHER = 21,
IAVF_RX_PTYPE_GRENAT4_MAC_PAY3 = 58,
IAVF_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4 = 87,
IAVF_RX_PTYPE_GRENAT6_MAC_PAY3 = 124,
IAVF_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4 = 153
};
struct iavf_rx_ptype_decoded {
u32 ptype:8;
u32 known:1;
u32 outer_ip:1;
u32 outer_ip_ver:1;
u32 outer_frag:1;
u32 tunnel_type:3;
u32 tunnel_end_prot:2;
u32 tunnel_end_frag:1;
u32 inner_prot:4;
u32 payload_layer:3;
};
enum iavf_rx_ptype_outer_ip {
IAVF_RX_PTYPE_OUTER_L2 = 0,
IAVF_RX_PTYPE_OUTER_IP = 1
};
enum iavf_rx_ptype_outer_ip_ver {
IAVF_RX_PTYPE_OUTER_NONE = 0,
IAVF_RX_PTYPE_OUTER_IPV4 = 0,
IAVF_RX_PTYPE_OUTER_IPV6 = 1
};
enum iavf_rx_ptype_outer_fragmented {
IAVF_RX_PTYPE_NOT_FRAG = 0,
IAVF_RX_PTYPE_FRAG = 1
};
enum iavf_rx_ptype_tunnel_type {
IAVF_RX_PTYPE_TUNNEL_NONE = 0,
IAVF_RX_PTYPE_TUNNEL_IP_IP = 1,
IAVF_RX_PTYPE_TUNNEL_IP_GRENAT = 2,
IAVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3,
IAVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4,
};
enum iavf_rx_ptype_tunnel_end_prot {
IAVF_RX_PTYPE_TUNNEL_END_NONE = 0,
IAVF_RX_PTYPE_TUNNEL_END_IPV4 = 1,
IAVF_RX_PTYPE_TUNNEL_END_IPV6 = 2,
};
enum iavf_rx_ptype_inner_prot {
IAVF_RX_PTYPE_INNER_PROT_NONE = 0,
IAVF_RX_PTYPE_INNER_PROT_UDP = 1,
IAVF_RX_PTYPE_INNER_PROT_TCP = 2,
IAVF_RX_PTYPE_INNER_PROT_SCTP = 3,
IAVF_RX_PTYPE_INNER_PROT_ICMP = 4,
IAVF_RX_PTYPE_INNER_PROT_TIMESYNC = 5
};
enum iavf_rx_ptype_payload_layer {
IAVF_RX_PTYPE_PAYLOAD_LAYER_NONE = 0,
IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1,
IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2,
IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3,
};
#define IAVF_RXD_QW1_LENGTH_PBUF_SHIFT 38
#define IAVF_RXD_QW1_LENGTH_PBUF_MASK (0x3FFFULL << \
IAVF_RXD_QW1_LENGTH_PBUF_SHIFT)
#define IAVF_RXD_QW1_LENGTH_HBUF_SHIFT 52
#define IAVF_RXD_QW1_LENGTH_HBUF_MASK (0x7FFULL << \
IAVF_RXD_QW1_LENGTH_HBUF_SHIFT)
#define IAVF_RXD_QW1_LENGTH_SPH_SHIFT 63
#define IAVF_RXD_QW1_LENGTH_SPH_MASK BIT_ULL(IAVF_RXD_QW1_LENGTH_SPH_SHIFT)
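
To illustrate how these shift/mask pairs and the ptype lookup fit together,
here is a sketch of decoding one writeback qword; decode_rx_desc_ptype() is
declared in iavf_prototype.h (the helper name here is illustrative):

/* Illustrative: pull the packet buffer length out of qword1 and decode
 * the 8-bit ptype via the lookup table.
 */
static u32 iavf_example_decode_qw1(union iavf_32byte_rx_desc *rx_desc,
				   struct iavf_rx_ptype_decoded *decoded)
{
	u64 qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
	u8 ptype = (qword & IAVF_RXD_QW1_PTYPE_MASK) >>
		   IAVF_RXD_QW1_PTYPE_SHIFT;

	*decoded = decode_rx_desc_ptype(ptype);

	return (qword & IAVF_RXD_QW1_LENGTH_PBUF_MASK) >>
	       IAVF_RXD_QW1_LENGTH_PBUF_SHIFT;
}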
enum iavf_rx_desc_ext_status_bits {
/* Note: These are predefined bit offsets */
IAVF_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 0,
IAVF_RX_DESC_EXT_STATUS_L2TAG3P_SHIFT = 1,
IAVF_RX_DESC_EXT_STATUS_FLEXBL_SHIFT = 2, /* 2 BITS */
IAVF_RX_DESC_EXT_STATUS_FLEXBH_SHIFT = 4, /* 2 BITS */
IAVF_RX_DESC_EXT_STATUS_FDLONGB_SHIFT = 9,
IAVF_RX_DESC_EXT_STATUS_FCOELONGB_SHIFT = 10,
IAVF_RX_DESC_EXT_STATUS_PELONGB_SHIFT = 11,
};
enum iavf_rx_desc_pe_status_bits {
/* Note: These are predefined bit offsets */
IAVF_RX_DESC_PE_STATUS_QPID_SHIFT = 0, /* 18 BITS */
IAVF_RX_DESC_PE_STATUS_L4PORT_SHIFT = 0, /* 16 BITS */
IAVF_RX_DESC_PE_STATUS_IPINDEX_SHIFT = 16, /* 8 BITS */
IAVF_RX_DESC_PE_STATUS_QPIDHIT_SHIFT = 24,
IAVF_RX_DESC_PE_STATUS_APBVTHIT_SHIFT = 25,
IAVF_RX_DESC_PE_STATUS_PORTV_SHIFT = 26,
IAVF_RX_DESC_PE_STATUS_URG_SHIFT = 27,
IAVF_RX_DESC_PE_STATUS_IPFRAG_SHIFT = 28,
IAVF_RX_DESC_PE_STATUS_IPOPT_SHIFT = 29
};
#define IAVF_RX_PROG_STATUS_DESC_LENGTH_SHIFT 38
#define IAVF_RX_PROG_STATUS_DESC_LENGTH 0x2000000
#define IAVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT 2
#define IAVF_RX_PROG_STATUS_DESC_QW1_PROGID_MASK (0x7UL << \
IAVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT)
#define IAVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT 19
#define IAVF_RX_PROG_STATUS_DESC_QW1_ERROR_MASK (0x3FUL << \
IAVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT)
enum iavf_rx_prog_status_desc_status_bits {
/* Note: These are predefined bit offsets */
IAVF_RX_PROG_STATUS_DESC_DD_SHIFT = 0,
IAVF_RX_PROG_STATUS_DESC_PROG_ID_SHIFT = 2 /* 3 BITS */
};
enum iavf_rx_prog_status_desc_prog_id_masks {
IAVF_RX_PROG_STATUS_DESC_FD_FILTER_STATUS = 1,
IAVF_RX_PROG_STATUS_DESC_FCOE_CTXT_PROG_STATUS = 2,
IAVF_RX_PROG_STATUS_DESC_FCOE_CTXT_INVL_STATUS = 4,
};
enum iavf_rx_prog_status_desc_error_bits {
/* Note: These are predefined bit offsets */
IAVF_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT = 0,
IAVF_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT = 1,
IAVF_RX_PROG_STATUS_DESC_FCOE_TBL_FULL_SHIFT = 2,
IAVF_RX_PROG_STATUS_DESC_FCOE_CONFLICT_SHIFT = 3
};
/* TX Descriptor */
struct iavf_tx_desc {
__le64 buffer_addr; /* Address of descriptor's data buf */
__le64 cmd_type_offset_bsz;
};
#define IAVF_TXD_QW1_DTYPE_SHIFT 0
#define IAVF_TXD_QW1_DTYPE_MASK (0xFUL << IAVF_TXD_QW1_DTYPE_SHIFT)
enum iavf_tx_desc_dtype_value {
IAVF_TX_DESC_DTYPE_DATA = 0x0,
IAVF_TX_DESC_DTYPE_NOP = 0x1, /* same as Context desc */
IAVF_TX_DESC_DTYPE_CONTEXT = 0x1,
IAVF_TX_DESC_DTYPE_FCOE_CTX = 0x2,
IAVF_TX_DESC_DTYPE_FILTER_PROG = 0x8,
IAVF_TX_DESC_DTYPE_DDP_CTX = 0x9,
IAVF_TX_DESC_DTYPE_FLEX_DATA = 0xB,
IAVF_TX_DESC_DTYPE_FLEX_CTX_1 = 0xC,
IAVF_TX_DESC_DTYPE_FLEX_CTX_2 = 0xD,
IAVF_TX_DESC_DTYPE_DESC_DONE = 0xF
};
#define IAVF_TXD_QW1_CMD_SHIFT 4
#define IAVF_TXD_QW1_CMD_MASK (0x3FFUL << IAVF_TXD_QW1_CMD_SHIFT)
enum iavf_tx_desc_cmd_bits {
IAVF_TX_DESC_CMD_EOP = 0x0001,
IAVF_TX_DESC_CMD_RS = 0x0002,
IAVF_TX_DESC_CMD_ICRC = 0x0004,
IAVF_TX_DESC_CMD_IL2TAG1 = 0x0008,
IAVF_TX_DESC_CMD_DUMMY = 0x0010,
IAVF_TX_DESC_CMD_IIPT_NONIP = 0x0000, /* 2 BITS */
IAVF_TX_DESC_CMD_IIPT_IPV6 = 0x0020, /* 2 BITS */
IAVF_TX_DESC_CMD_IIPT_IPV4 = 0x0040, /* 2 BITS */
IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, /* 2 BITS */
IAVF_TX_DESC_CMD_FCOET = 0x0080,
IAVF_TX_DESC_CMD_L4T_EOFT_UNK = 0x0000, /* 2 BITS */
IAVF_TX_DESC_CMD_L4T_EOFT_TCP = 0x0100, /* 2 BITS */
IAVF_TX_DESC_CMD_L4T_EOFT_SCTP = 0x0200, /* 2 BITS */
IAVF_TX_DESC_CMD_L4T_EOFT_UDP = 0x0300, /* 2 BITS */
IAVF_TX_DESC_CMD_L4T_EOFT_EOF_N = 0x0000, /* 2 BITS */
IAVF_TX_DESC_CMD_L4T_EOFT_EOF_T = 0x0100, /* 2 BITS */
IAVF_TX_DESC_CMD_L4T_EOFT_EOF_NI = 0x0200, /* 2 BITS */
IAVF_TX_DESC_CMD_L4T_EOFT_EOF_A = 0x0300, /* 2 BITS */
};
#define IAVF_TXD_QW1_OFFSET_SHIFT 16
#define IAVF_TXD_QW1_OFFSET_MASK (0x3FFFFULL << \
IAVF_TXD_QW1_OFFSET_SHIFT)
enum iavf_tx_desc_length_fields {
/* Note: These are predefined bit offsets */
IAVF_TX_DESC_LENGTH_MACLEN_SHIFT = 0, /* 7 BITS */
IAVF_TX_DESC_LENGTH_IPLEN_SHIFT = 7, /* 7 BITS */
IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT = 14 /* 4 BITS */
};
#define IAVF_TXD_QW1_TX_BUF_SZ_SHIFT 34
#define IAVF_TXD_QW1_TX_BUF_SZ_MASK (0x3FFFULL << \
IAVF_TXD_QW1_TX_BUF_SZ_SHIFT)
#define IAVF_TXD_QW1_L2TAG1_SHIFT 48
#define IAVF_TXD_QW1_L2TAG1_MASK (0xFFFFULL << IAVF_TXD_QW1_L2TAG1_SHIFT)
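
These fields compose into the descriptor's second quadword; below is a sketch
of the packing, modeled on the build_ctob() helper used elsewhere in the i40e
family (the name here is illustrative):

/* Illustrative: pack dtype, command bits, header offsets, buffer size
 * and the L2 tag into cmd_type_offset_bsz for a data descriptor.
 */
static __le64 iavf_example_build_ctob(u32 td_cmd, u32 td_offset,
				      unsigned int size, u32 td_tag)
{
	return cpu_to_le64(IAVF_TX_DESC_DTYPE_DATA |
			   ((u64)td_cmd << IAVF_TXD_QW1_CMD_SHIFT) |
			   ((u64)td_offset << IAVF_TXD_QW1_OFFSET_SHIFT) |
			   ((u64)size << IAVF_TXD_QW1_TX_BUF_SZ_SHIFT) |
			   ((u64)td_tag << IAVF_TXD_QW1_L2TAG1_SHIFT));
}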
/* Context descriptors */
struct iavf_tx_context_desc {
__le32 tunneling_params;
__le16 l2tag2;
__le16 rsvd;
__le64 type_cmd_tso_mss;
};
#define IAVF_TXD_CTX_QW1_CMD_SHIFT 4
#define IAVF_TXD_CTX_QW1_CMD_MASK (0xFFFFUL << IAVF_TXD_CTX_QW1_CMD_SHIFT)
enum iavf_tx_ctx_desc_cmd_bits {
IAVF_TX_CTX_DESC_TSO = 0x01,
IAVF_TX_CTX_DESC_TSYN = 0x02,
IAVF_TX_CTX_DESC_IL2TAG2 = 0x04,
IAVF_TX_CTX_DESC_IL2TAG2_IL2H = 0x08,
IAVF_TX_CTX_DESC_SWTCH_NOTAG = 0x00,
IAVF_TX_CTX_DESC_SWTCH_UPLINK = 0x10,
IAVF_TX_CTX_DESC_SWTCH_LOCAL = 0x20,
IAVF_TX_CTX_DESC_SWTCH_VSI = 0x30,
IAVF_TX_CTX_DESC_SWPE = 0x40
};
/* Packet Classifier Types for filters */
enum iavf_filter_pctype {
/* Note: Values 0-28 are reserved for future use.
* Values 29, 30, and 32 are not supported on XL710 and X710.
*/
IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP = 29,
IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP = 30,
IAVF_FILTER_PCTYPE_NONF_IPV4_UDP = 31,
IAVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK = 32,
IAVF_FILTER_PCTYPE_NONF_IPV4_TCP = 33,
IAVF_FILTER_PCTYPE_NONF_IPV4_SCTP = 34,
IAVF_FILTER_PCTYPE_NONF_IPV4_OTHER = 35,
IAVF_FILTER_PCTYPE_FRAG_IPV4 = 36,
/* Note: Values 37-38 are reserved for future use.
* Values 39, 40, and 42 are not supported on XL710 and X710.
*/
IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP = 39,
IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP = 40,
IAVF_FILTER_PCTYPE_NONF_IPV6_UDP = 41,
IAVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK = 42,
IAVF_FILTER_PCTYPE_NONF_IPV6_TCP = 43,
IAVF_FILTER_PCTYPE_NONF_IPV6_SCTP = 44,
IAVF_FILTER_PCTYPE_NONF_IPV6_OTHER = 45,
IAVF_FILTER_PCTYPE_FRAG_IPV6 = 46,
/* Note: Value 47 is reserved for future use */
IAVF_FILTER_PCTYPE_FCOE_OX = 48,
IAVF_FILTER_PCTYPE_FCOE_RX = 49,
IAVF_FILTER_PCTYPE_FCOE_OTHER = 50,
/* Note: Values 51-62 are reserved for future use */
IAVF_FILTER_PCTYPE_L2_PAYLOAD = 63,
};
#define IAVF_TXD_CTX_QW1_TSO_LEN_SHIFT 30
#define IAVF_TXD_CTX_QW1_TSO_LEN_MASK (0x3FFFFULL << \
IAVF_TXD_CTX_QW1_TSO_LEN_SHIFT)
#define IAVF_TXD_CTX_QW1_MSS_SHIFT 50
#define IAVF_TXD_CTX_QW1_MSS_MASK (0x3FFFULL << \
IAVF_TXD_CTX_QW1_MSS_SHIFT)
#define IAVF_TXD_CTX_QW1_VSI_SHIFT 50
#define IAVF_TXD_CTX_QW1_VSI_MASK (0x1FFULL << IAVF_TXD_CTX_QW1_VSI_SHIFT)
#define IAVF_TXD_CTX_QW0_EXT_IP_SHIFT 0
#define IAVF_TXD_CTX_QW0_EXT_IP_MASK (0x3ULL << \
IAVF_TXD_CTX_QW0_EXT_IP_SHIFT)
enum iavf_tx_ctx_desc_eipt_offload {
IAVF_TX_CTX_EXT_IP_NONE = 0x0,
IAVF_TX_CTX_EXT_IP_IPV6 = 0x1,
IAVF_TX_CTX_EXT_IP_IPV4_NO_CSUM = 0x2,
IAVF_TX_CTX_EXT_IP_IPV4 = 0x3
};
#define IAVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT 2
#define IAVF_TXD_CTX_QW0_EXT_IPLEN_MASK (0x3FULL << \
IAVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT)
#define IAVF_TXD_CTX_QW0_NATT_SHIFT 9
#define IAVF_TXD_CTX_QW0_NATT_MASK (0x3ULL << IAVF_TXD_CTX_QW0_NATT_SHIFT)
#define IAVF_TXD_CTX_UDP_TUNNELING BIT_ULL(IAVF_TXD_CTX_QW0_NATT_SHIFT)
#define IAVF_TXD_CTX_GRE_TUNNELING (0x2ULL << IAVF_TXD_CTX_QW0_NATT_SHIFT)
#define IAVF_TXD_CTX_QW0_EIP_NOINC_SHIFT 11
#define IAVF_TXD_CTX_QW0_EIP_NOINC_MASK \
BIT_ULL(IAVF_TXD_CTX_QW0_EIP_NOINC_SHIFT)
#define IAVF_TXD_CTX_EIP_NOINC_IPID_CONST IAVF_TXD_CTX_QW0_EIP_NOINC_MASK
#define IAVF_TXD_CTX_QW0_NATLEN_SHIFT 12
#define IAVF_TXD_CTX_QW0_NATLEN_MASK (0x7FULL << \
IAVF_TXD_CTX_QW0_NATLEN_SHIFT)
#define IAVF_TXD_CTX_QW0_DECTTL_SHIFT 19
#define IAVF_TXD_CTX_QW0_DECTTL_MASK (0xFULL << \
IAVF_TXD_CTX_QW0_DECTTL_SHIFT)
#define IAVF_TXD_CTX_QW0_L4T_CS_SHIFT 23
#define IAVF_TXD_CTX_QW0_L4T_CS_MASK BIT_ULL(IAVF_TXD_CTX_QW0_L4T_CS_SHIFT)
/* Statistics collected by each port, VSI, VEB, and S-channel */
struct iavf_eth_stats {
u64 rx_bytes; /* gorc */
u64 rx_unicast; /* uprc */
u64 rx_multicast; /* mprc */
u64 rx_broadcast; /* bprc */
u64 rx_discards; /* rdpc */
u64 rx_unknown_protocol; /* rupp */
u64 tx_bytes; /* gotc */
u64 tx_unicast; /* uptc */
u64 tx_multicast; /* mptc */
u64 tx_broadcast; /* bptc */
u64 tx_discards; /* tdpc */
u64 tx_errors; /* tepc */
};
#endif /* _IAVF_TYPE_H_ */

View File

@ -15,7 +15,7 @@ static const char ice_copyright[] = "Copyright (c) 2018, Intel Corporation.";
MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
MODULE_DESCRIPTION(DRV_SUMMARY);
MODULE_LICENSE("GPL");
MODULE_LICENSE("GPL v2");
MODULE_VERSION(DRV_VERSION);
static int debug = -1;

View File

@ -243,7 +243,7 @@ static struct pci_driver igb_driver = {
MODULE_AUTHOR("Intel Corporation, <e1000-devel@lists.sourceforge.net>");
MODULE_DESCRIPTION("Intel(R) Gigabit Ethernet Network Driver");
MODULE_LICENSE("GPL");
MODULE_LICENSE("GPL v2");
MODULE_VERSION(DRV_VERSION);
#define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV|NETIF_MSG_PROBE|NETIF_MSG_LINK)

View File

@ -3011,7 +3011,7 @@ module_exit(igbvf_exit_module);
MODULE_AUTHOR("Intel Corporation, <e1000-devel@lists.sourceforge.net>");
MODULE_DESCRIPTION("Intel(R) Gigabit Virtual Function Network Driver");
MODULE_LICENSE("GPL");
MODULE_LICENSE("GPL v2");
MODULE_VERSION(DRV_VERSION);
/* netdev.c */

View File

@ -107,7 +107,7 @@ static struct pci_driver ixgb_driver = {
MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
MODULE_DESCRIPTION("Intel(R) PRO/10GbE Network Driver");
MODULE_LICENSE("GPL");
MODULE_LICENSE("GPL v2");
MODULE_VERSION(DRV_VERSION);
#define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV|NETIF_MSG_PROBE|NETIF_MSG_LINK)

View File

@ -159,7 +159,7 @@ MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)");
MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
MODULE_DESCRIPTION("Intel(R) 10 Gigabit PCI Express Network Driver");
MODULE_LICENSE("GPL");
MODULE_LICENSE("GPL v2");
MODULE_VERSION(DRV_VERSION);
static struct workqueue_struct *ixgbe_wq;

View File

@ -79,7 +79,7 @@ MODULE_DEVICE_TABLE(pci, ixgbevf_pci_tbl);
MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
MODULE_DESCRIPTION("Intel(R) 10 Gigabit Virtual Function Network Driver");
MODULE_LICENSE("GPL");
MODULE_LICENSE("GPL v2");
MODULE_VERSION(DRV_VERSION);
#define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV|NETIF_MSG_PROBE|NETIF_MSG_LINK)