Merge tag 'net-5.15-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from xfrm, bpf, netfilter, and wireless.

  Current release - regressions:

   - xfrm: fix XFRM_MSG_MAPPING ABI breakage caused by inserting a new
     value in the middle of an enum

   - unix: fix an issue in unix_shutdown causing the other end
     read/write failures

   - phy: mdio: fix memory leak

  Current release - new code bugs:

   - mlx5e: improve MQPRIO resiliency against bad configs

  Previous releases - regressions:

   - bpf: fix integer overflow leading to OOB access in map element
     pre-allocation

   - stmmac: dwmac-rk: fix ethernet on rk3399 based devices

   - netfilter: conntrack: fix boot failure with
     nf_conntrack.enable_hooks=1

   - brcmfmac: revert using ISO3166 country code and 0 rev as fallback

   - i40e: fix freeing of uninitialized misc IRQ vector

   - iavf: fix double unlock of crit_lock

  Previous releases - always broken:

   - bpf, arm: fix register clobbering in div/mod implementation

   - netfilter: nf_tables: correct issues in netlink rule change event
     notifications

   - dsa: tag_dsa: fix mask for trunked packets

   - usb: r8152: don't resubmit rx immediately to avoid soft lockup on
     device unplug

   - i40e: fix endless loop under rtnl if FW fails to correctly respond
     to capability query

   - mlx5e: fix rx checksum offload coexistence with ipsec offload

   - mlx5: force round second at 1PPS out start time and allow it only
     in supported clock modes

   - phy: pcs: xpcs: fix incorrect CL37 AN sequence, EEE disable
     sequence

  Misc:

   - xfrm: slightly rejig the new policy uAPI to make it less cryptic"

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

* tag 'net-5.15-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (66 commits)
  net: prefer socket bound to interface when not in VRF
  iavf: fix double unlock of crit_lock
  i40e: Fix freeing of uninitialized misc IRQ vector
  i40e: fix endless loop under rtnl
  dt-bindings: net: dsa: marvell: fix compatible in example
  ionic: move filter sync_needed bit set
  gve: report 64bit tx_bytes counter from gve_handle_report_stats()
  gve: fix gve_get_stats()
  rtnetlink: fix if_nlmsg_stats_size() under estimation
  gve: Properly handle errors in gve_assign_qpl
  gve: Avoid freeing NULL pointer
  gve: Correct available tx qpl check
  unix: Fix an issue in unix_shutdown causing the other end read/write failures
  net: stmmac: trigger PCS EEE to turn off on link down
  net: pcs: xpcs: fix incorrect steps on disable EEE
  netlink: annotate data races around nlk->bound
  net: pcs: xpcs: fix incorrect CL37 AN sequence
  net: sfp: Fix typo in state machine debug string
  net/sched: sch_taprio: properly cancel timer from taprio_destroy()
  net: bridge: fix under estimation in br_get_linkxstats_size()
  ...
commit 4a16df549d

CREDITS:
@@ -971,6 +971,7 @@ D: PowerPC
 N: Daniel Drake
 E: dsd@gentoo.org
 D: USBAT02 CompactFlash support in usb-storage
+D: ZD1211RW wireless driver
 S: UK

 N: Oleg Drokin
@@ -83,7 +83,7 @@ Example:
	#interrupt-cells = <2>;

	switch0: switch@0 {
-		compatible = "marvell,mv88e6390";
+		compatible = "marvell,mv88e6190";
		reg = <0>;
		reset-gpios = <&gpio5 1 GPIO_ACTIVE_LOW>;
@@ -8609,9 +8609,8 @@ F: Documentation/devicetree/bindings/iio/humidity/st,hts221.yaml
 F: drivers/iio/humidity/hts221*

 HUAWEI ETHERNET DRIVER
-M: Bin Luo <luobin9@huawei.com>
 L: netdev@vger.kernel.org
-S: Supported
+S: Orphan
 F: Documentation/networking/device_drivers/ethernet/huawei/hinic.rst
 F: drivers/net/ethernet/huawei/hinic/

@@ -17794,7 +17793,6 @@ F: drivers/staging/nvec/

 STAGING - OLPC SECONDARY DISPLAY CONTROLLER (DCON)
 M: Jens Frederich <jfrederich@gmail.com>
-M: Daniel Drake <dsd@laptop.org>
 M: Jon Nettleton <jon.nettleton@gmail.com>
 S: Maintained
 W: http://wiki.laptop.org/go/DCON

@@ -20700,7 +20698,6 @@ S: Maintained
 F: mm/zbud.c

 ZD1211RW WIRELESS DRIVER
-M: Daniel Drake <dsd@gentoo.org>
 M: Ulrich Kunitz <kune@deine-taler.de>
 L: linux-wireless@vger.kernel.org
 L: zd1211-devs@lists.sourceforge.net (subscribers-only)
@@ -36,6 +36,10 @@
 *              +-----+
 *              |RSVD | JIT scratchpad
 * current ARM_SP => +-----+ <= (BPF_FP - STACK_SIZE + SCRATCH_SIZE)
+*              | ... | caller-saved registers
+*              +-----+
+*              | ... | arguments passed on stack
+* ARM_SP during call => +-----|
 *              |     |
 *              | ... | Function call stack
 *              |     |
@@ -63,6 +67,12 @@
 *
 * When popping registers off the stack at the end of a BPF function, we
 * reference them via the current ARM_FP register.
+*
+* Some eBPF operations are implemented via a call to a helper function.
+* Such calls are "invisible" in the eBPF code, so it is up to the calling
+* program to preserve any caller-saved ARM registers during the call. The
+* JIT emits code to push and pop those registers onto the stack, immediately
+* above the callee stack frame.
 */
 #define CALLEE_MASK	(1 << ARM_R4 | 1 << ARM_R5 | 1 << ARM_R6 | \
			 1 << ARM_R7 | 1 << ARM_R8 | 1 << ARM_R9 | \
@@ -70,6 +80,8 @@
 #define CALLEE_PUSH_MASK (CALLEE_MASK | 1 << ARM_LR)
 #define CALLEE_POP_MASK  (CALLEE_MASK | 1 << ARM_PC)

+#define CALLER_MASK	(1 << ARM_R0 | 1 << ARM_R1 | 1 << ARM_R2 | 1 << ARM_R3)
+
 enum {
	/* Stack layout - these are offsets from (top of stack - 4) */
	BPF_R2_HI,
@@ -464,6 +476,7 @@ static inline int epilogue_offset(const struct jit_ctx *ctx)

 static inline void emit_udivmod(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx, u8 op)
 {
+	const int exclude_mask = BIT(ARM_R0) | BIT(ARM_R1);
	const s8 *tmp = bpf2a32[TMP_REG_1];

 #if __LINUX_ARM_ARCH__ == 7
@@ -495,11 +508,17 @@ static inline void emit_udivmod(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx, u8 op)
		emit(ARM_MOV_R(ARM_R0, rm), ctx);
	}

+	/* Push caller-saved registers on stack */
+	emit(ARM_PUSH(CALLER_MASK & ~exclude_mask), ctx);
+
	/* Call appropriate function */
	emit_mov_i(ARM_IP, op == BPF_DIV ?
		   (u32)jit_udiv32 : (u32)jit_mod32, ctx);
	emit_blx_r(ARM_IP, ctx);

+	/* Restore caller-saved registers from stack */
+	emit(ARM_POP(CALLER_MASK & ~exclude_mask), ctx);
+
	/* Save return value */
	if (rd != ARM_R0)
		emit(ARM_MOV_R(rd, ARM_R0), ctx);
@@ -154,7 +154,7 @@

	fm1mac3: ethernet@e4000 {
		phy-handle = <&sgmii_aqr_phy3>;
-		phy-connection-type = "sgmii-2500";
+		phy-connection-type = "2500base-x";
		sleep = <&rcpm 0x20000000>;
	};
@@ -780,7 +780,7 @@ struct gve_queue_page_list *gve_assign_rx_qpl(struct gve_priv *priv)
				    gve_num_tx_qpls(priv));

	/* we are out of rx qpls */
-	if (id == priv->qpl_cfg.qpl_map_size)
+	if (id == gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv))
		return NULL;

	set_bit(id, priv->qpl_cfg.qpl_id_map);
@@ -41,6 +41,7 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
 {
	struct gve_priv *priv = netdev_priv(dev);
	unsigned int start;
+	u64 packets, bytes;
	int ring;

	if (priv->rx) {
@@ -48,10 +49,12 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
			do {
				start =
				  u64_stats_fetch_begin(&priv->rx[ring].statss);
-				s->rx_packets += priv->rx[ring].rpackets;
-				s->rx_bytes += priv->rx[ring].rbytes;
+				packets = priv->rx[ring].rpackets;
+				bytes = priv->rx[ring].rbytes;
			} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
						       start));
+			s->rx_packets += packets;
+			s->rx_bytes += bytes;
		}
	}
	if (priv->tx) {
@@ -59,10 +62,12 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
			do {
				start =
				  u64_stats_fetch_begin(&priv->tx[ring].statss);
-				s->tx_packets += priv->tx[ring].pkt_done;
-				s->tx_bytes += priv->tx[ring].bytes_done;
+				packets = priv->tx[ring].pkt_done;
+				bytes = priv->tx[ring].bytes_done;
			} while (u64_stats_fetch_retry(&priv->tx[ring].statss,
						       start));
+			s->tx_packets += packets;
+			s->tx_bytes += bytes;
		}
	}
 }
@@ -82,6 +87,9 @@ static int gve_alloc_counter_array(struct gve_priv *priv)

 static void gve_free_counter_array(struct gve_priv *priv)
 {
+	if (!priv->counter_array)
+		return;
+
	dma_free_coherent(&priv->pdev->dev,
			  priv->num_event_counters *
			  sizeof(*priv->counter_array),
@@ -142,6 +150,9 @@ static int gve_alloc_stats_report(struct gve_priv *priv)

 static void gve_free_stats_report(struct gve_priv *priv)
 {
+	if (!priv->stats_report)
+		return;
+
	del_timer_sync(&priv->stats_report_timer);
	dma_free_coherent(&priv->pdev->dev, priv->stats_report_len,
			  priv->stats_report, priv->stats_report_bus);
@@ -370,18 +381,19 @@ static void gve_free_notify_blocks(struct gve_priv *priv)
 {
	int i;

-	if (priv->msix_vectors) {
-		/* Free the irqs */
-		for (i = 0; i < priv->num_ntfy_blks; i++) {
-			struct gve_notify_block *block = &priv->ntfy_blocks[i];
-			int msix_idx = i;
+	if (!priv->msix_vectors)
+		return;

-			irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
-					      NULL);
-			free_irq(priv->msix_vectors[msix_idx].vector, block);
-		}
-		free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
+	/* Free the irqs */
+	for (i = 0; i < priv->num_ntfy_blks; i++) {
+		struct gve_notify_block *block = &priv->ntfy_blocks[i];
+		int msix_idx = i;
+
+		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
+				      NULL);
+		free_irq(priv->msix_vectors[msix_idx].vector, block);
	}
+	free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
	dma_free_coherent(&priv->pdev->dev,
			  priv->num_ntfy_blks * sizeof(*priv->ntfy_blocks),
			  priv->ntfy_blocks, priv->ntfy_block_bus);
@@ -1185,9 +1197,10 @@ static void gve_handle_reset(struct gve_priv *priv)

 void gve_handle_report_stats(struct gve_priv *priv)
 {
-	int idx, stats_idx = 0, tx_bytes;
-	unsigned int start = 0;
	struct stats *stats = priv->stats_report->stats;
+	int idx, stats_idx = 0;
+	unsigned int start = 0;
+	u64 tx_bytes;

	if (!gve_get_report_stats(priv))
		return;
@@ -104,8 +104,14 @@ static int gve_prefill_rx_pages(struct gve_rx_ring *rx)
	if (!rx->data.page_info)
		return -ENOMEM;

-	if (!rx->data.raw_addressing)
+	if (!rx->data.raw_addressing) {
		rx->data.qpl = gve_assign_rx_qpl(priv);
+		if (!rx->data.qpl) {
+			kvfree(rx->data.page_info);
+			rx->data.page_info = NULL;
+			return -ENOMEM;
+		}
+	}
	for (i = 0; i < slots; i++) {
		if (!rx->data.raw_addressing) {
			struct page *page = rx->data.qpl->pages[i];
@@ -4871,7 +4871,8 @@ static void i40e_clear_interrupt_scheme(struct i40e_pf *pf)
 {
	int i;

-	i40e_free_misc_vector(pf);
+	if (test_bit(__I40E_MISC_IRQ_REQUESTED, pf->state))
+		i40e_free_misc_vector(pf);

	i40e_put_lump(pf->irq_pile, pf->iwarp_base_vector,
		      I40E_IWARP_IRQ_PILE_ID);
@@ -10113,7 +10114,7 @@ static int i40e_get_capabilities(struct i40e_pf *pf,
	if (pf->hw.aq.asq_last_status == I40E_AQ_RC_ENOMEM) {
		/* retry with a larger buffer */
		buf_len = data_size;
-	} else if (pf->hw.aq.asq_last_status != I40E_AQ_RC_OK) {
+	} else if (pf->hw.aq.asq_last_status != I40E_AQ_RC_OK || err) {
		dev_info(&pf->pdev->dev,
			 "capability discovery failed, err %s aq_err %s\n",
			 i40e_stat_str(&pf->hw, err),
@@ -1965,7 +1965,6 @@ static void iavf_watchdog_task(struct work_struct *work)
		}
		adapter->aq_required = 0;
		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
-		mutex_unlock(&adapter->crit_lock);
		queue_delayed_work(iavf_wq,
				   &adapter->watchdog_task,
				   msecs_to_jiffies(10));
@@ -252,6 +252,7 @@ struct mlx5e_params {
	struct {
		u16 mode;
		u8 num_tc;
+		struct netdev_tc_txq tc_to_txq[TC_MAX_QUEUE];
	} mqprio;
	bool rx_cqe_compress_def;
	bool tunneled_offload_en;
@@ -845,6 +846,7 @@ struct mlx5e_priv {
	struct mlx5e_channel_stats channel_stats[MLX5E_MAX_NUM_CHANNELS];
	struct mlx5e_channel_stats trap_stats;
	struct mlx5e_ptp_stats ptp_stats;
+	u16 stats_nch;
	u16 max_nch;
	u8 max_opened_tc;
	bool tx_ptp_opened;
@@ -1100,12 +1102,6 @@ int mlx5e_ethtool_set_pauseparam(struct mlx5e_priv *priv,
				 struct ethtool_pauseparam *pauseparam);

 /* mlx5e generic netdev management API */
-static inline unsigned int
-mlx5e_calc_max_nch(struct mlx5e_priv *priv, const struct mlx5e_profile *profile)
-{
-	return priv->netdev->num_rx_queues / max_t(u8, profile->rq_groups, 1);
-}
-
 static inline bool
 mlx5e_tx_mpwqe_supported(struct mlx5_core_dev *mdev)
 {
@@ -1114,11 +1110,13 @@ mlx5e_tx_mpwqe_supported(struct mlx5_core_dev *mdev)
 }

 int mlx5e_priv_init(struct mlx5e_priv *priv,
+		    const struct mlx5e_profile *profile,
		    struct net_device *netdev,
		    struct mlx5_core_dev *mdev);
 void mlx5e_priv_cleanup(struct mlx5e_priv *priv);
 struct net_device *
-mlx5e_create_netdev(struct mlx5_core_dev *mdev, unsigned int txqs, unsigned int rxqs);
+mlx5e_create_netdev(struct mlx5_core_dev *mdev, const struct mlx5e_profile *profile,
+		    unsigned int txqs, unsigned int rxqs);
 int mlx5e_attach_netdev(struct mlx5e_priv *priv);
 void mlx5e_detach_netdev(struct mlx5e_priv *priv);
 void mlx5e_destroy_netdev(struct mlx5e_priv *priv);
@@ -35,7 +35,7 @@ static void mlx5e_hv_vhca_fill_stats(struct mlx5e_priv *priv, void *data,
 {
	int ch, i = 0;

-	for (ch = 0; ch < priv->max_nch; ch++) {
+	for (ch = 0; ch < priv->stats_nch; ch++) {
		void *buf = data + i;

		if (WARN_ON_ONCE(buf +
@@ -51,7 +51,7 @@ static void mlx5e_hv_vhca_fill_stats(struct mlx5e_priv *priv, void *data,
 static int mlx5e_hv_vhca_stats_buf_size(struct mlx5e_priv *priv)
 {
	return (sizeof(struct mlx5e_hv_vhca_per_ring_stats) *
-		priv->max_nch);
+		priv->stats_nch);
 }

 static void mlx5e_hv_vhca_stats_work(struct work_struct *work)
@@ -100,7 +100,7 @@ static void mlx5e_hv_vhca_stats_control(struct mlx5_hv_vhca_agent *agent,
	sagent = &priv->stats_agent;

	block->version = MLX5_HV_VHCA_STATS_VERSION;
-	block->rings = priv->max_nch;
+	block->rings = priv->stats_nch;

	if (!block->command) {
		cancel_delayed_work_sync(&priv->stats_agent.work);
@@ -13,8 +13,6 @@ struct mlx5e_ptp_fs {
	bool valid;
 };

-#define MLX5E_PTP_CHANNEL_IX 0
-
 struct mlx5e_ptp_params {
	struct mlx5e_params params;
	struct mlx5e_sq_param txq_sq_param;
@@ -509,6 +507,7 @@ static int mlx5e_init_ptp_rq(struct mlx5e_ptp *c, struct mlx5e_params *params,
	rq->mdev = mdev;
	rq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
	rq->stats = &c->priv->ptp_stats.rq;
+	rq->ix = MLX5E_PTP_CHANNEL_IX;
	rq->ptp_cyc2time = mlx5_rq_ts_translator(mdev);
	err = mlx5e_rq_set_handlers(rq, params, false);
	if (err)
@@ -8,6 +8,8 @@
 #include "en_stats.h"
 #include <linux/ptp_classify.h>

+#define MLX5E_PTP_CHANNEL_IX 0
+
 struct mlx5e_ptpsq {
	struct mlx5e_txqsq txqsq;
	struct mlx5e_cq ts_cq;
@@ -2036,6 +2036,17 @@ static int set_pflag_tx_port_ts(struct net_device *netdev, bool enable)
	}

	new_params = priv->channels.params;
+	/* Don't allow enabling TX-port-TS if MQPRIO mode channel offload is
+	 * active, since it defines explicitly which TC accepts the packet.
+	 * This conflicts with TX-port-TS hijacking the PTP traffic to a specific
+	 * HW TX-queue.
+	 */
+	if (enable && new_params.mqprio.mode == TC_MQPRIO_MODE_CHANNEL) {
+		netdev_err(priv->netdev,
+			   "%s: MQPRIO mode channel offload is active, cannot set the TX-port-TS\n",
+			   __func__);
+		return -EINVAL;
+	}
	MLX5E_SET_PFLAG(&new_params, MLX5E_PFLAG_TX_PORT_TS, enable);
	/* No need to verify SQ stop room as
	 * ptpsq.txqsq.stop_room <= generic_sq->stop_room, and both
@@ -2264,7 +2264,7 @@ void mlx5e_set_netdev_mtu_boundaries(struct mlx5e_priv *priv)
 }

 static int mlx5e_netdev_set_tcs(struct net_device *netdev, u16 nch, u8 ntc,
-				struct tc_mqprio_qopt_offload *mqprio)
+				struct netdev_tc_txq *tc_to_txq)
 {
	int tc, err;

@@ -2282,11 +2282,8 @@ static int mlx5e_netdev_set_tcs(struct net_device *netdev, u16 nch, u8 ntc,
	for (tc = 0; tc < ntc; tc++) {
		u16 count, offset;

-		/* For DCB mode, map netdev TCs to offset 0
-		 * We have our own UP to TXQ mapping for QoS
-		 */
-		count = mqprio ? mqprio->qopt.count[tc] : nch;
-		offset = mqprio ? mqprio->qopt.offset[tc] : 0;
+		count = tc_to_txq[tc].count;
+		offset = tc_to_txq[tc].offset;
		netdev_set_tc_queue(netdev, tc, count, offset);
	}
@@ -2315,19 +2312,24 @@ int mlx5e_update_tx_netdev_queues(struct mlx5e_priv *priv)

 static int mlx5e_update_netdev_queues(struct mlx5e_priv *priv)
 {
+	struct netdev_tc_txq old_tc_to_txq[TC_MAX_QUEUE], *tc_to_txq;
	struct net_device *netdev = priv->netdev;
	int old_num_txqs, old_ntc;
	int num_rxqs, nch, ntc;
	int err;
+	int i;

	old_num_txqs = netdev->real_num_tx_queues;
	old_ntc = netdev->num_tc ? : 1;
+	for (i = 0; i < ARRAY_SIZE(old_tc_to_txq); i++)
+		old_tc_to_txq[i] = netdev->tc_to_txq[i];

	nch = priv->channels.params.num_channels;
-	ntc = mlx5e_get_dcb_num_tc(&priv->channels.params);
+	ntc = priv->channels.params.mqprio.num_tc;
	num_rxqs = nch * priv->profile->rq_groups;
+	tc_to_txq = priv->channels.params.mqprio.tc_to_txq;

-	err = mlx5e_netdev_set_tcs(netdev, nch, ntc, NULL);
+	err = mlx5e_netdev_set_tcs(netdev, nch, ntc, tc_to_txq);
	if (err)
		goto err_out;
	err = mlx5e_update_tx_netdev_queues(priv);
@@ -2350,11 +2352,14 @@ err_txqs:
	WARN_ON_ONCE(netif_set_real_num_tx_queues(netdev, old_num_txqs));

 err_tcs:
-	mlx5e_netdev_set_tcs(netdev, old_num_txqs / old_ntc, old_ntc, NULL);
+	WARN_ON_ONCE(mlx5e_netdev_set_tcs(netdev, old_num_txqs / old_ntc, old_ntc,
+					  old_tc_to_txq));
 err_out:
	return err;
 }

+static MLX5E_DEFINE_PREACTIVATE_WRAPPER_CTX(mlx5e_update_netdev_queues);
+
 static void mlx5e_set_default_xps_cpumasks(struct mlx5e_priv *priv,
					   struct mlx5e_params *params)
 {
@@ -2861,6 +2866,58 @@ static int mlx5e_modify_channels_vsd(struct mlx5e_channels *chs, bool vsd)
	return 0;
 }

+static void mlx5e_mqprio_build_default_tc_to_txq(struct netdev_tc_txq *tc_to_txq,
+						 int ntc, int nch)
+{
+	int tc;
+
+	memset(tc_to_txq, 0, sizeof(*tc_to_txq) * TC_MAX_QUEUE);
+
+	/* Map netdev TCs to offset 0.
+	 * We have our own UP to TXQ mapping for DCB mode of QoS
+	 */
+	for (tc = 0; tc < ntc; tc++) {
+		tc_to_txq[tc] = (struct netdev_tc_txq) {
+			.count = nch,
+			.offset = 0,
+		};
+	}
+}
+
+static void mlx5e_mqprio_build_tc_to_txq(struct netdev_tc_txq *tc_to_txq,
+					 struct tc_mqprio_qopt *qopt)
+{
+	int tc;
+
+	for (tc = 0; tc < TC_MAX_QUEUE; tc++) {
+		tc_to_txq[tc] = (struct netdev_tc_txq) {
+			.count = qopt->count[tc],
+			.offset = qopt->offset[tc],
+		};
+	}
+}
+
+static void mlx5e_params_mqprio_dcb_set(struct mlx5e_params *params, u8 num_tc)
+{
+	params->mqprio.mode = TC_MQPRIO_MODE_DCB;
+	params->mqprio.num_tc = num_tc;
+	mlx5e_mqprio_build_default_tc_to_txq(params->mqprio.tc_to_txq, num_tc,
+					     params->num_channels);
+}
+
+static void mlx5e_params_mqprio_channel_set(struct mlx5e_params *params,
+					    struct tc_mqprio_qopt *qopt)
+{
+	params->mqprio.mode = TC_MQPRIO_MODE_CHANNEL;
+	params->mqprio.num_tc = qopt->num_tc;
+	mlx5e_mqprio_build_tc_to_txq(params->mqprio.tc_to_txq, qopt);
+}
+
+static void mlx5e_params_mqprio_reset(struct mlx5e_params *params)
+{
+	mlx5e_params_mqprio_dcb_set(params, 1);
+}
+
 static int mlx5e_setup_tc_mqprio_dcb(struct mlx5e_priv *priv,
				     struct tc_mqprio_qopt *mqprio)
 {
@@ -2874,8 +2931,7 @@ static int mlx5e_setup_tc_mqprio_dcb(struct mlx5e_priv *priv,
		return -EINVAL;

	new_params = priv->channels.params;
-	new_params.mqprio.mode = TC_MQPRIO_MODE_DCB;
-	new_params.mqprio.num_tc = tc ? tc : 1;
+	mlx5e_params_mqprio_dcb_set(&new_params, tc ? tc : 1);

	err = mlx5e_safe_switch_params(priv, &new_params,
				       mlx5e_num_channels_changed_ctx, NULL, true);
@@ -2889,9 +2945,17 @@ static int mlx5e_mqprio_channel_validate(struct mlx5e_priv *priv,
					 struct tc_mqprio_qopt_offload *mqprio)
 {
	struct net_device *netdev = priv->netdev;
+	struct mlx5e_ptp *ptp_channel;
	int agg_count = 0;
	int i;

+	ptp_channel = priv->channels.ptp;
+	if (ptp_channel && test_bit(MLX5E_PTP_STATE_TX, ptp_channel->state)) {
+		netdev_err(netdev,
+			   "Cannot activate MQPRIO mode channel since it conflicts with TX port TS\n");
+		return -EINVAL;
+	}
+
	if (mqprio->qopt.offset[0] != 0 || mqprio->qopt.num_tc < 1 ||
	    mqprio->qopt.num_tc > MLX5E_MAX_NUM_MQPRIO_CH_TC)
		return -EINVAL;
@@ -2926,25 +2990,12 @@ static int mlx5e_mqprio_channel_validate(struct mlx5e_priv *priv,
	return 0;
 }

-static int mlx5e_mqprio_channel_set_tcs_ctx(struct mlx5e_priv *priv, void *ctx)
-{
-	struct tc_mqprio_qopt_offload *mqprio = (struct tc_mqprio_qopt_offload *)ctx;
-	struct net_device *netdev = priv->netdev;
-	u8 num_tc;
-
-	if (priv->channels.params.mqprio.mode != TC_MQPRIO_MODE_CHANNEL)
-		return -EINVAL;
-
-	num_tc = priv->channels.params.mqprio.num_tc;
-	mlx5e_netdev_set_tcs(netdev, 0, num_tc, mqprio);
-
-	return 0;
-}
-
 static int mlx5e_setup_tc_mqprio_channel(struct mlx5e_priv *priv,
					 struct tc_mqprio_qopt_offload *mqprio)
 {
+	mlx5e_fp_preactivate preactivate;
	struct mlx5e_params new_params;
+	bool nch_changed;
	int err;

	err = mlx5e_mqprio_channel_validate(priv, mqprio);
@@ -2952,12 +3003,12 @@ static int mlx5e_setup_tc_mqprio_channel(struct mlx5e_priv *priv,
		return err;

	new_params = priv->channels.params;
-	new_params.mqprio.mode = TC_MQPRIO_MODE_CHANNEL;
-	new_params.mqprio.num_tc = mqprio->qopt.num_tc;
-	err = mlx5e_safe_switch_params(priv, &new_params,
-				       mlx5e_mqprio_channel_set_tcs_ctx, mqprio, true);
+	mlx5e_params_mqprio_channel_set(&new_params, &mqprio->qopt);

-	return err;
+	nch_changed = mlx5e_get_dcb_num_tc(&priv->channels.params) > 1;
+	preactivate = nch_changed ? mlx5e_num_channels_changed_ctx :
+		mlx5e_update_netdev_queues_ctx;
+	return mlx5e_safe_switch_params(priv, &new_params, preactivate, NULL, true);
 }

 static int mlx5e_setup_tc_mqprio(struct mlx5e_priv *priv,
@@ -3065,7 +3116,7 @@ void mlx5e_fold_sw_stats64(struct mlx5e_priv *priv, struct rtnl_link_stats64 *s)
 {
	int i;

-	for (i = 0; i < priv->max_nch; i++) {
+	for (i = 0; i < priv->stats_nch; i++) {
		struct mlx5e_channel_stats *channel_stats = &priv->channel_stats[i];
		struct mlx5e_rq_stats *xskrq_stats = &channel_stats->xskrq;
		struct mlx5e_rq_stats *rq_stats = &channel_stats->rq;
@@ -4186,13 +4237,11 @@ void mlx5e_build_nic_params(struct mlx5e_priv *priv, struct mlx5e_xsk *xsk, u16
	struct mlx5_core_dev *mdev = priv->mdev;
	u8 rx_cq_period_mode;

-	priv->max_nch = mlx5e_calc_max_nch(priv, priv->profile);
-
	params->sw_mtu = mtu;
	params->hard_mtu = MLX5E_ETH_HARD_MTU;
	params->num_channels = min_t(unsigned int, MLX5E_MAX_NUM_CHANNELS / 2,
				     priv->max_nch);
-	params->mqprio.num_tc = 1;
+	mlx5e_params_mqprio_reset(params);

	/* Set an initial non-zero value, so that mlx5e_select_queue won't
	 * divide by zero if called before first activating channels.
@@ -4682,8 +4731,35 @@ static const struct mlx5e_profile mlx5e_nic_profile = {
	.rx_ptp_support = true,
 };

+static unsigned int
+mlx5e_calc_max_nch(struct mlx5_core_dev *mdev, struct net_device *netdev,
+		   const struct mlx5e_profile *profile)
+{
+	unsigned int max_nch, tmp;
+
+	/* core resources */
+	max_nch = mlx5e_get_max_num_channels(mdev);
+
+	/* netdev rx queues */
+	tmp = netdev->num_rx_queues / max_t(u8, profile->rq_groups, 1);
+	max_nch = min_t(unsigned int, max_nch, tmp);
+
+	/* netdev tx queues */
+	tmp = netdev->num_tx_queues;
+	if (mlx5_qos_is_supported(mdev))
+		tmp -= mlx5e_qos_max_leaf_nodes(mdev);
+	if (MLX5_CAP_GEN(mdev, ts_cqe_to_dest_cqn))
+		tmp -= profile->max_tc;
+	tmp = tmp / profile->max_tc;
+	max_nch = min_t(unsigned int, max_nch, tmp);
+
+	return max_nch;
+}
+
 /* mlx5e generic netdev management API (move to en_common.c) */
 int mlx5e_priv_init(struct mlx5e_priv *priv,
		    const struct mlx5e_profile *profile,
		    struct net_device *netdev,
		    struct mlx5_core_dev *mdev)
 {
@@ -4691,6 +4767,8 @@ int mlx5e_priv_init(struct mlx5e_priv *priv,
	priv->mdev = mdev;
	priv->netdev = netdev;
	priv->msglevel = MLX5E_MSG_LEVEL;
+	priv->max_nch = mlx5e_calc_max_nch(mdev, netdev, profile);
+	priv->stats_nch = priv->max_nch;
	priv->max_opened_tc = 1;

	if (!alloc_cpumask_var(&priv->scratchpad.cpumask, GFP_KERNEL))
@@ -4734,7 +4812,8 @@ void mlx5e_priv_cleanup(struct mlx5e_priv *priv)
 }

 struct net_device *
-mlx5e_create_netdev(struct mlx5_core_dev *mdev, unsigned int txqs, unsigned int rxqs)
+mlx5e_create_netdev(struct mlx5_core_dev *mdev, const struct mlx5e_profile *profile,
+		    unsigned int txqs, unsigned int rxqs)
 {
	struct net_device *netdev;
	int err;
@@ -4745,7 +4824,7 @@ mlx5e_create_netdev(struct mlx5_core_dev *mdev, unsigned int txqs, unsigned int
		return NULL;
	}

-	err = mlx5e_priv_init(netdev_priv(netdev), netdev, mdev);
+	err = mlx5e_priv_init(netdev_priv(netdev), profile, netdev, mdev);
	if (err) {
		mlx5_core_err(mdev, "mlx5e_priv_init failed, err=%d\n", err);
		goto err_free_netdev;
@@ -4787,7 +4866,7 @@ int mlx5e_attach_netdev(struct mlx5e_priv *priv)
	clear_bit(MLX5E_STATE_DESTROYING, &priv->state);

	/* max number of channels may have changed */
-	max_nch = mlx5e_get_max_num_channels(priv->mdev);
+	max_nch = mlx5e_calc_max_nch(priv->mdev, priv->netdev, profile);
	if (priv->channels.params.num_channels > max_nch) {
		mlx5_core_warn(priv->mdev, "MLX5E: Reducing number of channels to %d\n", max_nch);
		/* Reducing the number of channels - RXFH has to be reset, and
@@ -4795,7 +4874,18 @@ int mlx5e_attach_netdev(struct mlx5e_priv *priv)
		 */
		priv->netdev->priv_flags &= ~IFF_RXFH_CONFIGURED;
		priv->channels.params.num_channels = max_nch;
+		if (priv->channels.params.mqprio.mode == TC_MQPRIO_MODE_CHANNEL) {
+			mlx5_core_warn(priv->mdev, "MLX5E: Disabling MQPRIO channel mode\n");
+			mlx5e_params_mqprio_reset(&priv->channels.params);
+		}
+	}
+	if (max_nch != priv->max_nch) {
+		mlx5_core_warn(priv->mdev,
+			       "MLX5E: Updating max number of channels from %u to %u\n",
+			       priv->max_nch, max_nch);
+		priv->max_nch = max_nch;
	}

	/* 1. Set the real number of queues in the kernel the first time.
	 * 2. Set our default XPS cpumask.
	 * 3. Build the RQT.
@@ -4860,7 +4950,7 @@ mlx5e_netdev_attach_profile(struct net_device *netdev, struct mlx5_core_dev *mde
	struct mlx5e_priv *priv = netdev_priv(netdev);
	int err;

-	err = mlx5e_priv_init(priv, netdev, mdev);
+	err = mlx5e_priv_init(priv, new_profile, netdev, mdev);
	if (err) {
		mlx5_core_err(mdev, "mlx5e_priv_init failed, err=%d\n", err);
		return err;
@@ -4886,20 +4976,12 @@ priv_cleanup:
 int mlx5e_netdev_change_profile(struct mlx5e_priv *priv,
				const struct mlx5e_profile *new_profile, void *new_ppriv)
 {
-	unsigned int new_max_nch = mlx5e_calc_max_nch(priv, new_profile);
	const struct mlx5e_profile *orig_profile = priv->profile;
	struct net_device *netdev = priv->netdev;
	struct mlx5_core_dev *mdev = priv->mdev;
	void *orig_ppriv = priv->ppriv;
	int err, rollback_err;

-	/* sanity */
-	if (new_max_nch != priv->max_nch) {
-		netdev_warn(netdev, "%s: Replacing profile with different max channels\n",
-			    __func__);
-		return -EINVAL;
-	}
-
	/* cleanup old profile */
	mlx5e_detach_netdev(priv);
	priv->profile->cleanup(priv);
@ -4995,7 +5077,7 @@ static int mlx5e_probe(struct auxiliary_device *adev,
|
||||
nch = mlx5e_get_max_num_channels(mdev);
|
||||
txqs = nch * profile->max_tc + ptp_txqs + qos_sqs;
|
||||
rxqs = nch * profile->rq_groups;
|
||||
netdev = mlx5e_create_netdev(mdev, txqs, rxqs);
|
||||
netdev = mlx5e_create_netdev(mdev, profile, txqs, rxqs);
|
||||
if (!netdev) {
|
||||
mlx5_core_err(mdev, "mlx5e_create_netdev failed\n");
|
||||
return -ENOMEM;
|
||||
|
@@ -596,7 +596,6 @@ static void mlx5e_build_rep_params(struct net_device *netdev)
 			MLX5_CQ_PERIOD_MODE_START_FROM_CQE :
 			MLX5_CQ_PERIOD_MODE_START_FROM_EQE;
 
-	priv->max_nch = mlx5e_calc_max_nch(priv, priv->profile);
 	params = &priv->channels.params;
 
 	params->num_channels = MLX5E_REP_PARAMS_DEF_NUM_CHANNELS;
@@ -1169,7 +1168,7 @@ mlx5e_vport_vf_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
 	nch = mlx5e_get_max_num_channels(dev);
 	txqs = nch * profile->max_tc;
 	rxqs = nch * profile->rq_groups;
-	netdev = mlx5e_create_netdev(dev, txqs, rxqs);
+	netdev = mlx5e_create_netdev(dev, profile, txqs, rxqs);
 	if (!netdev) {
 		mlx5_core_warn(dev,
 			       "Failed to create representor netdev for vport %d\n",
@@ -1001,14 +1001,9 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
 		goto csum_unnecessary;
 
 	if (likely(is_last_ethertype_ip(skb, &network_depth, &proto))) {
-		u8 ipproto = get_ip_proto(skb, network_depth, proto);
-
-		if (unlikely(ipproto == IPPROTO_SCTP))
+		if (unlikely(get_ip_proto(skb, network_depth, proto) == IPPROTO_SCTP))
 			goto csum_unnecessary;
 
-		if (unlikely(mlx5_ipsec_is_rx_flow(cqe)))
-			goto csum_none;
-
 		stats->csum_complete++;
 		skb->ip_summed = CHECKSUM_COMPLETE;
 		skb->csum = csum_unfold((__force __sum16)cqe->check_sum);
@@ -34,6 +34,7 @@
 #include "en.h"
 #include "en_accel/tls.h"
 #include "en_accel/en_accel.h"
+#include "en/ptp.h"
 
 static unsigned int stats_grps_num(struct mlx5e_priv *priv)
 {
@@ -450,7 +451,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(sw)
 
 	memset(s, 0, sizeof(*s));
 
-	for (i = 0; i < priv->max_nch; i++) {
+	for (i = 0; i < priv->stats_nch; i++) {
 		struct mlx5e_channel_stats *channel_stats =
 			&priv->channel_stats[i];
 		int j;
@@ -2076,7 +2077,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STRS(ptp)
 	if (priv->rx_ptp_opened) {
 		for (i = 0; i < NUM_PTP_RQ_STATS; i++)
 			sprintf(data + (idx++) * ETH_GSTRING_LEN,
-				ptp_rq_stats_desc[i].format);
+				ptp_rq_stats_desc[i].format, MLX5E_PTP_CHANNEL_IX);
 	}
 	return idx;
 }
@@ -2119,7 +2120,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(ptp) { return; }
 
 static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(channels)
 {
-	int max_nch = priv->max_nch;
+	int max_nch = priv->stats_nch;
 
 	return (NUM_RQ_STATS * max_nch) +
 	       (NUM_CH_STATS * max_nch) +
@@ -2133,7 +2134,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(channels)
 static MLX5E_DECLARE_STATS_GRP_OP_FILL_STRS(channels)
 {
 	bool is_xsk = priv->xsk.ever_used;
-	int max_nch = priv->max_nch;
+	int max_nch = priv->stats_nch;
 	int i, j, tc;
 
 	for (i = 0; i < max_nch; i++)
@@ -2175,7 +2176,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STRS(channels)
 static MLX5E_DECLARE_STATS_GRP_OP_FILL_STATS(channels)
 {
 	bool is_xsk = priv->xsk.ever_used;
-	int max_nch = priv->max_nch;
+	int max_nch = priv->stats_nch;
 	int i, j, tc;
 
 	for (i = 0; i < max_nch; i++)
@@ -79,12 +79,16 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
 	int dest_num = 0;
 	int err = 0;
 
-	if (MLX5_CAP_ESW_EGRESS_ACL(esw->dev, flow_counter)) {
+	if (vport->egress.legacy.drop_counter) {
+		drop_counter = vport->egress.legacy.drop_counter;
+	} else if (MLX5_CAP_ESW_EGRESS_ACL(esw->dev, flow_counter)) {
 		drop_counter = mlx5_fc_create(esw->dev, false);
-		if (IS_ERR(drop_counter))
+		if (IS_ERR(drop_counter)) {
 			esw_warn(esw->dev,
 				 "vport[%d] configure egress drop rule counter err(%ld)\n",
 				 vport->vport, PTR_ERR(drop_counter));
+			drop_counter = NULL;
+		}
+		vport->egress.legacy.drop_counter = drop_counter;
 	}
 
@@ -123,7 +127,7 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
 	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
 
 	/* Attach egress drop flow counter */
-	if (!IS_ERR_OR_NULL(drop_counter)) {
+	if (drop_counter) {
 		flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;
 		drop_ctr_dst.type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
 		drop_ctr_dst.counter_id = mlx5_fc_id(drop_counter);
@@ -162,7 +166,7 @@ void esw_acl_egress_lgcy_cleanup(struct mlx5_eswitch *esw,
 	esw_acl_egress_table_destroy(vport);
 
 clean_drop_counter:
-	if (!IS_ERR_OR_NULL(vport->egress.legacy.drop_counter)) {
+	if (vport->egress.legacy.drop_counter) {
 		mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
 		vport->egress.legacy.drop_counter = NULL;
 	}
@@ -160,7 +160,9 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
 
 	esw_acl_ingress_lgcy_rules_destroy(vport);
 
-	if (MLX5_CAP_ESW_INGRESS_ACL(esw->dev, flow_counter)) {
+	if (vport->ingress.legacy.drop_counter) {
+		counter = vport->ingress.legacy.drop_counter;
+	} else if (MLX5_CAP_ESW_INGRESS_ACL(esw->dev, flow_counter)) {
 		counter = mlx5_fc_create(esw->dev, false);
 		if (IS_ERR(counter)) {
 			esw_warn(esw->dev,
@@ -113,7 +113,7 @@ static void mlx5i_grp_sw_update_stats(struct mlx5e_priv *priv)
 	struct mlx5e_sw_stats s = { 0 };
 	int i, j;
 
-	for (i = 0; i < priv->max_nch; i++) {
+	for (i = 0; i < priv->stats_nch; i++) {
 		struct mlx5e_channel_stats *channel_stats;
 		struct mlx5e_rq_stats *rq_stats;
 
@@ -711,7 +711,7 @@ static int mlx5_rdma_setup_rn(struct ib_device *ibdev, u32 port_num,
 		goto destroy_ht;
 	}
 
-	err = mlx5e_priv_init(epriv, netdev, mdev);
+	err = mlx5e_priv_init(epriv, prof, netdev, mdev);
 	if (err)
 		goto destroy_mdev_resources;
 
@@ -448,22 +448,20 @@ static u64 find_target_cycles(struct mlx5_core_dev *mdev, s64 target_ns)
 	return cycles_now + cycles_delta;
 }
 
-static u64 perout_conf_internal_timer(struct mlx5_core_dev *mdev,
-				      s64 sec, u32 nsec)
+static u64 perout_conf_internal_timer(struct mlx5_core_dev *mdev, s64 sec)
 {
-	struct timespec64 ts;
+	struct timespec64 ts = {};
 	s64 target_ns;
 
 	ts.tv_sec = sec;
-	ts.tv_nsec = nsec;
 	target_ns = timespec64_to_ns(&ts);
 
 	return find_target_cycles(mdev, target_ns);
 }
 
-static u64 perout_conf_real_time(s64 sec, u32 nsec)
+static u64 perout_conf_real_time(s64 sec)
 {
-	return (u64)nsec | (u64)sec << 32;
+	return (u64)sec << 32;
 }
 
 static int mlx5_perout_configure(struct ptp_clock_info *ptp,
@@ -474,6 +472,7 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
 			container_of(ptp, struct mlx5_clock, ptp_info);
 	struct mlx5_core_dev *mdev =
 			container_of(clock, struct mlx5_core_dev, clock);
+	bool rt_mode = mlx5_real_time_mode(mdev);
 	u32 in[MLX5_ST_SZ_DW(mtpps_reg)] = {0};
 	struct timespec64 ts;
 	u32 field_select = 0;
@@ -501,8 +500,10 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
 
 	if (on) {
-		bool rt_mode = mlx5_real_time_mode(mdev);
-		u32 nsec;
-		s64 sec;
+		s64 sec = rq->perout.start.sec;
+
+		if (rq->perout.start.nsec)
+			return -EINVAL;
 
 		pin_mode = MLX5_PIN_MODE_OUT;
 		pattern = MLX5_OUT_PATTERN_PERIODIC;
@@ -513,14 +514,11 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
 		if ((ns >> 1) != 500000000LL)
 			return -EINVAL;
 
-		nsec = rq->perout.start.nsec;
-		sec = rq->perout.start.sec;
-
 		if (rt_mode && sec > U32_MAX)
 			return -EINVAL;
 
-		time_stamp = rt_mode ? perout_conf_real_time(sec, nsec) :
-				       perout_conf_internal_timer(mdev, sec, nsec);
+		time_stamp = rt_mode ? perout_conf_real_time(sec) :
+				       perout_conf_internal_timer(mdev, sec);
 
 		field_select |= MLX5_MTPPS_FS_PIN_MODE |
 				MLX5_MTPPS_FS_PATTERN |
@@ -538,6 +536,9 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
 	if (err)
 		return err;
 
+	if (rt_mode)
+		return 0;
+
 	return mlx5_set_mtppse(mdev, pin, 0,
 			       MLX5_EVENT_MODE_REPETETIVE & on);
 }
@@ -705,20 +706,14 @@ static void ts_next_sec(struct timespec64 *ts)
 static u64 perout_conf_next_event_timer(struct mlx5_core_dev *mdev,
 					struct mlx5_clock *clock)
 {
-	bool rt_mode = mlx5_real_time_mode(mdev);
 	struct timespec64 ts;
 	s64 target_ns;
 
-	if (rt_mode)
-		ts = mlx5_ptp_gettimex_real_time(mdev, NULL);
-	else
-		mlx5_ptp_gettimex(&clock->ptp_info, &ts, NULL);
-
+	mlx5_ptp_gettimex(&clock->ptp_info, &ts, NULL);
 	ts_next_sec(&ts);
 	target_ns = timespec64_to_ns(&ts);
 
-	return rt_mode ? perout_conf_real_time(ts.tv_sec, ts.tv_nsec) :
-			 find_target_cycles(mdev, target_ns);
+	return find_target_cycles(mdev, target_ns);
 }
 
 static int mlx5_pps_event(struct notifier_block *nb,
@@ -13,8 +13,8 @@
 #endif
 
 #define MLX5_MAX_IRQ_NAME (32)
-/* max irq_index is 255. three chars */
-#define MLX5_MAX_IRQ_IDX_CHARS (3)
+/* max irq_index is 2047, so four chars */
+#define MLX5_MAX_IRQ_IDX_CHARS (4)
 
 #define MLX5_SFS_PER_CTRL_IRQ 64
 #define MLX5_IRQ_CTRL_SF_MAX 8
@@ -633,8 +633,9 @@ void mlx5_irq_table_destroy(struct mlx5_core_dev *dev)
 int mlx5_irq_table_get_sfs_vec(struct mlx5_irq_table *table)
 {
 	if (table->sf_comp_pool)
-		return table->sf_comp_pool->xa_num_irqs.max -
-			table->sf_comp_pool->xa_num_irqs.min + 1;
+		return min_t(int, num_online_cpus(),
+			     table->sf_comp_pool->xa_num_irqs.max -
+			     table->sf_comp_pool->xa_num_irqs.min + 1);
 	else
 		return mlx5_irq_table_get_num_comp(table);
 }
@@ -998,8 +998,8 @@ ocelot_vcap_block_find_filter_by_index(struct ocelot_vcap_block *block,
 }
 
 struct ocelot_vcap_filter *
-ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block, int cookie,
-				    bool tc_offload)
+ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block,
+				    unsigned long cookie, bool tc_offload)
 {
 	struct ocelot_vcap_filter *filter;
 
@@ -1292,8 +1292,10 @@ int ionic_lif_addr_add(struct ionic_lif *lif, const u8 *addr)
 	if (err && err != -EEXIST) {
 		/* set the state back to NEW so we can try again later */
 		f = ionic_rx_filter_by_addr(lif, addr);
-		if (f && f->state == IONIC_FILTER_STATE_SYNCED)
+		if (f && f->state == IONIC_FILTER_STATE_SYNCED) {
 			f->state = IONIC_FILTER_STATE_NEW;
+			set_bit(IONIC_LIF_F_FILTER_SYNC_NEEDED, lif->state);
+		}
 	}
 
 	spin_unlock_bh(&lif->rx_filters.lock);
 
@@ -349,9 +349,6 @@ loop_out:
 	list_for_each_entry_safe(sync_item, spos, &sync_add_list, list) {
 		(void)ionic_lif_addr_add(lif, sync_item->f.cmd.mac.addr);
 
-		if (sync_item->f.state != IONIC_FILTER_STATE_SYNCED)
-			set_bit(IONIC_LIF_F_FILTER_SYNC_NEEDED, lif->state);
-
 		list_del(&sync_item->list);
 		devm_kfree(dev, sync_item);
 	}
@@ -21,6 +21,7 @@
 #include <linux/delay.h>
 #include <linux/mfd/syscon.h>
 #include <linux/regmap.h>
+#include <linux/pm_runtime.h>
 
 #include "stmmac_platform.h"
 
@@ -1528,6 +1529,8 @@ static int rk_gmac_powerup(struct rk_priv_data *bsp_priv)
 		return ret;
 	}
 
+	pm_runtime_get_sync(dev);
+
 	if (bsp_priv->integrated_phy)
 		rk_gmac_integrated_phy_powerup(bsp_priv);
 
@@ -1539,6 +1542,8 @@ static void rk_gmac_powerdown(struct rk_priv_data *gmac)
 	if (gmac->integrated_phy)
 		rk_gmac_integrated_phy_powerdown(gmac);
 
+	pm_runtime_put_sync(&gmac->pdev->dev);
+
 	phy_power_on(gmac, false);
 	gmac_clk_enable(gmac, false);
 }
@@ -477,6 +477,10 @@ bool stmmac_eee_init(struct stmmac_priv *priv)
 			stmmac_lpi_entry_timer_config(priv, 0);
 			del_timer_sync(&priv->eee_ctrl_timer);
 			stmmac_set_eee_timer(priv, priv->hw, 0, eee_tw_timer);
+			if (priv->hw->xpcs)
+				xpcs_config_eee(priv->hw->xpcs,
+						priv->plat->mult_fact_100ns,
+						false);
 		}
 		mutex_unlock(&priv->lock);
 		return false;
@@ -1038,7 +1042,7 @@ static void stmmac_mac_link_down(struct phylink_config *config,
 	stmmac_mac_set(priv, priv->ioaddr, false);
 	priv->eee_active = false;
 	priv->tx_lpi_enabled = false;
-	stmmac_eee_init(priv);
+	priv->eee_enabled = stmmac_eee_init(priv);
 	stmmac_set_eee_pls(priv, priv->hw, false);
 
 	if (priv->dma_cap.fpesel)
@@ -666,6 +666,10 @@ int xpcs_config_eee(struct dw_xpcs *xpcs, int mult_fact_100ns, int enable)
 {
 	int ret;
 
+	ret = xpcs_read(xpcs, MDIO_MMD_VEND2, DW_VR_MII_EEE_MCTRL0);
+	if (ret < 0)
+		return ret;
+
 	if (enable) {
 		/* Enable EEE */
 		ret = DW_VR_MII_EEE_LTX_EN | DW_VR_MII_EEE_LRX_EN |
@@ -673,9 +677,6 @@ int xpcs_config_eee(struct dw_xpcs *xpcs, int mult_fact_100ns, int enable)
 		      DW_VR_MII_EEE_TX_EN_CTRL | DW_VR_MII_EEE_RX_EN_CTRL |
 		      mult_fact_100ns << DW_VR_MII_EEE_MULT_FACT_100NS_SHIFT;
 	} else {
-		ret = xpcs_read(xpcs, MDIO_MMD_VEND2, DW_VR_MII_EEE_MCTRL0);
-		if (ret < 0)
-			return ret;
 		ret &= ~(DW_VR_MII_EEE_LTX_EN | DW_VR_MII_EEE_LRX_EN |
 			 DW_VR_MII_EEE_TX_QUIET_EN | DW_VR_MII_EEE_RX_QUIET_EN |
 			 DW_VR_MII_EEE_TX_EN_CTRL | DW_VR_MII_EEE_RX_EN_CTRL |
@@ -690,21 +691,28 @@ int xpcs_config_eee(struct dw_xpcs *xpcs, int mult_fact_100ns, int enable)
 	if (ret < 0)
 		return ret;
 
-	ret |= DW_VR_MII_EEE_TRN_LPI;
+	if (enable)
+		ret |= DW_VR_MII_EEE_TRN_LPI;
+	else
+		ret &= ~DW_VR_MII_EEE_TRN_LPI;
 
 	return xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_EEE_MCTRL1, ret);
 }
 EXPORT_SYMBOL_GPL(xpcs_config_eee);
 
 static int xpcs_config_aneg_c37_sgmii(struct dw_xpcs *xpcs, unsigned int mode)
 {
-	int ret;
+	int ret, mdio_ctrl;
 
 	/* For AN for C37 SGMII mode, the settings are :-
-	 * 1) VR_MII_AN_CTRL Bit(2:1)[PCS_MODE] = 10b (SGMII AN)
-	 * 2) VR_MII_AN_CTRL Bit(3) [TX_CONFIG] = 0b (MAC side SGMII)
+	 * 1) VR_MII_MMD_CTRL Bit(12) [AN_ENABLE] = 0b (Disable SGMII AN in case
+	      it is already enabled)
+	 * 2) VR_MII_AN_CTRL Bit(2:1)[PCS_MODE] = 10b (SGMII AN)
+	 * 3) VR_MII_AN_CTRL Bit(3) [TX_CONFIG] = 0b (MAC side SGMII)
 	 *    DW xPCS used with DW EQoS MAC is always MAC side SGMII.
-	 * 3) VR_MII_DIG_CTRL1 Bit(9) [MAC_AUTO_SW] = 1b (Automatic
+	 * 4) VR_MII_DIG_CTRL1 Bit(9) [MAC_AUTO_SW] = 1b (Automatic
 	 *    speed/duplex mode change by HW after SGMII AN complete)
+	 * 5) VR_MII_MMD_CTRL Bit(12) [AN_ENABLE] = 1b (Enable SGMII AN)
 	 *
 	 * Note: Since it is MAC side SGMII, there is no need to set
 	 *       SR_MII_AN_ADV. MAC side SGMII receives AN Tx Config from
@@ -712,6 +720,17 @@ static int xpcs_config_aneg_c37_sgmii(struct dw_xpcs *xpcs, unsigned int mode)
 	 *       between PHY and Link Partner. There is also no need to
 	 *       trigger AN restart for MAC-side SGMII.
 	 */
+	mdio_ctrl = xpcs_read(xpcs, MDIO_MMD_VEND2, DW_VR_MII_MMD_CTRL);
+	if (mdio_ctrl < 0)
+		return mdio_ctrl;
+
+	if (mdio_ctrl & AN_CL37_EN) {
+		ret = xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_MMD_CTRL,
+				 mdio_ctrl & ~AN_CL37_EN);
+		if (ret < 0)
+			return ret;
+	}
+
 	ret = xpcs_read(xpcs, MDIO_MMD_VEND2, DW_VR_MII_AN_CTRL);
 	if (ret < 0)
 		return ret;
@@ -736,7 +755,15 @@ static int xpcs_config_aneg_c37_sgmii(struct dw_xpcs *xpcs, unsigned int mode)
 	else
 		ret &= ~DW_VR_MII_DIG_CTRL1_MAC_AUTO_SW;
 
-	return xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_DIG_CTRL1, ret);
+	ret = xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_DIG_CTRL1, ret);
+	if (ret < 0)
+		return ret;
+
+	if (phylink_autoneg_inband(mode))
+		ret = xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_MMD_CTRL,
+				 mdio_ctrl | AN_CL37_EN);
+
+	return ret;
 }
 
 static int xpcs_config_2500basex(struct dw_xpcs *xpcs)
@@ -538,10 +538,16 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
 	bus->dev.groups = NULL;
 	dev_set_name(&bus->dev, "%s", bus->id);
 
+	/* We need to set state to MDIOBUS_UNREGISTERED to correctly release
+	 * the device in mdiobus_free()
+	 *
+	 * State will be updated later in this function in case of success
+	 */
+	bus->state = MDIOBUS_UNREGISTERED;
+
 	err = device_register(&bus->dev);
 	if (err) {
 		pr_err("mii_bus %s failed to register\n", bus->id);
 		put_device(&bus->dev);
 		return -EINVAL;
 	}
 
@@ -134,7 +134,7 @@ static const char * const sm_state_strings[] = {
 	[SFP_S_LINK_UP] = "link_up",
 	[SFP_S_TX_FAULT] = "tx_fault",
 	[SFP_S_REINIT] = "reinit",
-	[SFP_S_TX_DISABLE] = "rx_disable",
+	[SFP_S_TX_DISABLE] = "tx_disable",
};
 
 static const char *sm_state_to_str(unsigned short sm_state)
@@ -767,6 +767,7 @@ enum rtl8152_flags {
 	PHY_RESET,
 	SCHEDULE_TASKLET,
 	GREEN_ETHERNET,
+	RX_EPROTO,
 };
 
 #define DEVICE_ID_THINKPAD_THUNDERBOLT3_DOCK_GEN2	0x3082
@@ -1770,6 +1771,14 @@ static void read_bulk_callback(struct urb *urb)
 		rtl_set_unplug(tp);
 		netif_device_detach(tp->netdev);
 		return;
+	case -EPROTO:
+		urb->actual_length = 0;
+		spin_lock_irqsave(&tp->rx_lock, flags);
+		list_add_tail(&agg->list, &tp->rx_done);
+		spin_unlock_irqrestore(&tp->rx_lock, flags);
+		set_bit(RX_EPROTO, &tp->flags);
+		schedule_delayed_work(&tp->schedule, 1);
+		return;
 	case -ENOENT:
 		return;	/* the urb is in unlink state */
 	case -ETIME:
@@ -2425,6 +2434,7 @@ static int rx_bottom(struct r8152 *tp, int budget)
 	if (list_empty(&tp->rx_done))
 		goto out1;
 
+	clear_bit(RX_EPROTO, &tp->flags);
 	INIT_LIST_HEAD(&rx_queue);
 	spin_lock_irqsave(&tp->rx_lock, flags);
 	list_splice_init(&tp->rx_done, &rx_queue);
@@ -2441,7 +2451,7 @@ static int rx_bottom(struct r8152 *tp, int budget)
 
 		agg = list_entry(cursor, struct rx_agg, list);
 		urb = agg->urb;
-		if (urb->actual_length < ETH_ZLEN)
+		if (urb->status != 0 || urb->actual_length < ETH_ZLEN)
 			goto submit;
 
 		agg_free = rtl_get_free_rx(tp, GFP_ATOMIC);
@@ -6643,6 +6653,10 @@ static void rtl_work_func_t(struct work_struct *work)
 	    netif_carrier_ok(tp->netdev))
 		tasklet_schedule(&tp->tx_tl);
 
+	if (test_and_clear_bit(RX_EPROTO, &tp->flags) &&
+	    !list_empty(&tp->rx_done))
+		napi_schedule(&tp->napi);
+
 	mutex_unlock(&tp->control);
 
 out1:
@@ -3,9 +3,7 @@ config ATH5K
 	tristate "Atheros 5xxx wireless cards support"
 	depends on (PCI || ATH25) && MAC80211
 	select ATH_COMMON
-	select MAC80211_LEDS
-	select LEDS_CLASS
-	select NEW_LEDS
+	select MAC80211_LEDS if LEDS_CLASS=y || LEDS_CLASS=MAC80211
 	select ATH5K_AHB if ATH25
 	select ATH5K_PCI if !ATH25
 	help
@@ -89,7 +89,8 @@ static const struct pci_device_id ath5k_led_devices[] = {
 
 void ath5k_led_enable(struct ath5k_hw *ah)
 {
-	if (test_bit(ATH_STAT_LEDSOFT, ah->status)) {
+	if (IS_ENABLED(CONFIG_MAC80211_LEDS) &&
+	    test_bit(ATH_STAT_LEDSOFT, ah->status)) {
 		ath5k_hw_set_gpio_output(ah, ah->led_pin);
 		ath5k_led_off(ah);
 	}
@@ -104,7 +105,8 @@ static void ath5k_led_on(struct ath5k_hw *ah)
 
 void ath5k_led_off(struct ath5k_hw *ah)
 {
-	if (!test_bit(ATH_STAT_LEDSOFT, ah->status))
+	if (!IS_ENABLED(CONFIG_MAC80211_LEDS) ||
+	    !test_bit(ATH_STAT_LEDSOFT, ah->status))
 		return;
 	ath5k_hw_set_gpio(ah, ah->led_pin, !ah->led_on);
 }
@@ -146,7 +148,7 @@ ath5k_register_led(struct ath5k_hw *ah, struct ath5k_led *led,
 static void
 ath5k_unregister_led(struct ath5k_led *led)
 {
-	if (!led->ah)
+	if (!IS_ENABLED(CONFIG_MAC80211_LEDS) || !led->ah)
 		return;
 	led_classdev_unregister(&led->led_dev);
 	ath5k_led_off(led->ah);
@@ -169,7 +171,7 @@ int ath5k_init_leds(struct ath5k_hw *ah)
 	char name[ATH5K_LED_MAX_NAME_LEN + 1];
 	const struct pci_device_id *match;
 
-	if (!ah->pdev)
+	if (!IS_ENABLED(CONFIG_MAC80211_LEDS) || !ah->pdev)
 		return 0;
 
 #ifdef CONFIG_ATH5K_AHB
@@ -7463,23 +7463,18 @@ static s32 brcmf_translate_country_code(struct brcmf_pub *drvr, char alpha2[2],
 	s32 found_index;
 	int i;
 
+	country_codes = drvr->settings->country_codes;
+	if (!country_codes) {
+		brcmf_dbg(TRACE, "No country codes configured for device\n");
+		return -EINVAL;
+	}
+
 	if ((alpha2[0] == ccreq->country_abbrev[0]) &&
 	    (alpha2[1] == ccreq->country_abbrev[1])) {
 		brcmf_dbg(TRACE, "Country code already set\n");
 		return -EAGAIN;
 	}
 
-	country_codes = drvr->settings->country_codes;
-	if (!country_codes) {
-		brcmf_dbg(TRACE, "No country codes configured for device, using ISO3166 code and 0 rev\n");
-		memset(ccreq, 0, sizeof(*ccreq));
-		ccreq->country_abbrev[0] = alpha2[0];
-		ccreq->country_abbrev[1] = alpha2[1];
-		ccreq->ccode[0] = alpha2[0];
-		ccreq->ccode[1] = alpha2[1];
-		return 0;
-	}
-
 	found_index = -1;
 	for (i = 0; i < country_codes->table_size; i++) {
 		cc = &country_codes->table[i];
@@ -160,6 +160,7 @@ static void iwl_mvm_wowlan_program_keys(struct ieee80211_hw *hw,
 			mvm->ptk_icvlen = key->icv_len;
 			mvm->gtk_ivlen = key->iv_len;
 			mvm->gtk_icvlen = key->icv_len;
+			mutex_unlock(&mvm->mutex);
 
 			/* don't upload key again */
 			return;
@@ -360,11 +361,11 @@ static void iwl_mvm_wowlan_get_rsc_v5_data(struct ieee80211_hw *hw,
 	if (sta) {
 		rsc = data->rsc->ucast_rsc;
 	} else {
-		if (WARN_ON(data->gtks > ARRAY_SIZE(data->gtk_ids)))
+		if (WARN_ON(data->gtks >= ARRAY_SIZE(data->gtk_ids)))
 			return;
 		data->gtk_ids[data->gtks] = key->keyidx;
 		rsc = data->rsc->mcast_rsc[data->gtks % 2];
-		if (WARN_ON(key->keyidx >
+		if (WARN_ON(key->keyidx >=
 			    ARRAY_SIZE(data->rsc->mcast_key_id_map)))
 			return;
 		data->rsc->mcast_key_id_map[key->keyidx] = data->gtks % 2;
@@ -662,12 +662,13 @@ static bool __iwl_mvm_remove_time_event(struct iwl_mvm *mvm,
 					u32 *uid)
 {
 	u32 id;
-	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif);
+	struct iwl_mvm_vif *mvmvif;
+	enum nl80211_iftype iftype;
 
 	if (!te_data->vif)
 		return false;
 
+	mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif);
+	iftype = te_data->vif->type;
+
 	/*
@@ -547,6 +547,8 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
 	IWL_DEV_INFO(0x43F0, 0x0074, iwl_ax201_cfg_qu_hr, NULL),
 	IWL_DEV_INFO(0x43F0, 0x0078, iwl_ax201_cfg_qu_hr, NULL),
 	IWL_DEV_INFO(0x43F0, 0x007C, iwl_ax201_cfg_qu_hr, NULL),
+	IWL_DEV_INFO(0x43F0, 0x1651, killer1650s_2ax_cfg_qu_b0_hr_b0, iwl_ax201_killer_1650s_name),
+	IWL_DEV_INFO(0x43F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, iwl_ax201_killer_1650i_name),
 	IWL_DEV_INFO(0x43F0, 0x2074, iwl_ax201_cfg_qu_hr, NULL),
 	IWL_DEV_INFO(0x43F0, 0x4070, iwl_ax201_cfg_qu_hr, NULL),
 	IWL_DEV_INFO(0xA0F0, 0x0070, iwl_ax201_cfg_qu_hr, NULL),
@@ -62,8 +62,8 @@ void *mwifiex_process_sta_txpd(struct mwifiex_private *priv,
 
 	pkt_type = mwifiex_is_skb_mgmt_frame(skb) ? PKT_TYPE_MGMT : 0;
 
-	pad = ((void *)skb->data - (sizeof(*local_tx_pd) + hroom)-
-			NULL) & (MWIFIEX_DMA_ALIGN_SZ - 1);
+	pad = ((uintptr_t)skb->data - (sizeof(*local_tx_pd) + hroom)) &
+	       (MWIFIEX_DMA_ALIGN_SZ - 1);
 	skb_push(skb, sizeof(*local_tx_pd) + pad);
 
 	local_tx_pd = (struct txpd *) skb->data;
@@ -475,8 +475,8 @@ void *mwifiex_process_uap_txpd(struct mwifiex_private *priv,
 
 	pkt_type = mwifiex_is_skb_mgmt_frame(skb) ? PKT_TYPE_MGMT : 0;
 
-	pad = ((void *)skb->data - (sizeof(*txpd) + hroom) - NULL) &
-			(MWIFIEX_DMA_ALIGN_SZ - 1);
+	pad = ((uintptr_t)skb->data - (sizeof(*txpd) + hroom)) &
+	       (MWIFIEX_DMA_ALIGN_SZ - 1);
 	skb_push(skb, sizeof(*txpd) + pad);
 
@@ -644,6 +644,7 @@ static const struct pci_device_id pch_ieee1588_pcidev_id[] = {
 	},
 	{0}
 };
+MODULE_DEVICE_TABLE(pci, pch_ieee1588_pcidev_id);
 
 static SIMPLE_DEV_PM_OPS(pch_pm_ops, pch_suspend, pch_resume);
@@ -308,7 +308,7 @@ static inline void ether_addr_copy(u8 *dst, const u8 *src)
  */
 static inline void eth_hw_addr_set(struct net_device *dev, const u8 *addr)
 {
-	ether_addr_copy(dev->dev_addr, addr);
+	__dev_addr_set(dev, addr, ETH_ALEN);
 }
 
 /**
@@ -17,7 +17,6 @@ struct inet_frags_ctl;
 struct nft_ct_frag6_pernet {
 	struct ctl_table_header *nf_frag_frags_hdr;
 	struct fqdir	*fqdir;
-	unsigned int users;
 };
 
 #endif /* _NF_DEFRAG_IPV6_H */
@@ -1202,7 +1202,7 @@ struct nft_object *nft_obj_lookup(const struct net *net,
 
 void nft_obj_notify(struct net *net, const struct nft_table *table,
 		    struct nft_object *obj, u32 portid, u32 seq,
-		    int event, int family, int report, gfp_t gfp);
+		    int event, u16 flags, int family, int report, gfp_t gfp);
 
 /**
  * struct nft_object_type - stateful object type
@@ -27,5 +27,11 @@ struct netns_nf {
 #if IS_ENABLED(CONFIG_DECNET)
 	struct nf_hook_entries __rcu *hooks_decnet[NF_DN_NUMHOOKS];
 #endif
+#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV4)
+	unsigned int defrag_ipv4_users;
+#endif
+#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6)
+	unsigned int defrag_ipv6_users;
+#endif
 };
 #endif
@@ -307,6 +307,7 @@ struct bpf_local_storage;
  *	@sk_priority: %SO_PRIORITY setting
  *	@sk_type: socket type (%SOCK_STREAM, etc)
  *	@sk_protocol: which protocol this socket belongs in this network family
+ *	@sk_peer_lock: lock protecting @sk_peer_pid and @sk_peer_cred
 *	@sk_peer_pid: &struct pid for this socket's peer
 *	@sk_peer_cred: %SO_PEERCRED setting
 *	@sk_rcvlowat: %SO_RCVLOWAT setting
@@ -694,7 +694,7 @@ int ocelot_vcap_filter_add(struct ocelot *ocelot,
 int ocelot_vcap_filter_del(struct ocelot *ocelot,
			   struct ocelot_vcap_filter *rule);
 struct ocelot_vcap_filter *
-ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block, int id,
-				    bool tc_offload);
+ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block,
+				    unsigned long cookie, bool tc_offload);
 
 #endif /* _OCELOT_VCAP_H_ */
@@ -213,13 +213,13 @@ enum {
 	XFRM_MSG_GETSPDINFO,
 #define XFRM_MSG_GETSPDINFO XFRM_MSG_GETSPDINFO
 
-	XFRM_MSG_MAPPING,
-#define XFRM_MSG_MAPPING XFRM_MSG_MAPPING
-
 	XFRM_MSG_SETDEFAULT,
 #define XFRM_MSG_SETDEFAULT XFRM_MSG_SETDEFAULT
 	XFRM_MSG_GETDEFAULT,
 #define XFRM_MSG_GETDEFAULT XFRM_MSG_GETDEFAULT
+
+	XFRM_MSG_MAPPING,
+#define XFRM_MSG_MAPPING XFRM_MSG_MAPPING
 	__XFRM_MSG_MAX
 };
 #define XFRM_MSG_MAX (__XFRM_MSG_MAX - 1)
@@ -514,9 +514,12 @@ struct xfrm_user_offload {
 #define XFRM_OFFLOAD_INBOUND	2
 
 struct xfrm_userpolicy_default {
-#define XFRM_USERPOLICY_DIRMASK_MAX	(sizeof(__u8) * 8)
-	__u8				dirmask;
-	__u8				action;
+#define XFRM_USERPOLICY_UNSPEC	0
+#define XFRM_USERPOLICY_BLOCK	1
+#define XFRM_USERPOLICY_ACCEPT	2
+	__u8				in;
+	__u8				fwd;
+	__u8				out;
 };
 
 #ifndef __KERNEL__
@@ -63,7 +63,8 @@ static inline int stack_map_data_size(struct bpf_map *map)
 
 static int prealloc_elems_and_freelist(struct bpf_stack_map *smap)
 {
-	u32 elem_size = sizeof(struct stack_map_bucket) + smap->map.value_size;
+	u64 elem_size = sizeof(struct stack_map_bucket) +
+			(u64)smap->map.value_size;
 	int err;
 
 	smap->elems = bpf_map_area_alloc(elem_size * smap->map.max_entries,
@@ -1666,7 +1666,8 @@ static size_t br_get_linkxstats_size(const struct net_device *dev, int attr)
 	}
 
 	return numvls * nla_total_size(sizeof(struct bridge_vlan_xstats)) +
-	       nla_total_size(sizeof(struct br_mcast_stats)) +
+	       nla_total_size_64bit(sizeof(struct br_mcast_stats)) +
	       (p ? nla_total_size_64bit(sizeof(p->stp_xstats)) : 0) +
	       nla_total_size(0);
 }
@@ -5262,7 +5262,7 @@ nla_put_failure:
 static size_t if_nlmsg_stats_size(const struct net_device *dev,
				  u32 filter_mask)
 {
-	size_t size = 0;
+	size_t size = NLMSG_ALIGN(sizeof(struct if_stats_msg));
 
 	if (stats_attr_valid(filter_mask, IFLA_STATS_LINK_64, 0))
 		size += nla_total_size_64bit(sizeof(struct rtnl_link_stats64));
@@ -210,7 +210,7 @@ static struct sk_buff *dsa_rcv_ll(struct sk_buff *skb, struct net_device *dev,
 	cmd = dsa_header[0] >> 6;
 	switch (cmd) {
 	case DSA_CMD_FORWARD:
-		trunk = !!(dsa_header[1] & 7);
+		trunk = !!(dsa_header[1] & 4);
 		break;
 
 	case DSA_CMD_TO_CPU:
@@ -242,8 +242,10 @@ static inline int compute_score(struct sock *sk, struct net *net,
 
 		if (!inet_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif))
 			return -1;
+		score = sk->sk_bound_dev_if ? 2 : 1;
 
-		score = sk->sk_family == PF_INET ? 2 : 1;
+		if (sk->sk_family == PF_INET)
+			score++;
 		if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
 			score++;
 	}
@@ -20,13 +20,8 @@
 #endif
 #include <net/netfilter/nf_conntrack_zones.h>
 
-static unsigned int defrag4_pernet_id __read_mostly;
 static DEFINE_MUTEX(defrag4_mutex);
 
-struct defrag4_pernet {
-	unsigned int users;
-};
-
 static int nf_ct_ipv4_gather_frags(struct net *net, struct sk_buff *skb,
 				   u_int32_t user)
 {
@@ -111,19 +106,15 @@ static const struct nf_hook_ops ipv4_defrag_ops[] = {
 
 static void __net_exit defrag4_net_exit(struct net *net)
 {
-	struct defrag4_pernet *nf_defrag = net_generic(net, defrag4_pernet_id);
-
-	if (nf_defrag->users) {
+	if (net->nf.defrag_ipv4_users) {
 		nf_unregister_net_hooks(net, ipv4_defrag_ops,
 					ARRAY_SIZE(ipv4_defrag_ops));
-		nf_defrag->users = 0;
+		net->nf.defrag_ipv4_users = 0;
 	}
 }
 
 static struct pernet_operations defrag4_net_ops = {
 	.exit = defrag4_net_exit,
-	.id   = &defrag4_pernet_id,
-	.size = sizeof(struct defrag4_pernet),
 };
 
 static int __init nf_defrag_init(void)
@@ -138,24 +129,23 @@ static void __exit nf_defrag_fini(void)
 
 int nf_defrag_ipv4_enable(struct net *net)
 {
-	struct defrag4_pernet *nf_defrag = net_generic(net, defrag4_pernet_id);
 	int err = 0;
 
 	mutex_lock(&defrag4_mutex);
-	if (nf_defrag->users == UINT_MAX) {
+	if (net->nf.defrag_ipv4_users == UINT_MAX) {
 		err = -EOVERFLOW;
 		goto out_unlock;
 	}
 
-	if (nf_defrag->users) {
-		nf_defrag->users++;
+	if (net->nf.defrag_ipv4_users) {
+		net->nf.defrag_ipv4_users++;
 		goto out_unlock;
 	}
 
 	err = nf_register_net_hooks(net, ipv4_defrag_ops,
 				    ARRAY_SIZE(ipv4_defrag_ops));
 	if (err == 0)
-		nf_defrag->users = 1;
+		net->nf.defrag_ipv4_users = 1;
 
 out_unlock:
 	mutex_unlock(&defrag4_mutex);
@@ -165,12 +155,10 @@ EXPORT_SYMBOL_GPL(nf_defrag_ipv4_enable);
 
 void nf_defrag_ipv4_disable(struct net *net)
 {
-	struct defrag4_pernet *nf_defrag = net_generic(net, defrag4_pernet_id);
-
 	mutex_lock(&defrag4_mutex);
-	if (nf_defrag->users) {
-		nf_defrag->users--;
-		if (nf_defrag->users == 0)
+	if (net->nf.defrag_ipv4_users) {
+		net->nf.defrag_ipv4_users--;
+		if (net->nf.defrag_ipv4_users == 0)
 			nf_unregister_net_hooks(net, ipv4_defrag_ops,
 						ARRAY_SIZE(ipv4_defrag_ops));
 	}
@@ -390,7 +390,8 @@ static int compute_score(struct sock *sk, struct net *net,
 				       dif, sdif);
 	if (!dev_match)
 		return -1;
-	score += 4;
+	if (sk->sk_bound_dev_if)
+		score += 4;
 
 	if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
 		score++;
@@ -106,7 +106,7 @@ static inline int compute_score(struct sock *sk, struct net *net,
 		if (!inet_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif))
 			return -1;
 
-		score = 1;
+		score = sk->sk_bound_dev_if ? 2 : 1;
 		if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
 			score++;
 	}
@@ -33,7 +33,7 @@
 
 static const char nf_frags_cache_name[] = "nf-frags";
 
-unsigned int nf_frag_pernet_id __read_mostly;
+static unsigned int nf_frag_pernet_id __read_mostly;
 static struct inet_frags nf_frags;
 
 static struct nft_ct_frag6_pernet *nf_frag_pernet(struct net *net)
@@ -25,8 +25,6 @@
 #include <net/netfilter/nf_conntrack_zones.h>
 #include <net/netfilter/ipv6/nf_defrag_ipv6.h>
 
-extern unsigned int nf_frag_pernet_id;
-
 static DEFINE_MUTEX(defrag6_mutex);
 
 static enum ip6_defrag_users nf_ct6_defrag_user(unsigned int hooknum,
@@ -91,12 +89,10 @@ static const struct nf_hook_ops ipv6_defrag_ops[] = {
 
 static void __net_exit defrag6_net_exit(struct net *net)
 {
-	struct nft_ct_frag6_pernet *nf_frag = net_generic(net, nf_frag_pernet_id);
-
-	if (nf_frag->users) {
+	if (net->nf.defrag_ipv6_users) {
 		nf_unregister_net_hooks(net, ipv6_defrag_ops,
 					ARRAY_SIZE(ipv6_defrag_ops));
-		nf_frag->users = 0;
+		net->nf.defrag_ipv6_users = 0;
 	}
 }
 
@@ -134,24 +130,23 @@ static void __exit nf_defrag_fini(void)
 
 int nf_defrag_ipv6_enable(struct net *net)
 {
-	struct nft_ct_frag6_pernet *nf_frag = net_generic(net, nf_frag_pernet_id);
 	int err = 0;
 
 	mutex_lock(&defrag6_mutex);
-	if (nf_frag->users == UINT_MAX) {
+	if (net->nf.defrag_ipv6_users == UINT_MAX) {
 		err = -EOVERFLOW;
 		goto out_unlock;
 	}
 
-	if (nf_frag->users) {
-		nf_frag->users++;
+	if (net->nf.defrag_ipv6_users) {
+		net->nf.defrag_ipv6_users++;
 		goto out_unlock;
 	}
 
 	err = nf_register_net_hooks(net, ipv6_defrag_ops,
 				    ARRAY_SIZE(ipv6_defrag_ops));
 	if (err == 0)
-		nf_frag->users = 1;
+		net->nf.defrag_ipv6_users = 1;
 
 out_unlock:
 	mutex_unlock(&defrag6_mutex);
@@ -161,12 +156,10 @@ EXPORT_SYMBOL_GPL(nf_defrag_ipv6_enable);
 
 void nf_defrag_ipv6_disable(struct net *net)
 {
-	struct nft_ct_frag6_pernet *nf_frag = net_generic(net, nf_frag_pernet_id);
-
 	mutex_lock(&defrag6_mutex);
-	if (nf_frag->users) {
-		nf_frag->users--;
-		if (nf_frag->users == 0)
+	if (net->nf.defrag_ipv6_users) {
+		net->nf.defrag_ipv6_users--;
+		if (net->nf.defrag_ipv6_users == 0)
 			nf_unregister_net_hooks(net, ipv6_defrag_ops,
 						ARRAY_SIZE(ipv6_defrag_ops));
 	}
@@ -133,7 +133,8 @@ static int compute_score(struct sock *sk, struct net *net,
 	dev_match = udp_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif);
 	if (!dev_match)
 		return -1;
-	score++;
+	if (sk->sk_bound_dev_if)
+		score++;
 
 	if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
 		score++;
@@ -780,6 +780,7 @@ static void nf_tables_table_notify(const struct nft_ctx *ctx, int event)
 {
 	struct nftables_pernet *nft_net;
 	struct sk_buff *skb;
+	u16 flags = 0;
 	int err;
 
 	if (!ctx->report &&
@@ -790,8 +791,11 @@ static void nf_tables_table_notify(const struct nft_ctx *ctx, int event)
 	if (skb == NULL)
 		goto err;
 
+	if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
+		flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
+
 	err = nf_tables_fill_table_info(skb, ctx->net, ctx->portid, ctx->seq,
-					event, 0, ctx->family, ctx->table);
+					event, flags, ctx->family, ctx->table);
 	if (err < 0) {
 		kfree_skb(skb);
 		goto err;
@@ -1563,6 +1567,7 @@ static void nf_tables_chain_notify(const struct nft_ctx *ctx, int event)
 {
 	struct nftables_pernet *nft_net;
 	struct sk_buff *skb;
+	u16 flags = 0;
 	int err;
 
 	if (!ctx->report &&
@@ -1573,8 +1578,11 @@ static void nf_tables_chain_notify(const struct nft_ctx *ctx, int event)
 	if (skb == NULL)
 		goto err;
 
+	if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
+		flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
+
 	err = nf_tables_fill_chain_info(skb, ctx->net, ctx->portid, ctx->seq,
-					event, 0, ctx->family, ctx->table,
+					event, flags, ctx->family, ctx->table,
 					ctx->chain);
 	if (err < 0) {
 		kfree_skb(skb);
@@ -2866,8 +2874,7 @@ static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net,
 				    u32 flags, int family,
 				    const struct nft_table *table,
 				    const struct nft_chain *chain,
-				    const struct nft_rule *rule,
-				    const struct nft_rule *prule)
+				    const struct nft_rule *rule, u64 handle)
 {
 	struct nlmsghdr *nlh;
 	const struct nft_expr *expr, *next;
@@ -2887,9 +2894,8 @@ static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net,
 			 NFTA_RULE_PAD))
 		goto nla_put_failure;
 
-	if (event != NFT_MSG_DELRULE && prule) {
-		if (nla_put_be64(skb, NFTA_RULE_POSITION,
-				 cpu_to_be64(prule->handle),
+	if (event != NFT_MSG_DELRULE && handle) {
+		if (nla_put_be64(skb, NFTA_RULE_POSITION, cpu_to_be64(handle),
 				 NFTA_RULE_PAD))
 			goto nla_put_failure;
 	}
@@ -2925,7 +2931,10 @@ static void nf_tables_rule_notify(const struct nft_ctx *ctx,
 				  const struct nft_rule *rule, int event)
 {
 	struct nftables_pernet *nft_net = nft_pernet(ctx->net);
+	const struct nft_rule *prule;
 	struct sk_buff *skb;
+	u64 handle = 0;
+	u16 flags = 0;
 	int err;
 
 	if (!ctx->report &&
@@ -2936,9 +2945,20 @@ static void nf_tables_rule_notify(const struct nft_ctx *ctx,
 	if (skb == NULL)
 		goto err;
 
+	if (event == NFT_MSG_NEWRULE &&
+	    !list_is_first(&rule->list, &ctx->chain->rules) &&
+	    !list_is_last(&rule->list, &ctx->chain->rules)) {
+		prule = list_prev_entry(rule, list);
+		handle = prule->handle;
+	}
+	if (ctx->flags & (NLM_F_APPEND | NLM_F_REPLACE))
+		flags |= NLM_F_APPEND;
+	if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
+		flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
+
 	err = nf_tables_fill_rule_info(skb, ctx->net, ctx->portid, ctx->seq,
-				       event, 0, ctx->family, ctx->table,
-				       ctx->chain, rule, NULL);
+				       event, flags, ctx->family, ctx->table,
+				       ctx->chain, rule, handle);
 	if (err < 0) {
 		kfree_skb(skb);
 		goto err;
@@ -2964,6 +2984,7 @@ static int __nf_tables_dump_rules(struct sk_buff *skb,
 	struct net *net = sock_net(skb->sk);
 	const struct nft_rule *rule, *prule;
 	unsigned int s_idx = cb->args[0];
+	u64 handle;
 
 	prule = NULL;
 	list_for_each_entry_rcu(rule, &chain->rules, list) {
@@ -2975,12 +2996,17 @@ static int __nf_tables_dump_rules(struct sk_buff *skb,
 			memset(&cb->args[1], 0,
 			       sizeof(cb->args) - sizeof(cb->args[0]));
 		}
+		if (prule)
+			handle = prule->handle;
+		else
+			handle = 0;
+
 		if (nf_tables_fill_rule_info(skb, net, NETLINK_CB(cb->skb).portid,
 					     cb->nlh->nlmsg_seq,
 					     NFT_MSG_NEWRULE,
 					     NLM_F_MULTI | NLM_F_APPEND,
 					     table->family,
-					     table, chain, rule, prule) < 0)
+					     table, chain, rule, handle) < 0)
 			return 1;
 
 		nl_dump_check_consistent(cb, nlmsg_hdr(skb));
@@ -3143,7 +3169,7 @@ static int nf_tables_getrule(struct sk_buff *skb, const struct nfnl_info *info,
 
 	err = nf_tables_fill_rule_info(skb2, net, NETLINK_CB(skb).portid,
 				       info->nlh->nlmsg_seq, NFT_MSG_NEWRULE, 0,
-				       family, table, chain, rule, NULL);
+				       family, table, chain, rule, 0);
 	if (err < 0)
 		goto err_fill_rule_info;
 
@@ -3403,17 +3429,15 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
 	}
 
 	if (info->nlh->nlmsg_flags & NLM_F_REPLACE) {
-		err = nft_delrule(&ctx, old_rule);
-		if (err < 0)
-			goto err_destroy_flow_rule;
-
 		trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule);
 		if (trans == NULL) {
 			err = -ENOMEM;
 			goto err_destroy_flow_rule;
 		}
+		err = nft_delrule(&ctx, old_rule);
+		if (err < 0) {
+			nft_trans_destroy(trans);
+			goto err_destroy_flow_rule;
+		}
 
 		list_add_tail_rcu(&rule->list, &old_rule->list);
 	} else {
 		trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule);
@@ -3943,8 +3967,9 @@ static void nf_tables_set_notify(const struct nft_ctx *ctx,
 				 gfp_t gfp_flags)
 {
 	struct nftables_pernet *nft_net = nft_pernet(ctx->net);
-	struct sk_buff *skb;
 	u32 portid = ctx->portid;
+	struct sk_buff *skb;
+	u16 flags = 0;
 	int err;
 
 	if (!ctx->report &&
@@ -3955,7 +3980,10 @@ static void nf_tables_set_notify(const struct nft_ctx *ctx,
 	if (skb == NULL)
 		goto err;
 
-	err = nf_tables_fill_set(skb, ctx, set, event, 0);
+	if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
+		flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
+
+	err = nf_tables_fill_set(skb, ctx, set, event, flags);
 	if (err < 0) {
 		kfree_skb(skb);
 		goto err;
@@ -5231,12 +5259,13 @@ static int nf_tables_getsetelem(struct sk_buff *skb,
 static void nf_tables_setelem_notify(const struct nft_ctx *ctx,
 				     const struct nft_set *set,
 				     const struct nft_set_elem *elem,
-				     int event, u16 flags)
+				     int event)
 {
 	struct nftables_pernet *nft_net;
 	struct net *net = ctx->net;
 	u32 portid = ctx->portid;
 	struct sk_buff *skb;
+	u16 flags = 0;
 	int err;
 
 	if (!ctx->report && !nfnetlink_has_listeners(net, NFNLGRP_NFTABLES))
@@ -5246,6 +5275,9 @@ static void nf_tables_setelem_notify(const struct nft_ctx *ctx,
 	if (skb == NULL)
 		goto err;
 
+	if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
+		flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
+
 	err = nf_tables_fill_setelem_info(skb, ctx, 0, portid, event, flags,
 					  set, elem);
 	if (err < 0) {
@@ -6921,7 +6953,7 @@ static int nf_tables_delobj(struct sk_buff *skb, const struct nfnl_info *info,
 
 void nft_obj_notify(struct net *net, const struct nft_table *table,
 		    struct nft_object *obj, u32 portid, u32 seq, int event,
-		    int family, int report, gfp_t gfp)
+		    u16 flags, int family, int report, gfp_t gfp)
 {
 	struct nftables_pernet *nft_net = nft_pernet(net);
 	struct sk_buff *skb;
@@ -6946,8 +6978,9 @@ void nft_obj_notify(struct net *net, const struct nft_table *table,
 	if (skb == NULL)
 		goto err;
 
-	err = nf_tables_fill_obj_info(skb, net, portid, seq, event, 0, family,
-				      table, obj, false);
+	err = nf_tables_fill_obj_info(skb, net, portid, seq, event,
+				      flags & (NLM_F_CREATE | NLM_F_EXCL),
+				      family, table, obj, false);
 	if (err < 0) {
 		kfree_skb(skb);
 		goto err;
@@ -6964,7 +6997,7 @@ static void nf_tables_obj_notify(const struct nft_ctx *ctx,
 				 struct nft_object *obj, int event)
 {
 	nft_obj_notify(ctx->net, ctx->table, obj, ctx->portid, ctx->seq, event,
-		       ctx->family, ctx->report, GFP_KERNEL);
+		       ctx->flags, ctx->family, ctx->report, GFP_KERNEL);
 }
 
 /*
@@ -7745,6 +7778,7 @@ static void nf_tables_flowtable_notify(struct nft_ctx *ctx,
 {
 	struct nftables_pernet *nft_net = nft_pernet(ctx->net);
 	struct sk_buff *skb;
+	u16 flags = 0;
 	int err;
 
 	if (!ctx->report &&
@@ -7755,8 +7789,11 @@ static void nf_tables_flowtable_notify(struct nft_ctx *ctx,
 	if (skb == NULL)
 		goto err;
 
+	if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
+		flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
+
 	err = nf_tables_fill_flowtable_info(skb, ctx->net, ctx->portid,
-					    ctx->seq, event, 0,
+					    ctx->seq, event, flags,
 					    ctx->family, flowtable, hook_list);
 	if (err < 0) {
 		kfree_skb(skb);
@@ -8634,7 +8671,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
 			nft_setelem_activate(net, te->set, &te->elem);
 			nf_tables_setelem_notify(&trans->ctx, te->set,
 						 &te->elem,
-						 NFT_MSG_NEWSETELEM, 0);
+						 NFT_MSG_NEWSETELEM);
 			nft_trans_destroy(trans);
 			break;
 		case NFT_MSG_DELSETELEM:
@@ -8642,7 +8679,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
 
 			nf_tables_setelem_notify(&trans->ctx, te->set,
 						 &te->elem,
-						 NFT_MSG_DELSETELEM, 0);
+						 NFT_MSG_DELSETELEM);
 			nft_setelem_remove(net, te->set, &te->elem);
 			if (!nft_setelem_is_catchall(te->set, &te->elem)) {
 				atomic_dec(&te->set->nelems);
@@ -60,7 +60,7 @@ static void nft_quota_obj_eval(struct nft_object *obj,
 	if (overquota &&
 	    !test_and_set_bit(NFT_QUOTA_DEPLETED_BIT, &priv->flags))
 		nft_obj_notify(nft_net(pkt), obj->key.table, obj, 0, 0,
-			       NFT_MSG_NEWOBJ, nft_pf(pkt), 0, GFP_ATOMIC);
+			       NFT_MSG_NEWOBJ, 0, nft_pf(pkt), 0, GFP_ATOMIC);
 }
 
 static int nft_quota_do_init(const struct nlattr * const tb[],
@@ -594,7 +594,10 @@ static int netlink_insert(struct sock *sk, u32 portid)
 
 	/* We need to ensure that the socket is hashed and visible. */
 	smp_wmb();
-	nlk_sk(sk)->bound = portid;
+	/* Paired with lockless reads from netlink_bind(),
+	 * netlink_connect() and netlink_sendmsg().
+	 */
+	WRITE_ONCE(nlk_sk(sk)->bound, portid);
 
 err:
 	release_sock(sk);
@@ -1012,7 +1015,8 @@ static int netlink_bind(struct socket *sock, struct sockaddr *addr,
 	if (nlk->ngroups < BITS_PER_LONG)
 		groups &= (1UL << nlk->ngroups) - 1;
 
-	bound = nlk->bound;
+	/* Paired with WRITE_ONCE() in netlink_insert() */
+	bound = READ_ONCE(nlk->bound);
 	if (bound) {
 		/* Ensure nlk->portid is up-to-date. */
 		smp_rmb();
@@ -1098,8 +1102,9 @@ static int netlink_connect(struct socket *sock, struct sockaddr *addr,
 
 	/* No need for barriers here as we return to user-space without
 	 * using any of the bound attributes.
+	 * Paired with WRITE_ONCE() in netlink_insert().
 	 */
-	if (!nlk->bound)
+	if (!READ_ONCE(nlk->bound))
 		err = netlink_autobind(sock);
 
 	if (err == 0) {
@@ -1888,7 +1893,8 @@ static int netlink_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
 		dst_group = nlk->dst_group;
 	}
 
-	if (!nlk->bound) {
+	/* Paired with WRITE_ONCE() in netlink_insert() */
+	if (!READ_ONCE(nlk->bound)) {
 		err = netlink_autobind(sock);
 		if (err)
 			goto out;
@@ -233,6 +233,9 @@ int fifo_set_limit(struct Qdisc *q, unsigned int limit)
 	if (strncmp(q->ops->id + 1, "fifo", 4) != 0)
 		return 0;
 
+	if (!q->ops->change)
+		return 0;
+
 	nla = kmalloc(nla_attr_size(sizeof(struct tc_fifo_qopt)), GFP_KERNEL);
 	if (nla) {
 		nla->nla_type = RTM_NEWQDISC;
@@ -1641,6 +1641,10 @@ static void taprio_destroy(struct Qdisc *sch)
 	list_del(&q->taprio_list);
 	spin_unlock(&taprio_list_lock);
 
+	/* Note that taprio_reset() might not be called if an error
+	 * happens in qdisc_create(), after taprio_init() has been called.
+	 */
+	hrtimer_cancel(&q->advance_timer);
 
 	taprio_disable_offload(dev, q, NULL);
 
@@ -2882,6 +2882,9 @@ static int unix_shutdown(struct socket *sock, int mode)
 
 	unix_state_lock(sk);
 	sk->sk_shutdown |= mode;
+	if ((sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET) &&
+	    mode == SHUTDOWN_MASK)
+		sk->sk_state = TCP_CLOSE;
 	other = unix_peer(sk);
 	if (other)
 		sock_hold(other);
@@ -2904,12 +2907,10 @@ static int unix_shutdown(struct socket *sock, int mode)
 		other->sk_shutdown |= peer_mode;
 		unix_state_unlock(other);
 		other->sk_state_change(other);
-		if (peer_mode == SHUTDOWN_MASK) {
+		if (peer_mode == SHUTDOWN_MASK)
 			sk_wake_async(other, SOCK_WAKE_WAITD, POLL_HUP);
-			other->sk_state = TCP_CLOSE;
-		} else if (peer_mode & RCV_SHUTDOWN) {
+		else if (peer_mode & RCV_SHUTDOWN)
 			sk_wake_async(other, SOCK_WAKE_WAITD, POLL_IN);
-		}
 	}
 	if (other)
 		sock_put(other);
@@ -1961,24 +1961,65 @@ static struct sk_buff *xfrm_policy_netlink(struct sk_buff *in_skb,
 	return skb;
 }
 
+static int xfrm_notify_userpolicy(struct net *net)
+{
+	struct xfrm_userpolicy_default *up;
+	int len = NLMSG_ALIGN(sizeof(*up));
+	struct nlmsghdr *nlh;
+	struct sk_buff *skb;
+	int err;
+
+	skb = nlmsg_new(len, GFP_ATOMIC);
+	if (skb == NULL)
+		return -ENOMEM;
+
+	nlh = nlmsg_put(skb, 0, 0, XFRM_MSG_GETDEFAULT, sizeof(*up), 0);
+	if (nlh == NULL) {
+		kfree_skb(skb);
+		return -EMSGSIZE;
+	}
+
+	up = nlmsg_data(nlh);
+	up->in = net->xfrm.policy_default & XFRM_POL_DEFAULT_IN ?
+			XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
+	up->fwd = net->xfrm.policy_default & XFRM_POL_DEFAULT_FWD ?
+			XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
+	up->out = net->xfrm.policy_default & XFRM_POL_DEFAULT_OUT ?
+			XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
+
+	nlmsg_end(skb, nlh);
+
+	rcu_read_lock();
+	err = xfrm_nlmsg_multicast(net, skb, 0, XFRMNLGRP_POLICY);
+	rcu_read_unlock();
+
+	return err;
+}
+
 static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh,
 			    struct nlattr **attrs)
 {
 	struct net *net = sock_net(skb->sk);
 	struct xfrm_userpolicy_default *up = nlmsg_data(nlh);
-	u8 dirmask;
-	u8 old_default = net->xfrm.policy_default;
 
-	if (up->dirmask >= XFRM_USERPOLICY_DIRMASK_MAX)
-		return -EINVAL;
+	if (up->in == XFRM_USERPOLICY_BLOCK)
+		net->xfrm.policy_default |= XFRM_POL_DEFAULT_IN;
+	else if (up->in == XFRM_USERPOLICY_ACCEPT)
+		net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_IN;
 
-	dirmask = (1 << up->dirmask) & XFRM_POL_DEFAULT_MASK;
+	if (up->fwd == XFRM_USERPOLICY_BLOCK)
+		net->xfrm.policy_default |= XFRM_POL_DEFAULT_FWD;
+	else if (up->fwd == XFRM_USERPOLICY_ACCEPT)
+		net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_FWD;
 
-	net->xfrm.policy_default = (old_default & (0xff ^ dirmask))
-				   | (up->action << up->dirmask);
+	if (up->out == XFRM_USERPOLICY_BLOCK)
+		net->xfrm.policy_default |= XFRM_POL_DEFAULT_OUT;
+	else if (up->out == XFRM_USERPOLICY_ACCEPT)
+		net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_OUT;
 
 	rt_genid_bump_all(net);
 
+	xfrm_notify_userpolicy(net);
 	return 0;
 }
 
@@ -1988,13 +2029,11 @@ static int xfrm_get_default(struct sk_buff *skb, struct nlmsghdr *nlh,
 	struct sk_buff *r_skb;
 	struct nlmsghdr *r_nlh;
 	struct net *net = sock_net(skb->sk);
-	struct xfrm_userpolicy_default *r_up, *up;
+	struct xfrm_userpolicy_default *r_up;
 	int len = NLMSG_ALIGN(sizeof(struct xfrm_userpolicy_default));
 	u32 portid = NETLINK_CB(skb).portid;
 	u32 seq = nlh->nlmsg_seq;
 
-	up = nlmsg_data(nlh);
-
 	r_skb = nlmsg_new(len, GFP_ATOMIC);
 	if (!r_skb)
 		return -ENOMEM;
@@ -2007,8 +2046,12 @@ static int xfrm_get_default(struct sk_buff *skb, struct nlmsghdr *nlh,
 
 	r_up = nlmsg_data(r_nlh);
 
-	r_up->action = ((net->xfrm.policy_default & (1 << up->dirmask)) >> up->dirmask);
-	r_up->dirmask = up->dirmask;
+	r_up->in = net->xfrm.policy_default & XFRM_POL_DEFAULT_IN ?
+			XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
+	r_up->fwd = net->xfrm.policy_default & XFRM_POL_DEFAULT_FWD ?
+			XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
+	r_up->out = net->xfrm.policy_default & XFRM_POL_DEFAULT_OUT ?
+			XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
 	nlmsg_end(r_skb, r_nlh);
 
 	return nlmsg_unicast(net->xfrm.nlsk, r_skb, portid);
@@ -322,17 +322,11 @@ $(obj)/hbm_edt_kern.o: $(src)/hbm.h $(src)/hbm_kern.h
 
 -include $(BPF_SAMPLES_PATH)/Makefile.target
 
-VMLINUX_BTF_PATHS ?= $(if $(O),$(O)/vmlinux) \
-		     $(if $(KBUILD_OUTPUT),$(KBUILD_OUTPUT)/vmlinux) \
-		     ../../../../vmlinux \
-		     /sys/kernel/btf/vmlinux \
-		     /boot/vmlinux-$(shell uname -r)
+VMLINUX_BTF_PATHS ?= $(abspath $(if $(O),$(O)/vmlinux)) \
+		     $(abspath $(if $(KBUILD_OUTPUT),$(KBUILD_OUTPUT)/vmlinux)) \
+		     $(abspath ./vmlinux)
 VMLINUX_BTF ?= $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS))))
 
-ifeq ($(VMLINUX_BTF),)
-$(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)")
-endif
-
 $(obj)/vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL)
 ifeq ($(VMLINUX_H),)
 	$(Q)$(BPFTOOL) btf dump file $(VMLINUX_BTF) format c > $@
@@ -340,6 +334,11 @@ else
 	$(Q)cp "$(VMLINUX_H)" $@
 endif
 
+ifeq ($(VMLINUX_BTF),)
+$(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)",\
+	build the kernel or set VMLINUX_BTF variable)
+endif
+
 clean-files += vmlinux.h
 
 # Get Clang's default includes on this system, as opposed to those seen by
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0 */
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
 /* eBPF instruction mini library */
 #ifndef __BPF_INSN_H
 #define __BPF_INSN_H
@@ -5,11 +5,6 @@
 #include "xdp_sample.bpf.h"
 #include "xdp_sample_shared.h"
 
-enum {
-	BPF_F_BROADCAST = (1ULL << 3),
-	BPF_F_EXCLUDE_INGRESS = (1ULL << 4),
-};
-
 struct {
 	__uint(type, BPF_MAP_TYPE_DEVMAP_HASH);
 	__uint(key_size, sizeof(int));
@@ -126,6 +126,8 @@ static const struct nlmsg_perm nlmsg_xfrm_perms[] =
 	{ XFRM_MSG_NEWSPDINFO,	NETLINK_XFRM_SOCKET__NLMSG_WRITE },
 	{ XFRM_MSG_GETSPDINFO,	NETLINK_XFRM_SOCKET__NLMSG_READ },
 	{ XFRM_MSG_MAPPING,	NETLINK_XFRM_SOCKET__NLMSG_READ },
+	{ XFRM_MSG_SETDEFAULT,	NETLINK_XFRM_SOCKET__NLMSG_WRITE },
+	{ XFRM_MSG_GETDEFAULT,	NETLINK_XFRM_SOCKET__NLMSG_READ },
 };
 
 static const struct nlmsg_perm nlmsg_audit_perms[] =
@@ -189,7 +191,7 @@ int selinux_nlmsg_lookup(u16 sclass, u16 nlmsg_type, u32 *perm)
 		 * structures at the top of this file with the new mappings
 		 * before updating the BUILD_BUG_ON() macro!
 		 */
-		BUILD_BUG_ON(XFRM_MSG_MAX != XFRM_MSG_MAPPING);
+		BUILD_BUG_ON(XFRM_MSG_MAX != XFRM_MSG_GETDEFAULT);
 		err = nlmsg_perm(nlmsg_type, perm, nlmsg_xfrm_perms,
 				 sizeof(nlmsg_xfrm_perms));
 		break;
@@ -6894,7 +6894,8 @@ int bpf_object__load_xattr(struct bpf_object_load_attr *attr)
 
 	if (obj->gen_loader) {
 		/* reset FDs */
-		btf__set_fd(obj->btf, -1);
+		if (obj->btf)
+			btf__set_fd(obj->btf, -1);
 		for (i = 0; i < obj->nr_maps; i++)
 			obj->maps[i].fd = -1;
 		if (!err)
@@ -88,6 +88,7 @@ void strset__free(struct strset *set)
 
 	hashmap__free(set->strs_hash);
 	free(set->strs_data);
+	free(set);
 }
 
 size_t strset__data_size(const struct strset *set)