Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:

 1) Fix memory leak in ieee80211_prep_connection(), sta_info leaked on error. From Eytan Lifshitz.

 2) Unintentional switch case fallthrough in nft_reject_inet_eval(), from Patrick McHardy.

 3) Must check if payload length is a power of 2 in nft_payload_select_ops(), from Nikolay Aleksandrov.

 4) Fix mis-checksumming in xen-netfront driver, ip_hdr() is not in the correct place when we invoke skb_checksum_setup(). From Wei Liu.

 5) TUN driver should not advertise HW vlan offload features in vlan_features. Fix from Fernando Luis Vazquez Cao.

 6) IPV6_VTI needs to select NET_IP_TUNNEL to avoid build errors, fix from Steffen Klassert.

 7) Add missing locking in xfrm_migrate_state_find(), we must hold the per-namespace xfrm_state_lock while traversing the lists. Fix from Steffen Klassert.

 8) Missing locking in ath9k driver, access to tid->sched must be done under ath_txq_lock(). Fix from Stanislaw Gruszka.

 9) Fix two bugs in TCP fastopen. First, respect the size argument given to tcp_sendmsg() in the fastopen path, and secondly prevent tcp_send_syn_data() from potentially using order-5 allocations. From Eric Dumazet.

10) Fix handling of default neigh garbage collection params, from Jiri Pirko.

11) Fix cwnd bloat and over-inflation of RTT when transmit segmentation is in use. From Eric Dumazet.

12) Missing initialization of Realtek r8169 driver's statistics seqlocks. Fix from Kyle McMartin.

13) Fix RTNL assertion failures in the 802.3ad and AB ARP monitors of the bonding driver, from Ding Tianhong.

14) Bonding slave release race can cause divide by zero, fix from Nikolay Aleksandrov.

15) Overzealous return from neigh_periodic_work() causes reachability time to not be computed. Fix from Duan Jiong.

16) Fix regression in ipv6_find_hdr(), it should not return -ENOENT when a specific target is specified and found. From Hans Schillstrom.

17) Fix VLAN tag stripping regression in BNA driver, from Ivan Vecera.

18) Tail loss probe can calculate bogus RTTs due to missing packet marking on retransmit. Fix from Yuchung Cheng.

19) We cannot do skb_dst_drop() in iptunnel_pull_header() because multicast loopback detection in later code paths needs access to skb_rtable(). Fix from Xin Long.

20) The macvlan driver regresses in that it propagates lower device offload support disables into itself, causing severe slowdowns when running over a bridge. Keep the software offloads always enabled on macvlan devices to deal with this, and the regression is gone. From Vlad Yasevich.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (103 commits)
  macvlan: Add support for 'always_on' offload features
  net: sctp: fix sctp_sf_do_5_1D_ce to verify if we/peer is AUTH capable
  ip_tunnel:multicast process cause panic due to skb->_skb_refdst NULL pointer
  net: cpsw: fix cpdma rx descriptor leak on down interface
  be2net: isolate TX workarounds not applicable to Skyhawk-R
  be2net: Fix skb double free in be_xmit_wrokarounds() failure path
  be2net: clear promiscuous bits in adapter->flags while disabling promiscuous mode
  be2net: Fix to reset transparent vlan tagging
  qlcnic: dcb: a couple off by one bugs
  tcp: fix bogus RTT on special retransmission
  hsr: off by one sanity check in hsr_register_frame_in()
  can: remove CAN FD compatibility for CAN 2.0 sockets
  can: flexcan: factor out soft reset into seperate funtion
  can: flexcan: flexcan_remove(): add missing netif_napi_del()
  can: flexcan: fix transition from and to freeze mode in chip_{,un}freeze
  can: flexcan: factor out transceiver {en,dis}able into seperate functions
  can: flexcan: fix transition from and to low power mode in chip_{en,dis}able
  can: flexcan: flexcan_open(): fix error path if flexcan_chip_start() fails
  can: flexcan: fix shutdown: first disable chip, then all interrupts
  USB AX88179/178A: Support D-Link DUB-1312
  ...
commit c3bebc71c4

 Documentation/devicetree/bindings/net/opencores-ethoc.txt | 22 ++++++++++++++++++++++ (new file)
@@ -0,0 +1,22 @@
+* OpenCores MAC 10/100 Mbps
+
+Required properties:
+- compatible: Should be "opencores,ethoc".
+- reg: two memory regions (address and length),
+  first region is for the device registers and descriptor rings,
+  second is for the device packet memory.
+- interrupts: interrupt for the device.
+
+Optional properties:
+- clocks: phandle to refer to the clk used as per
+  Documentation/devicetree/bindings/clock/clock-bindings.txt
+
+Examples:
+
+    enet0: ethoc@fd030000 {
+        compatible = "opencores,ethoc";
+        reg = <0xfd030000 0x4000 0xfd800000 0x4000>;
+        interrupts = <1>;
+        local-mac-address = [00 50 c2 13 6f 00];
+        clocks = <&osc>;
+    };
@@ -554,12 +554,6 @@ solution for a couple of reasons:
 not specified in the struct can_frame and therefore it is only valid in
 CANFD_MTU sized CAN FD frames.
 
-As long as the payload length is <=8 the received CAN frames from CAN FD
-capable CAN devices can be received and read by legacy sockets too. When
-user-generated CAN FD frames have a payload length <=8 these can be send
-by legacy CAN network interfaces too. Sending CAN FD frames with payload
-length > 8 to a legacy CAN network interface returns an -EMSGSIZE error.
-
 Implementation hint for new CAN applications:
 
 To build a CAN FD aware application use struct canfd_frame as basic CAN
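[Editor's illustration of the API this documentation change describes: with the implicit CAN 2.0 compatibility removed, an application must opt in to CAN FD frames explicitly via CAN_RAW_FD_FRAMES. A minimal sketch, assuming the standard SocketCAN raw API; the interface name "can0" and the omitted error handling are illustrative.]

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <net/if.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/can.h>
    #include <linux/can/raw.h>

    int main(void)
    {
        struct sockaddr_can addr = { .can_family = AF_CAN };
        struct canfd_frame frame;
        struct ifreq ifr;
        int enable = 1;
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

        /* opt in to CAN FD: without this option the socket only
         * delivers legacy struct can_frame payloads */
        setsockopt(s, SOL_CAN_RAW, CAN_RAW_FD_FRAMES,
                   &enable, sizeof(enable));

        strcpy(ifr.ifr_name, "can0");   /* illustrative interface name */
        ioctl(s, SIOCGIFINDEX, &ifr);
        addr.can_ifindex = ifr.ifr_ifindex;
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        /* read() returns CAN_MTU bytes for a classic frame and
         * CANFD_MTU bytes for a CAN FD frame */
        if (read(s, &frame, sizeof(frame)) >= (ssize_t)CAN_MTU)
            printf("id 0x%x len %d\n", frame.can_id, frame.len);

        close(s);
        return 0;
    }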
@@ -4561,6 +4561,7 @@ F: Documentation/networking/ixgbevf.txt
 F: Documentation/networking/i40e.txt
 F: Documentation/networking/i40evf.txt
 F: drivers/net/ethernet/intel/
+F: drivers/net/ethernet/intel/*/
 
 INTEL-MID GPIO DRIVER
 M: David Cohen <david.a.cohen@linux.intel.com>
@@ -53,8 +53,8 @@
 #include "user.h"
 
 #define DRV_NAME    MLX4_IB_DRV_NAME
-#define DRV_VERSION "1.0"
-#define DRV_RELDATE "April 4, 2008"
+#define DRV_VERSION "2.2-1"
+#define DRV_RELDATE "Feb 2014"
 
 #define MLX4_IB_FLOW_MAX_PRIO 0xFFF
 #define MLX4_IB_FLOW_QPN_MASK 0xFFFFFF
@@ -46,8 +46,8 @@
 #include "mlx5_ib.h"
 
 #define DRIVER_NAME "mlx5_ib"
-#define DRIVER_VERSION "1.0"
-#define DRIVER_RELDATE "June 2013"
+#define DRIVER_VERSION "2.2-1"
+#define DRIVER_RELDATE "Feb 2014"
 
 MODULE_AUTHOR("Eli Cohen <eli@mellanox.com>");
 MODULE_DESCRIPTION("Mellanox Connect-IB HCA IB driver");
@@ -181,7 +181,7 @@ static inline int __agg_has_partner(struct aggregator *agg)
  */
 static inline void __disable_port(struct port *port)
 {
-    bond_set_slave_inactive_flags(port->slave);
+    bond_set_slave_inactive_flags(port->slave, BOND_SLAVE_NOTIFY_LATER);
 }
 
 /**
@@ -193,7 +193,7 @@ static inline void __enable_port(struct port *port)
     struct slave *slave = port->slave;
 
     if ((slave->link == BOND_LINK_UP) && IS_UP(slave->dev))
-        bond_set_slave_active_flags(slave);
+        bond_set_slave_active_flags(slave, BOND_SLAVE_NOTIFY_LATER);
 }
 
 /**
@@ -2062,6 +2062,7 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
     struct list_head *iter;
     struct slave *slave;
     struct port *port;
+    bool should_notify_rtnl = BOND_SLAVE_NOTIFY_LATER;
 
     read_lock(&bond->lock);
     rcu_read_lock();
@@ -2119,8 +2120,19 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
     }
 
 re_arm:
+    bond_for_each_slave_rcu(bond, slave, iter) {
+        if (slave->should_notify) {
+            should_notify_rtnl = BOND_SLAVE_NOTIFY_NOW;
+            break;
+        }
+    }
     rcu_read_unlock();
     read_unlock(&bond->lock);
 
+    if (should_notify_rtnl && rtnl_trylock()) {
+        bond_slave_state_notify(bond);
+        rtnl_unlock();
+    }
+
     queue_delayed_work(bond->wq, &bond->ad_work, ad_delta_in_ticks);
 }
@@ -829,21 +829,25 @@ void bond_change_active_slave(struct bonding *bond, struct slave *new_active)
     if (bond_is_lb(bond)) {
         bond_alb_handle_active_change(bond, new_active);
         if (old_active)
-            bond_set_slave_inactive_flags(old_active);
+            bond_set_slave_inactive_flags(old_active,
+                              BOND_SLAVE_NOTIFY_NOW);
         if (new_active)
-            bond_set_slave_active_flags(new_active);
+            bond_set_slave_active_flags(new_active,
+                            BOND_SLAVE_NOTIFY_NOW);
     } else {
         rcu_assign_pointer(bond->curr_active_slave, new_active);
     }
 
     if (bond->params.mode == BOND_MODE_ACTIVEBACKUP) {
         if (old_active)
-            bond_set_slave_inactive_flags(old_active);
+            bond_set_slave_inactive_flags(old_active,
+                              BOND_SLAVE_NOTIFY_NOW);
 
         if (new_active) {
             bool should_notify_peers = false;
 
-            bond_set_slave_active_flags(new_active);
+            bond_set_slave_active_flags(new_active,
+                            BOND_SLAVE_NOTIFY_NOW);
 
             if (bond->params.fail_over_mac)
                 bond_do_fail_over_mac(bond, new_active,
@@ -1193,6 +1197,11 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev)
         return -EBUSY;
     }
 
+    if (bond_dev == slave_dev) {
+        pr_err("%s: cannot enslave bond to itself.\n", bond_dev->name);
+        return -EPERM;
+    }
+
     /* vlan challenged mutual exclusion */
     /* no need to lock since we're protected by rtnl_lock */
     if (slave_dev->features & NETIF_F_VLAN_CHALLENGED) {
@@ -1463,14 +1472,15 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev)
 
     switch (bond->params.mode) {
     case BOND_MODE_ACTIVEBACKUP:
-        bond_set_slave_inactive_flags(new_slave);
+        bond_set_slave_inactive_flags(new_slave,
+                          BOND_SLAVE_NOTIFY_NOW);
         break;
     case BOND_MODE_8023AD:
         /* in 802.3ad mode, the internal mechanism
          * will activate the slaves in the selected
          * aggregator
          */
-        bond_set_slave_inactive_flags(new_slave);
+        bond_set_slave_inactive_flags(new_slave, BOND_SLAVE_NOTIFY_NOW);
         /* if this is the first slave */
         if (!prev_slave) {
             SLAVE_AD_INFO(new_slave).id = 1;
@@ -1488,7 +1498,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev)
     case BOND_MODE_TLB:
     case BOND_MODE_ALB:
         bond_set_active_slave(new_slave);
-        bond_set_slave_inactive_flags(new_slave);
+        bond_set_slave_inactive_flags(new_slave, BOND_SLAVE_NOTIFY_NOW);
         break;
     default:
         pr_debug("This slave is always active in trunk mode\n");
@@ -1654,9 +1664,6 @@ static int __bond_release_one(struct net_device *bond_dev,
         return -EINVAL;
     }
 
-    /* release the slave from its bond */
-    bond->slave_cnt--;
-
     bond_sysfs_slave_del(slave);
 
     bond_upper_dev_unlink(bond_dev, slave_dev);
@@ -1738,6 +1745,7 @@ static int __bond_release_one(struct net_device *bond_dev,
 
     unblock_netpoll_tx();
     synchronize_rcu();
+    bond->slave_cnt--;
 
     if (!bond_has_slaves(bond)) {
         call_netdevice_notifiers(NETDEV_CHANGEADDR, bond->dev);
@@ -2015,7 +2023,8 @@ static void bond_miimon_commit(struct bonding *bond)
 
             if (bond->params.mode == BOND_MODE_ACTIVEBACKUP ||
                 bond->params.mode == BOND_MODE_8023AD)
-                bond_set_slave_inactive_flags(slave);
+                bond_set_slave_inactive_flags(slave,
+                                  BOND_SLAVE_NOTIFY_NOW);
 
             pr_info("%s: link status definitely down for interface %s, disabling it\n",
                 bond->dev->name, slave->dev->name);
@@ -2562,7 +2571,8 @@ static void bond_ab_arp_commit(struct bonding *bond)
                 slave->link = BOND_LINK_UP;
                 if (bond->current_arp_slave) {
                     bond_set_slave_inactive_flags(
-                        bond->current_arp_slave);
+                        bond->current_arp_slave,
+                        BOND_SLAVE_NOTIFY_NOW);
                     bond->current_arp_slave = NULL;
                 }
 
@@ -2582,7 +2592,8 @@ static void bond_ab_arp_commit(struct bonding *bond)
                 slave->link_failure_count++;
 
             slave->link = BOND_LINK_DOWN;
-            bond_set_slave_inactive_flags(slave);
+            bond_set_slave_inactive_flags(slave,
+                              BOND_SLAVE_NOTIFY_NOW);
 
             pr_info("%s: link status definitely down for interface %s, disabling it\n",
                 bond->dev->name, slave->dev->name);
@@ -2615,17 +2626,17 @@ do_failover:
 
 /*
  * Send ARP probes for active-backup mode ARP monitor.
+ *
+ * Called with rcu_read_lock hold.
  */
 static bool bond_ab_arp_probe(struct bonding *bond)
 {
     struct slave *slave, *before = NULL, *new_slave = NULL,
-           *curr_arp_slave, *curr_active_slave;
+           *curr_arp_slave = rcu_dereference(bond->current_arp_slave),
+           *curr_active_slave = rcu_dereference(bond->curr_active_slave);
     struct list_head *iter;
     bool found = false;
-
-    rcu_read_lock();
-    curr_arp_slave = rcu_dereference(bond->current_arp_slave);
-    curr_active_slave = rcu_dereference(bond->curr_active_slave);
+    bool should_notify_rtnl = BOND_SLAVE_NOTIFY_LATER;
 
     if (curr_arp_slave && curr_active_slave)
         pr_info("PROBE: c_arp %s && cas %s BAD\n",
@@ -2634,32 +2645,23 @@ static bool bond_ab_arp_probe(struct bonding *bond)
 
     if (curr_active_slave) {
         bond_arp_send_all(bond, curr_active_slave);
-        rcu_read_unlock();
-        return true;
+        return should_notify_rtnl;
     }
-    rcu_read_unlock();
 
     /* if we don't have a curr_active_slave, search for the next available
      * backup slave from the current_arp_slave and make it the candidate
      * for becoming the curr_active_slave
     */
 
-    if (!rtnl_trylock())
-        return false;
-    /* curr_arp_slave might have gone away */
-    curr_arp_slave = ACCESS_ONCE(bond->current_arp_slave);
-
     if (!curr_arp_slave) {
-        curr_arp_slave = bond_first_slave(bond);
-        if (!curr_arp_slave) {
-            rtnl_unlock();
-            return true;
-        }
+        curr_arp_slave = bond_first_slave_rcu(bond);
+        if (!curr_arp_slave)
+            return should_notify_rtnl;
     }
 
-    bond_set_slave_inactive_flags(curr_arp_slave);
+    bond_set_slave_inactive_flags(curr_arp_slave, BOND_SLAVE_NOTIFY_LATER);
 
-    bond_for_each_slave(bond, slave, iter) {
+    bond_for_each_slave_rcu(bond, slave, iter) {
         if (!found && !before && IS_UP(slave->dev))
             before = slave;
 
@@ -2677,7 +2679,8 @@ static bool bond_ab_arp_probe(struct bonding *bond)
             if (slave->link_failure_count < UINT_MAX)
                 slave->link_failure_count++;
 
-            bond_set_slave_inactive_flags(slave);
+            bond_set_slave_inactive_flags(slave,
+                              BOND_SLAVE_NOTIFY_LATER);
 
             pr_info("%s: backup interface %s is now down.\n",
                 bond->dev->name, slave->dev->name);
@@ -2689,26 +2692,31 @@ static bool bond_ab_arp_probe(struct bonding *bond)
     if (!new_slave && before)
         new_slave = before;
 
-    if (!new_slave) {
-        rtnl_unlock();
-        return true;
-    }
+    if (!new_slave)
+        goto check_state;
 
     new_slave->link = BOND_LINK_BACK;
-    bond_set_slave_active_flags(new_slave);
+    bond_set_slave_active_flags(new_slave, BOND_SLAVE_NOTIFY_LATER);
     bond_arp_send_all(bond, new_slave);
     new_slave->jiffies = jiffies;
     rcu_assign_pointer(bond->current_arp_slave, new_slave);
-    rtnl_unlock();
 
-    return true;
+check_state:
+    bond_for_each_slave_rcu(bond, slave, iter) {
+        if (slave->should_notify) {
+            should_notify_rtnl = BOND_SLAVE_NOTIFY_NOW;
+            break;
+        }
+    }
+    return should_notify_rtnl;
 }
 
 static void bond_activebackup_arp_mon(struct work_struct *work)
 {
     struct bonding *bond = container_of(work, struct bonding,
                         arp_work.work);
-    bool should_notify_peers = false, should_commit = false;
+    bool should_notify_peers = false;
+    bool should_notify_rtnl = false;
     int delta_in_ticks;
 
     delta_in_ticks = msecs_to_jiffies(bond->params.arp_interval);
@@ -2717,11 +2725,12 @@ static void bond_activebackup_arp_mon(struct work_struct *work)
         goto re_arm;
 
     rcu_read_lock();
+
     should_notify_peers = bond_should_notify_peers(bond);
-    should_commit = bond_ab_arp_inspect(bond);
-    rcu_read_unlock();
 
-    if (should_commit) {
+    if (bond_ab_arp_inspect(bond)) {
+        rcu_read_unlock();
+
         /* Race avoidance with bond_close flush of workqueue */
         if (!rtnl_trylock()) {
             delta_in_ticks = 1;
@@ -2730,23 +2739,28 @@ static void bond_activebackup_arp_mon(struct work_struct *work)
         }
 
         bond_ab_arp_commit(bond);
+
         rtnl_unlock();
+        rcu_read_lock();
     }
 
-    if (!bond_ab_arp_probe(bond)) {
-        /* rtnl locking failed, re-arm */
-        delta_in_ticks = 1;
-        should_notify_peers = false;
-    }
+    should_notify_rtnl = bond_ab_arp_probe(bond);
+    rcu_read_unlock();
 
 re_arm:
     if (bond->params.arp_interval)
         queue_delayed_work(bond->wq, &bond->arp_work, delta_in_ticks);
 
-    if (should_notify_peers) {
+    if (should_notify_peers || should_notify_rtnl) {
         if (!rtnl_trylock())
             return;
-        call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, bond->dev);
+
+        if (should_notify_peers)
+            call_netdevice_notifiers(NETDEV_NOTIFY_PEERS,
+                         bond->dev);
+        if (should_notify_rtnl)
+            bond_slave_state_notify(bond);
+
         rtnl_unlock();
     }
 }
@@ -3046,9 +3060,11 @@ static int bond_open(struct net_device *bond_dev)
         bond_for_each_slave(bond, slave, iter) {
             if ((bond->params.mode == BOND_MODE_ACTIVEBACKUP)
                 && (slave != bond->curr_active_slave)) {
-                bond_set_slave_inactive_flags(slave);
+                bond_set_slave_inactive_flags(slave,
+                                  BOND_SLAVE_NOTIFY_NOW);
             } else {
-                bond_set_slave_active_flags(slave);
+                bond_set_slave_active_flags(slave,
+                                BOND_SLAVE_NOTIFY_NOW);
             }
         }
         read_unlock(&bond->curr_slave_lock);
@@ -195,7 +195,8 @@ struct slave {
     s8     new_link;
     u8     backup:1,   /* indicates backup slave. Value corresponds with
                           BOND_STATE_ACTIVE and BOND_STATE_BACKUP */
-           inactive:1; /* indicates inactive slave */
+           inactive:1, /* indicates inactive slave */
+           should_notify:1; /* indicates whether the state changed */
     u8     duplex;
     u32    original_mtu;
     u32    link_failure_count;
@@ -303,6 +304,24 @@ static inline void bond_set_backup_slave(struct slave *slave)
     }
 }
 
+static inline void bond_set_slave_state(struct slave *slave,
+                    int slave_state, bool notify)
+{
+    if (slave->backup == slave_state)
+        return;
+
+    slave->backup = slave_state;
+    if (notify) {
+        rtmsg_ifinfo(RTM_NEWLINK, slave->dev, 0, GFP_KERNEL);
+        slave->should_notify = 0;
+    } else {
+        if (slave->should_notify)
+            slave->should_notify = 0;
+        else
+            slave->should_notify = 1;
+    }
+}
+
 static inline void bond_slave_state_change(struct bonding *bond)
 {
     struct list_head *iter;
@@ -316,6 +335,19 @@ static inline void bond_slave_state_change(struct bonding *bond)
     }
 }
 
+static inline void bond_slave_state_notify(struct bonding *bond)
+{
+    struct list_head *iter;
+    struct slave *tmp;
+
+    bond_for_each_slave(bond, tmp, iter) {
+        if (tmp->should_notify) {
+            rtmsg_ifinfo(RTM_NEWLINK, tmp->dev, 0, GFP_KERNEL);
+            tmp->should_notify = 0;
+        }
+    }
+}
+
 static inline int bond_slave_state(struct slave *slave)
 {
     return slave->backup;
@@ -343,6 +375,9 @@ static inline bool bond_is_active_slave(struct slave *slave)
 #define BOND_ARP_VALIDATE_ALL       (BOND_ARP_VALIDATE_ACTIVE | \
                      BOND_ARP_VALIDATE_BACKUP)
 
+#define BOND_SLAVE_NOTIFY_NOW       true
+#define BOND_SLAVE_NOTIFY_LATER     false
+
 static inline int slave_do_arp_validate(struct bonding *bond,
                     struct slave *slave)
 {
@@ -394,17 +429,19 @@ static inline void bond_netpoll_send_skb(const struct slave *slave,
 }
 #endif
 
-static inline void bond_set_slave_inactive_flags(struct slave *slave)
+static inline void bond_set_slave_inactive_flags(struct slave *slave,
+                         bool notify)
 {
     if (!bond_is_lb(slave->bond))
-        bond_set_backup_slave(slave);
+        bond_set_slave_state(slave, BOND_STATE_BACKUP, notify);
     if (!slave->bond->params.all_slaves_active)
         slave->inactive = 1;
 }
 
-static inline void bond_set_slave_active_flags(struct slave *slave)
+static inline void bond_set_slave_active_flags(struct slave *slave,
+                           bool notify)
 {
-    bond_set_active_slave(slave);
+    bond_set_slave_state(slave, BOND_STATE_ACTIVE, notify);
     slave->inactive = 0;
 }
@@ -144,6 +144,8 @@
 
 #define FLEXCAN_MB_CODE_MASK        (0xf0ffffff)
 
+#define FLEXCAN_TIMEOUT_US      (50)
+
 /*
  * FLEXCAN hardware feature flags
  *
@@ -262,6 +264,22 @@ static inline void flexcan_write(u32 val, void __iomem *addr)
 }
 #endif
 
+static inline int flexcan_transceiver_enable(const struct flexcan_priv *priv)
+{
+    if (!priv->reg_xceiver)
+        return 0;
+
+    return regulator_enable(priv->reg_xceiver);
+}
+
+static inline int flexcan_transceiver_disable(const struct flexcan_priv *priv)
+{
+    if (!priv->reg_xceiver)
+        return 0;
+
+    return regulator_disable(priv->reg_xceiver);
+}
+
 static inline int flexcan_has_and_handle_berr(const struct flexcan_priv *priv,
                           u32 reg_esr)
 {
@@ -269,26 +287,95 @@ static inline int flexcan_has_and_handle_berr(const struct flexcan_priv *priv,
         (reg_esr & FLEXCAN_ESR_ERR_BUS);
 }
 
-static inline void flexcan_chip_enable(struct flexcan_priv *priv)
+static int flexcan_chip_enable(struct flexcan_priv *priv)
 {
     struct flexcan_regs __iomem *regs = priv->base;
+    unsigned int timeout = FLEXCAN_TIMEOUT_US / 10;
     u32 reg;
 
     reg = flexcan_read(&regs->mcr);
     reg &= ~FLEXCAN_MCR_MDIS;
     flexcan_write(reg, &regs->mcr);
 
-    udelay(10);
+    while (timeout-- && (flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK))
+        usleep_range(10, 20);
+
+    if (flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK)
+        return -ETIMEDOUT;
+
+    return 0;
 }
 
-static inline void flexcan_chip_disable(struct flexcan_priv *priv)
+static int flexcan_chip_disable(struct flexcan_priv *priv)
 {
     struct flexcan_regs __iomem *regs = priv->base;
+    unsigned int timeout = FLEXCAN_TIMEOUT_US / 10;
     u32 reg;
 
     reg = flexcan_read(&regs->mcr);
     reg |= FLEXCAN_MCR_MDIS;
     flexcan_write(reg, &regs->mcr);
+
+    while (timeout-- && !(flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK))
+        usleep_range(10, 20);
+
+    if (!(flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK))
+        return -ETIMEDOUT;
+
+    return 0;
+}
+
+static int flexcan_chip_freeze(struct flexcan_priv *priv)
+{
+    struct flexcan_regs __iomem *regs = priv->base;
+    unsigned int timeout = 1000 * 1000 * 10 / priv->can.bittiming.bitrate;
+    u32 reg;
+
+    reg = flexcan_read(&regs->mcr);
+    reg |= FLEXCAN_MCR_HALT;
+    flexcan_write(reg, &regs->mcr);
+
+    while (timeout-- && !(flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK))
+        usleep_range(100, 200);
+
+    if (!(flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK))
+        return -ETIMEDOUT;
+
+    return 0;
+}
+
+static int flexcan_chip_unfreeze(struct flexcan_priv *priv)
+{
+    struct flexcan_regs __iomem *regs = priv->base;
+    unsigned int timeout = FLEXCAN_TIMEOUT_US / 10;
+    u32 reg;
+
+    reg = flexcan_read(&regs->mcr);
+    reg &= ~FLEXCAN_MCR_HALT;
+    flexcan_write(reg, &regs->mcr);
+
+    while (timeout-- && (flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK))
+        usleep_range(10, 20);
+
+    if (flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK)
+        return -ETIMEDOUT;
+
+    return 0;
+}
+
+static int flexcan_chip_softreset(struct flexcan_priv *priv)
+{
+    struct flexcan_regs __iomem *regs = priv->base;
+    unsigned int timeout = FLEXCAN_TIMEOUT_US / 10;
+
+    flexcan_write(FLEXCAN_MCR_SOFTRST, &regs->mcr);
+    while (timeout-- && (flexcan_read(&regs->mcr) & FLEXCAN_MCR_SOFTRST))
+        usleep_range(10, 20);
+
+    if (flexcan_read(&regs->mcr) & FLEXCAN_MCR_SOFTRST)
+        return -ETIMEDOUT;
+
+    return 0;
+}
 
 static int flexcan_get_berr_counter(const struct net_device *dev,
@@ -709,19 +796,14 @@ static int flexcan_chip_start(struct net_device *dev)
     u32 reg_mcr, reg_ctrl;
 
     /* enable module */
-    flexcan_chip_enable(priv);
+    err = flexcan_chip_enable(priv);
+    if (err)
+        return err;
 
     /* soft reset */
-    flexcan_write(FLEXCAN_MCR_SOFTRST, &regs->mcr);
-    udelay(10);
-
-    reg_mcr = flexcan_read(&regs->mcr);
-    if (reg_mcr & FLEXCAN_MCR_SOFTRST) {
-        netdev_err(dev, "Failed to softreset can module (mcr=0x%08x)\n",
-               reg_mcr);
-        err = -ENODEV;
-        goto out;
-    }
+    err = flexcan_chip_softreset(priv);
+    if (err)
+        goto out_chip_disable;
 
     flexcan_set_bittiming(dev);
 
@@ -788,16 +870,14 @@ static int flexcan_chip_start(struct net_device *dev)
     if (priv->devtype_data->features & FLEXCAN_HAS_V10_FEATURES)
         flexcan_write(0x0, &regs->rxfgmask);
 
-    if (priv->reg_xceiver) {
-        err = regulator_enable(priv->reg_xceiver);
-        if (err)
-            goto out;
-    }
+    err = flexcan_transceiver_enable(priv);
+    if (err)
+        goto out_chip_disable;
 
     /* synchronize with the can bus */
-    reg_mcr = flexcan_read(&regs->mcr);
-    reg_mcr &= ~FLEXCAN_MCR_HALT;
-    flexcan_write(reg_mcr, &regs->mcr);
+    err = flexcan_chip_unfreeze(priv);
+    if (err)
+        goto out_transceiver_disable;
 
     priv->can.state = CAN_STATE_ERROR_ACTIVE;
 
@@ -810,7 +890,9 @@ static int flexcan_chip_start(struct net_device *dev)
 
     return 0;
 
- out:
+ out_transceiver_disable:
+    flexcan_transceiver_disable(priv);
+ out_chip_disable:
     flexcan_chip_disable(priv);
     return err;
 }
@@ -825,18 +907,17 @@ static void flexcan_chip_stop(struct net_device *dev)
 {
     struct flexcan_priv *priv = netdev_priv(dev);
     struct flexcan_regs __iomem *regs = priv->base;
-    u32 reg;
+
+    /* freeze + disable module */
+    flexcan_chip_freeze(priv);
+    flexcan_chip_disable(priv);
 
     /* Disable all interrupts */
     flexcan_write(0, &regs->imask1);
     flexcan_write(priv->reg_ctrl_default & ~FLEXCAN_CTRL_ERR_ALL,
               &regs->ctrl);
 
-    /* Disable + halt module */
-    reg = flexcan_read(&regs->mcr);
-    reg |= FLEXCAN_MCR_MDIS | FLEXCAN_MCR_HALT;
-    flexcan_write(reg, &regs->mcr);
-
-    if (priv->reg_xceiver)
-        regulator_disable(priv->reg_xceiver);
+    flexcan_transceiver_disable(priv);
     priv->can.state = CAN_STATE_STOPPED;
 
     return;
@@ -866,7 +947,7 @@ static int flexcan_open(struct net_device *dev)
     /* start chip and queuing */
     err = flexcan_chip_start(dev);
     if (err)
-        goto out_close;
+        goto out_free_irq;
 
     can_led_event(dev, CAN_LED_EVENT_OPEN);
 
@@ -875,6 +956,8 @@ static int flexcan_open(struct net_device *dev)
 
     return 0;
 
+ out_free_irq:
+    free_irq(dev->irq, dev);
  out_close:
     close_candev(dev);
  out_disable_per:
@@ -945,12 +1028,16 @@ static int register_flexcandev(struct net_device *dev)
         goto out_disable_ipg;
 
     /* select "bus clock", chip must be disabled */
-    flexcan_chip_disable(priv);
+    err = flexcan_chip_disable(priv);
+    if (err)
+        goto out_disable_per;
     reg = flexcan_read(&regs->ctrl);
     reg |= FLEXCAN_CTRL_CLK_SRC;
     flexcan_write(reg, &regs->ctrl);
 
-    flexcan_chip_enable(priv);
+    err = flexcan_chip_enable(priv);
+    if (err)
+        goto out_chip_disable;
 
     /* set freeze, halt and activate FIFO, restrict register access */
     reg = flexcan_read(&regs->mcr);
@@ -967,14 +1054,15 @@ static int register_flexcandev(struct net_device *dev)
     if (!(reg & FLEXCAN_MCR_FEN)) {
         netdev_err(dev, "Could not enable RX FIFO, unsupported core\n");
         err = -ENODEV;
-        goto out_disable_per;
+        goto out_chip_disable;
     }
 
     err = register_candev(dev);
 
- out_disable_per:
     /* disable core and turn off clocks */
+ out_chip_disable:
     flexcan_chip_disable(priv);
+ out_disable_per:
     clk_disable_unprepare(priv->clk_per);
 out_disable_ipg:
     clk_disable_unprepare(priv->clk_ipg);
@@ -1104,9 +1192,10 @@ static int flexcan_probe(struct platform_device *pdev)
 static int flexcan_remove(struct platform_device *pdev)
 {
     struct net_device *dev = platform_get_drvdata(pdev);
+    struct flexcan_priv *priv = netdev_priv(dev);
 
     unregister_flexcandev(dev);
-
+    netif_napi_del(&priv->napi);
     free_candev(dev);
 
     return 0;
@@ -1117,8 +1206,11 @@ static int flexcan_suspend(struct device *device)
 {
     struct net_device *dev = dev_get_drvdata(device);
     struct flexcan_priv *priv = netdev_priv(dev);
+    int err;
 
-    flexcan_chip_disable(priv);
+    err = flexcan_chip_disable(priv);
+    if (err)
+        return err;
 
     if (netif_running(dev)) {
         netif_stop_queue(dev);
@@ -1139,9 +1231,7 @@ static int flexcan_resume(struct device *device)
         netif_device_attach(dev);
         netif_start_queue(dev);
     }
-    flexcan_chip_enable(priv);
 
-    return 0;
+    return flexcan_chip_enable(priv);
 }
 #endif /* CONFIG_PM_SLEEP */
@@ -1484,6 +1484,10 @@ static int b44_open(struct net_device *dev)
     add_timer(&bp->timer);
 
     b44_enable_ints(bp);
+
+    if (bp->flags & B44_FLAG_EXTERNAL_PHY)
+        phy_start(bp->phydev);
+
     netif_start_queue(dev);
 out:
     return err;
@@ -1646,6 +1650,9 @@ static int b44_close(struct net_device *dev)
 
     netif_stop_queue(dev);
 
+    if (bp->flags & B44_FLAG_EXTERNAL_PHY)
+        phy_stop(bp->phydev);
+
     napi_disable(&bp->napi);
 
     del_timer_sync(&bp->timer);
@@ -2222,7 +2229,12 @@ static void b44_adjust_link(struct net_device *dev)
     }
 
     if (status_changed) {
-        b44_check_phy(bp);
+        u32 val = br32(bp, B44_TX_CTRL);
+        if (bp->flags & B44_FLAG_FULL_DUPLEX)
+            val |= TX_CTRL_DUPLEX;
+        else
+            val &= ~TX_CTRL_DUPLEX;
+        bw32(bp, B44_TX_CTRL, val);
         phy_print_status(phydev);
     }
 }
@@ -3875,7 +3875,9 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
                 xmit_type);
     }
 
-    /* Add the macs to the parsing BD this is a vf */
+    /* Add the macs to the parsing BD if this is a vf or if
+     * Tx Switching is enabled.
+     */
     if (IS_VF(bp)) {
         /* override GRE parameters in BD */
         bnx2x_set_fw_mac_addr(&pbd_e2->data.mac_addr.src_hi,
@@ -3883,6 +3885,11 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
                       &pbd_e2->data.mac_addr.src_lo,
                       eth->h_source);
+
+        bnx2x_set_fw_mac_addr(&pbd_e2->data.mac_addr.dst_hi,
+                      &pbd_e2->data.mac_addr.dst_mid,
+                      &pbd_e2->data.mac_addr.dst_lo,
+                      eth->h_dest);
+    } else if (bp->flags & TX_SWITCHING) {
         bnx2x_set_fw_mac_addr(&pbd_e2->data.mac_addr.dst_hi,
                       &pbd_e2->data.mac_addr.dst_mid,
                       &pbd_e2->data.mac_addr.dst_lo,
@@ -6843,8 +6843,7 @@ static int tg3_rx(struct tg3_napi *tnapi, int budget)
 
         work_mask |= opaque_key;
 
-        if ((desc->err_vlan & RXD_ERR_MASK) != 0 &&
-            (desc->err_vlan != RXD_ERR_ODD_NIBBLE_RCVD_MII)) {
+        if (desc->err_vlan & RXD_ERR_MASK) {
         drop_it:
             tg3_recycle_rx(tnapi, tpr, opaque_key,
                        desc_idx, *post_ptr);
@@ -2608,7 +2608,11 @@ struct tg3_rx_buffer_desc {
 #define RXD_ERR_TOO_SMALL       0x00400000
 #define RXD_ERR_NO_RESOURCES        0x00800000
 #define RXD_ERR_HUGE_FRAME      0x01000000
-#define RXD_ERR_MASK            0xffff0000
+
+#define RXD_ERR_MASK    (RXD_ERR_BAD_CRC | RXD_ERR_COLLISION |     \
+             RXD_ERR_LINK_LOST | RXD_ERR_PHY_DECODE |  \
+             RXD_ERR_MAC_ABRT | RXD_ERR_TOO_SMALL |    \
+             RXD_ERR_NO_RESOURCES | RXD_ERR_HUGE_FRAME)
 
     u32             reserved;
     u32             opaque;
@@ -707,7 +707,8 @@ bnad_cq_process(struct bnad *bnad, struct bna_ccb *ccb, int budget)
         else
             skb_checksum_none_assert(skb);
 
-        if (flags & BNA_CQ_EF_VLAN)
+        if ((flags & BNA_CQ_EF_VLAN) &&
+            (bnad->netdev->features & NETIF_F_HW_VLAN_CTAG_RX))
             __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), ntohs(cmpl->vlan_tag));
 
         if (BNAD_RXBUF_IS_SK_BUFF(unmap_q->type))
@@ -2094,7 +2095,9 @@ bnad_init_rx_config(struct bnad *bnad, struct bna_rx_config *rx_config)
         rx_config->q1_buf_size = BFI_SMALL_RXBUF_SIZE;
     }
 
-    rx_config->vlan_strip_status = BNA_STATUS_T_ENABLED;
+    rx_config->vlan_strip_status =
+        (bnad->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) ?
+        BNA_STATUS_T_ENABLED : BNA_STATUS_T_DISABLED;
 }
 
 static void
@@ -3245,11 +3248,6 @@ bnad_set_rx_mode(struct net_device *netdev)
             BNA_RXMODE_ALLMULTI;
     bna_rx_mode_set(bnad->rx_info[0].rx, new_mode, mode_mask, NULL);
 
-    if (bnad->cfg_flags & BNAD_CF_PROMISC)
-        bna_rx_vlan_strip_disable(bnad->rx_info[0].rx);
-    else
-        bna_rx_vlan_strip_enable(bnad->rx_info[0].rx);
-
     spin_unlock_irqrestore(&bnad->bna_lock, flags);
 }
 
@@ -3374,6 +3372,27 @@ bnad_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid)
     return 0;
 }
 
+static int bnad_set_features(struct net_device *dev, netdev_features_t features)
+{
+    struct bnad *bnad = netdev_priv(dev);
+    netdev_features_t changed = features ^ dev->features;
+
+    if ((changed & NETIF_F_HW_VLAN_CTAG_RX) && netif_running(dev)) {
+        unsigned long flags;
+
+        spin_lock_irqsave(&bnad->bna_lock, flags);
+
+        if (features & NETIF_F_HW_VLAN_CTAG_RX)
+            bna_rx_vlan_strip_enable(bnad->rx_info[0].rx);
+        else
+            bna_rx_vlan_strip_disable(bnad->rx_info[0].rx);
+
+        spin_unlock_irqrestore(&bnad->bna_lock, flags);
+    }
+
+    return 0;
+}
+
 #ifdef CONFIG_NET_POLL_CONTROLLER
 static void
 bnad_netpoll(struct net_device *netdev)
@@ -3421,6 +3440,7 @@ static const struct net_device_ops bnad_netdev_ops = {
     .ndo_change_mtu     = bnad_change_mtu,
     .ndo_vlan_rx_add_vid    = bnad_vlan_rx_add_vid,
     .ndo_vlan_rx_kill_vid   = bnad_vlan_rx_kill_vid,
+    .ndo_set_features   = bnad_set_features,
 #ifdef CONFIG_NET_POLL_CONTROLLER
     .ndo_poll_controller    = bnad_netpoll
 #endif
@@ -3433,14 +3453,14 @@ bnad_netdev_init(struct bnad *bnad, bool using_dac)
 
     netdev->hw_features = NETIF_F_SG | NETIF_F_RXCSUM |
         NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
-        NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_HW_VLAN_CTAG_TX;
+        NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_HW_VLAN_CTAG_TX |
+        NETIF_F_HW_VLAN_CTAG_RX;
 
     netdev->vlan_features = NETIF_F_SG | NETIF_F_HIGHDMA |
         NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
         NETIF_F_TSO | NETIF_F_TSO6;
 
-    netdev->features |= netdev->hw_features |
-        NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_FILTER;
+    netdev->features |= netdev->hw_features | NETIF_F_HW_VLAN_CTAG_FILTER;
 
     if (using_dac)
         netdev->features |= NETIF_F_HIGHDMA;
@@ -6179,6 +6179,7 @@ static struct pci_driver cxgb4_driver = {
     .id_table = cxgb4_pci_tbl,
     .probe    = init_one,
     .remove   = remove_one,
+    .shutdown = remove_one,
     .err_handler = &cxgb4_eeh,
 };
@@ -350,11 +350,13 @@ struct be_drv_stats {
     u32 roce_drops_crc;
 };
 
+/* A vlan-id of 0xFFFF must be used to clear transparent vlan-tagging */
+#define BE_RESET_VLAN_TAG_ID    0xFFFF
+
 struct be_vf_cfg {
     unsigned char mac_addr[ETH_ALEN];
     int if_handle;
     int pmac_id;
-    u16 def_vid;
     u16 vlan_tag;
     u32 tx_rate;
 };
@@ -913,24 +913,14 @@ static int be_ipv6_tx_stall_chk(struct be_adapter *adapter,
     return BE3_chip(adapter) && be_ipv6_exthdr_check(skb);
 }
 
-static struct sk_buff *be_xmit_workarounds(struct be_adapter *adapter,
-                       struct sk_buff *skb,
-                       bool *skip_hw_vlan)
+static struct sk_buff *be_lancer_xmit_workarounds(struct be_adapter *adapter,
+                          struct sk_buff *skb,
+                          bool *skip_hw_vlan)
 {
     struct vlan_ethhdr *veh = (struct vlan_ethhdr *)skb->data;
     unsigned int eth_hdr_len;
     struct iphdr *ip;
 
-    /* Lancer, SH-R ASICs have a bug wherein Packets that are 32 bytes or less
-     * may cause a transmit stall on that port. So the work-around is to
-     * pad short packets (<= 32 bytes) to a 36-byte length.
-     */
-    if (unlikely(!BEx_chip(adapter) && skb->len <= 32)) {
-        if (skb_padto(skb, 36))
-            goto tx_drop;
-        skb->len = 36;
-    }
-
     /* For padded packets, BE HW modifies tot_len field in IP header
      * incorrecly when VLAN tag is inserted by HW.
      * For padded packets, Lancer computes incorrect checksum.
@@ -959,7 +949,7 @@ static struct sk_buff *be_xmit_workarounds(struct be_adapter *adapter,
         vlan_tx_tag_present(skb)) {
         skb = be_insert_vlan_in_pkt(adapter, skb, skip_hw_vlan);
         if (unlikely(!skb))
-            goto tx_drop;
+            goto err;
     }
 
     /* HW may lockup when VLAN HW tagging is requested on
@@ -981,15 +971,39 @@ static struct sk_buff *be_xmit_workarounds(struct be_adapter *adapter,
         be_vlan_tag_tx_chk(adapter, skb)) {
         skb = be_insert_vlan_in_pkt(adapter, skb, skip_hw_vlan);
         if (unlikely(!skb))
-            goto tx_drop;
+            goto err;
     }
 
     return skb;
 tx_drop:
     dev_kfree_skb_any(skb);
+err:
     return NULL;
 }
 
+static struct sk_buff *be_xmit_workarounds(struct be_adapter *adapter,
+                       struct sk_buff *skb,
+                       bool *skip_hw_vlan)
+{
+    /* Lancer, SH-R ASICs have a bug wherein Packets that are 32 bytes or
+     * less may cause a transmit stall on that port. So the work-around is
+     * to pad short packets (<= 32 bytes) to a 36-byte length.
+     */
+    if (unlikely(!BEx_chip(adapter) && skb->len <= 32)) {
+        if (skb_padto(skb, 36))
+            return NULL;
+        skb->len = 36;
+    }
+
+    if (BEx_chip(adapter) || lancer_chip(adapter)) {
+        skb = be_lancer_xmit_workarounds(adapter, skb, skip_hw_vlan);
+        if (!skb)
+            return NULL;
+    }
+
+    return skb;
+}
+
 static netdev_tx_t be_xmit(struct sk_buff *skb, struct net_device *netdev)
 {
     struct be_adapter *adapter = netdev_priv(netdev);
@@ -1157,6 +1171,14 @@ ret:
     return status;
 }
 
+static void be_clear_promisc(struct be_adapter *adapter)
+{
+    adapter->promiscuous = false;
+    adapter->flags &= ~BE_FLAGS_VLAN_PROMISC;
+
+    be_cmd_rx_filter(adapter, IFF_PROMISC, OFF);
+}
+
 static void be_set_rx_mode(struct net_device *netdev)
 {
     struct be_adapter *adapter = netdev_priv(netdev);
@@ -1170,9 +1192,7 @@ static void be_set_rx_mode(struct net_device *netdev)
 
     /* BE was previously in promiscuous mode; disable it */
     if (adapter->promiscuous) {
-        adapter->promiscuous = false;
-        be_cmd_rx_filter(adapter, IFF_PROMISC, OFF);
-
+        be_clear_promisc(adapter);
         if (adapter->vlans_added)
             be_vid_config(adapter);
     }
@@ -1287,24 +1307,20 @@ static int be_set_vf_vlan(struct net_device *netdev,
 
     if (vlan || qos) {
         vlan |= qos << VLAN_PRIO_SHIFT;
-        if (vf_cfg->vlan_tag != vlan) {
-            /* If this is new value, program it. Else skip. */
-            vf_cfg->vlan_tag = vlan;
+        if (vf_cfg->vlan_tag != vlan)
             status = be_cmd_set_hsw_config(adapter, vlan, vf + 1,
                                vf_cfg->if_handle, 0);
-        }
     } else {
         /* Reset Transparent Vlan Tagging. */
-        vf_cfg->vlan_tag = 0;
-        vlan = vf_cfg->def_vid;
-        status = be_cmd_set_hsw_config(adapter, vlan, vf + 1,
-                           vf_cfg->if_handle, 0);
+        status = be_cmd_set_hsw_config(adapter, BE_RESET_VLAN_TAG_ID,
+                           vf + 1, vf_cfg->if_handle, 0);
     }
 
-    if (status)
+    if (!status)
+        vf_cfg->vlan_tag = vlan;
+    else
         dev_info(&adapter->pdev->dev,
-            "VLAN %d config on VF %d failed\n", vlan, vf);
+             "VLAN %d config on VF %d failed\n", vlan, vf);
     return status;
 }
 
@@ -3013,11 +3029,11 @@ static int be_vf_setup_init(struct be_adapter *adapter)
 
 static int be_vf_setup(struct be_adapter *adapter)
 {
-    struct be_vf_cfg *vf_cfg;
-    u16 def_vlan, lnk_speed;
-    int status, old_vfs, vf;
     struct device *dev = &adapter->pdev->dev;
+    struct be_vf_cfg *vf_cfg;
+    int status, old_vfs, vf;
     u32 privileges;
+    u16 lnk_speed;
 
     old_vfs = pci_num_vf(adapter->pdev);
     if (old_vfs) {
@@ -3084,12 +3100,6 @@ static int be_vf_setup(struct be_adapter *adapter)
         if (!status)
             vf_cfg->tx_rate = lnk_speed;
 
-        status = be_cmd_get_hsw_config(adapter, &def_vlan,
-                           vf + 1, vf_cfg->if_handle, NULL);
-        if (status)
-            goto err;
-        vf_cfg->def_vid = def_vlan;
-
         if (!old_vfs)
             be_cmd_enable_vf(adapter, vf + 1);
     }
@@ -389,12 +389,6 @@ fec_enet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
             netdev_err(ndev, "Tx DMA memory map failed\n");
         return NETDEV_TX_OK;
     }
-    /* Send it on its way.  Tell FEC it's ready, interrupt when done,
-     * it's the last BD of the frame, and to put the CRC on the end.
-     */
-    status |= (BD_ENET_TX_READY | BD_ENET_TX_INTR
-            | BD_ENET_TX_LAST | BD_ENET_TX_TC);
-    bdp->cbd_sc = status;
 
     if (fep->bufdesc_ex) {
 
@@ -416,6 +410,13 @@ fec_enet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
         }
     }
 
+    /* Send it on its way.  Tell FEC it's ready, interrupt when done,
+     * it's the last BD of the frame, and to put the CRC on the end.
+     */
+    status |= (BD_ENET_TX_READY | BD_ENET_TX_INTR
+            | BD_ENET_TX_LAST | BD_ENET_TX_TC);
+    bdp->cbd_sc = status;
+
     bdp_pre = fec_enet_get_prevdesc(bdp, fep);
     if ((id_entry->driver_data & FEC_QUIRK_ERR006358) &&
         !(bdp_pre->cbd_sc & BD_ENET_TX_READY)) {
@@ -51,8 +51,8 @@
 
 #define DRV_NAME    "mlx4_core"
 #define PFX     DRV_NAME ": "
-#define DRV_VERSION "1.1"
-#define DRV_RELDATE "Dec, 2011"
+#define DRV_VERSION "2.2-1"
+#define DRV_RELDATE "Feb, 2014"
 
 #define MLX4_FS_UDP_UC_EN       (1 << 1)
 #define MLX4_FS_TCP_UC_EN       (1 << 2)
@@ -57,8 +57,8 @@
 #include "en_port.h"
 
 #define DRV_NAME    "mlx4_en"
-#define DRV_VERSION "2.0"
-#define DRV_RELDATE "Dec 2011"
+#define DRV_VERSION "2.2-1"
+#define DRV_RELDATE "Feb 2014"
 
 #define MLX4_EN_MSG_LEVEL   (NETIF_MSG_LINK | NETIF_MSG_IFDOWN)
@@ -46,8 +46,8 @@
 #include "mlx5_core.h"
 
 #define DRIVER_NAME "mlx5_core"
-#define DRIVER_VERSION "1.0"
-#define DRIVER_RELDATE  "June 2013"
+#define DRIVER_VERSION "2.2-1"
+#define DRIVER_RELDATE  "Feb 2014"
 
 MODULE_AUTHOR("Eli Cohen <eli@mellanox.com>");
 MODULE_DESCRIPTION("Mellanox ConnectX-IB HCA core library");
@@ -340,6 +340,7 @@ int qlcnic_83xx_setup_intr(struct qlcnic_adapter *adapter)
             if (qlcnic_sriov_vf_check(adapter))
                 return -EINVAL;
             num_msix = 1;
+            adapter->drv_sds_rings = QLCNIC_SINGLE_RING;
             adapter->drv_tx_rings = QLCNIC_SINGLE_RING;
         }
     }
@@ -807,7 +807,7 @@ qlcnic_dcb_get_pg_tc_cfg_tx(struct net_device *netdev, int tc, u8 *prio,
         !type->tc_param_valid)
         return;
 
-    if (tc < 0 || (tc > QLC_DCB_MAX_TC))
+    if (tc < 0 || (tc >= QLC_DCB_MAX_TC))
         return;
 
     tc_cfg = &type->tc_cfg[tc];
@@ -843,7 +843,7 @@ static void qlcnic_dcb_get_pg_bwg_cfg_tx(struct net_device *netdev, int pgid,
         !type->tc_param_valid)
         return;
 
-    if (pgid < 0 || pgid > QLC_DCB_MAX_PG)
+    if (pgid < 0 || pgid >= QLC_DCB_MAX_PG)
         return;
 
     pgcfg = &type->pg_cfg[pgid];
@@ -816,9 +816,10 @@ static int qlcnic_82xx_setup_intr(struct qlcnic_adapter *adapter)
 
     if (!(adapter->flags & QLCNIC_MSIX_ENABLED)) {
         qlcnic_disable_multi_tx(adapter);
+        adapter->drv_sds_rings = QLCNIC_SINGLE_RING;
 
         err = qlcnic_enable_msi_legacy(adapter);
-        if (!err)
+        if (err)
             return err;
     }
 }
@@ -3863,7 +3864,7 @@ int qlcnic_validate_rings(struct qlcnic_adapter *adapter, __u32 ring_cnt,
         strcpy(buf, "Tx");
     }
 
-    if (!qlcnic_use_msi_x && !qlcnic_use_msi) {
+    if (!QLCNIC_IS_MSI_FAMILY(adapter)) {
         netdev_err(netdev, "No RSS/TSS support in INT-x mode\n");
         return -EINVAL;
     }
@@ -13,8 +13,6 @@
 #define QLC_VF_MIN_TX_RATE  100
 #define QLC_VF_MAX_TX_RATE  9999
 #define QLC_MAC_OPCODE_MASK 0x7
-#define QLC_MAC_STAR_ADD    6
-#define QLC_MAC_STAR_DEL    7
 #define QLC_VF_FLOOD_BIT    BIT_16
 #define QLC_FLOOD_MODE      0x5
 
@@ -1206,13 +1204,6 @@ static int qlcnic_sriov_validate_cfg_macvlan(struct qlcnic_adapter *adapter,
     struct qlcnic_vport *vp = vf->vp;
     u8 op, new_op;
 
-    if (((cmd->req.arg[1] & QLC_MAC_OPCODE_MASK) == QLC_MAC_STAR_ADD) ||
-        ((cmd->req.arg[1] & QLC_MAC_OPCODE_MASK) == QLC_MAC_STAR_DEL)) {
-        netdev_err(adapter->netdev, "MAC + any VLAN filter not allowed from VF %d\n",
-               vf->pci_func);
-        return -EINVAL;
-    }
-
     if (!(cmd->req.arg[1] & BIT_8))
         return -EINVAL;
@@ -7118,6 +7118,8 @@ rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
     }
 
     mutex_init(&tp->wk.mutex);
+    u64_stats_init(&tp->rx_stats.syncp);
+    u64_stats_init(&tp->tx_stats.syncp);
 
     /* Get MAC address */
     for (i = 0; i < ETH_ALEN; i++)
@@ -1668,6 +1668,13 @@ void efx_ptp_event(struct efx_nic *efx, efx_qword_t *ev)
     struct efx_ptp_data *ptp = efx->ptp_data;
     int code = EFX_QWORD_FIELD(*ev, MCDI_EVENT_CODE);
 
+    if (!ptp) {
+        if (net_ratelimit())
+            netif_warn(efx, drv, efx->net_dev,
+                   "Received PTP event but PTP not set up\n");
+        return;
+    }
+
     if (!ptp->enabled)
         return;
@@ -1705,7 +1705,7 @@ static int stmmac_open(struct net_device *dev)
     priv->dma_rx_size = STMMAC_ALIGN(dma_rxsize);
     priv->dma_buf_sz = STMMAC_ALIGN(buf_sz);
 
-    alloc_dma_desc_resources(priv);
+    ret = alloc_dma_desc_resources(priv);
     if (ret < 0) {
         pr_err("%s: DMA descriptors allocation failed\n", __func__);
         goto dma_desc_error;
@@ -1164,11 +1164,17 @@ static void cpsw_init_host_port(struct cpsw_priv *priv)
 
 static void cpsw_slave_stop(struct cpsw_slave *slave, struct cpsw_priv *priv)
 {
+    u32 slave_port;
+
+    slave_port = cpsw_get_slave_port(priv, slave->slave_num);
+
     if (!slave->phy)
         return;
     phy_stop(slave->phy);
     phy_disconnect(slave->phy);
     slave->phy = NULL;
+    cpsw_ale_control_set(priv->ale, slave_port,
+                 ALE_PORT_STATE, ALE_PORT_STATE_DISABLE);
 }
 
 static int cpsw_ndo_open(struct net_device *ndev)
@@ -506,6 +506,9 @@ static int macvlan_change_mtu(struct net_device *dev, int new_mtu)
 static struct lock_class_key macvlan_netdev_xmit_lock_key;
 static struct lock_class_key macvlan_netdev_addr_lock_key;
 
+#define ALWAYS_ON_FEATURES \
+    (NETIF_F_SG | NETIF_F_GEN_CSUM | NETIF_F_GSO_SOFTWARE | NETIF_F_LLTX)
+
 #define MACVLAN_FEATURES \
     (NETIF_F_SG | NETIF_F_ALL_CSUM | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST | \
      NETIF_F_GSO | NETIF_F_TSO | NETIF_F_UFO | NETIF_F_GSO_ROBUST | \
@@ -539,7 +542,7 @@ static int macvlan_init(struct net_device *dev)
     dev->state      = (dev->state & ~MACVLAN_STATE_MASK) |
               (lowerdev->state & MACVLAN_STATE_MASK);
     dev->features       = lowerdev->features & MACVLAN_FEATURES;
-    dev->features       |= NETIF_F_LLTX;
+    dev->features       |= ALWAYS_ON_FEATURES;
     dev->gso_max_size   = lowerdev->gso_max_size;
     dev->iflink     = lowerdev->ifindex;
     dev->hard_header_len    = lowerdev->hard_header_len;
@@ -699,7 +702,7 @@ static netdev_features_t macvlan_fix_features(struct net_device *dev,
     features = netdev_increment_features(vlan->lowerdev->features,
                          features,
                          mask);
-    features |= NETIF_F_LLTX;
+    features |= ALWAYS_ON_FEATURES;
 
     return features;
 }
@@ -916,6 +916,8 @@ int genphy_read_status(struct phy_device *phydev)
     int err;
     int lpa;
     int lpagb = 0;
+    int common_adv;
+    int common_adv_gb = 0;
 
     /* Update the link, but return if there was an error */
     err = genphy_update_link(phydev);
@@ -937,7 +939,7 @@ int genphy_read_status(struct phy_device *phydev)
 
             phydev->lp_advertising =
                 mii_stat1000_to_ethtool_lpa_t(lpagb);
-            lpagb &= adv << 2;
+            common_adv_gb = lpagb & adv << 2;
         }
 
         lpa = phy_read(phydev, MII_LPA);
@@ -950,25 +952,25 @@ int genphy_read_status(struct phy_device *phydev)
         if (adv < 0)
             return adv;
 
-        lpa &= adv;
+        common_adv = lpa & adv;
 
         phydev->speed = SPEED_10;
         phydev->duplex = DUPLEX_HALF;
         phydev->pause = 0;
         phydev->asym_pause = 0;
 
-        if (lpagb & (LPA_1000FULL | LPA_1000HALF)) {
+        if (common_adv_gb & (LPA_1000FULL | LPA_1000HALF)) {
             phydev->speed = SPEED_1000;
 
-            if (lpagb & LPA_1000FULL)
+            if (common_adv_gb & LPA_1000FULL)
                 phydev->duplex = DUPLEX_FULL;
-        } else if (lpa & (LPA_100FULL | LPA_100HALF)) {
+        } else if (common_adv & (LPA_100FULL | LPA_100HALF)) {
             phydev->speed = SPEED_100;
 
-            if (lpa & LPA_100FULL)
+            if (common_adv & LPA_100FULL)
                 phydev->duplex = DUPLEX_FULL;
         } else
-            if (lpa & LPA_10FULL)
+            if (common_adv & LPA_10FULL)
                 phydev->duplex = DUPLEX_FULL;
 
         if (phydev->duplex == DUPLEX_FULL) {
@@ -1686,7 +1686,9 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
                    TUN_USER_FEATURES | NETIF_F_HW_VLAN_CTAG_TX |
                    NETIF_F_HW_VLAN_STAG_TX;
         dev->features = dev->hw_features;
-        dev->vlan_features = dev->features;
+        dev->vlan_features = dev->features &
+                     ~(NETIF_F_HW_VLAN_CTAG_TX |
+                       NETIF_F_HW_VLAN_STAG_TX);
 
         INIT_LIST_HEAD(&tun->disabled);
         err = tun_attach(tun, file, false);
@@ -1395,6 +1395,19 @@ static const struct driver_info ax88178a_info = {
     .tx_fixup = ax88179_tx_fixup,
 };
 
+static const struct driver_info dlink_dub1312_info = {
+    .description = "D-Link DUB-1312 USB 3.0 to Gigabit Ethernet Adapter",
+    .bind = ax88179_bind,
+    .unbind = ax88179_unbind,
+    .status = ax88179_status,
+    .link_reset = ax88179_link_reset,
+    .reset = ax88179_reset,
+    .stop = ax88179_stop,
+    .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+    .rx_fixup = ax88179_rx_fixup,
+    .tx_fixup = ax88179_tx_fixup,
+};
+
 static const struct driver_info sitecom_info = {
     .description = "Sitecom USB 3.0 to Gigabit Adapter",
     .bind = ax88179_bind,
@@ -1421,6 +1434,19 @@ static const struct driver_info samsung_info = {
     .tx_fixup = ax88179_tx_fixup,
 };
 
+static const struct driver_info lenovo_info = {
+    .description = "Lenovo OneLinkDock Gigabit LAN",
+    .bind = ax88179_bind,
+    .unbind = ax88179_unbind,
+    .status = ax88179_status,
+    .link_reset = ax88179_link_reset,
+    .reset = ax88179_reset,
+    .stop = ax88179_stop,
+    .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+    .rx_fixup = ax88179_rx_fixup,
+    .tx_fixup = ax88179_tx_fixup,
+};
+
 static const struct usb_device_id products[] = {
 {
     /* ASIX AX88179 10/100/1000 */
@@ -1430,6 +1456,10 @@ static const struct usb_device_id products[] = {
     /* ASIX AX88178A 10/100/1000 */
     USB_DEVICE(0x0b95, 0x178a),
     .driver_info = (unsigned long)&ax88178a_info,
+}, {
+    /* D-Link DUB-1312 USB 3.0 to Gigabit Ethernet Adapter */
+    USB_DEVICE(0x2001, 0x4a00),
+    .driver_info = (unsigned long)&dlink_dub1312_info,
 }, {
     /* Sitecom USB 3.0 to Gigabit Adapter */
     USB_DEVICE(0x0df6, 0x0072),
@@ -1438,6 +1468,10 @@ static const struct usb_device_id products[] = {
     /* Samsung USB Ethernet Adapter */
     USB_DEVICE(0x04e8, 0xa100),
     .driver_info = (unsigned long)&samsung_info,
+}, {
+    /* Lenovo OneLinkDock Gigabit LAN */
+    USB_DEVICE(0x17ef, 0x304b),
+    .driver_info = (unsigned long)&lenovo_info,
 },
     { },
 };
@@ -285,7 +285,8 @@ static void veth_setup(struct net_device *dev)
     dev->ethtool_ops = &veth_ethtool_ops;
     dev->features |= NETIF_F_LLTX;
     dev->features |= VETH_FEATURES;
-    dev->vlan_features = dev->features;
+    dev->vlan_features = dev->features &
+                 ~(NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX);
     dev->destructor = veth_dev_free;
 
     dev->hw_features = VETH_FEATURES;
@@ -1711,7 +1711,8 @@ static int virtnet_probe(struct virtio_device *vdev)
     /* If we can receive ANY GSO packets, we must allocate large ones. */
     if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) ||
         virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6) ||
-        virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN))
+        virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN) ||
+        virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_UFO))
         vi->big_packets = true;
 
     if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
@@ -57,7 +57,7 @@ static const u32 ar9462_2p0_baseband_postamble[][5] = {
     {0x00009e14, 0x37b95d5e, 0x37b9605e, 0x3236605e, 0x32365a5e},
     {0x00009e18, 0x00000000, 0x00000000, 0x00000000, 0x00000000},
     {0x00009e1c, 0x0001cf9c, 0x0001cf9c, 0x00021f9c, 0x00021f9c},
-    {0x00009e20, 0x000003b5, 0x000003b5, 0x000003ce, 0x000003ce},
+    {0x00009e20, 0x000003a5, 0x000003a5, 0x000003a5, 0x000003a5},
    {0x00009e2c, 0x0000001c, 0x0000001c, 0x00000021, 0x00000021},
     {0x00009e3c, 0xcf946220, 0xcf946220, 0xcfd5c782, 0xcfd5c282},
     {0x00009e44, 0x62321e27, 0x62321e27, 0xfe291e27, 0xfe291e27},
@@ -96,7 +96,7 @@ static const u32 ar9462_2p0_baseband_postamble[][5] = {
     {0x0000ae04, 0x001c0000, 0x001c0000, 0x001c0000, 0x00100000},
     {0x0000ae18, 0x00000000, 0x00000000, 0x00000000, 0x00000000},
     {0x0000ae1c, 0x0000019c, 0x0000019c, 0x0000019c, 0x0000019c},
-    {0x0000ae20, 0x000001b5, 0x000001b5, 0x000001ce, 0x000001ce},
+    {0x0000ae20, 0x000001a6, 0x000001a6, 0x000001aa, 0x000001aa},
     {0x0000b284, 0x00000000, 0x00000000, 0x00000550, 0x00000550},
 };
@@ -1534,7 +1534,7 @@ EXPORT_SYMBOL(ath9k_hw_check_nav);
 bool ath9k_hw_check_alive(struct ath_hw *ah)
 {
 	int count = 50;
-	u32 reg;
+	u32 reg, last_val;
 
 	if (AR_SREV_9300(ah))
 		return !ath9k_hw_detect_mac_hang(ah);
@@ -1542,9 +1542,13 @@ bool ath9k_hw_check_alive(struct ath_hw *ah)
 	if (AR_SREV_9285_12_OR_LATER(ah))
 		return true;
 
+	last_val = REG_READ(ah, AR_OBS_BUS_1);
 	do {
 		reg = REG_READ(ah, AR_OBS_BUS_1);
+		if (reg != last_val)
+			return true;
+
+		last_val = reg;
 		if ((reg & 0x7E7FFFEF) == 0x00702400)
 			continue;
@@ -1556,6 +1560,8 @@ bool ath9k_hw_check_alive(struct ath_hw *ah)
 		default:
 			return true;
 		}
+
+		udelay(1);
 	} while (count-- > 0);
 
 	return false;

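Editor's note: the hunk above declares the MAC alive as soon as the observation-bus register changes between two polls, instead of relying only on the magic-value comparison. A minimal userspace sketch of that "value moved, so we're making progress" loop follows; the register read is faked with a counter, and all names are illustrative.

	#include <stdbool.h>
	#include <stdio.h>

	/* stand-in for REG_READ(ah, AR_OBS_BUS_1): returns a changing value */
	static unsigned fake_reg_read(void)
	{
		static unsigned v;
		return v += 7;
	}

	static bool check_alive(int tries)
	{
		unsigned last = fake_reg_read(), cur;

		while (tries-- > 0) {
			cur = fake_reg_read();
			if (cur != last)	/* bus moved: hardware is alive */
				return true;
			last = cur;
		}
		return false;		/* value never changed: likely hung */
	}

	int main(void)
	{
		printf("alive: %d\n", check_alive(50));
		return 0;
	}
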
@@ -732,11 +732,18 @@ static struct ath_rxbuf *ath_get_next_rx_buf(struct ath_softc *sc,
 			return NULL;
 
 		/*
-		 * mark descriptor as zero-length and set the 'more'
-		 * flag to ensure that both buffers get discarded
+		 * Re-check previous descriptor, in case it has been filled
+		 * in the meantime.
 		 */
-		rs->rs_datalen = 0;
-		rs->rs_more = true;
+		ret = ath9k_hw_rxprocdesc(ah, ds, rs);
+		if (ret == -EINPROGRESS) {
+			/*
+			 * mark descriptor as zero-length and set the 'more'
+			 * flag to ensure that both buffers get discarded
+			 */
+			rs->rs_datalen = 0;
+			rs->rs_more = true;
+		}
 	}
 
 	list_del(&bf->list);
@@ -985,32 +992,32 @@ static int ath9k_rx_skb_preprocess(struct ath_softc *sc,
 	struct ath_common *common = ath9k_hw_common(ah);
 	struct ieee80211_hdr *hdr;
 	bool discard_current = sc->rx.discard_next;
-	int ret = 0;
 
 	/*
 	 * Discard corrupt descriptors which are marked in
 	 * ath_get_next_rx_buf().
 	 */
-	sc->rx.discard_next = rx_stats->rs_more;
 	if (discard_current)
-		return -EINVAL;
+		goto corrupt;
+
+	sc->rx.discard_next = false;
 
 	/*
 	 * Discard zero-length packets.
 	 */
 	if (!rx_stats->rs_datalen) {
 		RX_STAT_INC(rx_len_err);
-		return -EINVAL;
+		goto corrupt;
 	}
 
-	/*
-	 * rs_status follows rs_datalen so if rs_datalen is too large
-	 * we can take a hint that hardware corrupted it, so ignore
-	 * those frames.
-	 */
+	/*
+	 * rs_status follows rs_datalen so if rs_datalen is too large
+	 * we can take a hint that hardware corrupted it, so ignore
+	 * those frames.
+	 */
 	if (rx_stats->rs_datalen > (common->rx_bufsize - ah->caps.rx_status_len)) {
 		RX_STAT_INC(rx_len_err);
-		return -EINVAL;
+		goto corrupt;
 	}
 
 	/* Only use status info from the last fragment */
@@ -1024,10 +1031,8 @@ static int ath9k_rx_skb_preprocess(struct ath_softc *sc,
 	 * This is different from the other corrupt descriptor
 	 * condition handled above.
 	 */
-	if (rx_stats->rs_status & ATH9K_RXERR_CORRUPT_DESC) {
-		ret = -EINVAL;
-		goto exit;
-	}
+	if (rx_stats->rs_status & ATH9K_RXERR_CORRUPT_DESC)
+		goto corrupt;
 
 	hdr = (struct ieee80211_hdr *) (skb->data + ah->caps.rx_status_len);
 
@@ -1043,18 +1048,15 @@ static int ath9k_rx_skb_preprocess(struct ath_softc *sc,
 		if (ath_process_fft(sc, hdr, rx_stats, rx_status->mactime))
 			RX_STAT_INC(rx_spectral);
 
-		ret = -EINVAL;
-		goto exit;
+		return -EINVAL;
 	}
 
 	/*
 	 * everything but the rate is checked here, the rate check is done
 	 * separately to avoid doing two lookups for a rate for each frame.
 	 */
-	if (!ath9k_rx_accept(common, hdr, rx_status, rx_stats, decrypt_error)) {
-		ret = -EINVAL;
-		goto exit;
-	}
+	if (!ath9k_rx_accept(common, hdr, rx_status, rx_stats, decrypt_error))
		return -EINVAL;
 
 	if (ath_is_mybeacon(common, hdr)) {
 		RX_STAT_INC(rx_beacons);
@@ -1064,15 +1066,11 @@ static int ath9k_rx_skb_preprocess(struct ath_softc *sc,
 	/*
 	 * This shouldn't happen, but have a safety check anyway.
 	 */
-	if (WARN_ON(!ah->curchan)) {
-		ret = -EINVAL;
-		goto exit;
-	}
+	if (WARN_ON(!ah->curchan))
+		return -EINVAL;
 
-	if (ath9k_process_rate(common, hw, rx_stats, rx_status)) {
-		ret =-EINVAL;
-		goto exit;
-	}
+	if (ath9k_process_rate(common, hw, rx_stats, rx_status))
+		return -EINVAL;
 
 	ath9k_process_rssi(common, hw, rx_stats, rx_status);
 
@@ -1087,9 +1085,11 @@ static int ath9k_rx_skb_preprocess(struct ath_softc *sc,
 	sc->rx.num_pkts++;
 #endif
 
-exit:
-	sc->rx.discard_next = false;
-	return ret;
+	return 0;
 
+corrupt:
+	sc->rx.discard_next = rx_stats->rs_more;
+	return -EINVAL;
 }
 
 static void ath9k_rx_skb_postprocess(struct ath_common *common,

@@ -1444,14 +1444,16 @@ void ath_tx_aggr_sleep(struct ieee80211_sta *sta, struct ath_softc *sc,
 	for (tidno = 0, tid = &an->tid[tidno];
 	     tidno < IEEE80211_NUM_TIDS; tidno++, tid++) {
 
-		if (!tid->sched)
-			continue;
-
 		ac = tid->ac;
 		txq = ac->txq;
 
 		ath_txq_lock(sc, txq);
 
+		if (!tid->sched) {
+			ath_txq_unlock(sc, txq);
+			continue;
+		}
+
 		buffered = ath_tid_has_buffered(tid);
 
 		tid->sched = false;
@@ -2184,14 +2186,15 @@ int ath_tx_start(struct ieee80211_hw *hw, struct sk_buff *skb,
 		txq->stopped = true;
 	}
 
+	if (txctl->an)
+		tid = ath_get_skb_tid(sc, txctl->an, skb);
+
 	if (info->flags & IEEE80211_TX_CTL_PS_RESPONSE) {
 		ath_txq_unlock(sc, txq);
 		txq = sc->tx.uapsdq;
 		ath_txq_lock(sc, txq);
 	} else if (txctl->an &&
 		   ieee80211_is_data_present(hdr->frame_control)) {
-		tid = ath_get_skb_tid(sc, txctl->an, skb);
-
 		WARN_ON(tid->ac->txq != txctl->txq);
 
 		if (info->flags & IEEE80211_TX_CTL_CLEAR_PS_FILT)

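Editor's note: the ath_tx_aggr_sleep hunk moves the tid->sched test under ath_txq_lock(), so the flag can no longer change between the check and the work that depends on it. A compact pthread sketch of that test-under-lock pattern follows; the flag and lock names are invented stand-ins, not the driver's types.

	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	static pthread_mutex_t txq_lock = PTHREAD_MUTEX_INITIALIZER;
	static bool sched_flag = true;	/* stand-in for tid->sched */

	static void drain_tid(void)
	{
		pthread_mutex_lock(&txq_lock);
		/* test only while holding the lock: a concurrent path
		 * clearing the flag can no longer race with us */
		if (!sched_flag) {
			pthread_mutex_unlock(&txq_lock);
			return;
		}
		sched_flag = false;	/* safe: we own the lock here */
		pthread_mutex_unlock(&txq_lock);
	}

	int main(void)
	{
		drain_tid();
		printf("sched_flag=%d\n", sched_flag);
		return 0;
	}
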
@@ -457,7 +457,6 @@ struct brcmf_sdio {
 
 	u8 tx_hdrlen;		/* sdio bus header length for tx packet */
 	bool txglom;		/* host tx glomming enable flag */
-	struct sk_buff *txglom_sgpad;	/* scatter-gather padding buffer */
 	u16 head_align;		/* buffer pointer alignment */
 	u16 sgentry_align;	/* scatter-gather buffer alignment */
 };
@@ -1944,9 +1943,8 @@ static int brcmf_sdio_txpkt_prep_sg(struct brcmf_sdio *bus,
 	if (lastfrm && chain_pad)
 		tail_pad += blksize - chain_pad;
 	if (skb_tailroom(pkt) < tail_pad && pkt->len > blksize) {
-		pkt_pad = bus->txglom_sgpad;
-		if (pkt_pad == NULL)
-			brcmu_pkt_buf_get_skb(tail_pad + tail_chop);
+		pkt_pad = brcmu_pkt_buf_get_skb(tail_pad + tail_chop +
+						bus->head_align);
 		if (pkt_pad == NULL)
 			return -ENOMEM;
 		ret = brcmf_sdio_txpkt_hdalign(bus, pkt_pad);
@@ -1957,6 +1955,7 @@ static int brcmf_sdio_txpkt_prep_sg(struct brcmf_sdio *bus,
 		       tail_chop);
 		*(u32 *)(pkt_pad->cb) = ALIGN_SKB_FLAG + tail_chop;
 		skb_trim(pkt, pkt->len - tail_chop);
+		skb_trim(pkt_pad, tail_pad + tail_chop);
 		__skb_queue_after(pktq, pkt, pkt_pad);
 	} else {
 		ntail = pkt->data_len + tail_pad -
@@ -2011,7 +2010,7 @@ brcmf_sdio_txpkt_prep(struct brcmf_sdio *bus, struct sk_buff_head *pktq,
 			return ret;
 		head_pad = (u16)ret;
 		if (head_pad)
-			memset(pkt_next->data, 0, head_pad + bus->tx_hdrlen);
+			memset(pkt_next->data + bus->tx_hdrlen, 0, head_pad);
 
 		total_len += pkt_next->len;
 
@@ -3486,10 +3485,6 @@ static int brcmf_sdio_bus_preinit(struct device *dev)
 		bus->txglom = false;
 		value = 1;
-		pad_size = bus->sdiodev->func[2]->cur_blksize << 1;
-		bus->txglom_sgpad = brcmu_pkt_buf_get_skb(pad_size);
-		if (!bus->txglom_sgpad)
-			brcmf_err("allocating txglom padding skb failed, reduced performance\n");
 
 		err = brcmf_iovar_data_set(bus->sdiodev->dev, "bus:rxglom",
 					   &value, sizeof(u32));
 	if (err < 0) {
@@ -4053,7 +4048,6 @@ void brcmf_sdio_remove(struct brcmf_sdio *bus)
 			brcmf_sdio_chip_detach(&bus->ci);
 		}
 
-		brcmu_pkt_buf_free_skb(bus->txglom_sgpad);
 		kfree(bus->rxbuf);
 		kfree(bus->hdrbuf);
 		kfree(bus);

@@ -147,7 +147,7 @@ static void ap_free_sta(struct ap_data *ap, struct sta_info *sta)
 
 	if (!sta->ap && sta->u.sta.challenge)
 		kfree(sta->u.sta.challenge);
-	del_timer(&sta->timer);
+	del_timer_sync(&sta->timer);
 #endif /* PRISM2_NO_KERNEL_IEEE80211_MGMT */
 
 	kfree(sta);

@@ -590,6 +590,7 @@ void iwl_deactivate_station(struct iwl_priv *priv, const u8 sta_id,
 	       sizeof(priv->tid_data[sta_id][tid]));
 
 	priv->stations[sta_id].used &= ~IWL_STA_DRIVER_ACTIVE;
+	priv->stations[sta_id].used &= ~IWL_STA_UCODE_INPROGRESS;
 
 	priv->num_stations--;
 

@@ -1291,8 +1291,6 @@ int iwlagn_rx_reply_compressed_ba(struct iwl_priv *priv,
 	struct iwl_compressed_ba_resp *ba_resp = (void *)pkt->data;
 	struct iwl_ht_agg *agg;
 	struct sk_buff_head reclaimed_skbs;
-	struct ieee80211_tx_info *info;
-	struct ieee80211_hdr *hdr;
 	struct sk_buff *skb;
 	int sta_id;
 	int tid;
@@ -1379,22 +1377,28 @@ int iwlagn_rx_reply_compressed_ba(struct iwl_priv *priv,
 	freed = 0;
 
 	skb_queue_walk(&reclaimed_skbs, skb) {
-		hdr = (struct ieee80211_hdr *)skb->data;
+		struct ieee80211_hdr *hdr = (void *)skb->data;
+		struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
 
 		if (ieee80211_is_data_qos(hdr->frame_control))
 			freed++;
 		else
 			WARN_ON_ONCE(1);
 
-		info = IEEE80211_SKB_CB(skb);
 		iwl_trans_free_tx_cmd(priv->trans, info->driver_data[1]);
 
+		memset(&info->status, 0, sizeof(info->status));
+		/* Packet was transmitted successfully, failures come as single
+		 * frames because before failing a frame the firmware transmits
+		 * it without aggregation at least once.
+		 */
+		info->flags |= IEEE80211_TX_STAT_ACK;
+
 		if (freed == 1) {
 			/* this is the first skb we deliver in this batch */
 			/* put the rate scaling data there */
-			info = IEEE80211_SKB_CB(skb);
-			memset(&info->status, 0, sizeof(info->status));
-			info->flags |= IEEE80211_TX_STAT_ACK;
 			info->flags |= IEEE80211_TX_STAT_AMPDU;
 			info->status.ampdu_ack_len = ba_resp->txed_2_done;
 			info->status.ampdu_len = ba_resp->txed;

@@ -152,7 +152,7 @@ enum iwl_power_scheme {
 	IWL_POWER_SCHEME_LP
 };
 
-#define IWL_CONN_MAX_LISTEN_INTERVAL	70
+#define IWL_CONN_MAX_LISTEN_INTERVAL	10
 #define IWL_UAPSD_AC_INFO	(IEEE80211_WMM_IE_STA_QOSINFO_AC_VO |\
 				 IEEE80211_WMM_IE_STA_QOSINFO_AC_VI |\
 				 IEEE80211_WMM_IE_STA_QOSINFO_AC_BK |\

@@ -822,16 +822,12 @@ int iwl_mvm_rx_ba_notif(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb,
 	struct iwl_mvm_ba_notif *ba_notif = (void *)pkt->data;
 	struct sk_buff_head reclaimed_skbs;
 	struct iwl_mvm_tid_data *tid_data;
-	struct ieee80211_tx_info *info;
 	struct ieee80211_sta *sta;
 	struct iwl_mvm_sta *mvmsta;
-	struct ieee80211_hdr *hdr;
 	struct sk_buff *skb;
 	int sta_id, tid, freed;
 
 	/* "flow" corresponds to Tx queue */
 	u16 scd_flow = le16_to_cpu(ba_notif->scd_flow);
 
 	/* "ssn" is start of block-ack Tx window, corresponds to index
 	 * (in Tx queue's circular buffer) of first TFD/frame in window */
 	u16 ba_resp_scd_ssn = le16_to_cpu(ba_notif->scd_ssn);
@@ -888,22 +884,26 @@ int iwl_mvm_rx_ba_notif(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb,
 	freed = 0;
 
 	skb_queue_walk(&reclaimed_skbs, skb) {
-		hdr = (struct ieee80211_hdr *)skb->data;
+		struct ieee80211_hdr *hdr = (void *)skb->data;
+		struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
 
 		if (ieee80211_is_data_qos(hdr->frame_control))
 			freed++;
 		else
 			WARN_ON_ONCE(1);
 
-		info = IEEE80211_SKB_CB(skb);
 		iwl_trans_free_tx_cmd(mvm->trans, info->driver_data[1]);
 
+		memset(&info->status, 0, sizeof(info->status));
+		/* Packet was transmitted successfully, failures come as single
+		 * frames because before failing a frame the firmware transmits
+		 * it without aggregation at least once.
+		 */
+		info->flags |= IEEE80211_TX_STAT_ACK;
+
 		if (freed == 1) {
 			/* this is the first skb we deliver in this batch */
 			/* put the rate scaling data there */
-			info = IEEE80211_SKB_CB(skb);
-			memset(&info->status, 0, sizeof(info->status));
-			info->flags |= IEEE80211_TX_STAT_ACK;
 			info->flags |= IEEE80211_TX_STAT_AMPDU;
 			info->status.ampdu_ack_len = ba_notif->txed_2_done;
 			info->status.ampdu_len = ba_notif->txed;

@@ -621,7 +621,7 @@ static int lbs_ret_scan(struct lbs_private *priv, unsigned long dummy,
 		id = *pos++;
 		elen = *pos++;
 		left -= 2;
-		if (elen > left || elen == 0) {
+		if (elen > left) {
 			lbs_deb_scan("scan response: invalid IE fmt\n");
 			goto done;
 		}

@@ -1211,6 +1211,12 @@ static int mwifiex_pcie_process_recv_data(struct mwifiex_adapter *adapter)
 		rd_index = card->rxbd_rdptr & reg->rx_mask;
 		skb_data = card->rx_buf_list[rd_index];
 
+		/* If skb allocation was failed earlier for Rx packet,
+		 * rx_buf_list[rd_index] would have been left with a NULL.
+		 */
+		if (!skb_data)
+			return -ENOMEM;
+
 		MWIFIEX_SKB_PACB(skb_data, &buf_pa);
 		pci_unmap_single(card->dev, buf_pa, MWIFIEX_RX_DATA_BUF_SIZE,
 				 PCI_DMA_FROMDEVICE);
@@ -1525,6 +1531,14 @@ static int mwifiex_pcie_process_cmd_complete(struct mwifiex_adapter *adapter)
 	if (adapter->ps_state == PS_STATE_SLEEP_CFM) {
 		mwifiex_process_sleep_confirm_resp(adapter, skb->data,
 						   skb->len);
+		mwifiex_pcie_enable_host_int(adapter);
+		if (mwifiex_write_reg(adapter,
+				      PCIE_CPU_INT_EVENT,
+				      CPU_INTR_SLEEP_CFM_DONE)) {
+			dev_warn(adapter->dev,
+				 "Write register failed\n");
+			return -1;
+		}
 		while (reg->sleep_cookie && (count++ < 10) &&
 		       mwifiex_pcie_ok_to_access_hw(adapter))
 			usleep_range(50, 60);
@@ -1993,23 +2007,9 @@ static void mwifiex_interrupt_status(struct mwifiex_adapter *adapter)
 	adapter->int_status |= pcie_ireg;
 	spin_unlock_irqrestore(&adapter->int_lock, flags);
 
-	if (pcie_ireg & HOST_INTR_CMD_DONE) {
-		if ((adapter->ps_state == PS_STATE_SLEEP_CFM) ||
-		    (adapter->ps_state == PS_STATE_SLEEP)) {
-			mwifiex_pcie_enable_host_int(adapter);
-			if (mwifiex_write_reg(adapter,
-					      PCIE_CPU_INT_EVENT,
-					      CPU_INTR_SLEEP_CFM_DONE)
-					) {
-				dev_warn(adapter->dev,
-					 "Write register failed\n");
-				return;
-
-			}
-		}
-	} else if (!adapter->pps_uapsd_mode &&
-		   adapter->ps_state == PS_STATE_SLEEP &&
-		   mwifiex_pcie_ok_to_access_hw(adapter)) {
+	if (!adapter->pps_uapsd_mode &&
+	    adapter->ps_state == PS_STATE_SLEEP &&
+	    mwifiex_pcie_ok_to_access_hw(adapter)) {
 		/* Potentially for PCIe we could get other
 		 * interrupts like shared. Don't change power
 		 * state until cookie is set */

@@ -22,8 +22,6 @@
 
 #define USB_VERSION	"1.0"
 
-static const char usbdriver_name[] = "usb8xxx";
-
 static struct mwifiex_if_ops usb_ops;
 static struct semaphore add_remove_card_sem;
 static struct usb_card_rec *usb_card;
@@ -527,13 +525,6 @@ static int mwifiex_usb_resume(struct usb_interface *intf)
 						   MWIFIEX_BSS_ROLE_ANY),
 				  MWIFIEX_ASYNC_CMD);
 
-#ifdef CONFIG_PM
-	/* Resume handler may be called due to remote wakeup,
-	 * force to exit suspend anyway
-	 */
-	usb_disable_autosuspend(card->udev);
-#endif /* CONFIG_PM */
-
 	return 0;
 }
 
@@ -567,13 +558,12 @@ static void mwifiex_usb_disconnect(struct usb_interface *intf)
 }
 
 static struct usb_driver mwifiex_usb_driver = {
-	.name = usbdriver_name,
+	.name = "mwifiex_usb",
 	.probe = mwifiex_usb_probe,
 	.disconnect = mwifiex_usb_disconnect,
 	.id_table = mwifiex_usb_table,
 	.suspend = mwifiex_usb_suspend,
 	.resume = mwifiex_usb_resume,
 	.supports_autosuspend = 1,
 };
 
 static int mwifiex_usb_tx_init(struct mwifiex_adapter *adapter)

@@ -559,7 +559,8 @@ mwifiex_clean_txrx(struct mwifiex_private *priv)
 	mwifiex_wmm_delete_all_ralist(priv);
 	memcpy(tos_to_tid, ac_to_tid, sizeof(tos_to_tid));
 
-	if (priv->adapter->if_ops.clean_pcie_ring)
+	if (priv->adapter->if_ops.clean_pcie_ring &&
+	    !priv->adapter->surprise_removed)
 		priv->adapter->if_ops.clean_pcie_ring(priv->adapter);
 	spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, flags);
 }

@@ -907,6 +907,7 @@ static int handle_incoming_queue(struct net_device *dev,
 
 		/* Ethernet work: Delayed to here as it peeks the header. */
 		skb->protocol = eth_type_trans(skb, dev);
+		skb_reset_network_header(skb);
 
 		if (checksum_setup(dev, skb)) {
 			kfree_skb(skb);

@@ -1660,7 +1660,6 @@ int qeth_qdio_clear_card(struct qeth_card *card, int use_halt)
 				QDIO_FLAG_CLEANUP_USING_CLEAR);
 		if (rc)
 			QETH_CARD_TEXT_(card, 3, "1err%d", rc);
-		qdio_free(CARD_DDEV(card));
 		atomic_set(&card->qdio.state, QETH_QDIO_ALLOCATED);
 		break;
 	case QETH_QDIO_CLEANING:
@@ -2605,6 +2604,7 @@ static int qeth_mpc_initialize(struct qeth_card *card)
 	return 0;
 out_qdio:
 	qeth_qdio_clear_card(card, card->info.type != QETH_CARD_TYPE_IQD);
+	qdio_free(CARD_DDEV(card));
 	return rc;
 }
 
@@ -4906,9 +4906,11 @@ retry:
 	if (retries < 3)
 		QETH_DBF_MESSAGE(2, "%s Retrying to do IDX activates.\n",
 			dev_name(&card->gdev->dev));
+	rc = qeth_qdio_clear_card(card, card->info.type != QETH_CARD_TYPE_IQD);
 	ccw_device_set_offline(CARD_DDEV(card));
 	ccw_device_set_offline(CARD_WDEV(card));
 	ccw_device_set_offline(CARD_RDEV(card));
+	qdio_free(CARD_DDEV(card));
 	rc = ccw_device_set_online(CARD_RDEV(card));
 	if (rc)
 		goto retriable;
@@ -4918,7 +4920,6 @@ retry:
 	rc = ccw_device_set_online(CARD_DDEV(card));
 	if (rc)
 		goto retriable;
-	rc = qeth_qdio_clear_card(card, card->info.type != QETH_CARD_TYPE_IQD);
 retriable:
 	if (rc == -ERESTARTSYS) {
 		QETH_DBF_TEXT(SETUP, 2, "break1");

@@ -1091,6 +1091,7 @@ out_remove:
 	ccw_device_set_offline(CARD_DDEV(card));
 	ccw_device_set_offline(CARD_WDEV(card));
 	ccw_device_set_offline(CARD_RDEV(card));
+	qdio_free(CARD_DDEV(card));
 	if (recover_flag == CARD_STATE_RECOVER)
 		card->state = CARD_STATE_RECOVER;
 	else
@@ -1132,6 +1133,7 @@ static int __qeth_l2_set_offline(struct ccwgroup_device *cgdev,
 	rc = (rc2) ? rc2 : rc3;
 	if (rc)
 		QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc);
+	qdio_free(CARD_DDEV(card));
 	if (recover_flag == CARD_STATE_UP)
 		card->state = CARD_STATE_RECOVER;
 	/* let user_space know that device is offline */
@@ -1194,6 +1196,7 @@ static void qeth_l2_shutdown(struct ccwgroup_device *gdev)
 		qeth_hw_trap(card, QETH_DIAGS_TRAP_DISARM);
 	qeth_qdio_clear_card(card, 0);
 	qeth_clear_qdio_buffers(card);
+	qdio_free(CARD_DDEV(card));
 }
 
 static int qeth_l2_pm_suspend(struct ccwgroup_device *gdev)

@@ -3447,6 +3447,7 @@ out_remove:
 	ccw_device_set_offline(CARD_DDEV(card));
 	ccw_device_set_offline(CARD_WDEV(card));
 	ccw_device_set_offline(CARD_RDEV(card));
+	qdio_free(CARD_DDEV(card));
 	if (recover_flag == CARD_STATE_RECOVER)
 		card->state = CARD_STATE_RECOVER;
 	else
@@ -3493,6 +3494,7 @@ static int __qeth_l3_set_offline(struct ccwgroup_device *cgdev,
 	rc = (rc2) ? rc2 : rc3;
 	if (rc)
 		QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc);
+	qdio_free(CARD_DDEV(card));
 	if (recover_flag == CARD_STATE_UP)
 		card->state = CARD_STATE_RECOVER;
 	/* let user_space know that device is offline */
@@ -3545,6 +3547,7 @@ static void qeth_l3_shutdown(struct ccwgroup_device *gdev)
 		qeth_hw_trap(card, QETH_DIAGS_TRAP_DISARM);
 	qeth_qdio_clear_card(card, 0);
 	qeth_clear_qdio_buffers(card);
+	qdio_free(CARD_DDEV(card));
 }
 
 static int qeth_l3_pm_suspend(struct ccwgroup_device *gdev)

@ -2725,7 +2725,7 @@ static inline void nf_reset(struct sk_buff *skb)
|
||||
|
||||
static inline void nf_reset_trace(struct sk_buff *skb)
|
||||
{
|
||||
#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE)
|
||||
#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || defined(CONFIG_NF_TABLES)
|
||||
skb->nf_trace = 0;
|
||||
#endif
|
||||
}
|
||||
@ -2742,6 +2742,9 @@ static inline void __nf_copy(struct sk_buff *dst, const struct sk_buff *src)
|
||||
dst->nf_bridge = src->nf_bridge;
|
||||
nf_bridge_get(src->nf_bridge);
|
||||
#endif
|
||||
#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || defined(CONFIG_NF_TABLES)
|
||||
dst->nf_trace = src->nf_trace;
|
||||
#endif
|
||||
}
|
||||
|
||||
static inline void nf_copy(struct sk_buff *dst, const struct sk_buff *src)
|
||||
|
@@ -129,6 +129,7 @@ int ip_tunnel_changelink(struct net_device *dev, struct nlattr *tb[],
 int ip_tunnel_newlink(struct net_device *dev, struct nlattr *tb[],
 		      struct ip_tunnel_parm *p);
 void ip_tunnel_setup(struct net_device *dev, int net_id);
+void ip_tunnel_dst_reset_all(struct ip_tunnel *t);
 
 /* Extract dsfield from inner protocol */
 static inline u8 ip_tunnel_get_dsfield(const struct iphdr *iph,

@@ -1303,7 +1303,8 @@ struct tcp_fastopen_request {
 	/* Fast Open cookie. Size 0 means a cookie request */
 	struct tcp_fastopen_cookie	cookie;
 	struct msghdr			*data;  /* data in MSG_FASTOPEN */
-	u16				copied;	/* queued in tcp_connect() */
+	size_t				size;
+	int				copied;	/* queued in tcp_connect() */
 };
 void tcp_free_fastopen_req(struct tcp_sock *tp);
 

@@ -1648,6 +1648,11 @@ static inline int xfrm_aevent_is_on(struct net *net)
 }
 #endif
 
+static inline int aead_len(struct xfrm_algo_aead *alg)
+{
+	return sizeof(*alg) + ((alg->alg_key_len + 7) / 8);
+}
+
 static inline int xfrm_alg_len(const struct xfrm_algo *alg)
 {
 	return sizeof(*alg) + ((alg->alg_key_len + 7) / 8);
@@ -1686,6 +1691,12 @@ static inline int xfrm_replay_clone(struct xfrm_state *x,
 	return 0;
 }
 
+static inline struct xfrm_algo_aead *xfrm_algo_aead_clone(struct xfrm_algo_aead *orig)
+{
+	return kmemdup(orig, aead_len(orig), GFP_KERNEL);
+}
+
 
 static inline struct xfrm_algo *xfrm_algo_clone(struct xfrm_algo *orig)
 {
 	return kmemdup(orig, xfrm_alg_len(orig), GFP_KERNEL);

@@ -121,13 +121,9 @@ static void raw_rcv(struct sk_buff *oskb, void *data)
 	if (!ro->recv_own_msgs && oskb->sk == sk)
 		return;
 
-	/* do not pass frames with DLC > 8 to a legacy socket */
-	if (!ro->fd_frames) {
-		struct canfd_frame *cfd = (struct canfd_frame *)oskb->data;
-
-		if (unlikely(cfd->len > CAN_MAX_DLEN))
-			return;
-	}
+	/* do not pass non-CAN2.0 frames to a legacy socket */
+	if (!ro->fd_frames && oskb->len != CAN_MTU)
+		return;
 
 	/* clone the given skb to be able to enqueue it into the rcv queue */
 	skb = skb_clone(oskb, GFP_ATOMIC);
@@ -738,9 +734,7 @@ static int raw_recvmsg(struct kiocb *iocb, struct socket *sock,
 		       struct msghdr *msg, size_t size, int flags)
 {
 	struct sock *sk = sock->sk;
-	struct raw_sock *ro = raw_sk(sk);
 	struct sk_buff *skb;
-	int rxmtu;
 	int err = 0;
 	int noblock;
 
@@ -751,20 +745,10 @@ static int raw_recvmsg(struct kiocb *iocb, struct socket *sock,
 	if (!skb)
 		return err;
 
-	/*
-	 * when serving a legacy socket the DLC <= 8 is already checked inside
-	 * raw_rcv(). Now check if we need to pass a canfd_frame to a legacy
-	 * socket and cut the possible CANFD_MTU/CAN_MTU length to CAN_MTU
-	 */
-	if (!ro->fd_frames)
-		rxmtu = CAN_MTU;
-	else
-		rxmtu = skb->len;
-
-	if (size < rxmtu)
+	if (size < skb->len)
 		msg->msg_flags |= MSG_TRUNC;
 	else
-		size = rxmtu;
+		size = skb->len;
 
 	err = memcpy_toiovec(msg->msg_iov, skb->data, size);
 	if (err < 0) {

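Editor's note: after the raw_recvmsg() change above, the copy length comes straight from the skb, with datagram-style truncation when the caller's buffer is smaller. A minimal userspace sketch of those truncation semantics follows; the flag constant and function names are invented for illustration.

	#include <stdio.h>
	#include <string.h>

	#define MSG_TRUNC_FLAG 0x1	/* stand-in for MSG_TRUNC */

	static size_t deliver(const char *frame, size_t frame_len,
			      char *buf, size_t buf_len, int *msg_flags)
	{
		size_t n = frame_len;

		if (buf_len < frame_len) {	/* user buffer too small */
			*msg_flags |= MSG_TRUNC_FLAG;
			n = buf_len;		/* copy what fits, flag the rest */
		}
		memcpy(buf, frame, n);
		return n;
	}

	int main(void)
	{
		char frame[16] = "0123456789abcdef", buf[8];
		int flags = 0;
		size_t n = deliver(frame, sizeof(frame), buf, sizeof(buf), &flags);

		printf("copied %zu bytes, truncated: %s\n", n,
		       (flags & MSG_TRUNC_FLAG) ? "yes" : "no");
		return 0;
	}
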
@@ -766,9 +766,6 @@ static void neigh_periodic_work(struct work_struct *work)
 	nht = rcu_dereference_protected(tbl->nht,
 					lockdep_is_held(&tbl->lock));
 
-	if (atomic_read(&tbl->entries) < tbl->gc_thresh1)
-		goto out;
-
 	/*
 	 *	periodically recompute ReachableTime from random function
 	 */
@@ -781,6 +778,9 @@ static void neigh_periodic_work(struct work_struct *work)
 				neigh_rand_reach_time(NEIGH_VAR(p, BASE_REACHABLE_TIME));
 	}
 
+	if (atomic_read(&tbl->entries) < tbl->gc_thresh1)
+		goto out;
+
 	for (i = 0 ; i < (1 << nht->hash_shift); i++) {
 		np = &nht->hash_buckets[i];
 
@@ -3046,7 +3046,7 @@ int neigh_sysctl_register(struct net_device *dev, struct neigh_parms *p,
 	if (!t)
 		goto err;
 
-	for (i = 0; i < ARRAY_SIZE(t->neigh_vars); i++) {
+	for (i = 0; i < NEIGH_VAR_GC_INTERVAL; i++) {
 		t->neigh_vars[i].data += (long) p;
 		t->neigh_vars[i].extra1 = dev;
 		t->neigh_vars[i].extra2 = p;

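Editor's note: the neigh_periodic_work() hunks move the gc_thresh1 early-exit below the ReachableTime recomputation, so the bookkeeping that must run on every pass is no longer skipped. A tiny sketch of that ordering rule follows; the variables and the recomputation formula are invented placeholders.

	#include <stdio.h>

	static int entries = 10, gc_thresh1 = 100, reachable_time;

	static void periodic_work(void)
	{
		/* bookkeeping that must happen on every run, even when
		 * the table is nearly empty */
		reachable_time = 30 + (reachable_time * 7) % 31;

		/* only now is it safe to skip the expensive part */
		if (entries < gc_thresh1)
			return;

		/* ... hash-table scan and garbage collection here ... */
	}

	int main(void)
	{
		periodic_work();
		printf("reachable_time=%d\n", reachable_time);
		return 0;
	}
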
@@ -707,9 +707,6 @@ static void __copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
 	new->mark		= old->mark;
 	new->skb_iif		= old->skb_iif;
 	__nf_copy(new, old);
-#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE)
-	new->nf_trace		= old->nf_trace;
-#endif
 #ifdef CONFIG_NET_SCHED
 	new->tc_index		= old->tc_index;
 #ifdef CONFIG_NET_CLS_ACT

@@ -297,7 +297,7 @@ static bool seq_nr_after(u16 a, u16 b)
 
 void hsr_register_frame_in(struct node_entry *node, enum hsr_dev_idx dev_idx)
 {
-	if ((dev_idx < 0) || (dev_idx >= HSR_MAX_DEV)) {
+	if ((dev_idx < 0) || (dev_idx >= HSR_MAX_SLAVE)) {
 		WARN_ONCE(1, "%s: Invalid dev_idx (%d)\n", __func__, dev_idx);
 		return;
 	}

@@ -1296,8 +1296,11 @@ static struct sk_buff *inet_gso_segment(struct sk_buff *skb,
 
 	segs = ERR_PTR(-EPROTONOSUPPORT);
 
-	/* Note : following gso_segment() might change skb->encapsulation */
-	udpfrag = !skb->encapsulation && proto == IPPROTO_UDP;
+	if (skb->encapsulation &&
+	    skb_shinfo(skb)->gso_type & (SKB_GSO_SIT|SKB_GSO_IPIP))
+		udpfrag = proto == IPPROTO_UDP && encap;
+	else
+		udpfrag = proto == IPPROTO_UDP && !skb->encapsulation;
 
 	ops = rcu_dereference(inet_offloads[proto]);
 	if (likely(ops && ops->callbacks.gso_segment))

@@ -422,9 +422,6 @@ static void ip_copy_metadata(struct sk_buff *to, struct sk_buff *from)
 	to->tc_index = from->tc_index;
 #endif
 	nf_copy(to, from);
-#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE)
-	to->nf_trace = from->nf_trace;
-#endif
 #if defined(CONFIG_IP_VS) || defined(CONFIG_IP_VS_MODULE)
 	to->ipvs_property = from->ipvs_property;
 #endif

@@ -93,13 +93,14 @@ static void tunnel_dst_reset(struct ip_tunnel *t)
 	tunnel_dst_set(t, NULL);
 }
 
-static void tunnel_dst_reset_all(struct ip_tunnel *t)
+void ip_tunnel_dst_reset_all(struct ip_tunnel *t)
 {
 	int i;
 
 	for_each_possible_cpu(i)
 		__tunnel_dst_set(per_cpu_ptr(t->dst_cache, i), NULL);
 }
+EXPORT_SYMBOL(ip_tunnel_dst_reset_all);
 
 static struct rtable *tunnel_rtable_get(struct ip_tunnel *t, u32 cookie)
 {
@@ -119,52 +120,6 @@ static struct rtable *tunnel_rtable_get(struct ip_tunnel *t, u32 cookie)
 	return (struct rtable *)dst;
 }
 
-/* Often modified stats are per cpu, other are shared (netdev->stats) */
-struct rtnl_link_stats64 *ip_tunnel_get_stats64(struct net_device *dev,
-						struct rtnl_link_stats64 *tot)
-{
-	int i;
-
-	for_each_possible_cpu(i) {
-		const struct pcpu_sw_netstats *tstats =
-						   per_cpu_ptr(dev->tstats, i);
-		u64 rx_packets, rx_bytes, tx_packets, tx_bytes;
-		unsigned int start;
-
-		do {
-			start = u64_stats_fetch_begin_bh(&tstats->syncp);
-			rx_packets = tstats->rx_packets;
-			tx_packets = tstats->tx_packets;
-			rx_bytes = tstats->rx_bytes;
-			tx_bytes = tstats->tx_bytes;
-		} while (u64_stats_fetch_retry_bh(&tstats->syncp, start));
-
-		tot->rx_packets += rx_packets;
-		tot->tx_packets += tx_packets;
-		tot->rx_bytes   += rx_bytes;
-		tot->tx_bytes   += tx_bytes;
-	}
-
-	tot->multicast = dev->stats.multicast;
-
-	tot->rx_crc_errors = dev->stats.rx_crc_errors;
-	tot->rx_fifo_errors = dev->stats.rx_fifo_errors;
-	tot->rx_length_errors = dev->stats.rx_length_errors;
-	tot->rx_frame_errors = dev->stats.rx_frame_errors;
-	tot->rx_errors = dev->stats.rx_errors;
-
-	tot->tx_fifo_errors = dev->stats.tx_fifo_errors;
-	tot->tx_carrier_errors = dev->stats.tx_carrier_errors;
-	tot->tx_dropped = dev->stats.tx_dropped;
-	tot->tx_aborted_errors = dev->stats.tx_aborted_errors;
-	tot->tx_errors = dev->stats.tx_errors;
-
-	tot->collisions = dev->stats.collisions;
-
-	return tot;
-}
-EXPORT_SYMBOL_GPL(ip_tunnel_get_stats64);
-
 static bool ip_tunnel_key_match(const struct ip_tunnel_parm *p,
 				__be16 flags, __be32 key)
 {
@@ -759,7 +714,7 @@ static void ip_tunnel_update(struct ip_tunnel_net *itn,
 		if (set_mtu)
 			dev->mtu = mtu;
 	}
-	tunnel_dst_reset_all(t);
+	ip_tunnel_dst_reset_all(t);
 	netdev_state_change(dev);
 }
 
@@ -1088,7 +1043,7 @@ void ip_tunnel_uninit(struct net_device *dev)
 	if (itn->fb_tunnel_dev != dev)
 		ip_tunnel_del(netdev_priv(dev));
 
-	tunnel_dst_reset_all(tunnel);
+	ip_tunnel_dst_reset_all(tunnel);
 }
 EXPORT_SYMBOL_GPL(ip_tunnel_uninit);

@@ -108,7 +108,6 @@ int iptunnel_pull_header(struct sk_buff *skb, int hdr_len, __be16 inner_proto)
 	nf_reset(skb);
 	secpath_reset(skb);
 	skb_clear_hash_if_not_l4(skb);
-	skb_dst_drop(skb);
 	skb->vlan_tci = 0;
 	skb_set_queue_mapping(skb, 0);
 	skb->pkt_type = PACKET_HOST;
@@ -148,3 +147,49 @@ error:
 	return ERR_PTR(err);
 }
 EXPORT_SYMBOL_GPL(iptunnel_handle_offloads);
+
+/* Often modified stats are per cpu, other are shared (netdev->stats) */
+struct rtnl_link_stats64 *ip_tunnel_get_stats64(struct net_device *dev,
+						struct rtnl_link_stats64 *tot)
+{
+	int i;
+
+	for_each_possible_cpu(i) {
+		const struct pcpu_sw_netstats *tstats =
+						   per_cpu_ptr(dev->tstats, i);
+		u64 rx_packets, rx_bytes, tx_packets, tx_bytes;
+		unsigned int start;
+
+		do {
+			start = u64_stats_fetch_begin_bh(&tstats->syncp);
+			rx_packets = tstats->rx_packets;
+			tx_packets = tstats->tx_packets;
+			rx_bytes = tstats->rx_bytes;
+			tx_bytes = tstats->tx_bytes;
+		} while (u64_stats_fetch_retry_bh(&tstats->syncp, start));
+
+		tot->rx_packets += rx_packets;
+		tot->tx_packets += tx_packets;
+		tot->rx_bytes   += rx_bytes;
+		tot->tx_bytes   += tx_bytes;
+	}
+
+	tot->multicast = dev->stats.multicast;
+
+	tot->rx_crc_errors = dev->stats.rx_crc_errors;
+	tot->rx_fifo_errors = dev->stats.rx_fifo_errors;
+	tot->rx_length_errors = dev->stats.rx_length_errors;
+	tot->rx_frame_errors = dev->stats.rx_frame_errors;
+	tot->rx_errors = dev->stats.rx_errors;
+
+	tot->tx_fifo_errors = dev->stats.tx_fifo_errors;
+	tot->tx_carrier_errors = dev->stats.tx_carrier_errors;
+	tot->tx_dropped = dev->stats.tx_dropped;
+	tot->tx_aborted_errors = dev->stats.tx_aborted_errors;
+	tot->tx_errors = dev->stats.tx_errors;
+
+	tot->collisions = dev->stats.collisions;
+
+	return tot;
+}
+EXPORT_SYMBOL_GPL(ip_tunnel_get_stats64);

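Editor's note: ip_tunnel_get_stats64(), moved above into ip_tunnel_core.c, sums per-CPU counters while retrying any read that raced with a writer, as signalled by a sequence counter. The userspace C analogue below sketches that retry loop; it uses C11 atomics rather than the kernel's u64_stats API, and the struct layout is simplified.

	#include <stdatomic.h>
	#include <stdint.h>
	#include <stdio.h>

	#define NCPU 4

	struct pcpu_stats {
		atomic_uint seq;	/* even = stable, odd = writer active */
		uint64_t rx_packets, tx_packets;
	};

	static struct pcpu_stats stats[NCPU];

	static void sum_stats(uint64_t *rx, uint64_t *tx)
	{
		*rx = *tx = 0;
		for (int i = 0; i < NCPU; i++) {
			unsigned start;
			uint64_t r, t;

			do {	/* retry until the snapshot is consistent */
				start = atomic_load(&stats[i].seq);
				r = stats[i].rx_packets;
				t = stats[i].tx_packets;
			} while (atomic_load(&stats[i].seq) != start ||
				 (start & 1));

			*rx += r;
			*tx += t;
		}
	}

	int main(void)
	{
		stats[1].rx_packets = 5;
		stats[2].tx_packets = 9;
		uint64_t rx, tx;
		sum_stats(&rx, &tx);
		printf("rx=%llu tx=%llu\n",
		       (unsigned long long)rx, (unsigned long long)tx);
		return 0;
	}
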
@@ -1198,8 +1198,8 @@ static int snmp_translate(struct nf_conn *ct,
 		map.to = NOCT1(&ct->tuplehash[!dir].tuple.dst.u3.ip);
 	} else {
 		/* DNAT replies */
-		map.from = NOCT1(&ct->tuplehash[dir].tuple.src.u3.ip);
-		map.to = NOCT1(&ct->tuplehash[!dir].tuple.dst.u3.ip);
+		map.from = NOCT1(&ct->tuplehash[!dir].tuple.src.u3.ip);
+		map.to = NOCT1(&ct->tuplehash[dir].tuple.dst.u3.ip);
 	}
 
 	if (map.from == map.to)

@@ -1044,7 +1044,8 @@ void tcp_free_fastopen_req(struct tcp_sock *tp)
 	}
 }
 
-static int tcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg, int *size)
+static int tcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg,
+				int *copied, size_t size)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	int err, flags;
@@ -1059,11 +1060,12 @@ static int tcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg,
 	if (unlikely(tp->fastopen_req == NULL))
 		return -ENOBUFS;
 	tp->fastopen_req->data = msg;
+	tp->fastopen_req->size = size;
 
 	flags = (msg->msg_flags & MSG_DONTWAIT) ? O_NONBLOCK : 0;
 	err = __inet_stream_connect(sk->sk_socket, msg->msg_name,
 				    msg->msg_namelen, flags);
-	*size = tp->fastopen_req->copied;
+	*copied = tp->fastopen_req->copied;
 	tcp_free_fastopen_req(tp);
 	return err;
 }
@@ -1083,7 +1085,7 @@ int tcp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
 
 	flags = msg->msg_flags;
 	if (flags & MSG_FASTOPEN) {
-		err = tcp_sendmsg_fastopen(sk, msg, &copied_syn);
+		err = tcp_sendmsg_fastopen(sk, msg, &copied_syn, size);
 		if (err == -EINPROGRESS && copied_syn > 0)
 			goto out;
 		else if (err)

@@ -290,8 +290,7 @@ bool tcp_is_cwnd_limited(const struct sock *sk, u32 in_flight)
 
 	left = tp->snd_cwnd - in_flight;
 	if (sk_can_gso(sk) &&
 	    left * sysctl_tcp_tso_win_divisor < tp->snd_cwnd &&
-	    left * tp->mss_cache < sk->sk_gso_max_size &&
-	    left < sk->sk_gso_max_segs)
+	    left < tp->xmit_size_goal_segs)
 		return true;
 	return left <= tcp_max_tso_deferred_mss(tp);
 }

@@ -1945,8 +1945,9 @@ void tcp_enter_loss(struct sock *sk, int how)
 		if (skb == tcp_send_head(sk))
 			break;
 
-		if (TCP_SKB_CB(skb)->sacked & TCPCB_RETRANS)
+		if (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_RETRANS)
 			tp->undo_marker = 0;
+
 		TCP_SKB_CB(skb)->sacked &= (~TCPCB_TAGBITS)|TCPCB_SACKED_ACKED;
 		if (!(TCP_SKB_CB(skb)->sacked&TCPCB_SACKED_ACKED) || how) {
 			TCP_SKB_CB(skb)->sacked &= ~TCPCB_SACKED_ACKED;

@@ -864,8 +864,8 @@ static int tcp_transmit_skb(struct sock *sk, struct sk_buff *skb, int clone_it,
 
 		if (unlikely(skb->fclone == SKB_FCLONE_ORIG &&
 			     fclone->fclone == SKB_FCLONE_CLONE))
-			NET_INC_STATS_BH(sock_net(sk),
-					 LINUX_MIB_TCPSPURIOUS_RTX_HOSTQUEUES);
+			NET_INC_STATS(sock_net(sk),
+				      LINUX_MIB_TCPSPURIOUS_RTX_HOSTQUEUES);
 
 		if (unlikely(skb_cloned(skb)))
 			skb = pskb_copy(skb, gfp_mask);
@@ -2337,6 +2337,7 @@ int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb)
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct inet_connection_sock *icsk = inet_csk(sk);
 	unsigned int cur_mss;
+	int err;
 
 	/* Inconclusive MTU probe */
 	if (icsk->icsk_mtup.probe_size) {
@@ -2400,11 +2401,15 @@ int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb)
 		     skb_headroom(skb) >= 0xFFFF)) {
 		struct sk_buff *nskb = __pskb_copy(skb, MAX_TCP_HEADER,
 						   GFP_ATOMIC);
-		return nskb ? tcp_transmit_skb(sk, nskb, 0, GFP_ATOMIC) :
-			      -ENOBUFS;
+		err = nskb ? tcp_transmit_skb(sk, nskb, 0, GFP_ATOMIC) :
+			     -ENOBUFS;
 	} else {
-		return tcp_transmit_skb(sk, skb, 1, GFP_ATOMIC);
+		err = tcp_transmit_skb(sk, skb, 1, GFP_ATOMIC);
 	}
+
+	if (likely(!err))
+		TCP_SKB_CB(skb)->sacked |= TCPCB_EVER_RETRANS;
+	return err;
 }
 
 int tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb)
@@ -2908,7 +2913,12 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn)
 	space = __tcp_mtu_to_mss(sk, inet_csk(sk)->icsk_pmtu_cookie) -
 		MAX_TCP_OPTION_SPACE;
 
-	syn_data = skb_copy_expand(syn, skb_headroom(syn), space,
+	space = min_t(size_t, space, fo->size);
+
+	/* limit to order-0 allocations */
+	space = min_t(size_t, space, SKB_MAX_HEAD(MAX_TCP_HEADER));
+
+	syn_data = skb_copy_expand(syn, MAX_TCP_HEADER, space,
 				   sk->sk_allocation);
 	if (syn_data == NULL)
 		goto fallback;

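Editor's note: the tcp_send_syn_data() hunk clamps the fastopen SYN payload twice, by what the caller actually asked to send and by a cap that keeps the skb inside an order-0 allocation. A short sketch of that double clamp follows; the constants are invented stand-ins for the MSS-derived space and SKB_MAX_HEAD(MAX_TCP_HEADER).

	#include <stdio.h>

	#define MSS_SPACE	1400	/* stand-in for MTU-derived space */
	#define MAX_HEAD_ROOM	 512	/* stand-in for the order-0 cap */

	static size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }

	static size_t syn_data_space(size_t requested)
	{
		size_t space = MSS_SPACE;

		space = min_sz(space, requested);	/* honor sendmsg() size */
		space = min_sz(space, MAX_HEAD_ROOM);	/* limit allocation */
		return space;
	}

	int main(void)
	{
		printf("%zu\n", syn_data_space(64));	  /* small write stays small */
		printf("%zu\n", syn_data_space(1 << 20)); /* huge write is capped */
		return 0;
	}
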
@@ -138,6 +138,7 @@ config INET6_XFRM_MODE_ROUTEOPTIMIZATION
 config IPV6_VTI
 	tristate "Virtual (secure) IPv6: tunneling"
 	select IPV6_TUNNEL
+	select NET_IP_TUNNEL
 	depends on INET6_XFRM_MODE_TUNNEL
 	---help---
 	Tunneling means encapsulating data of one protocol type within

@@ -212,7 +212,7 @@ int ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset,
 		found = (nexthdr == target);
 
 		if ((!ipv6_ext_hdr(nexthdr)) || nexthdr == NEXTHDR_NONE) {
-			if (target < 0)
+			if (target < 0 || found)
 				break;
 			return -ENOENT;
 		}

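Editor's note: the ipv6_find_hdr() fix makes a scan that reaches a non-extension header report a previously matched target instead of -ENOENT. The sketch below mimics that early-stop rule over a simplified header chain; the header numbers and error constant are illustrative, not the real IPv6 protocol table.

	#include <stdio.h>

	#define ENOENT_ERR (-2)

	/* hop-by-hop (0), routing (43) and dst-opts (60) count as extensions */
	static int is_ext_hdr(int h) { return h == 0 || h == 43 || h == 60; }

	static int find_hdr(const int *hdrs, int n, int target)
	{
		for (int i = 0; i < n; i++) {
			int found = (hdrs[i] == target);

			if (!is_ext_hdr(hdrs[i])) {
				/* the fix: also stop on a hit, not only on
				 * a wildcard (negative) target */
				if (target < 0 || found)
					return i;
				return ENOENT_ERR;
			}
			if (found)
				return i;
		}
		return ENOENT_ERR;
	}

	int main(void)
	{
		int chain[] = { 0, 43, 6 };	/* ..., then TCP (6) */
		printf("%d\n", find_hdr(chain, 3, 6));	/* finds TCP at 2 */
		return 0;
	}
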
@@ -89,7 +89,7 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
 	unsigned int unfrag_ip6hlen;
 	u8 *prevhdr;
 	int offset = 0;
-	bool tunnel;
+	bool encap, udpfrag;
 	int nhoff;
 
 	if (unlikely(skb_shinfo(skb)->gso_type &
@@ -110,8 +110,8 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
 	if (unlikely(!pskb_may_pull(skb, sizeof(*ipv6h))))
 		goto out;
 
-	tunnel = SKB_GSO_CB(skb)->encap_level > 0;
-	if (tunnel)
+	encap = SKB_GSO_CB(skb)->encap_level > 0;
+	if (encap)
 		features = skb->dev->hw_enc_features & netif_skb_features(skb);
 	SKB_GSO_CB(skb)->encap_level += sizeof(*ipv6h);
 
@@ -121,6 +121,12 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
 
 	proto = ipv6_gso_pull_exthdrs(skb, ipv6h->nexthdr);
 
+	if (skb->encapsulation &&
+	    skb_shinfo(skb)->gso_type & (SKB_GSO_SIT|SKB_GSO_IPIP))
+		udpfrag = proto == IPPROTO_UDP && encap;
+	else
+		udpfrag = proto == IPPROTO_UDP && !skb->encapsulation;
+
 	ops = rcu_dereference(inet6_offloads[proto]);
 	if (likely(ops && ops->callbacks.gso_segment)) {
 		skb_reset_transport_header(skb);
@@ -133,13 +139,9 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
 	for (skb = segs; skb; skb = skb->next) {
 		ipv6h = (struct ipv6hdr *)(skb_mac_header(skb) + nhoff);
 		ipv6h->payload_len = htons(skb->len - nhoff - sizeof(*ipv6h));
-		if (tunnel) {
-			skb_reset_inner_headers(skb);
-			skb->encapsulation = 1;
-		}
 		skb->network_header = (u8 *)ipv6h - skb->head;
 
-		if (!tunnel && proto == IPPROTO_UDP) {
+		if (udpfrag) {
 			unfrag_ip6hlen = ip6_find_1stfragopt(skb, &prevhdr);
 			fptr = (struct frag_hdr *)((u8 *)ipv6h + unfrag_ip6hlen);
 			fptr->frag_off = htons(offset);
@@ -148,6 +150,8 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
 			offset += (ntohs(ipv6h->payload_len) -
 				   sizeof(struct frag_hdr));
 		}
+		if (encap)
+			skb_reset_inner_headers(skb);
 	}
 
 out:

@@ -530,9 +530,6 @@ static void ip6_copy_metadata(struct sk_buff *to, struct sk_buff *from)
 	to->tc_index = from->tc_index;
 #endif
 	nf_copy(to, from);
-#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE)
-	to->nf_trace = from->nf_trace;
-#endif
 	skb_copy_secmark(to, from);
 }
 

@@ -135,6 +135,7 @@ int ping_v6_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
 	fl6.flowi6_proto = IPPROTO_ICMPV6;
 	fl6.saddr = np->saddr;
 	fl6.daddr = *daddr;
+	fl6.flowi6_mark = sk->sk_mark;
 	fl6.fl6_icmp_type = user_icmph.icmp6_type;
 	fl6.fl6_icmp_code = user_icmph.icmp6_code;
 	security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));

@@ -475,6 +475,7 @@ static void ipip6_tunnel_uninit(struct net_device *dev)
 		ipip6_tunnel_unlink(sitn, tunnel);
 		ipip6_tunnel_del_prl(tunnel, NULL);
 	}
+	ip_tunnel_dst_reset_all(tunnel);
 	dev_put(dev);
 }
 
@@ -1082,6 +1083,7 @@ static void ipip6_tunnel_update(struct ip_tunnel *t, struct ip_tunnel_parm *p)
 		t->parms.link = p->link;
 		ipip6_tunnel_bind_dev(t->dev);
 	}
+	ip_tunnel_dst_reset_all(t);
 	netdev_state_change(t->dev);
 }
 
@@ -1112,6 +1114,7 @@ static int ipip6_tunnel_update_6rd(struct ip_tunnel *t,
 	t->ip6rd.relay_prefix = relay_prefix;
 	t->ip6rd.prefixlen = ip6rd->prefixlen;
 	t->ip6rd.relay_prefixlen = ip6rd->relay_prefixlen;
+	ip_tunnel_dst_reset_all(t);
 	netdev_state_change(t->dev);
 	return 0;
 }
@@ -1271,6 +1274,7 @@ ipip6_tunnel_ioctl (struct net_device *dev, struct ifreq *ifr, int cmd)
 			err = ipip6_tunnel_add_prl(t, &prl, cmd == SIOCCHGPRL);
 			break;
 		}
+		ip_tunnel_dst_reset_all(t);
 		netdev_state_change(dev);
 		break;
 
@@ -1326,6 +1330,9 @@ static const struct net_device_ops ipip6_netdev_ops = {
 
 static void ipip6_dev_free(struct net_device *dev)
 {
+	struct ip_tunnel *tunnel = netdev_priv(dev);
+
+	free_percpu(tunnel->dst_cache);
 	free_percpu(dev->tstats);
 	free_netdev(dev);
 }
@@ -1375,6 +1382,12 @@ static int ipip6_tunnel_init(struct net_device *dev)
 		u64_stats_init(&ipip6_tunnel_stats->syncp);
 	}
 
+	tunnel->dst_cache = alloc_percpu(struct ip_tunnel_dst);
+	if (!tunnel->dst_cache) {
+		free_percpu(dev->tstats);
+		return -ENOMEM;
+	}
+
 	return 0;
 }
 
@@ -1405,6 +1418,12 @@ static int __net_init ipip6_fb_tunnel_init(struct net_device *dev)
 		u64_stats_init(&ipip6_fb_stats->syncp);
 	}
 
+	tunnel->dst_cache = alloc_percpu(struct ip_tunnel_dst);
+	if (!tunnel->dst_cache) {
+		free_percpu(dev->tstats);
+		return -ENOMEM;
+	}
+
 	dev_hold(dev);
 	rcu_assign_pointer(sitn->tunnels_wc[0], tunnel);
 	return 0;

@@ -113,7 +113,7 @@ static struct sk_buff *udp6_ufo_fragment(struct sk_buff *skb,
 	fptr = (struct frag_hdr *)(skb_network_header(skb) + unfrag_ip6hlen);
 	fptr->nexthdr = nexthdr;
 	fptr->reserved = 0;
-	ipv6_select_ident(fptr, (struct rt6_info *)skb_dst(skb));
+	fptr->identification = skb_shinfo(skb)->ip6_frag_id;
 
 	/* Fragment the skb. ipv6 header and the remaining fields of the
 	 * fragment header are updated in ipv6_gso_segment()

@@ -1692,14 +1692,8 @@
 void ieee80211_propagate_queue_wake(struct ieee80211_local *local, int queue);
 void ieee80211_add_pending_skb(struct ieee80211_local *local,
 			       struct sk_buff *skb);
-void ieee80211_add_pending_skbs_fn(struct ieee80211_local *local,
-				   struct sk_buff_head *skbs,
-				   void (*fn)(void *data), void *data);
-static inline void ieee80211_add_pending_skbs(struct ieee80211_local *local,
-					      struct sk_buff_head *skbs)
-{
-	ieee80211_add_pending_skbs_fn(local, skbs, NULL, NULL);
-}
+void ieee80211_add_pending_skbs(struct ieee80211_local *local,
+				struct sk_buff_head *skbs);
 void ieee80211_flush_queues(struct ieee80211_local *local,
 			    struct ieee80211_sub_if_data *sdata);
 

@@ -222,6 +222,7 @@ ieee80211_determine_chantype(struct ieee80211_sub_if_data *sdata,
 	switch (vht_oper->chan_width) {
 	case IEEE80211_VHT_CHANWIDTH_USE_HT:
 		vht_chandef.width = chandef->width;
+		vht_chandef.center_freq1 = chandef->center_freq1;
 		break;
 	case IEEE80211_VHT_CHANWIDTH_80MHZ:
 		vht_chandef.width = NL80211_CHAN_WIDTH_80;
@@ -271,6 +272,28 @@ ieee80211_determine_chantype(struct ieee80211_sub_if_data *sdata,
 	ret = 0;
 
 out:
+	/*
+	 * When tracking the current AP, don't do any further checks if the
+	 * new chandef is identical to the one we're currently using for the
+	 * connection. This keeps us from playing ping-pong with regulatory,
+	 * without it the following can happen (for example):
+	 *  - connect to an AP with 80 MHz, world regdom allows 80 MHz
+	 *  - AP advertises regdom US
+	 *  - CRDA loads regdom US with 80 MHz prohibited (old database)
+	 *  - the code below detects an unsupported channel, downgrades, and
+	 *    we disconnect from the AP in the caller
+	 *  - disconnect causes CRDA to reload world regdomain and the game
+	 *    starts anew.
+	 * (see https://bugzilla.kernel.org/show_bug.cgi?id=70881)
+	 *
+	 * It seems possible that there are still scenarios with CSA or real
+	 * bandwidth changes where this could happen, but those cases are
+	 * less common and wouldn't completely prevent using the AP.
+	 */
+	if (tracking &&
+	    cfg80211_chandef_identical(chandef, &sdata->vif.bss_conf.chandef))
+		return ret;
+
 	/* don't print the message below for VHT mismatch if VHT is disabled */
 	if (ret & IEEE80211_STA_DISABLE_VHT)
 		vht_chandef = *chandef;
@@ -3753,6 +3776,7 @@ static int ieee80211_prep_connection(struct ieee80211_sub_if_data *sdata,
 		chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
 		if (WARN_ON(!chanctx_conf)) {
 			rcu_read_unlock();
+			sta_info_free(local, new_sta);
 			return -EINVAL;
 		}
 		rate_flags = ieee80211_chandef_rate_flags(&chanctx_conf->def);

@@ -1128,6 +1128,13 @@ static void sta_ps_end(struct sta_info *sta)
 	       sta->sta.addr, sta->sta.aid);
 
 	if (test_sta_flag(sta, WLAN_STA_PS_DRIVER)) {
+		/*
+		 * Clear the flag only if the other one is still set
+		 * so that the TX path won't start TX'ing new frames
+		 * directly ... In the case that the driver flag isn't
+		 * set ieee80211_sta_ps_deliver_wakeup() will clear it.
+		 */
+		clear_sta_flag(sta, WLAN_STA_PS_STA);
 		ps_dbg(sta->sdata, "STA %pM aid %d driver-ps-blocked\n",
 		       sta->sta.addr, sta->sta.aid);
 		return;

@@ -91,7 +91,7 @@ static int sta_info_hash_del(struct ieee80211_local *local,
 	return -ENOENT;
 }
 
-static void cleanup_single_sta(struct sta_info *sta)
+static void __cleanup_single_sta(struct sta_info *sta)
 {
 	int ac, i;
 	struct tid_ampdu_tx *tid_tx;
@@ -99,7 +99,8 @@ static void cleanup_single_sta(struct sta_info *sta)
 	struct ieee80211_local *local = sdata->local;
 	struct ps_data *ps;
 
-	if (test_sta_flag(sta, WLAN_STA_PS_STA)) {
+	if (test_sta_flag(sta, WLAN_STA_PS_STA) ||
+	    test_sta_flag(sta, WLAN_STA_PS_DRIVER)) {
 		if (sta->sdata->vif.type == NL80211_IFTYPE_AP ||
 		    sta->sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
 			ps = &sdata->bss->ps;
@@ -109,6 +110,7 @@ static void cleanup_single_sta(struct sta_info *sta)
 			return;
 
 		clear_sta_flag(sta, WLAN_STA_PS_STA);
+		clear_sta_flag(sta, WLAN_STA_PS_DRIVER);
 
 		atomic_dec(&ps->num_sta_ps);
 		sta_info_recalc_tim(sta);
@@ -139,7 +141,14 @@ static void cleanup_single_sta(struct sta_info *sta)
 		ieee80211_purge_tx_queue(&local->hw, &tid_tx->pending);
 		kfree(tid_tx);
 	}
+}
 
+static void cleanup_single_sta(struct sta_info *sta)
+{
+	struct ieee80211_sub_if_data *sdata = sta->sdata;
+	struct ieee80211_local *local = sdata->local;
+
+	__cleanup_single_sta(sta);
 	sta_info_free(local, sta);
 }
 
@@ -330,6 +339,7 @@ struct sta_info *sta_info_alloc(struct ieee80211_sub_if_data *sdata,
 	rcu_read_unlock();
 
 	spin_lock_init(&sta->lock);
+	spin_lock_init(&sta->ps_lock);
 	INIT_WORK(&sta->drv_unblock_wk, sta_unblock);
 	INIT_WORK(&sta->ampdu_mlme.work, ieee80211_ba_session_work);
 	mutex_init(&sta->ampdu_mlme.mtx);
@@ -487,21 +497,26 @@ static int sta_info_insert_finish(struct sta_info *sta) __acquires(RCU)
 		goto out_err;
 	}
 
-	/* notify driver */
-	err = sta_info_insert_drv_state(local, sdata, sta);
-	if (err)
-		goto out_err;
-
 	local->num_sta++;
 	local->sta_generation++;
 	smp_mb();
 
+	/* simplify things and don't accept BA sessions yet */
+	set_sta_flag(sta, WLAN_STA_BLOCK_BA);
+
 	/* make the station visible */
 	sta_info_hash_add(local, sta);
 
 	list_add_rcu(&sta->list, &local->sta_list);
 
+	/* notify driver */
+	err = sta_info_insert_drv_state(local, sdata, sta);
+	if (err)
+		goto out_remove;
+
 	set_sta_flag(sta, WLAN_STA_INSERTED);
+	/* accept BA sessions now */
+	clear_sta_flag(sta, WLAN_STA_BLOCK_BA);
 
 	ieee80211_recalc_min_chandef(sdata);
 	ieee80211_sta_debugfs_add(sta);
@@ -522,6 +537,12 @@ static int sta_info_insert_finish(struct sta_info *sta) __acquires(RCU)
 		mesh_accept_plinks_update(sdata);
 
 	return 0;
+ out_remove:
+	sta_info_hash_del(local, sta);
+	list_del_rcu(&sta->list);
+	local->num_sta--;
+	synchronize_net();
+	__cleanup_single_sta(sta);
  out_err:
 	mutex_unlock(&local->sta_mtx);
 	rcu_read_lock();
@@ -1071,10 +1092,14 @@ struct ieee80211_sta *ieee80211_find_sta(struct ieee80211_vif *vif,
 }
 EXPORT_SYMBOL(ieee80211_find_sta);
 
-static void clear_sta_ps_flags(void *_sta)
+/* powersave support code */
+void ieee80211_sta_ps_deliver_wakeup(struct sta_info *sta)
 {
-	struct sta_info *sta = _sta;
 	struct ieee80211_sub_if_data *sdata = sta->sdata;
+	struct ieee80211_local *local = sdata->local;
+	struct sk_buff_head pending;
+	int filtered = 0, buffered = 0, ac;
+	unsigned long flags;
 	struct ps_data *ps;
 
 	if (sdata->vif.type == NL80211_IFTYPE_AP ||
@@ -1085,20 +1110,6 @@ static void clear_sta_ps_flags(void *_sta)
 	else
 		return;
 
-	clear_sta_flag(sta, WLAN_STA_PS_DRIVER);
-	if (test_and_clear_sta_flag(sta, WLAN_STA_PS_STA))
-		atomic_dec(&ps->num_sta_ps);
-}
-
-/* powersave support code */
-void ieee80211_sta_ps_deliver_wakeup(struct sta_info *sta)
-{
-	struct ieee80211_sub_if_data *sdata = sta->sdata;
-	struct ieee80211_local *local = sdata->local;
-	struct sk_buff_head pending;
-	int filtered = 0, buffered = 0, ac;
-	unsigned long flags;
-
 	clear_sta_flag(sta, WLAN_STA_SP);
 
 	BUILD_BUG_ON(BITS_TO_LONGS(IEEE80211_NUM_TIDS) > 1);
@@ -1109,6 +1120,8 @@ void ieee80211_sta_ps_deliver_wakeup(struct sta_info *sta)
 
 	skb_queue_head_init(&pending);
 
+	/* sync with ieee80211_tx_h_unicast_ps_buf */
+	spin_lock(&sta->ps_lock);
 	/* Send all buffered frames to the station */
 	for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
 		int count = skb_queue_len(&pending), tmp;
@@ -1127,7 +1140,12 @@ void ieee80211_sta_ps_deliver_wakeup(struct sta_info *sta)
 		buffered += tmp - count;
 	}
 
-	ieee80211_add_pending_skbs_fn(local, &pending, clear_sta_ps_flags, sta);
+	ieee80211_add_pending_skbs(local, &pending);
+	clear_sta_flag(sta, WLAN_STA_PS_DRIVER);
+	clear_sta_flag(sta, WLAN_STA_PS_STA);
+	spin_unlock(&sta->ps_lock);
+
+	atomic_dec(&ps->num_sta_ps);
 
 	/* This station just woke up and isn't aware of our SMPS state */
 	if (!ieee80211_smps_is_restrictive(sta->known_smps_mode,

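Editor's note: the sta_info_insert_finish() hunk reorders insertion so the station is published first, the driver is notified second, and the publication is fully unwound via out_remove if the driver refuses. The fragment below reduces that two-phase insert-with-unwind pattern to a toy; the "visible" flag stands in for the hash table and RCU list, and the error codes are invented.

	#include <stdbool.h>
	#include <stdio.h>

	static bool visible;

	static int driver_accepts(bool ok) { return ok ? 0 : -1; }

	static int insert_entry(bool driver_ok)
	{
		visible = true;			/* publish the entry */

		if (driver_accepts(driver_ok) < 0) {
			visible = false;	/* out_remove: undo publication */
			return -1;
		}
		return 0;
	}

	int main(void)
	{
		printf("ok=%d visible=%d\n", insert_entry(true), visible);
		printf("err=%d visible=%d\n", insert_entry(false), visible);
		return 0;
	}
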
@@ -267,6 +267,7 @@ struct ieee80211_tx_latency_stat {
  * @drv_unblock_wk: used for driver PS unblocking
  * @listen_interval: listen interval of this station, when we're acting as AP
  * @_flags: STA flags, see &enum ieee80211_sta_info_flags, do not use directly
+ * @ps_lock: used for powersave (when mac80211 is the AP) related locking
  * @ps_tx_buf: buffers (per AC) of frames to transmit to this station
  *	when it leaves power saving state or polls
  * @tx_filtered: buffers (per AC) of frames we already tried to
@@ -356,10 +357,8 @@ struct sta_info {
 	/* use the accessors defined below */
 	unsigned long _flags;
 
-	/*
-	 * STA powersave frame queues, no more than the internal
-	 * locking required.
-	 */
+	/* STA powersave lock and frame queues */
+	spinlock_t ps_lock;
 	struct sk_buff_head ps_tx_buf[IEEE80211_NUM_ACS];
 	struct sk_buff_head tx_filtered[IEEE80211_NUM_ACS];
 	unsigned long driver_buffered_tids;

@@ -478,6 +478,20 @@ ieee80211_tx_h_unicast_ps_buf(struct ieee80211_tx_data *tx)
 			       sta->sta.addr, sta->sta.aid, ac);
 		if (tx->local->total_ps_buffered >= TOTAL_MAX_TX_BUFFER)
 			purge_old_ps_buffers(tx->local);
+
+		/* sync with ieee80211_sta_ps_deliver_wakeup */
+		spin_lock(&sta->ps_lock);
+		/*
+		 * STA woke up in the meantime and all the frames on ps_tx_buf
+		 * have been queued to pending queue. No reordering can happen,
+		 * go ahead and Tx the packet.
+		 */
+		if (!test_sta_flag(sta, WLAN_STA_PS_STA) &&
+		    !test_sta_flag(sta, WLAN_STA_PS_DRIVER)) {
+			spin_unlock(&sta->ps_lock);
+			return TX_CONTINUE;
+		}
+
 		if (skb_queue_len(&sta->ps_tx_buf[ac]) >= STA_MAX_TX_BUFFER) {
 			struct sk_buff *old = skb_dequeue(&sta->ps_tx_buf[ac]);
 			ps_dbg(tx->sdata,
@@ -492,6 +506,7 @@ ieee80211_tx_h_unicast_ps_buf(struct ieee80211_tx_data *tx)
 		info->flags |= IEEE80211_TX_INTFL_NEED_TXPROCESSING;
 		info->flags &= ~IEEE80211_TX_TEMPORARY_FLAGS;
 		skb_queue_tail(&sta->ps_tx_buf[ac], tx->skb);
+		spin_unlock(&sta->ps_lock);
 
 		if (!timer_pending(&local->sta_cleanup))
 			mod_timer(&local->sta_cleanup,

net/mac80211/util.c
@@ -435,9 +435,8 @@ void ieee80211_add_pending_skb(struct ieee80211_local *local,
 	spin_unlock_irqrestore(&local->queue_stop_reason_lock, flags);
 }
 
-void ieee80211_add_pending_skbs_fn(struct ieee80211_local *local,
-				   struct sk_buff_head *skbs,
-				   void (*fn)(void *data), void *data)
+void ieee80211_add_pending_skbs(struct ieee80211_local *local,
+				struct sk_buff_head *skbs)
 {
 	struct ieee80211_hw *hw = &local->hw;
 	struct sk_buff *skb;
@@ -461,9 +460,6 @@ void ieee80211_add_pending_skbs_fn(struct ieee80211_local *local,
 		__skb_queue_tail(&local->pending[queue], skb);
 	}
 
-	if (fn)
-		fn(data);
-
 	for (i = 0; i < hw->queues; i++)
 		__ieee80211_wake_queue(hw, i,
 				       IEEE80211_QUEUE_STOP_REASON_SKB_ADD);
@@ -1740,6 +1736,26 @@ int ieee80211_reconfig(struct ieee80211_local *local)
 	ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP,
 					IEEE80211_QUEUE_STOP_REASON_SUSPEND);
 
+	/*
+	 * Reconfigure sched scan if it was interrupted by FW restart or
+	 * suspend.
+	 */
+	mutex_lock(&local->mtx);
+	sched_scan_sdata = rcu_dereference_protected(local->sched_scan_sdata,
+						lockdep_is_held(&local->mtx));
+	if (sched_scan_sdata && local->sched_scan_req)
+		/*
+		 * Sched scan stopped, but we don't want to report it. Instead,
+		 * we're trying to reschedule.
+		 */
+		if (__ieee80211_request_sched_scan_start(sched_scan_sdata,
+							 local->sched_scan_req))
+			sched_scan_stopped = true;
+	mutex_unlock(&local->mtx);
+
+	if (sched_scan_stopped)
+		cfg80211_sched_scan_stopped(local->hw.wiphy);
+
 	/*
 	 * If this is for hw restart things are still running.
 	 * We may want to change that later, however.
@@ -1768,26 +1784,6 @@ int ieee80211_reconfig(struct ieee80211_local *local)
 	WARN_ON(1);
 #endif
 
-	/*
-	 * Reconfigure sched scan if it was interrupted by FW restart or
-	 * suspend.
-	 */
-	mutex_lock(&local->mtx);
-	sched_scan_sdata = rcu_dereference_protected(local->sched_scan_sdata,
-						lockdep_is_held(&local->mtx));
-	if (sched_scan_sdata && local->sched_scan_req)
-		/*
-		 * Sched scan stopped, but we don't want to report it. Instead,
-		 * we're trying to reschedule.
-		 */
-		if (__ieee80211_request_sched_scan_start(sched_scan_sdata,
-							 local->sched_scan_req))
-			sched_scan_stopped = true;
-	mutex_unlock(&local->mtx);
-
-	if (sched_scan_stopped)
-		cfg80211_sched_scan_stopped(local->hw.wiphy);
-
 	return 0;
 }
net/mac80211/wme.c
@@ -154,6 +154,11 @@ u16 ieee80211_select_queue(struct ieee80211_sub_if_data *sdata,
 		return IEEE80211_AC_BE;
 	}
 
+	if (skb->protocol == sdata->control_port_protocol) {
+		skb->priority = 7;
+		return ieee80211_downgrade_queue(sdata, skb);
+	}
+
 	/* use the data classifier to determine what 802.1d tag the
 	 * data frame has */
 	rcu_read_lock();
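The wme.c hunk pins control-port (EAPOL-style) frames to 802.1d priority 7 so they map to the voice access category instead of competing with bulk data. A rough standalone sketch of that priority-to-AC step, using the standard 802.1d mapping table (the function and enum names are invented):

#include <stdio.h>

enum ac { AC_BK, AC_BE, AC_VI, AC_VO };

/* Standard IEEE 802.1d priority -> 802.11 access category mapping */
static const enum ac prio_to_ac[8] = {
	[0] = AC_BE, [1] = AC_BK, [2] = AC_BK, [3] = AC_BE,
	[4] = AC_VI, [5] = AC_VI, [6] = AC_VO, [7] = AC_VO,
};

int main(void)
{
	int prio = 7; /* what the fix assigns to control port frames */
	printf("priority %d -> AC %d (AC_VO)\n", prio, prio_to_ac[prio]);
	return 0;
}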
net/netfilter/nf_conntrack_netlink.c
@@ -1310,27 +1310,22 @@ ctnetlink_change_status(struct nf_conn *ct, const struct nlattr * const cda[])
 }
 
 static int
-ctnetlink_change_nat(struct nf_conn *ct, const struct nlattr * const cda[])
+ctnetlink_setup_nat(struct nf_conn *ct, const struct nlattr * const cda[])
 {
 #ifdef CONFIG_NF_NAT_NEEDED
 	int ret;
 
-	if (cda[CTA_NAT_DST]) {
-		ret = ctnetlink_parse_nat_setup(ct,
-						NF_NAT_MANIP_DST,
-						cda[CTA_NAT_DST]);
-		if (ret < 0)
-			return ret;
-	}
-	if (cda[CTA_NAT_SRC]) {
-		ret = ctnetlink_parse_nat_setup(ct,
-						NF_NAT_MANIP_SRC,
-						cda[CTA_NAT_SRC]);
-		if (ret < 0)
-			return ret;
-	}
-	return 0;
+	ret = ctnetlink_parse_nat_setup(ct, NF_NAT_MANIP_DST,
+					cda[CTA_NAT_DST]);
+	if (ret < 0)
+		return ret;
+
+	ret = ctnetlink_parse_nat_setup(ct, NF_NAT_MANIP_SRC,
+					cda[CTA_NAT_SRC]);
+	return ret;
 #else
 	if (!cda[CTA_NAT_DST] && !cda[CTA_NAT_SRC])
 		return 0;
 	return -EOPNOTSUPP;
 #endif
 }
@@ -1659,11 +1654,9 @@ ctnetlink_create_conntrack(struct net *net, u16 zone,
 			goto err2;
 	}
 
-	if (cda[CTA_NAT_SRC] || cda[CTA_NAT_DST]) {
-		err = ctnetlink_change_nat(ct, cda);
-		if (err < 0)
-			goto err2;
-	}
+	err = ctnetlink_setup_nat(ct, cda);
+	if (err < 0)
+		goto err2;
 
 	nf_ct_acct_ext_add(ct, GFP_ATOMIC);
 	nf_ct_tstamp_ext_add(ct, GFP_ATOMIC);
net/netfilter/nf_nat_core.c
@@ -432,15 +432,15 @@ nf_nat_setup_info(struct nf_conn *ct,
 }
 EXPORT_SYMBOL(nf_nat_setup_info);
 
-unsigned int
-nf_nat_alloc_null_binding(struct nf_conn *ct, unsigned int hooknum)
+static unsigned int
+__nf_nat_alloc_null_binding(struct nf_conn *ct, enum nf_nat_manip_type manip)
 {
 	/* Force range to this IP; let proto decide mapping for
 	 * per-proto parts (hence not IP_NAT_RANGE_PROTO_SPECIFIED).
 	 * Use reply in case it's already been mangled (eg local packet).
 	 */
 	union nf_inet_addr ip =
-		(HOOK2MANIP(hooknum) == NF_NAT_MANIP_SRC ?
+		(manip == NF_NAT_MANIP_SRC ?
 		 ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.u3 :
 		 ct->tuplehash[IP_CT_DIR_REPLY].tuple.src.u3);
 	struct nf_nat_range range = {
@@ -448,7 +448,13 @@ nf_nat_alloc_null_binding(struct nf_conn *ct, unsigned int hooknum)
 		.min_addr = ip,
 		.max_addr = ip,
 	};
-	return nf_nat_setup_info(ct, &range, HOOK2MANIP(hooknum));
+	return nf_nat_setup_info(ct, &range, manip);
+}
+
+unsigned int
+nf_nat_alloc_null_binding(struct nf_conn *ct, unsigned int hooknum)
+{
+	return __nf_nat_alloc_null_binding(ct, HOOK2MANIP(hooknum));
 }
 EXPORT_SYMBOL_GPL(nf_nat_alloc_null_binding);
@@ -702,9 +708,9 @@ static const struct nla_policy nat_nla_policy[CTA_NAT_MAX+1] = {
 
 static int
 nfnetlink_parse_nat(const struct nlattr *nat,
-		    const struct nf_conn *ct, struct nf_nat_range *range)
+		    const struct nf_conn *ct, struct nf_nat_range *range,
+		    const struct nf_nat_l3proto *l3proto)
 {
-	const struct nf_nat_l3proto *l3proto;
 	struct nlattr *tb[CTA_NAT_MAX+1];
 	int err;
 
@@ -714,38 +720,46 @@ nfnetlink_parse_nat(const struct nlattr *nat,
 	if (err < 0)
 		return err;
 
-	rcu_read_lock();
-	l3proto = __nf_nat_l3proto_find(nf_ct_l3num(ct));
-	if (l3proto == NULL) {
-		err = -EAGAIN;
-		goto out;
-	}
 	err = l3proto->nlattr_to_range(tb, range);
 	if (err < 0)
-		goto out;
+		return err;
 
 	if (!tb[CTA_NAT_PROTO])
-		goto out;
+		return 0;
 
-	err = nfnetlink_parse_nat_proto(tb[CTA_NAT_PROTO], ct, range);
-out:
-	rcu_read_unlock();
-	return err;
+	return nfnetlink_parse_nat_proto(tb[CTA_NAT_PROTO], ct, range);
 }
 
 /* This function is called under rcu_read_lock() */
 static int
 nfnetlink_parse_nat_setup(struct nf_conn *ct,
 			  enum nf_nat_manip_type manip,
 			  const struct nlattr *attr)
 {
 	struct nf_nat_range range;
+	const struct nf_nat_l3proto *l3proto;
 	int err;
 
-	err = nfnetlink_parse_nat(attr, ct, &range);
+	/* Should not happen, restricted to creating new conntracks
+	 * via ctnetlink.
+	 */
+	if (WARN_ON_ONCE(nf_nat_initialized(ct, manip)))
+		return -EEXIST;
+
+	/* Make sure that L3 NAT is there by when we call nf_nat_setup_info to
+	 * attach the null binding, otherwise this may oops.
+	 */
+	l3proto = __nf_nat_l3proto_find(nf_ct_l3num(ct));
+	if (l3proto == NULL)
+		return -EAGAIN;
+
+	/* No NAT information has been passed, allocate the null-binding */
+	if (attr == NULL)
+		return __nf_nat_alloc_null_binding(ct, manip);
+
+	err = nfnetlink_parse_nat(attr, ct, &range, l3proto);
 	if (err < 0)
 		return err;
-	if (nf_nat_initialized(ct, manip))
-		return -EEXIST;
 
 	return nf_nat_setup_info(ct, &range, manip);
 }
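The nf_nat reordering above puts every fallible check before any side effect: refuse a second binding, verify the L3 protocol handler exists (so attaching the null binding cannot oops), and only then parse attributes or set up NAT. A generic userspace sketch of the same ordering, with entirely made-up names:

#include <stdio.h>
#include <errno.h>
#include <stddef.h>

struct l3proto { const char *name; };

/* Hypothetical registry lookup: may return NULL when the handler
 * module is not loaded yet (the kernel reports -EAGAIN for that).
 */
static const struct l3proto *find_l3proto(int family)
{
	static const struct l3proto ipv4 = { "ipv4" };
	return family == 2 ? &ipv4 : NULL;
}

static int setup_nat(int family, const char *attr, int already_set)
{
	const struct l3proto *l3;

	if (already_set)          /* refuse to redo an existing binding */
		return -EEXIST;

	l3 = find_l3proto(family);
	if (l3 == NULL)           /* handler missing: caller may retry */
		return -EAGAIN;

	if (attr == NULL) {       /* no NAT info: attach a null binding */
		printf("null binding via %s\n", l3->name);
		return 0;
	}
	printf("parse %s with %s, then set up NAT\n", attr, l3->name);
	return 0;
}

int main(void)
{
	printf("%d\n", setup_nat(2, NULL, 0));   /* 0: null binding */
	printf("%d\n", setup_nat(10, "x", 0));   /* -EAGAIN: no handler */
	return 0;
}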
net/netfilter/nft_meta.c
@@ -116,7 +116,7 @@ static void nft_meta_get_eval(const struct nft_expr *expr,
 			  skb->sk->sk_socket->file->f_cred->fsgid);
 		read_unlock_bh(&skb->sk->sk_callback_lock);
 		break;
-#ifdef CONFIG_NET_CLS_ROUTE
+#ifdef CONFIG_IP_ROUTE_CLASSID
 	case NFT_META_RTCLASSID: {
 		const struct dst_entry *dst = skb_dst(skb);
 
@@ -199,7 +199,7 @@ static int nft_meta_init_validate_get(uint32_t key)
 	case NFT_META_OIFTYPE:
 	case NFT_META_SKUID:
 	case NFT_META_SKGID:
-#ifdef CONFIG_NET_CLS_ROUTE
+#ifdef CONFIG_IP_ROUTE_CLASSID
 	case NFT_META_RTCLASSID:
 #endif
 #ifdef CONFIG_NETWORK_SECMARK
net/netfilter/nft_payload.c
@@ -135,7 +135,8 @@ nft_payload_select_ops(const struct nft_ctx *ctx,
 	if (len == 0 || len > FIELD_SIZEOF(struct nft_data, data))
 		return ERR_PTR(-EINVAL);
 
-	if (len <= 4 && IS_ALIGNED(offset, len) && base != NFT_PAYLOAD_LL_HEADER)
+	if (len <= 4 && is_power_of_2(len) && IS_ALIGNED(offset, len) &&
+	    base != NFT_PAYLOAD_LL_HEADER)
 		return &nft_payload_fast_ops;
 	else
 		return &nft_payload_ops;
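The added is_power_of_2() test matters because the kernel's IS_ALIGNED(x, a) expands to ((x) & ((a) - 1)) == 0, which equals x % a == 0 only when a is a power of two; for len == 3 it answers the wrong question, so a 3-byte access could be routed to the word-sized fast ops. A standalone demonstration:

#include <stdio.h>

#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

int main(void)
{
	/* For a power of two the macro agrees with x % a == 0 ... */
	printf("IS_ALIGNED(8, 4) = %d, 8 %% 4 == 0: %d\n",
	       IS_ALIGNED(8, 4), 8 % 4 == 0);

	/* ... but for a = 3 the two disagree: 6 is divisible by 3,
	 * yet the mask test (6 & 2) says "unaligned"; 4 is not
	 * divisible by 3, yet (4 & 2) says "aligned".
	 */
	printf("IS_ALIGNED(6, 3) = %d, 6 %% 3 == 0: %d\n",
	       IS_ALIGNED(6, 3), 6 % 3 == 0);
	printf("IS_ALIGNED(4, 3) = %d, 4 %% 3 == 0: %d\n",
	       IS_ALIGNED(4, 3), 4 % 3 == 0);
	return 0;
}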
net/netfilter/nft_reject_inet.c
@@ -21,9 +21,9 @@ static void nft_reject_inet_eval(const struct nft_expr *expr,
 {
 	switch (pkt->ops->pf) {
 	case NFPROTO_IPV4:
-		nft_reject_ipv4_eval(expr, data, pkt);
+		return nft_reject_ipv4_eval(expr, data, pkt);
 	case NFPROTO_IPV6:
-		nft_reject_ipv6_eval(expr, data, pkt);
+		return nft_reject_ipv6_eval(expr, data, pkt);
 	}
 }
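Without the returns, an IPv4 packet falls through into the IPv6 case and is evaluated a second time. Writing `return f();` where both functions return void is accepted by GCC and Clang (and is standard C++), and it makes the no-fallthrough intent explicit. A compact illustration with stand-in handlers:

#include <stdio.h>

static void handle_ipv4(int pkt) { printf("ipv4 reject %d\n", pkt); }
static void handle_ipv6(int pkt) { printf("ipv6 reject %d\n", pkt); }

static void eval(int family, int pkt)
{
	switch (family) {
	case 4:
		/* 'return f();' on a void handler both calls it and
		 * leaves the switch, so there is no fallthrough.
		 */
		return handle_ipv4(pkt);
	case 6:
		return handle_ipv6(pkt);
	}
}

int main(void)
{
	eval(4, 1);	/* prints only the ipv4 line */
	eval(6, 2);	/* prints only the ipv6 line */
	return 0;
}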
net/netlink/af_netlink.c
@@ -1489,8 +1489,8 @@ static int netlink_connect(struct socket *sock, struct sockaddr *addr,
 	if (addr->sa_family != AF_NETLINK)
 		return -EINVAL;
 
-	/* Only superuser is allowed to send multicasts */
-	if (nladdr->nl_groups && !netlink_capable(sock, NL_CFG_F_NONROOT_SEND))
+	if ((nladdr->nl_groups || nladdr->nl_pid) &&
+	    !netlink_capable(sock, NL_CFG_F_NONROOT_SEND))
 		return -EPERM;
 
 	if (!nlk->portid)
net/nfc/nci/core.c
@@ -301,7 +301,7 @@ static int nci_open_device(struct nci_dev *ndev)
 	rc = __nci_request(ndev, nci_reset_req, 0,
			   msecs_to_jiffies(NCI_RESET_TIMEOUT));
 
-	if (ndev->ops->setup(ndev))
+	if (ndev->ops->setup)
 		ndev->ops->setup(ndev);
 
 	if (!rc) {
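The NCI change fixes a call-versus-check slip: `if (ndev->ops->setup(ndev))` invokes the optional hook (and oopses when a driver leaves it NULL), where the intent was to test the pointer and then call it. A small sketch of the correct guard (all names invented):

#include <stdio.h>
#include <stddef.h>

struct dev_ops {
	int (*setup)(void *dev);   /* optional hook: may be NULL */
};

static int do_setup(void *dev) { printf("setup ran\n"); return 0; }

static void open_device(void *dev, const struct dev_ops *ops)
{
	/* Buggy form: if (ops->setup(dev)) ... dereferences a NULL
	 * function pointer whenever the hook is omitted.
	 * Correct form: test the pointer first, then invoke it.
	 */
	if (ops->setup)
		ops->setup(dev);
}

int main(void)
{
	struct dev_ops with = { .setup = do_setup };
	struct dev_ops without = { .setup = NULL };

	open_device(NULL, &with);     /* runs the hook */
	open_device(NULL, &without);  /* safely skipped */
	return 0;
}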
net/sched/sch_tbf.c
@@ -334,18 +334,6 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt)
 			qdisc_put_rtab(qdisc_get_rtab(&qopt->peakrate,
 						      tb[TCA_TBF_PTAB]));
 
-	if (q->qdisc != &noop_qdisc) {
-		err = fifo_set_limit(q->qdisc, qopt->limit);
-		if (err)
-			goto done;
-	} else if (qopt->limit > 0) {
-		child = fifo_create_dflt(sch, &bfifo_qdisc_ops, qopt->limit);
-		if (IS_ERR(child)) {
-			err = PTR_ERR(child);
-			goto done;
-		}
-	}
-
 	buffer = min_t(u64, PSCHED_TICKS2NS(qopt->buffer), ~0U);
 	mtu = min_t(u64, PSCHED_TICKS2NS(qopt->mtu), ~0U);
 
@@ -390,6 +378,18 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt)
 		goto done;
 	}
 
+	if (q->qdisc != &noop_qdisc) {
+		err = fifo_set_limit(q->qdisc, qopt->limit);
+		if (err)
+			goto done;
+	} else if (qopt->limit > 0) {
+		child = fifo_create_dflt(sch, &bfifo_qdisc_ops, qopt->limit);
+		if (IS_ERR(child)) {
+			err = PTR_ERR(child);
+			goto done;
+		}
+	}
+
 	sch_tree_lock(sch);
 	if (child) {
 		qdisc_tree_decrease_qlen(q->qdisc, q->qdisc->q.qlen);
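Moving the fifo-limit/child-qdisc block below the parameter checks means tbf_change() no longer creates or resizes a child queue for a configuration it is about to reject: all fallible validation runs first, and side effects happen only for a config that will actually be committed. The same shape in a generic sketch (names invented):

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

struct config { int limit; int rate; };

static int validate(const struct config *c)
{
	return (c->rate > 0 && c->limit > 0) ? 0 : -EINVAL;
}

static int apply(const struct config *c)
{
	int err = validate(c);           /* 1. checks, no side effects */
	if (err)
		return err;

	void *child = malloc(c->limit);  /* 2. only now allocate state */
	if (!child)
		return -ENOMEM;

	printf("committed limit=%d rate=%d\n", c->limit, c->rate);
	free(child);
	return 0;
}

int main(void)
{
	struct config bad = { .limit = 0, .rate = 5 };
	struct config good = { .limit = 16, .rate = 5 };
	printf("%d\n", apply(&bad));   /* rejected, nothing allocated */
	printf("%d\n", apply(&good));  /* validated, then applied */
	return 0;
}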
net/sctp/associola.c
@@ -1239,78 +1239,107 @@ void sctp_assoc_update(struct sctp_association *asoc,
 }
 
 /* Update the retran path for sending a retransmitted packet.
- * Round-robin through the active transports, else round-robin
- * through the inactive transports as this is the next best thing
- * we can try.
+ * See also RFC4960, 6.4. Multi-Homed SCTP Endpoints:
+ *
+ * When there is outbound data to send and the primary path
+ * becomes inactive (e.g., due to failures), or where the
+ * SCTP user explicitly requests to send data to an
+ * inactive destination transport address, before reporting
+ * an error to its ULP, the SCTP endpoint should try to send
+ * the data to an alternate active destination transport
+ * address if one exists.
+ *
+ * When retransmitting data that timed out, if the endpoint
+ * is multihomed, it should consider each source-destination
+ * address pair in its retransmission selection policy.
+ * When retransmitting timed-out data, the endpoint should
+ * attempt to pick the most divergent source-destination
+ * pair from the original source-destination pair to which
+ * the packet was transmitted.
+ *
+ * Note: Rules for picking the most divergent source-destination
+ * pair are an implementation decision and are not specified
+ * within this document.
+ *
+ * Our basic strategy is to round-robin transports in priorities
+ * according to sctp_state_prio_map[] e.g., if no such
+ * transport with state SCTP_ACTIVE exists, round-robin through
+ * SCTP_UNKNOWN, etc. You get the picture.
  */
-void sctp_assoc_update_retran_path(struct sctp_association *asoc)
-{
-	struct sctp_transport *t, *next;
-	struct list_head *head = &asoc->peer.transport_addr_list;
-	struct list_head *pos;
-
-	if (asoc->peer.transport_count == 1)
-		return;
-
-	/* Find the next transport in a round-robin fashion. */
-	t = asoc->peer.retran_path;
-	pos = &t->transports;
-	next = NULL;
-
-	while (1) {
-		/* Skip the head. */
-		if (pos->next == head)
-			pos = head->next;
-		else
-			pos = pos->next;
-
-		t = list_entry(pos, struct sctp_transport, transports);
-
-		/* We have exhausted the list, but didn't find any
-		 * other active transports. If so, use the next
-		 * transport.
-		 */
-		if (t == asoc->peer.retran_path) {
-			t = next;
-			break;
-		}
-
-		/* Try to find an active transport. */
-
-		if ((t->state == SCTP_ACTIVE) ||
-		    (t->state == SCTP_UNKNOWN)) {
-			break;
-		} else {
-			/* Keep track of the next transport in case
-			 * we don't find any active transport.
-			 */
-			if (t->state != SCTP_UNCONFIRMED && !next)
-				next = t;
-		}
-	}
-
-	if (t)
-		asoc->peer.retran_path = t;
-	else
-		t = asoc->peer.retran_path;
-
-	pr_debug("%s: association:%p addr:%pISpc\n", __func__, asoc,
-		 &t->ipaddr.sa);
-}
+static const u8 sctp_trans_state_to_prio_map[] = {
+	[SCTP_ACTIVE]	= 3,	/* best case */
+	[SCTP_UNKNOWN]	= 2,
+	[SCTP_PF]	= 1,
+	[SCTP_INACTIVE] = 0,	/* worst case */
+};
+
+static u8 sctp_trans_score(const struct sctp_transport *trans)
+{
+	return sctp_trans_state_to_prio_map[trans->state];
+}
 
-/* Choose the transport for sending retransmit packet. */
-struct sctp_transport *sctp_assoc_choose_alter_transport(
-	struct sctp_association *asoc, struct sctp_transport *last_sent_to)
+static struct sctp_transport *sctp_trans_elect_best(struct sctp_transport *curr,
+						    struct sctp_transport *best)
 {
+	if (best == NULL)
+		return curr;
+
+	return sctp_trans_score(curr) > sctp_trans_score(best) ? curr : best;
+}
+
+void sctp_assoc_update_retran_path(struct sctp_association *asoc)
+{
+	struct sctp_transport *trans = asoc->peer.retran_path;
+	struct sctp_transport *trans_next = NULL;
+
+	/* We're done as we only have the one and only path. */
+	if (asoc->peer.transport_count == 1)
+		return;
+	/* If active_path and retran_path are the same and active,
+	 * then this is the only active path. Use it.
+	 */
+	if (asoc->peer.active_path == asoc->peer.retran_path &&
+	    asoc->peer.active_path->state == SCTP_ACTIVE)
+		return;
+
+	/* Iterate from retran_path's successor back to retran_path. */
+	for (trans = list_next_entry(trans, transports); 1;
+	     trans = list_next_entry(trans, transports)) {
+		/* Manually skip the head element. */
+		if (&trans->transports == &asoc->peer.transport_addr_list)
+			continue;
+		if (trans->state == SCTP_UNCONFIRMED)
+			continue;
+		trans_next = sctp_trans_elect_best(trans, trans_next);
+		/* Active is good enough for immediate return. */
+		if (trans_next->state == SCTP_ACTIVE)
+			break;
+		/* We've reached the end, time to update path. */
+		if (trans == asoc->peer.retran_path)
+			break;
+	}
+
+	if (trans_next != NULL)
+		asoc->peer.retran_path = trans_next;
+
+	pr_debug("%s: association:%p updated new path to addr:%pISpc\n",
+		 __func__, asoc, &asoc->peer.retran_path->ipaddr.sa);
+}
+
+struct sctp_transport *
+sctp_assoc_choose_alter_transport(struct sctp_association *asoc,
+				  struct sctp_transport *last_sent_to)
+{
 	/* If this is the first time packet is sent, use the active path,
 	 * else use the retran path. If the last packet was sent over the
 	 * retran path, update the retran path and use it.
 	 */
-	if (!last_sent_to)
+	if (last_sent_to == NULL) {
 		return asoc->peer.active_path;
-	else {
+	} else {
 		if (last_sent_to == asoc->peer.retran_path)
 			sctp_assoc_update_retran_path(asoc);
 
 		return asoc->peer.retran_path;
 	}
 }
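The rework above replaces the hand-rolled round-robin walk with a score table: each transport state gets a priority, the loop keeps the best-scoring candidate seen so far, and an ACTIVE transport ends the search early. A userspace reduction of that election (the state names and scores mirror the patch; the rest is invented scaffolding):

#include <stdio.h>
#include <stddef.h>

enum state { INACTIVE, PF, UNKNOWN, ACTIVE };

static const unsigned char prio[] = {
	[ACTIVE]   = 3,	/* best case */
	[UNKNOWN]  = 2,
	[PF]       = 1,
	[INACTIVE] = 0,	/* worst case */
};

struct transport { const char *name; enum state state; };

static const struct transport *elect_best(const struct transport *curr,
					  const struct transport *best)
{
	if (best == NULL)
		return curr;
	return prio[curr->state] > prio[best->state] ? curr : best;
}

int main(void)
{
	struct transport paths[] = {
		{ "A", INACTIVE }, { "B", PF }, { "C", UNKNOWN },
	};
	const struct transport *best = NULL;

	for (size_t i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {
		best = elect_best(&paths[i], best);
		if (best->state == ACTIVE)
			break;	/* active is good enough, stop early */
	}
	printf("retransmit via %s\n", best->name);	/* picks C */
	return 0;
}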
net/sctp/sm_sideeffect.c
@@ -495,11 +495,12 @@ static void sctp_do_8_2_transport_strike(sctp_cmd_seq_t *commands,
 	}
 
 	/* If the transport error count is greater than the pf_retrans
-	 * threshold, and less than pathmaxrtx, then mark this transport
-	 * as Partially Failed, ee SCTP Quick Failover Draft, secon 5.1,
-	 * point 1
+	 * threshold, and less than pathmaxrtx, and if the current state
+	 * is not SCTP_UNCONFIRMED, then mark this transport as Partially
+	 * Failed, see SCTP Quick Failover Draft, section 5.1
 	 */
 	if ((transport->state != SCTP_PF) &&
+	    (transport->state != SCTP_UNCONFIRMED) &&
 	    (asoc->pf_retrans < transport->pathmaxrxt) &&
 	    (transport->error_count > asoc->pf_retrans)) {
net/sctp/sm_statefuns.c
@@ -758,6 +758,13 @@ sctp_disposition_t sctp_sf_do_5_1D_ce(struct net *net,
 		struct sctp_chunk auth;
 		sctp_ierror_t ret;
 
+		/* Make sure that we and the peer are AUTH capable */
+		if (!net->sctp.auth_enable || !new_asoc->peer.auth_capable) {
+			kfree_skb(chunk->auth_chunk);
+			sctp_association_free(new_asoc);
+			return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+		}
+
 		/* set-up our fake chunk so that we can process it */
 		auth.skb = chunk->auth_chunk;
 		auth.asoc = chunk->asoc;
net/tipc/bearer.c
@@ -610,8 +610,13 @@ static struct notifier_block notifier = {
 
 int tipc_bearer_setup(void)
 {
+	int err;
+
+	err = register_netdevice_notifier(&notifier);
+	if (err)
+		return err;
 	dev_add_pack(&tipc_packet_type);
-	return register_netdevice_notifier(&notifier);
+	return 0;
 }
 
 void tipc_bearer_cleanup(void)
net/tipc/config.c
@@ -181,7 +181,7 @@ static struct sk_buff *cfg_set_own_addr(void)
 	if (tipc_own_addr)
 		return tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
 						   " (cannot change node address once assigned)");
-	tipc_core_start_net(addr);
+	tipc_net_start(addr);
 	return tipc_cfg_reply_none();
 }
net/tipc/core.c
@@ -76,38 +76,14 @@ struct sk_buff *tipc_buf_acquire(u32 size)
 	return skb;
 }
 
-/**
- * tipc_core_stop_net - shut down TIPC networking sub-systems
- */
-static void tipc_core_stop_net(void)
-{
-	tipc_net_stop();
-	tipc_bearer_cleanup();
-}
-
-/**
- * start_net - start TIPC networking sub-systems
- */
-int tipc_core_start_net(unsigned long addr)
-{
-	int res;
-
-	tipc_net_start(addr);
-	res = tipc_bearer_setup();
-	if (res < 0)
-		goto err;
-	return res;
-
-err:
-	tipc_core_stop_net();
-	return res;
-}
-
 /**
  * tipc_core_stop - switch TIPC from SINGLE NODE to NOT RUNNING mode
  */
 static void tipc_core_stop(void)
 {
+	tipc_handler_stop();
+	tipc_net_stop();
+	tipc_bearer_cleanup();
 	tipc_netlink_stop();
 	tipc_cfg_stop();
 	tipc_subscr_stop();
@@ -122,30 +98,65 @@ static void tipc_core_stop(void)
  */
 static int tipc_core_start(void)
 {
-	int res;
+	int err;
 
 	get_random_bytes(&tipc_random, sizeof(tipc_random));
 
-	res = tipc_handler_start();
-	if (!res)
-		res = tipc_ref_table_init(tipc_max_ports, tipc_random);
-	if (!res)
-		res = tipc_nametbl_init();
-	if (!res)
-		res = tipc_netlink_start();
-	if (!res)
-		res = tipc_socket_init();
-	if (!res)
-		res = tipc_register_sysctl();
-	if (!res)
-		res = tipc_subscr_start();
-	if (!res)
-		res = tipc_cfg_init();
-	if (res) {
-		tipc_handler_stop();
-		tipc_core_stop();
-	}
-	return res;
+	err = tipc_handler_start();
+	if (err)
+		goto out_handler;
+
+	err = tipc_ref_table_init(tipc_max_ports, tipc_random);
+	if (err)
+		goto out_reftbl;
+
+	err = tipc_nametbl_init();
+	if (err)
+		goto out_nametbl;
+
+	err = tipc_netlink_start();
+	if (err)
+		goto out_netlink;
+
+	err = tipc_socket_init();
+	if (err)
+		goto out_socket;
+
+	err = tipc_register_sysctl();
+	if (err)
+		goto out_sysctl;
+
+	err = tipc_subscr_start();
+	if (err)
+		goto out_subscr;
+
+	err = tipc_cfg_init();
+	if (err)
+		goto out_cfg;
+
+	err = tipc_bearer_setup();
+	if (err)
+		goto out_bearer;
+
+	return 0;
+out_bearer:
+	tipc_cfg_stop();
+out_cfg:
+	tipc_subscr_stop();
+out_subscr:
+	tipc_unregister_sysctl();
+out_sysctl:
+	tipc_socket_stop();
+out_socket:
+	tipc_netlink_stop();
+out_netlink:
+	tipc_nametbl_stop();
+out_nametbl:
+	tipc_ref_table_stop();
+out_reftbl:
+	tipc_handler_stop();
+out_handler:
+	return err;
 }
 
 static int __init tipc_init(void)
@@ -174,8 +185,6 @@ static int __init tipc_init(void)
 
 static void __exit tipc_exit(void)
 {
-	tipc_handler_stop();
-	tipc_core_stop_net();
 	tipc_core_stop();
 	pr_info("Deactivated\n");
 }
net/tipc/core.h
@@ -90,7 +90,6 @@ extern int tipc_random __read_mostly;
 /*
  * Routines available to privileged subsystems
  */
-int tipc_core_start_net(unsigned long);
 int tipc_handler_start(void);
 void tipc_handler_stop(void);
 int tipc_netlink_start(void);
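The tipc_core_start() rewrite swaps the `if (!res)` chain, whose single tipc_core_stop() on failure also tore down subsystems that had never started, for the kernel's idiomatic goto ladder: each failure jumps to a label that unwinds exactly the steps already completed, in reverse order. A self-contained miniature of the pattern (subsystem names are placeholders):

#include <stdio.h>

static int start_a(void) { printf("A up\n");    return 0; }
static int start_b(void) { printf("B up\n");    return 0; }
static int start_c(void) { printf("C fails\n"); return -1; }
static void stop_a(void) { printf("A down\n"); }
static void stop_b(void) { printf("B down\n"); }

static int core_start(void)
{
	int err;

	err = start_a();
	if (err)
		goto out;	/* nothing to unwind yet */

	err = start_b();
	if (err)
		goto out_a;	/* unwind A only */

	err = start_c();
	if (err)
		goto out_b;	/* unwind B, then A */

	return 0;

out_b:
	stop_b();
out_a:
	stop_a();
out:
	return err;
}

int main(void)
{
	/* C fails, so exactly B and A are torn down, in reverse order. */
	return core_start() ? 1 : 0;
}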