As mentioned by Joe Perches, the TCP_OFF() and TCP_PAGE() macros are
useless.
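For reference, the removed macros were thin wrappers around sock fields
(a sketch of the definitions as they stood in net/ipv4/tcp.c; call sites
now use the fields directly):

	#define TCP_PAGE(sk)	(sk->sk_sndmsg_page)
	#define TCP_OFF(sk)	(sk->sk_sndmsg_off)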
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
netdev_queue_release() should be called even if CONFIG_XPS=n
to properly release the device reference.
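A sketch of the intended shape (helper and field names assumed, not the
verbatim net-sysfs.c hunk):

	static void netdev_queue_release(struct kobject *kobj)
	{
		struct netdev_queue *queue =
			container_of(kobj, struct netdev_queue, kobj);

	#ifdef CONFIG_XPS
		xps_queue_release(queue);	/* only the XPS cleanup stays guarded */
	#endif
		memset(kobj, 0, sizeof(*kobj));
		dev_put(queue->dev);		/* must run even when CONFIG_XPS=n */
	}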
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
commit f07d960df3 (tcp: avoid frag allocation for small frames)
broke the assumption in the TCP stack that an skb is either linear
(skb->data_len == 0) or fully fragged (skb->data_len == skb->len).
tcp_trim_head() made this assumption, so we must fix it.
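As a minimal illustration (helper name hypothetical), the state that
tcp_trim_head() must now cope with is:

	/* After f07d960df3 an skb can be partially linear and partially
	 * fragged, so neither of the old conditions is guaranteed. */
	static inline bool tcp_skb_is_mixed(const struct sk_buff *skb)
	{
		return skb->data_len != 0 &&		/* not purely linear */
		       skb->data_len != skb->len;	/* not fully fragged */
	}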
Thanks to Vijay for providing a very detailed explanation.
Reported-by: Vijay Subramanian <subramanian.vijay@gmail.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds functionality for avoiding orphaning the SKB too early.
The original skb is stashed away and the original destructor is called
from the hi-jacked flow-on callback. If the CAIF interface goes down
while a hi-jacked SKB exists, the original skb->destructor is restored.
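A sketch of the stash-and-restore (field names assumed for the CAIF
device entry):

	/* Stash the skb and its destructor instead of orphaning the skb;
	 * caif_flow_cb() later runs the original destructor. */
	caifd->xoff_skb = skb;
	caifd->xoff_skb_dtor = skb->destructor;
	skb->destructor = caif_flow_cb;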
Signed-off-by: Sjur Brændeland <sjur.brandeland@stericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Flow control is implemented by inspecting the qdisc queue length
in order to detect potential overflow on the TX queue. When a threshold
is reached, flow-off is sent upwards in the CAIF stack. At the same time
the skb->destructor is hi-jacked by orphaning the SKB, and the original
destructor is replaced with a "flow-on" callback. When the hi-jacked
SKB is consumed the queue should be empty, so the "flow-on" callback
is called and xon is sent upwards in the CAIF stack.
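A sketch of the hijack as first implemented (names illustrative):

	/* Orphan the skb (this runs and clears its current destructor),
	 * then install the flow-on callback in its place. */
	skb_orphan(skb);
	skb->destructor = caif_flow_cb;	/* sends xon once the skb is consumed */
	caifd->xoff = true;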
Signed-off-by: Sjur Brændeland <sjur.brandeland@stericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
NCM 1.0 does not support anything but Ethernet framing, hence the
CAIF payload will be put into Ethernet frames.
Discovery is based on fixed USB vendor 0x04cc (ST-Ericsson),
product-id 0x230f (NCM). In this variant only CAIF payload is sent over
the NCM interface.
When a USB interface registers, the CAIF stack (cfusbl.c) first checks
whether it is a CDC NCM USB interface with the right VID and PID.
It will then read the device's Ethernet address and create a 'template'
Ethernet TX header, using a broadcast address as the destination address,
and EthType 0x88b5 (802.1 Local Experimental - vendor specific).
A protocol handler for 0x88b5 is set up for reception of CAIF frames
from the CDC NCM USB interface.
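A sketch of the registration (receive function name hypothetical;
ETH_P_802_EX1 is the if_ether.h name for 0x88b5):

	static struct packet_type caif_usb_type __read_mostly = {
		.type = cpu_to_be16(ETH_P_802_EX1),
		.func = cfusbl_rcv,		/* hypothetical receive handler */
	};

	dev_add_pack(&caif_usb_type);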
Signed-off-by: Sjur Brændeland <sjur.brandeland@stericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add EthType 0x88b5.
This Ethertype value is available for public use for prototype and
vendor-specific protocol development, as defined in Amendment 802a
to IEEE Std 802.
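The added definition in include/linux/if_ether.h:

	#define ETH_P_802_EX1	0x88B5		/* 802.1 Local Experimental 1 */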
Signed-off-by: Sjur Brændeland <sjur.brandeland@stericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reduce the number of dst_get_neighbour_noref() calls within a single
call chain, primarily by passing the neighbour pointer down to the
helper functions.
Handle dst_get_neighbour_noref() returning NULL in ipoib_start_xmit()
by incrementing the dropped counter and freeing the packet. We don't
want it to fall through into the ARP/RARP/multicast handling, since
that should only happen when skb_dst() is NULL.
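A sketch of that handling (shape as described, not the verbatim hunk):

	neigh = dst_get_neighbour_noref(skb_dst(skb));
	if (unlikely(!neigh)) {
		++dev->stats.tx_dropped;
		dev_kfree_skb_any(skb);
		return NETDEV_TX_OK;
	}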
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Roland Dreier <roland@purestorage.com>
Three pieces of code do the same thing: create an l2t entry and then
import this information into the c4iw_ep object.
Create a helper function and call it from these three locations instead.
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Roland Dreier <roland@purestorage.com>
This way we consolidate the RCU locking down into the place where it
actually matters, and we can also make the code handle
dst_get_neighbour_noref() returning NULL properly.
Signed-off-by: David S. Miller <davem@davemloft.net>
IPV4 should do exactly what the IPV6 code does here, which is
use the neighbour obtained via the dst entry.
And now that the two code paths do the same thing, use a common
helper function to perform the operation.
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Roland Dreier <roland@purestorage.com>
To reflect the fact that a reference is not obtained to the
resulting neighbour entry.
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Roland Dreier <roland@purestorage.com>
Transitioning through an IEEE DCBX version from a CEE DCBX
and back (CEE->IEEE->CEE) may leave IEEE attributes programmed
in the hardware. DCB uses a bit field in the set routines to
determine which attributes (PG, PFC, APP) need to be reprogrammed.
This is needed because user flow allows queueing a series
of changes and then reprogramming the hardware with the
entire set in one operation.
When transitioning from IEEE DCBX mode back into CEE DCBX
mode, the PG and PFC bits need to be set so the possibly
different CEE attributes get programmed into the device.
This patch fixes broken logic that was evaluating to 0
and never setting any bits. Further, this removes some
checks for num_tc in the set routines. That logic only worked
when the number of traffic classes and user priorities
were equal, which is no longer the case for X540 devices.
Besides, we can trust user input here: if the
device is incorrectly configured, the DCB bandwidths will
be incorrectly mapped, but no oops, BUG, or hardware
failure will occur.
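As a sketch (bit names per the ixgbe DCB code), forcing the CEE
attributes to be reprogrammed on the transition amounts to:

	/* Set PG and PFC unconditionally instead of relying on a
	 * comparison that always evaluated to 0. */
	adapter->dcb_set_bitmap |= BIT_PG_TX | BIT_PG_RX | BIT_PFC;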
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The order of operations is important in DCBnl set_all(). When FCoE
is configured it uses the up2tc map to learn which queues to configure
the hardware offloads on. Therefore we need to setup the map before
configuring FCoE.
This is only seen when both the up2tc mappings and the APP info are
configured simultaneously.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch updates the DMA Coalescing feature parameters to account for
larger MTUs. Previously, sufficient space may not have been allocated in
the receive buffer, causing packet drop.
Signed-off-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Based on a patch from Mike McElroy created against the out-of-tree e1000e
driver:
Hitting the BUG_ON in napi_enable(). Code inspection shows that this can
only be triggered by calling napi_enable() twice without an intervening
napi_disable().
I saw the following sequence of events in the stack trace:
1) We simulated a cable pull using an Extreme switch.
2) e1000_tx_timeout() was entered.
3) e1000_reset_task() was called. Saw the message from e_err() in the
console log.
4) e1000_reinit_locked was called. This function calls e1000_down() and
e1000_up(). These functions call napi_disable() and napi_enable()
respectively.
5) Then on another thread, a monitor task saw carrier was down and executed
'ip set link down' and 'ip set link up' commands.
6) Saw the __E1000_RESETTING warning from the e1000_close function.
7) Either the e1000_open() executed between the e1000_down() and e1000_up()
calls in step 4 or the e1000_open() call executed after the e1000_up()
call. In either case, napi_enable() is called twice, which triggers
the BUG_ON.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Cc: Mike McElroy <mike.mcelroy@stratus.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Based on the original patch submitted by Michael Wang
<wangyun@linux.vnet.ibm.com>.
Descriptors may not be written back while checking for a TX hang with
the FLAG2_DMA_BURST flag on.
So when we detect a hang, we just flush the descriptors and check
once more.
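A sketch of the recheck (flag and field names assumed):

	if (!adapter->tx_hang_recheck &&
	    (adapter->flags2 & FLAG2_DMA_BURST)) {
		/* flush pending descriptor writebacks to memory */
		ew32(TIDV, adapter->tx_int_delay | E1000_TIDV_FPD);
		e1e_flush();			/* execute the writes immediately */
		adapter->tx_hang_recheck = true;
		return;				/* detect again on the next pass */
	}
	adapter->tx_hang_recheck = false;	/* writebacks flushed: real hang */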
-v2: change 1 to true and 0 to false, and remove extra parentheses
CC: Michael Wang <wangyun@linux.vnet.ibm.com>
CC: Flavio Leitner <fbl@redhat.com>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
If our TCP_PAGE(sk) is not shared (page_count() == 1), we can set the
page offset to 0.
This permits better filling of the pages on small to medium TCP writes.
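A sketch of the tweak as described (variable names per the surrounding
tcp_sendmsg() code):

	if (page && page_count(page) == 1)
		off = 0;	/* sole owner of the cached page: refill from offset 0 */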
"tbench 16" results on my dev server (2x4x2 machine) :
Before : 3072 MB/s
After : 3146 MB/s (2.4 % gain)
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We discovered that the TCP stack could retransmit misaligned skbs if a
malicious peer acknowledged a sub-MSS frame. This currently can happen
only if the output interface is not SG-enabled: if SG is enabled, TCP
builds headless skbs (all payload is included in fragments), so the TCP
trimming process only removes parts of skb fragments and the header
stays aligned.
Some arches can't handle misalignments, so force a head reallocation
and shrink the headroom to MAX_TCP_HEADER.
Don't care about misalignments on x86 and PPC (or other arches setting
NET_IP_ALIGN to 0).
This patch introduces __pskb_copy() which can specify the headroom of
new head, and pskb_copy() becomes a wrapper on top of __pskb_copy()
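The new interface looks like this (pskb_copy() keeps its old behaviour):

	struct sk_buff *__pskb_copy(struct sk_buff *skb, int headroom,
				    gfp_t gfp_mask);

	static inline struct sk_buff *pskb_copy(struct sk_buff *skb,
						gfp_t gfp_mask)
	{
		return __pskb_copy(skb, skb_headroom(skb), gfp_mask);
	}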
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The advantage of kcalloc is that it will prevent integer overflows
that could result from the multiplication of the number of elements
by the size, and it is also a bit nicer to read.
The semantic patch that makes this change is available
in https://lkml.org/lkml/2011/11/25/107
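The conversion has this shape (struct name illustrative; kcalloc zeroes
the memory like kzalloc but checks the multiplication for overflow):

	-	ptr = kzalloc(n * sizeof(struct foo), GFP_KERNEL);
	+	ptr = kcalloc(n, sizeof(struct foo), GFP_KERNEL);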
Signed-off-by: Thomas Meyer <thomas@m3y3r.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
The advantage of kcalloc is that it will prevent integer overflows
that could result from the multiplication of the number of elements
by the size, and it is also a bit nicer to read.
The semantic patch that makes this change is available
in https://lkml.org/lkml/2011/11/25/107
Signed-off-by: Thomas Meyer <thomas@m3y3r.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
The advantage of kcalloc is that it will prevent integer overflows
that could result from the multiplication of the number of elements
by the size, and it is also a bit nicer to read.
The semantic patch that makes this change is available
in https://lkml.org/lkml/2011/11/25/107
Signed-off-by: Thomas Meyer <thomas@m3y3r.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
The advantage of kcalloc is that it will prevent integer overflows
that could result from the multiplication of the number of elements
by the size, and it is also a bit nicer to read.
The semantic patch that makes this change is available
in https://lkml.org/lkml/2011/11/25/107
Signed-off-by: Thomas Meyer <thomas@m3y3r.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Denys Fedoryshchenko reported that SYN+FIN attacks were bringing his
Linux machines to their limits.
Don't call conn_request() if the TCP flags include the FIN flag in
addition to SYN.
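A sketch of the guard in the tcp_rcv_state_process() LISTEN path (per
the description):

	if (th->syn) {
		if (th->fin)
			goto discard;	/* SYN+FIN: drop, don't call conn_request() */
		/* otherwise fall through to conn_request() as before */
	}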
Reported-by: Denys Fedoryshchenko <denys@visp.net.lb>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It's only used in net/ipv6/route.c and the NULL device check is
superfluous for all of the existing call sites.
Just expand the __ndisc_lookup_errno() call at each location.
Signed-off-by: David S. Miller <davem@davemloft.net>
1) x == NULL --> !x
2) x != NULL --> x
3) (x&BIT) --> (x & BIT)
4) (BIT1|BIT2) --> (BIT1 | BIT2)
5) proper argument and struct member alignment
Signed-off-by: David S. Miller <davem@davemloft.net>
Open vSwitch is a multilayer Ethernet switch targeted at virtualized
environments. In addition to supporting a variety of features
expected in a traditional hardware switch, it enables fine-grained
programmatic extension and flow-based control of the network.
This control is useful in a wide variety of applications but is
particularly important in multi-server virtualization deployments,
which are often characterized by highly dynamic endpoints and the need
to maintain logical abstractions for multiple tenants.
The Open vSwitch datapath provides an in-kernel fast path for packet
forwarding. It is complemented by a userspace daemon, ovs-vswitchd,
which is able to accept configuration from a variety of sources and
translate it into packet processing rules.
See http://openvswitch.org for more information and userspace
utilities.
Signed-off-by: Jesse Gross <jesse@nicira.com>
While parsing through IPv6 extension headers, fragment headers are
skipped, making them invisible to the caller. This reports the
fragment offset of the last header in order to make it possible to
determine whether the packet is fragmented and, if so, whether it is
a first or last fragment.
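The updated helper (as described; *frag_offp receives the offset field
of the last fragment header seen, or 0 if the packet is unfragmented):

	int ipv6_skip_exthdr(const struct sk_buff *skb, int start,
			     u8 *nexthdrp, __be16 *frag_offp);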
Signed-off-by: Jesse Gross <jesse@nicira.com>
This adds rcu_dereference_genl and genl_dereference, which are genl
variants of the RTNL functions, to enforce proper locking with lockdep
and sparse.
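The two macros mirror their RTNL counterparts:

	#define rcu_dereference_genl(p)					\
		rcu_dereference_check(p, lockdep_genl_is_held())

	#define genl_dereference(p)					\
		rcu_dereference_protected(p, lockdep_genl_is_held())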
Signed-off-by: Jesse Gross <jesse@nicira.com>
Open vSwitch uses genl_mutex locking to protect datapath
data structures like the flow table and flow actions. The following
patch adds lockdep_genl_is_held(), which is used in RCU annotations
to prove locking.
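A sketch of the helper, guarded like its rtnl counterpart:

	#ifdef CONFIG_PROVE_LOCKING
	int lockdep_genl_is_held(void)
	{
		return lockdep_is_held(&genl_mutex);
	}
	EXPORT_SYMBOL(lockdep_genl_is_held);
	#endif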
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: Jesse Gross <jesse@nicira.com>
Open vSwitch uses the Generic Netlink interface for communication
between userspace and the kernel module. genl_notify() is used
for sending notifications back to userspace.
genl_notify() is analogous to rtnl_notify() but uses genl_sock
instead of rtnl.
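Signature sketch, mirroring rtnl_notify() but on genl_sock:

	void genl_notify(struct sk_buff *skb, struct net *net, u32 pid,
			 u32 group, struct nlmsghdr *nlh, gfp_t flags);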
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: Jesse Gross <jesse@nicira.com>
Moved netdev_completed_queue() out of the while loop in
nv_tx_done_optimized().
Because this function was called inside the while loop,
BUG_ON(count > dql->num_queued - dql->num_completed)
was hit in dql_completed().
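A sketch of the fix shape (loop body and variable names illustrative):
totals are accumulated inside the loop and reported to BQL once
afterwards, so dql_completed() never sees more completions than were
queued:

	unsigned int bytes_compl = 0, pkts_compl = 0;

	while (tx_work_remaining(np)) {		/* hypothetical loop condition */
		bytes_compl += np->get_tx_ctx->skb->len;
		pkts_compl++;
		tx_reap_one_desc(np);		/* hypothetical per-descriptor work */
	}
	netdev_completed_queue(np->dev, pkts_compl, bytes_compl);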
Signed-off-by: Igor Maravic <igorm@etf.rs>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (73 commits)
netfilter: Remove ADVANCED dependency from NF_CONNTRACK_NETBIOS_NS
ipv4: flush route cache after change accept_local
sch_red: fix red_change
Revert "udp: remove redundant variable"
bridge: master device stuck in no-carrier state forever when in user-stp mode
ipv4: Perform peer validation on cached route lookup.
net/core: fix rollback handler in register_netdevice_notifier
sch_red: fix red_calc_qavg_from_idle_time
bonding: only use primary address for ARP
ipv4: fix lockdep splat in rt_cache_seq_show
sch_teql: fix lockdep splat
net: fec: Select the FEC driver by default for i.MX SoCs
isdn: avoid copying too long drvid
isdn: make sure strings are null terminated
netlabel: Fix build problems when IPv6 is not enabled
sctp: better integer overflow check in sctp_auth_create_key()
sctp: integer overflow in sctp_auth_create_key()
ipv6: Set mcast_hops to IPV6_DEFAULT_MCASTHOPS when -1 was given.
net: Fix corruption in /proc/*/net/dev_mcast
mac80211: fix race between the AGG SM and the Tx data path
...
After resetting ipv4_devconf->data[IPV4_DEVCONF_ACCEPT_LOCAL] to 0,
we should flush the route cache, or it will continue to accept packets
with a local source address, which should instead be dropped.
Signed-off-by: Weiping Pan <panweiping3@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On Wednesday, 30 November 2011 at 14:36 -0800, Stephen Hemminger wrote:
> (Almost) nobody uses RED because they can't figure it out.
> According to Wikipedia, VJ says that:
> "there are not one, but two bugs in classic RED."
RED is useful for high-throughput routers; I doubt many Linux machines
act as such devices.
I was considering adding Adaptive RED (Sally Floyd, Ramakrishna
Gummadi, Scott Shenker, August 2001).
In this version, maxp is dynamic (from 1% to 50%), and the user only
has to set min_th (the target average queue size);
max_th and wq (burst in Linux RED) are set up automatically.
By the way, it seems we have a small bug in red_change():
if (skb_queue_empty(&sch->q))
red_end_of_idle_period(&q->parms);
First, if the queue is empty, we should call
red_start_of_idle_period(&q->parms);
Second, since we no longer use sch->q but q->qdisc, the test is
meaningless.
Oh well...
[PATCH] sch_red: fix red_change()
Now that RED is classful, we must check q->qdisc->q.qlen, and if the
queue is empty, we start an idle period rather than ending one.
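The corrected hunk, per the description:

	if (!q->qdisc->q.qlen)
		red_start_of_idle_period(&q->parms);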
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>