The method ndo_start_xmit() is defined as returning a 'netdev_tx_t',
which is a typedef for an enum type, but the implementation in this
driver returns an 'int'.
Found by coccinelle.
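The fix is just the return-type change; a minimal sketch (the handler name
and the elided body are only illustrative):

    #include <linux/netdevice.h>

    /* Before (sketch): the handler was declared to return int.
     * After: the return type matches the ndo_start_xmit() prototype.
     */
    static netdev_tx_t vnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
    {
            /* ... hand the skb to the transmit path ... */
            return NETDEV_TX_OK;
    }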
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add the appropriate SPDX license tags to the Sun network drivers
as outlined in Documentation/process/license-rules.rst.
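For reference, such a tag is a single comment on the first line of each
source file, for example (the identifier shown here is only illustrative,
not necessarily the one every file ends up with):

    // SPDX-License-Identifier: GPL-2.0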
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Reviewed-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In preparation for unconditionally passing the struct timer_list pointer to
all timer callbacks, switch to using the new timer_setup() and from_timer()
to pass the timer pointer explicitly.
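A sketch of the conversion pattern, using a per-port cleanup timer as the
example (treat the structure, field, and function names as illustrative
rather than exact):

    #include <linux/timer.h>

    struct vnet_port {
            struct timer_list clean_timer;
            /* ... other per-port state ... */
    };

    static void vnet_clean_timer_expire(struct timer_list *t)
    {
            /* from_timer() recovers the containing structure from the timer
             * pointer, replacing the old unsigned long 'data' argument.
             */
            struct vnet_port *port = from_timer(port, t, clean_timer);

            /* ... reclaim completed tx descriptors for this port ... */
    }

    static void vnet_port_timer_init(struct vnet_port *port)
    {
            /* Old: setup_timer(&port->clean_timer, cb, (unsigned long)port);
             * New: the callback receives the timer_list pointer directly.
             */
            timer_setup(&port->clean_timer, vnet_clean_timer_expire, 0);
    }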
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Philippe Reynes <tremyfr@gmail.com>
Cc: Jarod Wilson <jarod@redhat.com>
Cc: Shannon Nelson <shannon.nelson@oracle.com>
Cc: Rob Herring <robh@kernel.org>
Cc: chris hyser <chris.hyser@oracle.com>
Cc: Tushar Dave <tushar.n.dave@oracle.com>
Cc: Tobias Klauser <tklauser@distanz.ch>
Cc: netdev@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The original code didn't handle non-IPv4 packets very well, so the
offload advertising had to be scaled back down to just IPv4. Here we
add the bits needed to support TCP and UDP packets over IPv6 and
turn the offload advertising back on.
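A sketch of the advertising side of the change (the exact flag set is an
assumption; most of the patch is the IPv6 handling in the checksum/GSO paths):

    #include <linux/netdevice.h>

    static void vnet_set_offloads(struct net_device *dev)
    {
            /* Advertise checksum and segmentation offload for IPv4 and IPv6
             * (assumed flag list, shown for illustration only).
             */
            dev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM |
                               NETIF_F_IPV6_CSUM | NETIF_F_TSO | NETIF_F_TSO6;
            dev->features |= dev->hw_features;
    }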
Orabug: 26289579
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The sunvnet netdev is connected to the controlling ldom's vswitch
for network bridging. However, for higher performance between ldoms, there
is also a direct channel between each pair of client ldoms. These connections are
represented in the sunvnet driver by a queue for each ldom. The driver
uses select_queue to tell the stack which queue to use by tracking the mac
addresses on the other end of each port. When a connected ldom shuts down,
the driver receives an LDC_EVENT_RESET and the port is removed from the
driver, thus a queue with no ldom on the other end will never be selected
for Tx.
The driver was trying to reinforce the "don't use this queue" notion with
netif_tx_stop_queue() and netif_tx_wake_queue(), which really should only
be used to signal that a Tx queue is full (aka XOFF). This misuse of queue
state resulted in NETDEV WATCHDOG messages and lots of unnecessary calls
into the driver's tx_timeout handler. Simply removing these takes care
of the problem.
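For contrast, a sketch of the flow-control role these calls are meant for
(descriptor accounting is reduced to a plain count for the sketch):

    #include <linux/netdevice.h>

    /* Tx path: stop the queue only when it is genuinely out of tx
     * descriptors (i.e. XOFF) ...
     */
    static void tx_maybe_stop(struct netdev_queue *txq, unsigned int descs_avail)
    {
            if (unlikely(!descs_avail))
                    netif_tx_stop_queue(txq);
    }

    /* ... and wake it from the completion path once descriptors have been
     * reclaimed.
     */
    static void tx_maybe_wake(struct netdev_queue *txq, unsigned int descs_avail)
    {
            if (netif_tx_queue_stopped(txq) && descs_avail)
                    netif_tx_wake_queue(txq);
    }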
Orabug: 25190537
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make sure multicast packets get counted in the device statistics.
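A sketch of the accounting this adds on the receive side (placement relative
to eth_type_trans() is an assumption):

    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>

    static void count_rx_multicast(struct net_device *dev, struct sk_buff *skb)
    {
            /* eth_hdr() is valid once eth_type_trans() has set the MAC header. */
            if (unlikely(is_multicast_ether_addr(eth_hdr(skb)->h_dest)))
                    dev->stats.multicast++;
    }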
Orabug: 25190537
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Track our used and unused queue indices correctly. Otherwise, as ports
dropped out and returned, they all eventually ended up with the same
queue index.
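One straightforward way to keep that bookkeeping honest is a bitmap of
in-use indices; a sketch along those lines (not necessarily the exact data
structure the driver ends up with):

    #include <linux/bitops.h>
    #include <linux/types.h>

    #define VNET_MAX_TXQS   16              /* assumed queue count */

    static DECLARE_BITMAP(q_used, VNET_MAX_TXQS);

    /* Claim the lowest free queue index when a port is added. */
    static int vnet_claim_q_index(void)
    {
            int q = find_first_zero_bit(q_used, VNET_MAX_TXQS);

            if (q < VNET_MAX_TXQS)
                    set_bit(q, q_used);
            return q;
    }

    /* Release the index when the port drops out so it can be reused. */
    static void vnet_release_q_index(int q)
    {
            clear_bit(q, q_used);
    }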
Orabug: 25190537
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In this driver, there is a "port" created for the connection to each of
the other ldoms; a netdev queue is mapped to each port, and they are
collected under a single netdev. The generic netdev statistics show
us all the traffic in and out of our network device, but don't show
individual queue/port stats. This patch breaks out the traffic counts
for the individual ports and gives us a little view into the state of
those connections.
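The counters surface through the usual ethtool statistics hooks; a condensed
sketch (the port list and per-port stats fields are assumptions about the
layout, with minimal stand-ins for the real structures):

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>
    #include <linux/rculist.h>

    struct vnet_port_stats {                /* stand-in, field names assumed */
            u64 rx_packets, rx_bytes, tx_packets, tx_bytes;
    };

    struct vnet_port {                      /* stand-in for the real structure */
            struct list_head list;
            struct vnet_port_stats stats;
    };

    struct vnet {                           /* stand-in for the real structure */
            struct list_head port_list;
    };

    static void vnet_get_ethtool_stats(struct net_device *dev,
                                       struct ethtool_stats *estats, u64 *data)
    {
            struct vnet *vp = netdev_priv(dev);
            struct vnet_port *port;
            int i = 0;

            rcu_read_lock();
            list_for_each_entry_rcu(port, &vp->port_list, list) {
                    data[i++] = port->stats.rx_packets;
                    data[i++] = port->stats.rx_bytes;
                    data[i++] = port->stats.tx_packets;
                    data[i++] = port->stats.tx_bytes;
            }
            rcu_read_unlock();
    }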
Orabug: 25190537
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When an ldom VM is bound, the network vswitch infrastructure is set up for
it, but was being forced 'UP' by the userland switch configuration script.
When 'UP' but not actually connected to a running VM, the ipv6 neighbor
probes fail (not a horrible thing) and start cluttering up the kernel logs.
Funny thing: these are debug messages that never actually show up, but
we do see the net_ratelimited messages that say N callbacks were
suppressed.
This patch defers the netif_carrier_on() until an actual link has been
established with the VM, as indicated by receiving an LDC_EVENT_UP from
the underlying LDC protocol. Similarly, we take the link down when we
see the LDC_EVENT_RESET. Now when we see the ndo_open(), we reset the
link to get things talking again.
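A sketch of the carrier handling described above (the event dispatch is
simplified to a single helper; names other than the LDC event codes are
illustrative):

    #include <linux/netdevice.h>
    #include <asm/ldc.h>

    static void vnet_link_event(struct net_device *dev, unsigned int event)
    {
            if (event == LDC_EVENT_UP)
                    netif_carrier_on(dev);          /* peer is really there now */
            else if (event == LDC_EVENT_RESET)
                    netif_carrier_off(dev);         /* peer went away */
    }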
Orabug: 25525312
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The ldmvsw driver is specifically for supporting the ldom virtual
networking by running in the primary ldom and using the LDC to connect
the remaining ldoms to the outside world via a bridge. With TSO and GSO
supported while connected to the bridge, things tend to misbehave, as seen
in our case by delayed packets, enough to begin triggering retransmits
and affecting overall throughput. By turning off advertised support for
TSO and GSO we restore stable traffic flow through the bridge.
Orabug: 23293104
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The RCU read lock is grabbed first thing in sunvnet_start_xmit_common()
so it always needs to be released. This removes the conditional release
in the dropped packet error path and removes a couple of superfluous
calls in the middle of the code.
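The resulting shape of the function, as a heavily condensed sketch (only the
locking pattern is the point; the real body does the dring work in between):

    #include <linux/netdevice.h>

    static netdev_tx_t
    sunvnet_start_xmit_common(struct sk_buff *skb, struct net_device *dev,
                              struct vnet_port *(*vnet_tx_port)(struct sk_buff *,
                                                                struct net_device *))
    {
            struct vnet_port *port;

            rcu_read_lock();
            port = vnet_tx_port(skb, dev);
            if (unlikely(!port))
                    goto out_dropped;

            /* ... place the skb on the port's tx dring, kick the LDC ... */

            rcu_read_unlock();
            return NETDEV_TX_OK;

    out_dropped:
            /* One unconditional unlock on the drop path. */
            rcu_read_unlock();
            dev_kfree_skb(skb);
            dev->stats.tx_dropped++;
            return NETDEV_TX_OK;
    }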
Reported-by: Bijan Mottahedeh <bijan.mottahedeh@oracle.com>
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The use of gotos for handling the incoming events made this code
harder to read and support than it should be. This patch straightens
out and clears up the logic.
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In order to allow the underlying LDC and outstanding memory operations
to potentially catch up with the driver's Tx requests, add a memory
barrier before checking again for available tx descriptors.
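The shape of the pattern, sketched (the barrier flavor shown and the reduction
of the dring re-check to a caller-supplied count are assumptions):

    #include <linux/netdevice.h>
    #include <asm/barrier.h>

    static void vnet_tx_stop_then_recheck(struct netdev_queue *txq,
                                          const unsigned int *descs_avail)
    {
            netif_tx_stop_queue(txq);

            /* Order the re-read of descriptor state against whatever the LDC
             * and outstanding memory operations have completed meanwhile.
             */
            smp_rmb();
            if (READ_ONCE(*descs_avail) > 0)
                    netif_tx_wake_queue(txq);
    }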
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The vio_dring_state *dr variable is unused in maybe_tx_wakeup().
As the comments indicate, we call maybe_tx_wakeup() whenever we
get a STOPPED LDC message on the port. If the queue is stopped,
we want to wake it up so that we will send another START message
at the next TX and trigger the consumer to drain the dring.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When the sunvnet_common code was split out for use by both sunvnet
and the newer ldmvsw, it was made into a static kernel library, which
limits the usefulness of sunvnet and ldmvsw as loadables, since most
of the real work is being done in the shared code. Also, this is
simply dead code in kernels that aren't running LDoms.
This patch makes the sunvnet_common into a dynamically loadable
module and makes sunvnet and ldmvsw dependent on sunvnet_common.
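Concretely, the shared entry points get exported and the library gains its
own module metadata; a sketch (the exported symbol shown is one of the
_common functions mentioned elsewhere in this log, and the export macro
flavor is an assumption):

    #include <linux/module.h>

    /* In sunvnet_common.c: entry points callable from either driver. */
    EXPORT_SYMBOL_GPL(sunvnet_start_xmit_common);

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Common functions for Sun LDOM virtual network drivers");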
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
napi_complete_done() allows a driver to opt in to gro_flush_timeout,
added back in linux-3.19 by commit 3b47d30396
("net: gro: add a per device gro flush timer").
This allows more efficient GRO aggregation without
sacrificing latency.
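The conversion itself is a small change in the driver's NAPI poll routine; a
sketch (rx_work() is a stand-in for the driver's rx event loop, and the
interrupt re-enable is elided):

    #include <linux/netdevice.h>

    /* Stand-in for the driver's budget-limited rx processing (assumed helper). */
    static int rx_work(struct napi_struct *napi, int budget)
    {
            return 0;       /* sketch: pretend no packets were processed */
    }

    static int vnet_poll(struct napi_struct *napi, int budget)
    {
            int processed = rx_work(napi, budget);

            if (processed < budget) {
                    /* Was napi_complete(napi); reporting the amount of work done
                     * lets the core honor gro_flush_timeout for this device.
                     */
                    napi_complete_done(napi, processed);
                    /* ... re-enable the LDC rx interrupt here ... */
            }
            return processed;
    }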
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The following patch fixes an issue with the ldmvsw driver where
the network connection of a guest domain becomes non-functional after
the guest domain has panic'd and rebooted.
The root cause was determined to be from the following series of
events:
1. Guest domain panics - resulting in the guest no longer processing
network packets (from ldmvsw driver)
2. The ldmvsw driver (in the control domain) eventually exerts flow
control because no more tx dring entries are available, and stops the tx queue
for the guest domain
3. The LDC of the network connection for the guest is reset when
the guest domain reboots after the panic.
4. The LDC reset event is received by the ldmvsw driver and the ldmvsw
responds by clearing the tx queue for the guest.
5. ldmvsw waits indefinitely for a DATA ACK from the guest - which is
the normal method to re-enable the tx queue. But the ACK never comes
because the tx queue was cleared due to the LDC reset.
To fix this issue, in addition to clearing the tx queue, re-enable the
tx queue on a LDC reset. This prevents the ldmvsw from getting caught in
this deadlocked state of waiting for a DATA ACK which will never come.
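Sketch of the reset handling with the fix applied (function shape and the
queue-index lookup are simplified):

    #include <linux/netdevice.h>

    static void vsw_handle_ldc_reset(struct net_device *dev, unsigned int q_index)
    {
            struct netdev_queue *txq = netdev_get_tx_queue(dev, q_index);

            /* ... existing behaviour: clear out anything queued for the
             * dead peer ...
             */

            /* Fix: don't leave the queue stopped waiting for a DATA ACK that
             * will never arrive; let traffic flow once the guest reboots.
             */
            netif_tx_wake_queue(txq);
    }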
Signed-off-by: Aaron Young <Aaron.Young@oracle.com>
Acked-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
cassini: min_mtu 60, max_mtu 9000
niu: min_mtu 68, max_mtu 9216
sungem: min_mtu 68, max_mtu 1500 (comments say jumbo mode is broken)
sunvnet: min_mtu 68, max_mtu 65535
- removed sunvnet_change_mtu_common as it does nothing now
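Those limits land in the net_device fields that the core now range-checks;
e.g. for sunvnet the setup amounts to (a sketch of the probe-time assignment):

    #include <linux/if_ether.h>
    #include <linux/netdevice.h>

    static void vnet_set_mtu_limits(struct net_device *dev)
    {
            dev->min_mtu = ETH_MIN_MTU;     /* 68 */
            dev->max_mtu = 65535;
    }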
CC: netdev@vger.kernel.org
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Checkpatch updates for sunvnet.c and sunvnet_common.c.
Signed-off-by: Aaron Young <aaron.young@oracle.com>
Signed-off-by: Rashmi Narasimhan <rashmi.narasimhan@oracle.com>
Reviewed-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Reviewed-by: Alexandre Chartre <Alexandre.Chartre@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Modify sunvnet common code and data structures to be compatible
with both sunvnet and ldmvsw drivers.
Details:
Sunvnet operates on "vnet-port" nodes which appear in the Machine
Description (MD) in a guest domain. Ldmvsw operates on "vsw-port"
nodes which appear in the MD of a service domain.
A difference between the sunvnet driver and the ldmvsw driver is
the sunvnet driver creates a network interface (i.e. a struct net_device)
for every vnet-port *parent* "network" node. Several vnet-ports may appear
under this common parent network node, all sharing that single parent
network interface. Conversely, since bridge/vswitch software will need
to interface with every vsw-port in a system, the ldmvsw driver creates
a network interface (i.e. a struct net_device) for every vsw-port - not
every parent node as with sunvnet. This difference required some special
handling in the common code as explained below.
There are 2 key data structures used by the sunvnet and ldmvsw drivers
(which are now found in sunvnet_common.h):
1. struct vnet_port
This structure represents a vnet-port node in sunvnet and a vsw-port
in the ldmvsw driver.
2. struct vnet
This structure represents a parent "network" node in sunvnet and a parent
"virtual-network-switch" node in ldmvsw.
Since the sunvnet driver allocates a net_device for every parent "network"
node, a net_device member appears in the struct vnet. Since the ldmvsw
driver allocates a net_device for every port, a net_device member was
added to the vnet_port. The common code distinguishes which structure's
net_device member to use by checking a 'vsw' bit that was added to the
vnet_port structure. See the VNET_PORT_TO_NET_DEVICE() macro in
sunvnet_common.h.
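A sketch of that selection (member names follow the descriptions above, the
back-pointer name is assumed, and this is not a verbatim copy of the header):

    /* vnet_port gained a net_device pointer ('dev') and a 'vsw' flag; the
     * parent vnet already had a 'dev'.  'vp' is the port's back-pointer to
     * its parent vnet (name assumed).
     */
    #define VNET_PORT_TO_NET_DEVICE(vport) \
            ((vport)->vsw ? (vport)->dev : (vport)->vp->dev)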
The netdev_priv() in sunvnet is allocated as a vnet. The netdev_priv()
in ldmvsw is a vnet_port. Therefore, any place in the common code
where a netdev_priv() call was made, a wrapper function was implemented
in each driver to first get the vnet and/or vnet_port (in a
driver-specific way) and pass them as newly added parameters to the common
functions (see wrapper funcs: vnet_set_rx_mode() and vnet_poll_controller()).
Since these wrapper functions call __tx_port_find(), __tx_port_find() was
moved from the common code back into sunvnet.c. Note - ldmvsw.c does not
require this function.
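For example, the rx-mode wrapper in sunvnet.c comes out roughly as follows
(a sketch; the common function's exact parameter list is assumed):

    #include <linux/netdevice.h>

    struct vnet;                            /* defined in sunvnet_common.h */
    void sunvnet_set_rx_mode_common(struct net_device *dev, struct vnet *vp);

    static void vnet_set_rx_mode(struct net_device *dev)
    {
            struct vnet *vp = netdev_priv(dev);     /* sunvnet: priv is the vnet */

            sunvnet_set_rx_mode_common(dev, vp);
    }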
These changes also required that port_is_up() be made
into a common function and thus it was given a _common suffix and
exported like the other common functions.
A wrapper function was also added for vnet_start_xmit_common() to pass a
driver-specific function arg to return the port associated with a given
struct sk_buff and struct net_device. This was required because
vnet_start_xmit_common() grabs a lock prior to getting the associated
port. Using a function pointer arg allowed the code to work unchanged
without risking changes to the non-trivial locking logic in
vnet_start_xmit_common().
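So each driver's ndo_start_xmit becomes a thin shim that supplies its own
port-lookup routine, roughly (the lookup-helper name is an assumption):

    static netdev_tx_t vnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
    {
            /* The lookup runs under the lock taken inside the common code. */
            return sunvnet_start_xmit_common(skb, dev, vnet_tx_port_find);
    }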
Signed-off-by: Aaron Young <aaron.young@oracle.com>
Signed-off-by: Rashmi Narasimhan <rashmi.narasimhan@oracle.com>
Reviewed-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Reviewed-by: Alexandre Chartre <Alexandre.Chartre@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Split sunvnet.c into sunvnet.c and sunvnet_common.c.
Details:
Since the sunvnet and ldmvsw drivers will both use common sunvnet code,
move the functions (and support functions) anticipated to be common code
from sunvnet.c to sunvnet_common.c. Similarly, sunvnet.h was renamed to
sunvnet_common.h. The sunvnet_common.c code will be compiled into the
kernel and act as a library of functions that are linked by either
(or both) drivers when loaded.
Function names for external functions in sunvnet_common.c (to be
called by both the sunvnet and ldmvsw drivers) were tagged with a "_common"
suffix to clearly designate them as common functions.
No functional changes as of yet... just moved code verbatim to the new
sunvnet_common.c/h files.
Makefile/Kconfig support added to build sunvnet_common.c file. The code
is included in the kernel if SUN_LDOMS is defined/selected.
NOTE - per the SubmittingPatches documentation, since the code was just
moved from one file to another, the code was NOT checkpatch'd in this commit
to aid in review.
Signed-off-by: Aaron Young <aaron.young@oracle.com>
Signed-off-by: Rashmi Narasimhan <rashmi.narasimhan@oracle.com>
Reviewed-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Reviewed-by: Alexandre Chartre <Alexandre.Chartre@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>