Commit Graph

65873 Commits

Author SHA1 Message Date
Florian Fainelli
25149ef9d2 net: phy: Check phydev->drv
There are a number of function calls, originating from user space
(typically through the Ethernet driver), that can crash the kernel by
dereferencing phydev->drv, which will be NULL once we unbind the driver
from the PHY.

There are still functional issues that prevent an unbind then rebind to
work, but these will be addressed separately.
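
A hedged sketch of the kind of check this implies at each user-reachable
entry point (the exact call sites in drivers/net/phy are not listed here):

        /* Illustrative guard at the top of a user-reachable entry point
         * such as phy_mii_ioctl(); phydev->drv is NULL after an unbind. */
        if (!phydev->drv)
                return -EIO;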

Suggested-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-20 10:15:11 -05:00
Florian Fainelli
7b9a88a390 net: phy: Fix PHY unbind crash
The PHY library does not deal very well with bind and unbind events. The first
thing we would see is that we were not properly canceling the PHY state machine
work, so we would crash while dereferencing phydev->drv since there is no
driver attached anymore.
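
A hedged sketch of the missing cleanup on unbind; the delayed-work member
name follows the PHY library of this era and is meant as an illustration:

        /* On driver unbind: stop the state machine and make sure its
         * delayed work is no longer running before the driver goes away. */
        phy_stop(phydev);
        cancel_delayed_work_sync(&phydev->state_queue);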

Suggested-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-20 10:15:11 -05:00
david.wu
d4ff816e97 net: ethernet: stmmac: dwmac-rk: Add RK3328 gmac support
Add constants and callback functions for the dwmac on RK3328 SoCs.
As can be seen, the base structure is the same; only the registers and the
bits in them moved slightly.

Signed-off-by: david.wu <david.wu@rock-chips.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-19 18:19:37 -05:00
Jason Wang
61845d2072 virtio-net: batch stats updating
We already have counters for sent/received packets and sent/received bytes.
Do a batched update to reduce the number of
u64_stats_update_begin/end() calls.

Take care not to bother with stats update when called
speculatively.
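
A hedged sketch of the batching idea in the TX completion path; variable and
field names (vi, sq, tx_syncp) are illustrative rather than the exact driver
code:

        struct virtnet_stats *stats = this_cpu_ptr(vi->stats);
        unsigned int packets = 0, bytes = 0;
        unsigned int len;
        struct sk_buff *skb;

        /* count locally while draining the queue ... */
        while ((skb = virtqueue_get_buf(sq->vq, &len)) != NULL) {
                packets++;
                bytes += skb->len;
                dev_kfree_skb_any(skb);
        }

        /* ... then publish once, under a single begin/end pair */
        if (packets) {
                u64_stats_update_begin(&stats->tx_syncp);
                stats->tx_packets += packets;
                stats->tx_bytes += bytes;
                u64_stats_update_end(&stats->tx_syncp);
        }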

Cc: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-19 18:15:23 -05:00
Eric Dumazet
f5a5772337 mlx4: fix potential divide by 0 in mlx4_en_auto_moderation()
1) In the case where rate == priv->pkt_rate_low == priv->pkt_rate_high,
mlx4_en_auto_moderation() does a divide by zero.

2) We want to properly change the moderation parameters if rx_frames
was changed (like in ethtool -C eth0 rx-frames 16)
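
A hedged sketch of the guard for case 1; field names follow the driver, but
treat this as an illustration rather than the exact patch:

        /* A collapsed [pkt_rate_low, pkt_rate_high] window is treated as
         * the low end instead of interpolating over a zero-width range,
         * which would divide by zero. */
        if (rate <= priv->pkt_rate_low ||
            priv->pkt_rate_high == priv->pkt_rate_low)
                moder_time = priv->rx_usecs_low;
        else if (rate >= priv->pkt_rate_high)
                moder_time = priv->rx_usecs_high;
        else
                moder_time = (rate - priv->pkt_rate_low) *
                        (priv->rx_usecs_high - priv->rx_usecs_low) /
                        (priv->pkt_rate_high - priv->pkt_rate_low) +
                        priv->rx_usecs_low;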

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-19 18:15:23 -05:00
Thomas Falcon
249168ad07 ibmvnic: Make CRQ interrupt tasklet wait for all capabilities crqs
After sending device capability queries and requests to the vNIC Server,
an interrupt is triggered and the responses are written to the driver's
CRQ response buffer. Since the interrupt can be triggered before all
responses are written and visible to the partition, there is a danger
that the interrupt handler or tasklet can terminate before all responses
are read, resulting in a failure to initialize the device.

To avoid this scenario, when capability commands are sent, we set
a flag that will be checked in the following interrupt tasklet that
will handle the capability responses from the server. Once all
responses have been handled, the flag is disabled, and the tasklet
is allowed to terminate.

Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-19 18:12:03 -05:00
Thomas Falcon
901e040aa3 ibmvnic: Use common counter for capabilities checks
Two different counters were being used for capabilities
requests and queries. These commands are not called
at the same time so there is no reason a single counter
cannot be used.

Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-19 18:12:03 -05:00
Thomas Falcon
6c267b3dea ibmvnic: Handle processing of CRQ messages in a tasklet
Create a tasklet to process queued commands or messages received from
firmware instead of processing them in the interrupt handler. Note that
this handler does not process network traffic, but communications related
to resource allocation and device settings.
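
A hedged sketch of the split; ibmvnic_handle_crq_queue() is a hypothetical
drain helper and the adapter->tasklet member name is assumed for
illustration:

        static void ibmvnic_tasklet(unsigned long data)
        {
                struct ibmvnic_adapter *adapter = (struct ibmvnic_adapter *)data;

                ibmvnic_handle_crq_queue(adapter);      /* hypothetical: drain queued CRQs */
        }

        static irqreturn_t ibmvnic_interrupt(int irq, void *instance)
        {
                struct ibmvnic_adapter *adapter = instance;

                /* the hard IRQ only kicks the tasklet */
                tasklet_schedule(&adapter->tasklet);
                return IRQ_HANDLED;
        }

        /* at init time: */
        tasklet_init(&adapter->tasklet, ibmvnic_tasklet, (unsigned long)adapter);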

Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-19 18:12:03 -05:00
Arun Easi
1e128c8129 qed: Add support for hardware offloaded FCoE.
This adds the backbone required for the various HW initializations
which are necessary for the FCoE driver (qedf) for QLogic FastLinQ
4xxxx line of adapters - FW notification, resource initializations, etc.

Signed-off-by: Arun Easi <arun.easi@cavium.com>
Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-19 18:10:42 -05:00
David S. Miller
f787d1debf Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net 2017-02-19 11:18:46 -05:00
Paolo Abeni
22f0708a71 vxlan: fix oops in dev_fill_metadata_dst
Since commit 0c1d70af92 ("net: use dst_cache for vxlan device"),
vxlan_fill_metadata_dst() calls vxlan_get_route() passing a NULL
dst_cache pointer, so the latter should explicitly check for a
valid dst_cache pointer. Unfortunately, commit d71785ffc7 ("net: add
dst_cache to ovs vxlan lwtunnel") removed said check.

As a result it is possible to trigger a NULL pointer access by calling
vxlan_fill_metadata_dst(), e.g. with:

ovs-vsctl add-br ovs-br0
ovs-vsctl add-port ovs-br0 vxlan0 -- set interface vxlan0 \
	type=vxlan options:remote_ip=192.168.1.1 \
	options:key=1234 options:dst_port=4789 ofport_request=10
ip address add dev ovs-br0 172.16.1.2/24
ovs-vsctl set Bridge ovs-br0 ipfix=@i -- --id=@i create IPFIX \
	targets=\"172.16.1.1:1234\" sampling=1
iperf -c 172.16.1.1 -u -l 1000 -b 10M -t 1 -p 1234

This commit addresses the issue by passing to vxlan_get_route() the
dst_cache already available in the lwt info processed by
vxlan_fill_metadata_dst().

Fixes: d71785ffc7 ("net: add dst_cache to ovs vxlan lwtunnel")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 15:32:06 -05:00
Peter Dunning
9c568fd884 sfc: do not device_attach if a reset is pending
efx_start_all can return without initialising queues as a reset is pending.
This means that when netif_device_attach is called, the kernel can start
sending traffic without having an initialised TX queue to send to.
This patch avoids this by not calling netif_device_attach if there is a
pending reset.
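
A hedged sketch of the resulting condition (illustrative, not the exact
patch):

        /* only re-attach the netdev when no reset is pending */
        if (!efx->reset_pending)
                netif_device_attach(efx->net_dev);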

Fixes: e283546c04 ("sfc:On MCDI timeout, issue an FLR (and mark MCDI to fail-fast)")
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 15:29:40 -05:00
Bert Kenward
105eac6c35 sfc: forget filters from sw table if hw replies ENOENT on removing them
If the hw doesn't think they exist, we should defer to its authority.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 15:29:39 -05:00
Jon Cooper
0ccb998bf4 sfc: fix filter_id misinterpretation in edge case
On EF10, hardware filter IDs are 13 bits, but in some places we store
32-bit "full filter IDs" in which higher order bits encode the filter
match-priority.  This could cause a filter to have a full filter ID of
0xffff, which is also the value EFX_EF10_FILTER_ID_INVALID which we use
in 16-bit "short" filter IDs (without match-priority bits).  This would
occur if the hardware filter ID was 0x1fff and the match-priority was 7.
Unfortunately, some code that checks for EFX_EF10_FILTER_ID_INVALID can
be called on full filter IDs, and will WARN_ON if this ever happens.
So, since we have plenty of spare bits in the full filter ID, this patch
shifts the priority bits left one bit when constructing the full filter
IDs, ensuring that the 0x2000 bit of a full filter ID will always be 0
and thus no full filter ID can ever equal EFX_EF10_FILTER_ID_INVALID.

This patch also replaces open-coded full<->short filter ID conversions
with calls to functions, thus keeping the definition of the full filter
ID format in one place.
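
A hedged sketch of the resulting encoding; the helper names are illustrative:

        /* Hardware IDs occupy bits 0..12 and the match-priority now starts
         * at bit 14, so bit 13 (0x2000) is always clear and no full filter
         * ID can collapse to EFX_EF10_FILTER_ID_INVALID (0xffff). */
        #define EFX_EF10_FILTER_ID_INVALID      0xffff

        static u32 efx_ef10_make_full_filter_id(unsigned int match_pri, u16 hw_id)
        {
                return (match_pri << 14) | hw_id;
        }

        static u16 efx_ef10_full_to_short_filter_id(u32 full_id)
        {
                return full_id & 0x1fff;
        }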

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 15:29:39 -05:00
Arnd Bergmann
fbdf0e28d0 vmxnet3: prevent building with 64K pages
I got a warning about broken code on ARM64 with 64K pages:

drivers/net/vmxnet3/vmxnet3_drv.c: In function 'vmxnet3_rq_init':
drivers/net/vmxnet3/vmxnet3_drv.c:1679:29: error: large integer implicitly truncated to unsigned type [-Werror=overflow]
    rq->buf_info[0][i].len = PAGE_SIZE;

'len' here is a 16-bit integer, so this clearly won't work. I don't think
this driver is used much on anything other than x86, so there is no need
to fix this properly and we can work around it with a Kconfig dependency
to forbid known-broken configurations. QEMU in theory supports it on
other architectures too, but presumably only for compatibility with x86
guests that also run on VMware.

CONFIG_PAGE_SIZE_64KB is used on hexagon, mips, sh and tile, the other
symbols are architecture-specific names for the same thing.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 15:25:49 -05:00
Valentin Longchamp
74179d44b6 net/wan: add MODULE_LICENSE for fsl_ucc_hdlc
A MODULE_LICENSE declaration is required to build the driver as a module.
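
For reference, the declaration in question looks like this (the exact license
string is the author's choice; "GPL" is shown as an example):

        /* required for building fsl_ucc_hdlc as a loadable module */
        MODULE_LICENSE("GPL");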

Signed-off-by: Valentin Longchamp <valentin.longchamp@keymile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 15:20:43 -05:00
Simon Horman
3b4735281f nfp: Use PCI_DEVICE_ID_NETRONOME_NFP* defines
Use PCI_DEVICE_ID_NETRONOME_NFP*, defined in linux/pci_ids.h,
rather than replicating the same values in the NFP driver.

Signed-off-by: Simon Horman <simon.horman@netronome.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 15:11:34 -05:00
Simon Xiao
b5124720ed netvsc: fix typo on statistics
Return the correct tx_errors stats in netvsc.

Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: Simon Xiao <sixiao@microsoft.com>
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 15:06:52 -05:00
Philippe Reynes
99f18f1d2f net: qlogic: netxen: use new api ethtool_{get|set}_link_ksettings
The ethtool API {get|set}_settings is deprecated.
We move this driver to the new API {get|set}_link_ksettings.

As I don't have the hardware, I'd be very pleased if
someone could test this patch.
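
For context, a hedged, generic sketch of the new API shape (not the netxen
implementation; callback names and link modes are illustrative):

        static int example_get_link_ksettings(struct net_device *dev,
                                              struct ethtool_link_ksettings *cmd)
        {
                ethtool_link_ksettings_zero_link_mode(cmd, supported);
                ethtool_link_ksettings_add_link_mode(cmd, supported, 1000baseT_Full);

                cmd->base.speed = SPEED_1000;
                cmd->base.duplex = DUPLEX_FULL;
                cmd->base.port = PORT_TP;
                cmd->base.autoneg = AUTONEG_ENABLE;

                return 0;
        }

        static const struct ethtool_ops example_ethtool_ops = {
                .get_link_ksettings = example_get_link_ksettings,
        };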

Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 14:06:41 -05:00
Philippe Reynes
336f8a71aa net: hamachi: use new api ethtool_{get|set}_link_ksettings
The ethtool API {get|set}_settings is deprecated.
We move this driver to the new API {get|set}_link_ksettings.

As I don't have the hardware, I'd be very pleased if
someone could test this patch.

Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 14:06:41 -05:00
Tobias Klauser
6850f8b509 net: bgmac: store MAC address directly in netdev->dev_addr
After commit 34a5102c32 ("net: bgmac: allocate struct bgmac just once
& don't copy it") the mac_addr member of struct bgmac is no longer
necessary to pass the MAC address to bgmac_enet_probe(). Instead it can
directly be stored in netdev->dev_addr.

Also use eth_hw_addr_random() instead of eth_random_addr() in case a
random MAC is needed. This will make sure netdev->addr_assign_type is
properly set.
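
A hedged sketch of the resulting fallback in bgmac_enet_probe() (member and
variable names approximate):

        if (is_valid_ether_addr(mac)) {
                ether_addr_copy(net_dev->dev_addr, mac);
        } else {
                dev_warn(bgmac->dev, "Invalid MAC addr, using random\n");
                eth_hw_addr_random(net_dev);    /* also sets addr_assign_type */
        }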

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Acked-by: Jon Mason <jon.mason@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 13:03:39 -05:00
David S. Miller
3105dfb2a9 wireless-drivers-next patches for 4.11
Merge tag 'wireless-drivers-next-for-davem-2017-02-16' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next

Kalle Valo says:

====================
wireless-drivers-next patches for 4.11

Mostly small fixes, not really any new features.

Major changes:

ath10k

* when trying older firmware versions don't confuse user with error messages

ath9k

* fix crash in AP mode (regression)
* fix relayfs crash (regression)
* fix initialisation with AR9340 and AR9550
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 13:00:45 -05:00
Tobias Klauser
6d6a505a1e net: ethoc: Use eth_hw_addr_random()
Use eth_hw_addr_random() to set a random dev_addr and update
addr_assign_type instead of open-coding it.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 12:42:24 -05:00
Dan Carpenter
785f35775d dpaa_eth: small leak on error
This should be >= instead of > here.  It means that we don't increment
the free count enough so it becomes off by one.

Fixes: 9ad1a37493 ("dpaa_eth: add support for DPAA Ethernet")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 12:18:43 -05:00
Jisheng Zhang
4581be42fc net: mvneta: make mvneta_eth_tool_ops static
The mvneta_eth_tool_ops is only used internally in mvneta driver, so
make it static.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 12:16:29 -05:00
Philippe Reynes
ad8e963ca6 net: oki-semi: pch_gbe: use new api ethtool_{get|set}_link_ksettings
The ethtool API {get|set}_settings is deprecated.
We move this driver to the new API {get|set}_link_ksettings.

As I don't have the hardware, I'd be very pleased if
someone could test this patch.

Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 12:00:14 -05:00
Ivan Khoronzhuk
9fe9aa0b73 net: ethernet: ti: cpsw: correct ale dev to cpsw
The ale is a property of cpsw, so change dev to cpsw->dev,
aka pdev->dev, to be consistent.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 11:12:32 -05:00
Philippe Reynes
0fa9e2899c net: nvidia: forcedeth: use new api ethtool_{get|set}_link_ksettings
The ethtool API {get|set}_settings is deprecated.
We move this driver to the new API {get|set}_link_ksettings.

As I don't have the hardware, I'd be very pleased if
someone could test this patch.

Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 11:05:33 -05:00
Eric Dumazet
01f0f42534 mlx4: do not fire tasklet unless necessary
All Rx and Tx netdev interrupts are handled respectively by
mlx4_en_rx_irq() and mlx4_en_tx_irq(), which simply schedule a NAPI.

But mlx4_eq_int() also fires a tasklet to service all items that were
queued via mlx4_add_cq_to_tasklet(), yet this handler was not called
unless a user CQE was handled.

This is very confusing, as "mpstat -I SCPU ..." shows a huge number of
tasklet invocations.

This patch saves this overhead by carefully firing the tasklet directly
from mlx4_add_cq_to_tasklet(), removing four atomic operations per IRQ.
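
A hedged, simplified sketch of the idea (reference counting and other details
of the real function are omitted):

        void mlx4_add_cq_to_tasklet(struct mlx4_cq *cq)
        {
                struct mlx4_eq_tasklet *tasklet_ctx = cq->tasklet_ctx.priv;
                unsigned long flags;
                bool kick;

                spin_lock_irqsave(&tasklet_ctx->lock, flags);
                /* fire the tasklet only when the first CQ is queued */
                kick = list_empty(&tasklet_ctx->list);
                list_add_tail(&cq->tasklet_ctx.list, &tasklet_ctx->list);
                if (kick)
                        tasklet_schedule(&tasklet_ctx->task);
                spin_unlock_irqrestore(&tasklet_ctx->lock, flags);
        }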

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tariq Toukan <tariqt@mellanox.com>
Cc: Saeed Mahameed <saeedm@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-17 10:55:22 -05:00
David S. Miller
5237b9dde3 Merge branch '10GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:

====================
10GbE Intel Wired LAN Driver Updates 2017-02-16

This series contains updates to ixgbe only.

Tony updates the driver to advertise 2.5Gb and 5.0Gb if the adapter
supports it.

Stephen Hemminger renames our global dcbnl_ops to ixgbe_dcbnl_ops to
avoid namespace issues.

Mark updates the driver version based on the recent changes.

Alex has the remainder of the changes, starting with consolidating
functions that represent logical steps in the receive process so we can
later update them more easily (and align with igb).  He modifies the
receive path to only synchronize the length of the frame rather than the
entire buffer, and adds support for DMA_ATTR_SKIP_CPU_SYNC and
DMA_ATTR_WEAK_ORDERING for further performance improvements.  Additional
gains come from batching the page count updates instead of doing them
one at a time.  He adjusts the receive path to use 3k buffers backed by
8k pages in order to support build_skb with jumbo frames, and uses the
length of the packet instead of the DD status bit to determine if a new
descriptor is ready to be processed, which cuts down on reads.  To reduce
code duplication, he pulls the receive path apart into separate functions,
and adds support for providing a buffer with headroom and tailroom to
allow for shared info, NET_SKB_PAD, and NET_IP_ALIGN.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-16 21:19:44 -05:00
David S. Miller
3f64116a83 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net 2017-02-16 19:34:01 -05:00
Ganesh Goudar
f3caf8618b cxgb4: Remove redundant code in t4_uld_clean_up()
Remove variable rxq_info and also remove redundant assignment
to it.

Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-16 14:32:52 -05:00
Ganesh Goudar
5d071c24f0 cxgb4: Add new T5 and T6 pci device id's
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-16 14:32:52 -05:00
Arjun V
45da1ca2e2 cxgb4: Increase max number of tc u32 links
Make max number of supported tc u32 links equal to max number of filters
supported by hardware.

Signed-off-by: Arjun V <arjun@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-16 14:32:52 -05:00
Alexander Duyck
ffed21bcee ixgbe: Don't bother clearing buffer memory for descriptor rings
This patch makes it so that we don't need to bother with clearing the
memory out for the descriptor rings.  The general idea is to only free
the buffers that are actually in use, which are located between the
next_to_clean and next_to_use or next_to_alloc values.  Everything outside
of those regions can be safely ignored since it should have no buffers
associated with it.

The advantage to doing things this way is that it should speed up bring-up
and tear-down of the rings.  Specifically we can avoid the 512 or more
cycles required to memset the rings in tear-down.  In the bring-up phase we
then clear the memory as a part of initialization.  The general idea is
that the clearing in initialization can act as a prefetch of sorts for the
buffer info structures so they are in the local CPU when we go to populate
them.  This should help to improve overall time needed to perform a
suspend/resume.
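
A hedged sketch of the bounded walk (ring member names follow the driver;
the unmap/free details are elided):

        u16 i = rx_ring->next_to_clean;
        struct ixgbe_rx_buffer *rx_buffer = &rx_ring->rx_buffer_info[i];

        while (i != rx_ring->next_to_alloc) {
                /* unmap and free rx_buffer->page here */
                i++;
                rx_buffer++;
                if (i == rx_ring->count) {
                        i = 0;
                        rx_buffer = rx_ring->rx_buffer_info;
                }
        }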

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-02-16 04:02:44 -08:00
Alexander Duyck
6f429223b3 ixgbe: Add support for build_skb
This patch adds build_skb support to the Rx path.  There are several
advantages to this change.

1.  It avoids the memcpy and skb->head allocation for small packets which
    improves performance by about 5% in my tests.
2.  It avoids the memcpy, skb->head allocation, and eth_get_headlen
    for larger packets improving performance by about 10% in my tests.
3.  For VXLAN packets it allows the full header to be in skb->data which
    improves the performance by as much as 30% in some of my tests.
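
A hedged sketch of the build_skb pattern on a page fragment; IXGBE_SKB_PAD
and the truesize math are illustrative of the approach, not the exact code:

        void *va = page_address(rx_buffer->page) + rx_buffer->page_offset;
        unsigned int truesize = SKB_DATA_ALIGN(IXGBE_SKB_PAD + size) +
                                SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
        struct sk_buff *skb;

        /* wrap the existing buffer instead of allocating skb->head */
        skb = build_skb(va - IXGBE_SKB_PAD, truesize);
        if (unlikely(!skb))
                return NULL;

        skb_reserve(skb, IXGBE_SKB_PAD);
        __skb_put(skb, size);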

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-02-16 04:02:44 -08:00
Alexander Duyck
2ccdf26ff6 ixgbe: Add private flag to control buffer mode
Since there are potential drawbacks to the new Rx allocation approach, I
thought it best to add a "chicken bit" so that we can turn the feature off
if a problem is found.

It also provides a means of validating the legacy Rx path in the event that
we are forced to fall back.  At some point in the future when we are
convinced we don't need it anymore we might be able to drop the legacy-rx
flag.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-02-16 04:02:44 -08:00
Alexander Duyck
2de6aa3a66 ixgbe: Add support for padding packet
This patch adds support for providing a buffer with headroom and tailroom
to allow for shared info, NET_SKB_PAD, and NET_IP_ALIGN.  With this
combined with the DMA changes we can start using build_skb to build frames
around an incoming Rx buffer instead of having to memcpy the headers.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-02-16 04:02:44 -08:00
Alexander Duyck
3fd218767f ixgbe: Break out Rx buffer page management
We are going to be expanding the number of Rx paths in the driver.  Instead
of duplicating all that code I am pulling it apart into separate functions
so that we don't have so much code duplication.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-02-16 04:02:44 -08:00
Alexander Duyck
c3630cc40b ixgbe: Use length to determine if descriptor is done
This change makes it so that we use the length of the packet instead of the
DD status bit to determine if a new descriptor is ready to be processed.
The obvious advantage is that it cuts down on reads, as we don't really
need the DD bit if the size going from zero to a non-zero value is enough
to inform us that the packet has been completed.
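
A hedged sketch of the check (descriptor layout per the ixgbe advanced Rx
descriptor; simplified):

        unsigned int size = le16_to_cpu(rx_desc->wb.upper.length);

        if (!size)
                break;  /* hardware has not written this descriptor back yet */

        /* a non-zero size implies the descriptor is done; keep the rest of
         * the descriptor reads ordered after this check */
        dma_rmb();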

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-02-16 04:02:44 -08:00
Alexander Duyck
4f4542bfb3 ixgbe: Make use of order 1 pages and 3K buffers independent of FCoE
In order to support build_skb with jumbo frames it will be necessary to use
3K buffers for the Rx path with 8K pages backing them.  This is needed on
architectures that implement 4K pages because we can't support 2K buffers
plus padding in a 4K page.

In the case of systems that support page sizes larger than 4K the 3K
attribute will only be applied to FCoE as we can fall back to using just 2K
buffers and adding the padding.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-02-16 04:02:44 -08:00
Alexander Duyck
1b56cf49f5 ixgbe: Update code to better handle incrementing page count
Batch the page count updates instead of doing them one at a time.  By doing
this we can improve the overall performance as the atomic increment
operations can be expensive due to the fact that on x86 they are locked
operations which can cause stalls.  By doing bulk updates we can
consolidate the stall which should help to improve the overall receive
performance.
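
A hedged sketch of the batching ("bias") trick; the pagecnt_bias member and
refill threshold are illustrative:

        /* at page allocation: take a large batch of references up front */
        page_ref_add(page, USHRT_MAX - 1);
        rx_buffer->pagecnt_bias = USHRT_MAX;

        /* per received frame: hand one reference to the stack, no atomics */
        rx_buffer->pagecnt_bias--;

        /* only when the batch runs low, refill it with one atomic update */
        if (unlikely(rx_buffer->pagecnt_bias == 1)) {
                page_ref_add(page, USHRT_MAX - 1);
                rx_buffer->pagecnt_bias = USHRT_MAX;
        }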

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Acked-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-02-16 04:02:44 -08:00
Alexander Duyck
f3213d9321 ixgbe: Update driver to make use of DMA attributes in Rx path
This patch adds support for DMA_ATTR_SKIP_CPU_SYNC and
DMA_ATTR_WEAK_ORDERING.  By enabling both of these for the Rx path we are
able to see performance improvements on architectures that implement either
one due to the fact that page mapping and unmapping only has to sync what
is actually being used instead of the entire buffer.  In addition,
enabling the weak ordering attribute provides a performance improvement
on architectures that can associate a memory ordering with a DMA buffer,
such as SPARC.
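
A hedged sketch of the Rx mapping with both attributes (macro names
illustrative):

        #define IXGBE_RX_DMA_ATTR \
                (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)

        dma_addr_t dma = dma_map_page_attrs(rx_ring->dev, page, 0,
                                            ixgbe_rx_pg_size(rx_ring),
                                            DMA_FROM_DEVICE,
                                            IXGBE_RX_DMA_ATTR);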

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-02-16 04:02:44 -08:00
Alexander Duyck
f215af8cae ixgbe: Only DMA sync frame length
On some platforms, syncing a buffer for DMA is expensive. Rather than
sync the whole 2K receive buffer, only synchronise the length of the
frame, which will typically be the MTU, or a much smaller TCP ACK.
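
A hedged sketch of the per-frame sync (buffer member names follow the
driver):

        /* sync only the bytes the NIC actually wrote for this frame */
        dma_sync_single_range_for_cpu(rx_ring->dev, rx_buffer->dma,
                                      rx_buffer->page_offset, size,
                                      DMA_FROM_DEVICE);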

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-02-16 04:02:44 -08:00
Alexander Duyck
af43da0dba ixgbe: Add function for checking to see if we can reuse page
This patch consolidates the code for the ixgbe driver so that it is more
in line with what is already in igb.  The general idea is to just
consolidate functions that represent logical steps in the Rx process so we
can later update them more easily.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-02-16 04:02:44 -08:00
Mark Rustad
1733284d02 ixgbe: Update version to reflect added functionality
Update the driver version to reflect the new devices that it
supports.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-02-16 04:02:44 -08:00
Stephen Hemminger
3f40c74cce ixgbe: prefix Data Center Bridge ops struct
Since dcbnl_ops is global, it should be prefixed by ixgbe_

Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-02-16 04:02:44 -08:00
Tony Nguyen
1dc0eb75a8 ixgbe: Support 2.5Gb and 5Gb speed
Though not advertised through ethtool, if the link partner advertises a
2.5Gb or 5Gb connection, and the adapter supports it, allow the speed to be
used.

Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-02-16 04:02:44 -08:00
Thomas Falcon
75224c93fa ibmvnic: Fix endian errors in error reporting output
Error reports received from firmware were not being converted from
big endian values, leading to bogus error codes reported on little
endian systems.

Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-15 14:48:31 -05:00
Thomas Falcon
28f4d16570 ibmvnic: Fix endian error when requesting device capabilities
When a vNIC client driver requests a faulty device setting, the
server returns an acceptable value for the client to request.
This 64 bit value was incorrectly being swapped as a 32 bit value,
resulting in loss of data. This patch corrects that by using
the 64 bit swap function.

Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-02-15 14:48:31 -05:00