Commit Graph

13733 Commits

Author SHA1 Message Date
Jacob Keller
1e4c32f3ed fm10k: prevent RCU issues during AER events
During an AER action response, we were calling fm10k_close without
holding the rtnl_lock(), which could lead to RCU warnings being
produced due to 64-bit stat updates, among other causes. Similarly, we
need rtnl_lock() around fm10k_open during fm10k_io_resume. Follow the
same pattern used elsewhere in the driver and protect the entire
open/close sequence.
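
As an illustration only, a minimal sketch of the locking pattern (error
handling trimmed; fm10k_open() is the driver's own ndo_open handler):

    #include <linux/netdevice.h>
    #include <linux/rtnetlink.h>

    int fm10k_open(struct net_device *netdev);   /* the driver's ndo_open */

    /* sketch: take the RTNL around the whole sequence so RCU-protected
     * state (e.g. 64-bit stats) is never updated by open/close without it.
     */
    static void fm10k_io_resume_sketch(struct net_device *netdev)
    {
            rtnl_lock();
            if (netif_running(netdev) && fm10k_open(netdev))
                    netdev_err(netdev, "failed to re-open device\n");
            rtnl_unlock();
    }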

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-20 23:06:21 -07:00
Jacob Keller
2d0f76bedb fm10k: use DRV_SUMMARY to reduce code duplication
Use DRV_SUMMARY, similar to DRV_VERSION, so that we don't have to
duplicate the driver summary in multiple places.
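
Roughly, the idea (a sketch; the strings here are illustrative, not
necessarily the exact ones used by the driver):

    #include <linux/module.h>

    #define DRV_VERSION "0.19.3-k"
    #define DRV_SUMMARY "Intel(R) Ethernet Switch Host Interface Driver"

    /* the one string now feeds both the module metadata and the banner
     * printed at probe time, instead of being duplicated in each place.
     */
    static const char fm10k_driver_string[] = DRV_SUMMARY;

    MODULE_DESCRIPTION(DRV_SUMMARY);
    MODULE_VERSION(DRV_VERSION);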

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-20 23:06:21 -07:00
Alexander Duyck
144d830558 fm10k: Add support for bulk Tx cleanup & cleanup boolean logic
This patch enables bulk free in Tx cleanup for fm10k and cleans up the
boolean logic in the polling routines for fm10k in the hopes of avoiding
any mix-ups similar to what occurred with i40e and i40evf.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-20 23:06:20 -07:00
Jacob Keller
3ef2f56326 fm10k: remove debug-statistics support
This change fixes an (ab)use of the ethtool stats API, which could
result in corrupt memory or misleading stat output. The ethtool stats
API is not robust enough to handle varying number of statistics due to
how it requests the size and allocates memory. Remove the poorly conceived
support originally added for extra debug statistics. In the future,
a new stats API may open up the ability to display these statistics.
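
For context, the ethtool stats API is a three-step contract, which is why
a count that changes between calls can corrupt the user buffer (generic
sketch of the callback signatures, not fm10k code; the count 42 is made up):

    #include <linux/errno.h>
    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    /* 1) userspace asks how many stats exist ... */
    static int example_get_sset_count(struct net_device *dev, int sset)
    {
            return sset == ETH_SS_STATS ? 42 : -EOPNOTSUPP;  /* must stay stable */
    }

    /* 2) ... allocates string/value buffers of exactly that size ... */
    static void example_get_strings(struct net_device *dev, u32 sset, u8 *data)
    {
    }

    /* 3) ... and expects exactly that many u64 values here. */
    static void example_get_ethtool_stats(struct net_device *dev,
                                          struct ethtool_stats *stats, u64 *data)
    {
    }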

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-20 23:04:57 -07:00
Jacob Keller
09401ae251 fm10k: add helper functions to set strings and data for ethtool stats
Reduce duplicate code and the amount of indentation by adding
fm10k_add_stat_strings and fm10k_add_ethtool_stats functions which help
add fm10k_stat structures to the ethtool stats callbacks. This makes
future stat additions easier and improves code readability.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-20 22:17:29 -07:00
Andrew Goodbody
df63719390 Revert "Prevent NULL pointer dereference with two PHYs on cpsw"
This reverts commit cfe2556001.

This can result in an "Unable to handle kernel paging request" error
during boot, caused by the use of an uninitialised struct member,
data->slaves.

Signed-off-by: Andrew Goodbody <andrew.goodbody@cambrionix.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-20 12:01:36 -04:00
Konstantin Khlebnikov
851b10d608 net/mlx4_en: do batched put_page using atomic_sub
This patch fixes a couple of error paths after allocation failures.
Atomically setting the page reference counter is safe only if it is zero;
otherwise the set can race with any speculative get_page_unless_zero().

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-19 20:04:24 -04:00
Konstantin Khlebnikov
04aeb56a17 net/mlx4_en: allocate non 0-order pages for RX ring with __GFP_NOMEMALLOC
High order pages are optional here since commit 51151a16a6 ("mlx4: allow
order-0 memory allocations in RX path"), so there is no reason to deplete
the reserves. The generic __netdev_alloc_frag() implements the same logic.
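
The allocation policy boils down to something like this (simplified
sketch, not the mlx4_en code):

    #include <linux/gfp.h>

    /* opportunistic high-order allocation that must not dip into the
     * memory reserves (__GFP_NOMEMALLOC) nor retry hard, falling back to
     * an order-0 page, which the RX path always supports.
     */
    static struct page *rx_alloc_page_sketch(gfp_t gfp, unsigned int order)
    {
            struct page *page = NULL;

            if (order)
                    page = alloc_pages(gfp | __GFP_NOMEMALLOC | __GFP_NOWARN |
                                       __GFP_NORETRY, order);
            return page ? page : alloc_pages(gfp, 0);
    }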

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-19 20:04:24 -04:00
Arnd Bergmann
b67d1df5ad net: w5100: don't build spi driver without w5100
The w5100-spi driver front-end only makes sense when the w5100
core driver is enabled, not for a configuration that only has w5300:

drivers/net/built-in.o: In function `w5100_spi_remove':
drivers/net/ethernet/wiznet/w5100-spi.c:277: undefined reference to `w5100_remove'
drivers/net/built-in.o: In function `w5100_spi_probe':
drivers/net/ethernet/wiznet/w5100-spi.c:272: undefined reference to `w5100_probe'
drivers/net/built-in.o: In function `w5200_spi_init':
drivers/net/ethernet/wiznet/w5100-spi.c:125: undefined reference to `w5100_ops_priv'
drivers/net/built-in.o: In function `w5200_spi_readbulk':
drivers/net/ethernet/wiznet/w5100-spi.c:125: undefined reference to `w5100_ops_priv'
drivers/net/built-in.o: In function `w5200_spi_writebulk':
drivers/net/ethernet/wiznet/w5100-spi.c:125: undefined reference to `w5100_ops_priv'
drivers/net/built-in.o:(.data+0x3ed1c): undefined reference to `w5100_pm_ops'

This adds an appropriate Kconfig dependency.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 630cf09751 ("net: w5100: support SPI interface mode")
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-19 01:05:15 -04:00
Linus Torvalds
12566cc35d Merge tag 'pci-v4.6-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI fixes from Bjorn Helgaas:
 "These are fixes for two issues:

   - The VPD parsing code we added for v4.6 keeps some devices from
     crashing, but also keeps cxgb4 from reading non-standard extra VPD
     data that it relies on.  Hariprasad added a way for the driver to
     specify how much VPD is valid.

   - The i.MX6 active-low reset GPIO support we added in v4.5 caused
     regressions on some boards, so we're reverting that.

  VPD:
    Add pci_set_vpd_size() (Hariprasad Shenai)
    cxgb4: Set VPD size so we can read both VPD structures (Hariprasad Shenai)

  Freescale i.MX6 host bridge driver:
    Revert "PCI: imx6: Add support for active-low reset GPIO" (Fabio Estevam)"

* tag 'pci-v4.6-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci:
  cxgb4: Set VPD size so we can read both VPD structures
  PCI: Add pci_set_vpd_size() to set VPD size
  Revert "PCI: imx6: Add support for active-low reset GPIO"
2016-04-18 19:52:47 -07:00
Govindarajulu Varadarajan
e7600449be enic: set netdev->vlan_features
The driver sets vlan_features to netdev->features, as the hardware
supports all of them on VLAN interfaces.
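
In practice this is roughly a one-line assignment at probe time (sketch):

    /* let VLAN sub-interfaces advertise the same offloads as the device */
    netdev->vlan_features |= netdev->features;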

Signed-off-by: Govindarajulu Varadarajan <_govind@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-18 14:53:21 -04:00
Philippe Reynes
54846f5838 fec: move to new ethtool api {get|set}_link_ksettings
The ethtool API {get|set}_settings is deprecated. Move the fec driver
to the new API, {get|set}_link_ksettings.
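
The new callbacks have the following signatures (generic sketch of the
ops, not the fec patch itself; the values returned are illustrative):

    #include <linux/errno.h>
    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    static int example_get_link_ksettings(struct net_device *dev,
                                          struct ethtool_link_ksettings *cmd)
    {
            cmd->base.speed = SPEED_100;          /* illustrative */
            cmd->base.duplex = DUPLEX_FULL;
            return 0;
    }

    static int example_set_link_ksettings(struct net_device *dev,
                                          const struct ethtool_link_ksettings *cmd)
    {
            return -EOPNOTSUPP;                   /* illustrative */
    }

    static const struct ethtool_ops example_ethtool_ops = {
            .get_link_ksettings = example_get_link_ksettings, /* was .get_settings */
            .set_link_ksettings = example_set_link_ksettings, /* was .set_settings */
    };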

Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Acked-by: Fugang Duan <fugang.duan@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-18 14:45:09 -04:00
Jakub Kicinski
3d780b926a nfp: add async reconfiguration mechanism
Some callers of nfp_net_reconfig() are in atomic context, so we used
to busy-wait for commands to complete.  In the worst case that means
locking up a core for up to 5 seconds when a command times out.  Let's
add a timer-based mechanism of asynchronously checking whether
reconfiguration completed successfully for atomic callers to use.
Non-atomic callers can now just sleep.

The approach taken is quite simple because (1) synchronous
reconfigurations always happen under RTNL (or before the device
is registered); (2) we can coalesce pending reconfigs.
There is no need for request queues; a timer which eventually
takes a look at the reconfiguration result to report errors is
good enough.
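
Roughly, the asynchronous path looks like this (heavily simplified sketch
with hypothetical names, using the 4.6-era timer callback signature):

    #include <linux/timer.h>
    #include <linux/jiffies.h>
    #include <linux/printk.h>

    struct reconfig_state {
            struct timer_list timer;
            unsigned long deadline;              /* in jiffies */
    };

    /* atomic callers post the reconfig and arm the timer instead of
     * busy-waiting; the callback polls the FW status and only reports
     * an error once the deadline has passed.
     */
    static void reconfig_check(unsigned long data)
    {
            struct reconfig_state *st = (struct reconfig_state *)data;

            /* if the FW update register still shows the request pending: */
            if (time_before(jiffies, st->deadline))
                    mod_timer(&st->timer, jiffies + msecs_to_jiffies(10));
            else
                    pr_err("FW reconfiguration timed out\n");
    }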

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-16 22:34:40 -04:00
Jakub Kicinski
180012dc05 nfp: remove buggy RX buffer length validation
The meaning of the data_len and meta_len RX WB descriptor fields is
slightly confusing.  Add a comment with a diagram clarifying
the layout.  Also remove the buffer length validation:
(a) it's imprecise for static rx-offsets; (b) if the firmware
is buggy enough to DMA past the end of the buffer,
WARN_ON_ONCE() doesn't seem like a strong enough response.
skb_put() will do the checking for us anyway.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-16 22:34:40 -04:00
Jakub Kicinski
2db221cd44 nfp: remove unused suspicious mask defines
NFP_NET_RXR_MASK sounds like a mask which could be used on the
NFP_NET_CFG_RXRS_ENABLE register, but its value is quite
strange.  In fact there are no users of this define, so let's
just remove it.  Same for TX rings.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-16 22:34:40 -04:00
Jakub Kicinski
6ffa622d85 nfp: correct names of constants in comments
Documentation in comments lacks CFG in some names.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-16 22:34:39 -04:00
Jakub Kicinski
c160692e86 nfp: remove unnecessary static
There is no reason for those local variables to be static.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-16 22:34:39 -04:00
Jakub Kicinski
5b161096f0 nfp: check the right pointer for errors
Correct an error check performed on the wrong pointer - most likely
a copy/paste mistake.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-16 22:34:39 -04:00
Eric Dumazet
6517eb59b0 net: bcmgenet: device stats are unsigned long
On 64bit kernels, device stats are 64bit wide, not 32bit.

Fixes: 1c1008c793 ("net: bcmgenet: add main driver file")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-16 22:03:39 -04:00
Dinh Nguyen
f66bc94174 stmmac: socfpga: remove extra call to socfpga_dwmac_setup
In the socfpga_dwmac_probe function, we have a call to socfpga_dwmac_setup,
which is already called from socfpga_dwmac_init later in the probe function.
Remove this extra call to socfpga_dwmac_setup.

Also, we should not be calling socfpga_dwmac_setup() directly without
wrapping it in the proper reset assert/deassert calls. That is because
socfpga_dwmac_setup() sets up PHY modes in the system manager, and it
requires the EMACs to be in reset during the PHY setup.

Reported-by: Matthew Gerlach <mgerlach@opensource.altera.com>
Signed-off-by: Dinh Nguyen <dinguyen@opensource.altera.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-16 21:46:56 -04:00
Akinobu Mita
0c165ff2d8 net: w5100: support W5200
This adds support for W5200 chip.

W5100 and W5200 have a similar memory map, although some of their
offsets are different.  The register access sequences between them are
different, but the w5100 driver has an abstraction layer for the
different bus interface modes, so it is easy to add W5200 support to
the w5100 and w5100-spi drivers.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Mike Sinkovsky <msink@permonline.ru>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-16 18:30:27 -04:00
Akinobu Mita
630cf09751 net: w5100: support SPI interface mode
This adds new w5100-spi driver which shares the bus interface
independent code with existing w5100 driver.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Mike Sinkovsky <msink@permonline.ru>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-16 18:30:27 -04:00
Akinobu Mita
bf2c6b90b3 net: w5100: enable to support sleepable register access interface
SPI transfer routines are callable only from contexts that can sleep.

This adds the ability to tell the core driver that the interface mode
cannot access W5100 registers in atomic contexts.  In this case,
a workqueue and a threaded irq are required.

This also corrects the timeout period for waiting for the command
register to be automatically cleared, because the latency of register
access via SPI transfers can be affected by other contexts.
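
A sketch of what sleepable access implies for the interrupt setup
(generic example, not the exact driver code):

    #include <linux/interrupt.h>

    /* with SPI register access the handler must be able to sleep, so the
     * interrupt is requested as a threaded irq: no hard-irq handler and
     * IRQF_ONESHOT, all the work done in the thread function.
     */
    static irqreturn_t example_irq_thread(int irq, void *dev_id)
    {
            /* read/ack the interrupt status over SPI here (may sleep) */
            return IRQ_HANDLED;
    }

    static int example_request_irq(int irq, void *priv)
    {
            return request_threaded_irq(irq, NULL, example_irq_thread,
                                        IRQF_ONESHOT, "w5100", priv);
    }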

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Mike Sinkovsky <msink@permonline.ru>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-16 18:30:27 -04:00
Akinobu Mita
850576cfed net: w5100: add ability to support other bus interface
The w5100 driver currently only supports the direct and indirect bus
interface modes, which use MMIO space for accessing W5100 registers.

In order to support the SPI interface mode, which the W5100 chip also
supports, make the bus interface abstraction layer more generic so that
a separate w5100-spi driver can use the w5100 driver as a core module.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Mike Sinkovsky <msink@permonline.ru>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-16 18:30:27 -04:00
Akinobu Mita
d6586d2ef4 net: w5100: move mmiowb into register access callbacks
Instead of sprinkling mmiowb() over the driver code, move it into the
primary register write callbacks (w5100_write, w5100_write16,
w5100_writebuf).

This is a preparation for supporting the SPI interface, which doesn't
use MMIO for accessing W5100 registers.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Mike Sinkovsky <msink@permonline.ru>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-16 18:30:26 -04:00
Andrew Goodbody
cfe2556001 cpsw: Prevent NULL pointer dereference with two PHYs
Adding a 2nd PHY to cpsw results in a NULL pointer dereference
as below. Fix this by maintaining a reference to each PHY node in the
slave struct instead of a single reference in the priv struct, which
was overwritten by the 2nd PHY.

[   17.870933] Unable to handle kernel NULL pointer dereference at virtual address 00000180
[   17.879557] pgd = dc8bc000
[   17.882514] [00000180] *pgd=9c882831, *pte=00000000, *ppte=00000000
[   17.889213] Internal error: Oops: 17 [#1] ARM
[   17.893838] Modules linked in:
[   17.897102] CPU: 0 PID: 1657 Comm: connmand Not tainted 4.5.0-ge463dfb-dirty #11
[   17.904947] Hardware name: Cambrionix whippet
[   17.909576] task: dc859240 ti: dc968000 task.ti: dc968000
[   17.915339] PC is at phy_attached_print+0x18/0x8c
[   17.920339] LR is at phy_attached_info+0x14/0x18
[   17.925247] pc : [<c042baec>]    lr : [<c042bb74>]    psr: 600f0113
[   17.925247] sp : dc969cf8  ip : dc969d28  fp : dc969d18
[   17.937425] r10: dda7a400  r9 : 00000000  r8 : 00000000
[   17.942971] r7 : 00000001  r6 : ddb00480  r5 : ddb8cb34  r4 : 00000000
[   17.949898] r3 : c0954cc0  r2 : c09562b0  r1 : 00000000  r0 : 00000000
[   17.956829] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
[   17.964401] Control: 10c5387d  Table: 9c8bc019  DAC: 00000051
[   17.970500] Process connmand (pid: 1657, stack limit = 0xdc968210)
[   17.977059] Stack: (0xdc969cf8 to 0xdc96a000)
[   17.981692] 9ce0:                                                       dc969d28 dc969d08
[   17.990386] 9d00: c038f9bc c038f6b4 ddb00480 dc969d34 dc969d28 c042bb74 c042bae4 00000000
[   17.999080] 9d20: c09562b0 c0954cc0 dc969d5c dc969d38 c043ebfc c042bb6c 00000007 00000003
[   18.007773] 9d40: ddb00000 ddb8cb58 ddb00480 00000001 dc969dec dc969d60 c0441614 c043ea68
[   18.016465] 9d60: 00000000 00000003 00000000 fffffff4 dc969df4 0000000d 00000000 00000000
[   18.025159] 9d80: dc969db4 dc969d90 c005dc08 c05839e0 dc969df4 0000000d ddb00000 00001002
[   18.033851] 9da0: 00000000 00000000 dc969dcc dc969db8 c005ddf4 c005dbc8 00000000 00000118
[   18.042544] 9dc0: dc969dec dc969dd0 ddb00000 c06db27c ffff9003 00001002 00000000 00000000
[   18.051237] 9de0: dc969e0c dc969df0 c057c88c c04410dc dc969e0c ddb00000 ddb00000 00000001
[   18.059930] 9e00: dc969e34 dc969e10 c057cb44 c057c7d8 ddb00000 ddb00138 00001002 beaeda20
[   18.068622] 9e20: 00000000 00000000 dc969e5c dc969e38 c057cc28 c057cac0 00000000 dc969e80
[   18.077315] 9e40: dda7a40c beaeda20 00000000 00000000 dc969ecc dc969e60 c05e36d0 c057cc14
[   18.086007] 9e60: dc969e84 00000051 beaeda20 00000000 dda7a40c 00000014 ddb00000 00008914
[   18.094699] 9e80: 30687465 00000000 00000000 00000000 00009003 00000000 00000000 00000000
[   18.103391] 9ea0: 00001002 00008914 dd257ae0 beaeda20 c098a428 beaeda20 00000011 00000000
[   18.112084] 9ec0: dc969edc dc969ed0 c05e4e54 c05e3030 dc969efc dc969ee0 c055f5ac c05e4cc4
[   18.120777] 9ee0: beaeda20 dd257ae0 dc8ab4c0 00008914 dc969f7c dc969f00 c010b388 c055f45c
[   18.129471] 9f00: c071ca40 dd257ac0 c00165e8 dc968000 dc969f3c dc969f20 dc969f64 dc969f28
[   18.138164] 9f20: c0115708 c0683ec8 dd257ac0 dd257ac0 dc969f74 dc969f40 c055f350 c00fc66c
[   18.146857] 9f40: dd82e4d0 00000011 00000000 00080000 dd257ac0 00000000 dc8ab4c0 dc8ab4c0
[   18.155550] 9f60: 00008914 beaeda20 00000011 00000000 dc969fa4 dc969f80 c010bc34 c010b2fc
[   18.164242] 9f80: 00000000 00000011 00000002 00000036 c00165e8 dc968000 00000000 dc969fa8
[   18.172935] 9fa0: c00163e0 c010bbcc 00000000 00000011 00000011 00008914 beaeda20 00009003
[   18.181628] 9fc0: 00000000 00000011 00000002 00000036 00081018 00000001 00000000 beaedc10
[   18.190320] 9fe0: 00083188 beaeda1c 00043a5d b6d29c0c 600b0010 00000011 00000000 00000000
[   18.198989] Backtrace:
[   18.201621] [<c042bad8>] (phy_attached_print) from [<c042bb74>] (phy_attached_info+0x14/0x18)
[   18.210664]  r3:c0954cc0 r2:c09562b0 r1:00000000
[   18.215588]  r4:ddb00480
[   18.218322] [<c042bb60>] (phy_attached_info) from [<c043ebfc>] (cpsw_slave_open+0x1a0/0x280)
[   18.227293] [<c043ea5c>] (cpsw_slave_open) from [<c0441614>] (cpsw_ndo_open+0x544/0x674)
[   18.235874]  r7:00000001 r6:ddb00480 r5:ddb8cb58 r4:ddb00000
[   18.241944] [<c04410d0>] (cpsw_ndo_open) from [<c057c88c>] (__dev_open+0xc0/0x128)
[   18.249972]  r9:00000000 r8:00000000 r7:00001002 r6:ffff9003 r5:c06db27c r4:ddb00000
[   18.258255] [<c057c7cc>] (__dev_open) from [<c057cb44>] (__dev_change_flags+0x90/0x154)
[   18.266745]  r5:00000001 r4:ddb00000
[   18.270575] [<c057cab4>] (__dev_change_flags) from [<c057cc28>] (dev_change_flags+0x20/0x50)
[   18.279523]  r9:00000000 r8:00000000 r7:beaeda20 r6:00001002 r5:ddb00138 r4:ddb00000
[   18.287811] [<c057cc08>] (dev_change_flags) from [<c05e36d0>] (devinet_ioctl+0x6ac/0x76c)
[   18.296483]  r9:00000000 r8:00000000 r7:beaeda20 r6:dda7a40c r5:dc969e80 r4:00000000
[   18.304762] [<c05e3024>] (devinet_ioctl) from [<c05e4e54>] (inet_ioctl+0x19c/0x1c8)
[   18.312882]  r10:00000000 r9:00000011 r8:beaeda20 r7:c098a428 r6:beaeda20 r5:dd257ae0
[   18.321235]  r4:00008914
[   18.323956] [<c05e4cb8>] (inet_ioctl) from [<c055f5ac>] (sock_ioctl+0x15c/0x2d8)
[   18.331829] [<c055f450>] (sock_ioctl) from [<c010b388>] (do_vfs_ioctl+0x98/0x8d0)
[   18.339765]  r7:00008914 r6:dc8ab4c0 r5:dd257ae0 r4:beaeda20
[   18.345822] [<c010b2f0>] (do_vfs_ioctl) from [<c010bc34>] (SyS_ioctl+0x74/0x84)
[   18.353573]  r10:00000000 r9:00000011 r8:beaeda20 r7:00008914 r6:dc8ab4c0 r5:dc8ab4c0
[   18.361924]  r4:00000000
[   18.364653] [<c010bbc0>] (SyS_ioctl) from [<c00163e0>] (ret_fast_syscall+0x0/0x3c)
[   18.372682]  r9:dc968000 r8:c00165e8 r7:00000036 r6:00000002 r5:00000011 r4:00000000
[   18.380960] Code: e92dd810 e24cb010 e24dd010 e59b4004 (e5902180)
[   18.387580] ---[ end trace c80529466223f3f3 ]---

Signed-off-by: Andrew Goodbody <andrew.goodbody@cambrionix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-15 17:24:37 -04:00
Manish Chopra
14db81defa qede: Add fastpath support for tunneling
This patch enables netdev tunneling features and adds
TX/RX fastpath support for tunneling in the driver.

Signed-off-by: Manish Chopra <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: Ariel Elior <Ariel.Elior@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-15 17:08:09 -04:00
Manish Chopra
f798586920 qed: Enable GRE tunnel slowpath configuration
Signed-off-by: Manish Chopra <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: Ariel Elior <Ariel.Elior@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-15 17:08:09 -04:00
Manish Chopra
9a109dd073 qed/qede: Add GENEVE tunnel slowpath configuration support
This patch enables the GENEVE tunnel on the adapter and adds
support for driver hooks to configure UDP ports for GENEVE
tunnel offload to be performed by the adapter.

Signed-off-by: Manish Chopra <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: Ariel Elior <Ariel.Elior@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-15 17:08:08 -04:00
Manish Chopra
b18e170cac qed/qede: Add VXLAN tunnel slowpath configuration support
This patch enables the VXLAN tunnel on the adapter and adds
support for driver hooks to configure UDP ports for VXLAN
tunnel offload to be performed by the adapter.

Signed-off-by: Manish Chopra <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: Ariel Elior <Ariel.Elior@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-15 17:08:08 -04:00
Manish Chopra
464f664501 qed: Add infrastructure support for tunneling
This patch adds various structure/APIs needed to configure/enable different
tunnel [VXLAN/GRE/GENEVE] parameters on the adapter.

Signed-off-by: Manish Chopra <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: Ariel Elior <Ariel.Elior@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-15 17:08:08 -04:00
Amitoj Kaur Chawla
ac18dd9e84 qlge: Replace create_singlethread_workqueue with alloc_ordered_workqueue
Replace deprecated create_singlethread_workqueue with
alloc_ordered_workqueue.

Work items include getting tx/rx frame sizes, resetting the MPI
processor, and setting the asic recovery bit, so ordering seems
necessary, as only one work item should be queued or executing at any
given time; hence the use of alloc_ordered_workqueue.

The WQ_MEM_RECLAIM flag has been set since ethernet devices seem to sit
in the memory reclaim path, to guarantee forward progress regardless of
memory pressure.
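
The conversion itself is mechanical; a sketch:

    #include <linux/workqueue.h>

    /* ordered (one work item at a time) and WQ_MEM_RECLAIM so the queue
     * can make forward progress under memory pressure; replaces
     * create_singlethread_workqueue().
     */
    static struct workqueue_struct *example_create_wq(const char *name)
    {
            return alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM, name);
    }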

Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-15 16:42:10 -04:00
Hariprasad Shenai
67e658794c cxgb4: Set VPD size so we can read both VPD structures
Chelsio adapters have two VPD structures stored in the VPD:

  - offset 0x000: an abbreviated VPD, and
  - offset 0x400: the complete VPD.

After 104daa71b3 ("PCI: Determine actual VPD size on first access"), the
PCI core computes the valid VPD size by parsing the VPD starting at offset
0x0.  That size only includes the abbreviated VPD structure, so reads of
the complete VPD at 0x400 fail.

Explicitly set the VPD size with pci_set_vpd_size() so the driver can read
both VPD structures.
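
The driver-side change then boils down to a single call (sketch; the
length value here is illustrative):

    #include <linux/pci.h>

    #define EXAMPLE_VPD_LEN 0x800   /* large enough to cover both structures */

    static int example_extend_vpd(struct pci_dev *pdev)
    {
            /* tell the PCI core the VPD is larger than what parsing from
             * offset 0x0 suggests, so reads at offset 0x400 succeed.
             */
            return pci_set_vpd_size(pdev, EXAMPLE_VPD_LEN);
    }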

[bhelgaas: changelog, split patches, rename to pci_set_vpd_size() and
return int (not ssize_t)]
Fixes: 104daa71b3 ("PCI: Determine actual VPD size on first access")
Tested-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
2016-04-15 13:00:18 -05:00
Jiri Pirko
b94cdabbf1 mlxsw: spectrum_buffers: Use MLXSW_SP_PB_UNUSED define for unused pb
Suggested-by: David Laight <David.Laight@ACULAB.COM>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-15 13:02:43 -04:00
Jiri Pirko
ce78f02042 mlxsw: spectrum_buffers: Use designated initializers for mlxsw_sp_pbs
Suggested-by: David Laight <David.Laight@ACULAB.COM>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-15 13:02:42 -04:00
Felix Fietkau
c02bc350f9 bgmac: fix MAC soft-reset bit for corerev > 4
Only core revisions older than 4 use BGMAC_CMDCFG_SR_REV0. This mainly
fixes support for BCM4708A0KF SoCs with Ethernet core rev 5 (this
affects only some devices, as most BCM4708A0KF-s got core rev 4).
This was tested for regressions on BCM47094, which doesn't seem to care
which bit gets used.

Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 21:17:14 -04:00
Rafał Miłecki
b4dfd8e929 bgmac: reset & enable Ethernet core before using it
This fixes Ethernet on D-Link DIR-885L with BCM47094 SoC. Felix reported
similar fix was needed for his BCM4709 device (Buffalo WXR-1900DHP?).
I tested this for regressions on BCM4706, BCM4708A0 and BCM47081A0.

Cc: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 17:15:55 -04:00
Jiri Pirko
2d0ed39fbd mlxsw: spectrum_buffers: Implement occupancy monitoring
Implement occupancy API introduced in devlink and mlxsw core. This is
done by accessing SBPM register for Port-Pool and SBSR for Port-TC
current and max occupancy values. Max clear is implemented using the
same registers.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:06 -04:00
Jiri Pirko
caf7297e7a mlxsw: core: Introduce support for asynchronous EMAD register access
So far it was possible to have one EMAD register access at a time,
locked by a mutex. This patch extends the interface to allow multiple
EMAD register accesses to be in flight at once. That allows faster
processing on the firmware side, avoiding idle time in between EMADs.
The measured speedup is ~30% for the shared occupancy snapshot operation.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:06 -04:00
Jiri Pirko
dd9bdb04d2 mlxsw: core: Add mlxsw specific workqueue and use it for FDB notif. processing
A follow-up patch is going to need to use delayed work as well, and
frequently. The FDB notification processing already uses it, also quite
frequently. It makes sense to create a separate workqueue just for the
mlxsw driver in this case and not pollute system_wq.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:06 -04:00
Jiri Pirko
42a7f1d774 mlxsw: reg: Extend SBPM register for occupancy control
Since it is not possible to get and clear Port-Pool occupancy data using
the SBSR register, there's a need to implement that using SBPM.
Extend the pack helper and add an unpack helper to get occupancy values.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:06 -04:00
Jiri Pirko
26176def3c mlxsw: reg: Add Shared Buffer Status register definition
This register allows querying the HW for current and maximal buffer usage.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:05 -04:00
Jiri Pirko
1ceecc88d2 mlxsw: core: Add devlink shared buffer occupancy callbacks
Add middle layer in mlxsw core code to forward shared buffer occupancy
calls into specific ASIC drivers.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:05 -04:00
Jiri Pirko
0f433fa0ec mlxsw: spectrum_buffers: Implement shared buffer configuration
Implement previously introduced mlxsw core shared buffer API.
For Spectrum, that is done utilizing registers SBPR, SBCM and SBPM.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:05 -04:00
Jiri Pirko
325f2f197d mlxsw: core: Add mlxsw_core_port_driver_priv helper
Needed in following patch.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:05 -04:00
Jiri Pirko
c30a53c7de mlxsw: spectrum_buffers: Get max_buff defaults into limits exposed to user
Although the device supports max_buff magic values 0 and 0xff, these are
not exposed to the user via devlink.
Therefore, adjust the default values to be within configurable range.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:05 -04:00
Jiri Pirko
bc872506f5 mlxsw: spectrum_buffers: Change initialization of PG 9
As explained in commit ff6551ec0c ("mlxsw: spectrum: Correctly
configure headroom size") control packets are directed to priority group
buffer 9 (PG9) in the ports' headroom buffers.

Since we don't want to drop control packets in case they can't be
admitted to the switch's shared buffer we bind PG9 to a different
ingress pool from the one used by all other PGs.

Unlike other PGs, we currently don't expose the binding between PG9 to a
pool and leave it fixed.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:04 -04:00
Jiri Pirko
5408f7cba3 mlxsw: spectrum_buffers: Remove eg pool 3 default init and CPU port TC binding to it
Since there is no congestion control for CPU port traffic, we can change
the CPU port TC binding to pool 0 with min_buff and max_buff zeroed.
Remove the initialization of egress pool 3 since it is no longer used
by default.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:04 -04:00
Jiri Pirko
078f9c7132 mlxsw: spectrum_buffers: Cache shared buffer configuration
In order to achieve faster dumping of the current settings, and to make
it possible to get the pool mode without querying hardware, cache the
configuration in the driver.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:04 -04:00
Jiri Pirko
aa99bc70ba mlxsw: spectrum_buffers: Rename "pool" to "pr" in initialization
Be consistent with the rest of the registers (pm, cm) and use "pr" here.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:04 -04:00
Jiri Pirko
b11c3b4018 mlxsw: spectrum_buffers: Push out indexes and direction out of SB structs
Structs are in arrays, so use the array index as the pool/tc/prio index.
With that, there is a need to maintain separate arrays for ingress and
egress.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:04 -04:00
Jiri Pirko
94266e3278 mlxsw: spectrum_buffers: Push out shared buffer register writes
Pushed them into helper functions.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:03 -04:00
Jiri Pirko
a6179bf0d1 mlxsw: core: Add devlink shared buffer callbacks
Add middle layer in mlxsw core code to forward shared buffer calls
into specific ASIC drivers.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 16:22:03 -04:00
Sergei Shtylyov
d0988a5f77 ravb: make ravb_ptp_interrupt() *void*
When we have the ISS.CGIS bit set, we already know that gPTP interrupt has
happened, so an extra GIS register check at the end of ravb_ptp_interrupt()
seems superfluous.  We can model the gPTP interrupt  handler like all other
dedicated interrupt handlers in the driver and make it *void*.

Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 00:48:33 -04:00
Yuval Mintz
7c2d7d7438 qed* - bump driver versions to 8.7.1.20
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 00:43:21 -04:00
Sudarsana Reddy Kalluru
961acdeafd qede: add Rx flow hash/indirection support.
Adds support for the following via ethtool:
  - UDP configuration of RSS based on 2-tuple/4-tuple.
  - RSS hash key.
  - RSS indirection table.

Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 00:43:21 -04:00
Sudarsana Reddy Kalluru
8c5ebd0c79 qed: add Rx flow hash/indirection support.
Adds the required API for passing RSS-related configuration from qede.

Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 00:43:20 -04:00
Rahul Verma
95114344ea qed*: remove version dependency
Inbox drivers don't need a versioning scheme in order to guarantee
compatibility, as both qed and qede are compiled from the same codebase.

Signed-off-by: Rahul Verma <rahul.verma@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 00:43:20 -04:00
Petri Gynther
e178c8c230 net: bcmgenet: add BQL support
Add Byte Queue Limits (BQL) support to the bcmgenet driver.
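
BQL hooks in at three points; a generic sketch of the API (not the
bcmgenet patch itself):

    #include <linux/netdevice.h>

    static void example_tx_queue(struct netdev_queue *txq, unsigned int bytes)
    {
            netdev_tx_sent_queue(txq, bytes);            /* at xmit time */
    }

    static void example_tx_reclaim(struct netdev_queue *txq,
                                   unsigned int pkts, unsigned int bytes)
    {
            netdev_tx_completed_queue(txq, pkts, bytes); /* at TX completion */
    }

    static void example_tx_reset(struct netdev_queue *txq)
    {
            netdev_tx_reset_queue(txq);                  /* at ring reset */
    }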

Signed-off-by: Petri Gynther <pgynther@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 00:33:51 -04:00
David S. Miller
41015e89d9 Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-queue
Jeff Kirsher says:

====================
Intel Wired LAN Driver Updates 2016-04-13

This series contains updates to i40e, i40evf and fm10k.

Alex fixes a bug introduced earlier based on his interpretation of the
XL710 datasheet.  The actual limit for fragments with TSO and a skbuff
that has payload data in the header portion of the buffer is actually
only 7 fragments and the skb-data portion counts as 2 buffers, one for
the TSO header, and the one for a segment payload buffer.

Jacob fixes a bug where in a previous refactor of the code broke
multi-bit updates for VFs.  The problem occurs because a multi-bit
request has a non-zero length, and the PF would simply drop any
request with the upper 16 bits set.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-14 00:31:45 -04:00
Florian Fainelli
dac916f8fb net: bcmgenet: use __napi_schedule_irqoff()
bcmgenet_isr1() and bcmgenet_isr0() run in hard irq context, so
we do not need to block irqs again.
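
Minimal sketch of the difference (generic example):

    #include <linux/interrupt.h>
    #include <linux/netdevice.h>

    /* in a hard-irq handler interrupts are already off, so the _irqoff
     * variant avoids a redundant local_irq_save/restore pair.
     */
    static irqreturn_t example_isr(int irq, void *dev_id)
    {
            struct napi_struct *napi = dev_id;

            __napi_schedule_irqoff(napi);   /* was: __napi_schedule(napi) */
            return IRQ_HANDLED;
    }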

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Petri Gynther <pgynther@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-13 23:37:37 -04:00
Eric Dumazet
eb96ce01ba net: bcmgenet: use napi_complete_done()
By using napi_complete_done(), we allow fine tuning
of /sys/class/net/ethX/gro_flush_timeout for higher GRO aggregation
efficiency for a Gbit NIC.

Check commit 24d2e4a507 ("tg3: use napi_complete_done()") for details.
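
Sketch of the poll-function change (generic example):

    #include <linux/netdevice.h>

    /* reporting how much work was done lets the stack honour
     * gro_flush_timeout and delay the GRO flush for better aggregation.
     */
    static int example_poll(struct napi_struct *napi, int budget)
    {
            int work_done = 0;

            /* ... process up to budget RX packets, counting work_done ... */
            if (work_done < budget)
                    napi_complete_done(napi, work_done); /* was: napi_complete() */
            return work_done;
    }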

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Petri Gynther <pgynther@google.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Petri Gynther <pgynther@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-13 23:37:37 -04:00
Jacob Keller
f808c5dbcd fm10k: fix multi-bit VLAN update requests from VF
The VF uses a multi-bit update request to clear unused VLANs whenever it
resets. However, an accident in a previous refactor broke multi-bit
updates for VFs, due to misreading a comment in fm10k_vf.c and
attempting to reduce code duplication. The problem occurs because
a multi-bit request has a non-zero length, and the PF would simply drop
any request with the upper 16 bits set.

We can't simply remove the check of the upper 16 bits and the call to
fm10k_iov_select_vid, because this would remove the checks for default
VID and for ensuring no other VLANs can be enabled except pf_vid when it
has been set. To resolve that issue, this revision uses the
iov_select_vid when we have a single-bit update, and denies any
multi-bit update when the VLAN was administratively set by the PF. This
should be ok since the PF properly updates VLAN_TABLE when it assigns
the PF vid. This ensures that requests to add or remove the PF vid work
as expected, but a rogue VF could not use the multi-bit update as
a loophole to attempt receiving traffic on other VLANs.

Reported-by: Ngai-Mint Kwan <ngai-mint.kwan@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-13 20:06:55 -07:00
David Daney
65c66af660 net: thunderx: Fix broken of_node_put() code.
commit b7d3e3d3d2 ("net: thunderx: Don't leak phy device references
on -EPROBE_DEFER condition.") incorrectly moved the call to
of_node_put() outside of the loop.  Under normal loop exit, the node
has already had of_node_put() called, so the extra call results in:

[    8.228020] ERROR: Bad of_node_put() on /soc@0/pci@848000000000/mrml-bridge0@1,0/bgx0/xlaui00
[    8.239433] CPU: 16 PID: 608 Comm: systemd-udevd Not tainted 4.6.0-rc1-numa+ #157
[    8.247380] Hardware name: www.cavium.com EBB8800/EBB8800, BIOS 0.3 Mar  2 2016
[    8.273541] Call trace:
[    8.273550] [<fffffc0008097364>] dump_backtrace+0x0/0x210
[    8.273557] [<fffffc0008097598>] show_stack+0x24/0x2c
[    8.273560] [<fffffc0008399ed0>] dump_stack+0x8c/0xb4
[    8.273566] [<fffffc00085aa828>] of_node_release+0xa8/0xac
[    8.273570] [<fffffc000839cad8>] kobject_cleanup+0x8c/0x194
[    8.273573] [<fffffc000839c97c>] kobject_put+0x44/0x6c
[    8.273576] [<fffffc00085a9ab0>] of_node_put+0x24/0x30
[    8.273587] [<fffffc0000bd0f74>] bgx_probe+0x17c/0xcd8 [thunder_bgx]
[    8.273591] [<fffffc00083ed220>] pci_device_probe+0xa0/0x114
[    8.273596] [<fffffc0008473fbc>] driver_probe_device+0x178/0x418
[    8.273599] [<fffffc000847435c>] __driver_attach+0x100/0x118
[    8.273602] [<fffffc0008471b58>] bus_for_each_dev+0x6c/0xac
[    8.273605] [<fffffc0008473884>] driver_attach+0x30/0x38
[    8.273608] [<fffffc00084732f4>] bus_add_driver+0x1f8/0x29c
[    8.273611] [<fffffc0008475028>] driver_register+0x70/0x110
[    8.273617] [<fffffc00083ebf08>] __pci_register_driver+0x60/0x6c
[    8.273623] [<fffffc0000bf0040>] bgx_init_module+0x40/0x48 [thunder_bgx]
[    8.273626] [<fffffc0008090d04>] do_one_initcall+0xcc/0x1c0
[    8.273631] [<fffffc0008198abc>] do_init_module+0x68/0x1c8
[    8.273635] [<fffffc0008125668>] load_module+0xf44/0x11f4
[    8.273638] [<fffffc0008125b64>] SyS_finit_module+0xb8/0xe0
[    8.273641] [<fffffc0008093b30>] el0_svc_naked+0x24/0x28

Go back to the previous (correct) code that only did the extra
of_node_put() call on early exit from the loop.
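
The underlying rule, illustrated generically (not the bgx code itself):
the child-node iterator drops the reference to the previous node on each
step, so an explicit of_node_put() is only correct when leaving the loop
early.

    #include <linux/of.h>

    static int example_count_children(struct device_node *parent, int max)
    {
            struct device_node *child;
            int n = 0;

            for_each_child_of_node(parent, child) {
                    if (++n >= max) {
                            of_node_put(child);  /* early exit: drop our ref */
                            break;
                    }
            }
            return n;   /* normal exit: child is NULL, nothing to put */
    }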

Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-13 23:06:31 -04:00
Alexander Duyck
3f3f7cb875 i40e/i40evf: Limit TSO to 7 descriptors for payload instead of 8 per packet
This patch addresses a bug introduced based on my interpretation of the
XL710 datasheet.  Specifically section 8.4.1 states that "A single transmit
packet may span up to 8 buffers (up to 8 data descriptors per packet
including both the header and payload buffers)."  It then later goes on to
say that each segment for a TSO obeys the previous rule, however it then
refers to TSO header and the segment payload buffers.

I believe the actual limit for fragments with TSO and a skbuff that has
payload data in the header portion of the buffer is actually only 7
fragments as the skb->data portion counts as 2 buffers, one for the TSO
header, and one for a segment payload buffer.

Fixes: 2d37490b82 ("i40e/i40evf: Rewrite logic for 8 descriptor per packet check")
Reported-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-13 19:59:23 -07:00
Denys Vlasenko
ea019649c3 drivers/net/ethernet/jme.c: Deinline jme_reset_mac_processor, save 2816 bytes
This function compiles to 895 bytes of machine code.

Clearly, this isn't a time-critical function.
For one, it has a number of udelay(1) calls.

Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
CC: David S. Miller <davem@davemloft.net>
CC: linux-kernel@vger.kernel.org
CC: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-13 22:57:00 -04:00
Wolfram Sang
a6d37131c0 net: ethernet: renesas: ravb_main: test clock rate to avoid division by 0
The clk API may return 0 on clk_get_rate, so we should check the result before
using it as a divisor.
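
Sketch of the guard (the computation is illustrative):

    #include <linux/clk.h>
    #include <linux/math64.h>

    /* clk_get_rate() may legitimately return 0, so check before using the
     * rate as a divisor.
     */
    static u64 example_scale_by_rate(struct clk *clk, u64 value)
    {
            unsigned long rate = clk_get_rate(clk);

            if (!rate)
                    return 0;       /* or report an error to the caller */
            return div64_u64(value, rate);
    }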

Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Acked-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-13 22:36:28 -04:00
Alexandre TORGUE
18b46810eb net: ethernet: stmmac: GMAC4.xx: Fix TX descriptor preparation
On GMAC4.xx each descriptor contains 2 buffers of 16KB each.
Initially, those 2 buffers were filled in dwmac4_rd_prepare_tx_desc,
but that is actually not needed. Indeed, the stmmac driver supports
frames up to 9000 bytes (jumbo), so only one buffer is needed.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-13 22:35:17 -04:00
John Crispin
369f04531f net: mediatek: do not set the QID field in the TX DMA descriptors
The QID field gets set to the mac id. This made the DMA linked list
queue the traffic of each MAC on a different internal queue. However,
during long-term testing we found that this causes traffic stalls, as
the multi-queue setup requires a more complete initialisation which is
not part of the upstream driver yet.

This patch removes the code setting the QID field, resulting in all
traffic ending up in queue 0, which works without any special setup.

Signed-off-by: John Crispin <blogic@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-12 22:41:33 -04:00
John Crispin
7c78b4ad9b net: mediatek: move the pending_work struct to the device generic struct
The worker always touches both netdevs. It is ethernet-core specific,
not MAC specific. We only need one worker, which belongs in the
ethernet core struct.

Signed-off-by: John Crispin <blogic@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-12 22:41:33 -04:00
John Crispin
e7d425dcea net: mediatek: fix mtk_pending_work
The driver supports 2 MACs. Both run on the same DMA ring. If we hit a TX
timeout we need to stop both netdevs before restarting them again. If we
don't do this, mtk_stop() won't shut down DMA and the subsequent call to
mtk_open() won't restart DMA and enable IRQs.

Signed-off-by: John Crispin <blogic@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-12 22:41:32 -04:00
John Crispin
34c2e4c9e9 net: mediatek: fix TX locking
Inside the TX path there is a lock inside the tx_map function. This is,
however, too late. The patch moves the lock to the start of the xmit
function, right before the free count check of the DMA ring happens.
If we do not do this, the code becomes racy, leading to TX stalls and
dropped packets. This happens because there are 2 netdevs running on
the same physical DMA ring.

Signed-off-by: John Crispin <blogic@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-12 22:41:32 -04:00
John Crispin
13c822f6d4 net: mediatek: fix stop and wakeup of queue
The driver supports 2 MACs. Both run on the same DMA ring. If we go
above/below the TX ring's threshold value, we always need to wake/stop
the queue of both devices. Not doing so can cause TX stalls and packet
drops on one of the devices.

Signed-off-by: John Crispin <blogic@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-12 22:41:32 -04:00
John Crispin
13439eec7a net: mediatek: remove superfluous reset call
HW reset is triggered in the mtk_hw_init() function. There is no need to
also reset the core during probe.

Signed-off-by: John Crispin <blogic@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-12 22:41:32 -04:00
John Crispin
beeb4ca466 net: mediatek: mtk_cal_txd_req() returns bad value
The code used to also support the PDMA engine, which had 2 packet pointers
per descriptor. Because of this we had to divide the result by 2 and round
it up. This is no longer needed as the code only supports QDMA.

Signed-off-by: John Crispin <blogic@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-12 22:41:32 -04:00
John Crispin
82500aa01a net: mediatek: watchdog_timeo was not set
The original commit failed to set watchdog_timeo. This patch sets
watchdog_timeo to HZ.

Signed-off-by: John Crispin <blogic@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-12 22:41:32 -04:00
Grygorii Strashko
71a2cbb72a drivers: net: cpsw: drop host_port field from struct cpsw_priv
The host_port field is constantly assigned to 0 and this value has
never changed (since the time the cpsw driver was introduced). Moreover,
if this field were assigned a non-zero value it would break current
driver functionality.

Hence, there is no reason to continue maintaining this host_port
field and it can be removed, and the HOST_PORT_NUM and ALE_PORT_HOST
defines can be used instead.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-11 15:11:59 -04:00
Grygorii Strashko
61f1cef90a drivers: net: cpsw: fix port_mask parameters in ale calls
ALE APIs expect to receive port masks as input values for the arguments
port_mask, untag, reg_mcast and unreg_mcast. But there are a few places
in the code where port masks are passed left-shifted by
cpsw_priv->host_port, like below:

 cpsw_ale_add_vlan(priv->ale, priv->data.default_vlan,
		  ALE_ALL_PORTS << priv->host_port,
		  ALE_ALL_PORTS << priv->host_port, 0, 0);

and cpsw still works just because priv->host_port == 0
and has never been changed.

Hence, fix port_mask parameters in ALE APIs calls and drop
"<< priv->host_port" from all places where it's used to
shift valid port mask.
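
For illustration, the call shown above then becomes:

 cpsw_ale_add_vlan(priv->ale, priv->data.default_vlan,
		  ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0);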

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-11 15:11:59 -04:00
Michael Chan
8cbde1175e bnxt_en: Add async event handling for speed config changes.
On some dual port cards, link speeds on both ports have to be compatible.
Firmware will inform the driver when a certain speed is no longer
supported if the other port has linked up at a certain speed.  Add
logic to handle this event by logging a message and getting the
updated list of supported speeds.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-11 14:58:45 -04:00
Michael Chan
84c33dd342 bnxt_en: Call firmware to approve VF MAC address change.
Some hypervisors (e.g. ESX) require the VF MAC address to be forwarded to
the PF for approval.  In Linux PF, the call is not forwarded and the
firmware will simply check and approve the MAC address if the PF has not
previously administered a valid MAC address for this VF.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-11 14:58:45 -04:00
Michael Chan
33f7d55f07 bnxt_en: Shutdown link when device is closed.
Let firmware know that the driver is giving up control of the link so that
it can be shutdown if no management firmware is running.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-11 14:58:44 -04:00
Michael Chan
03efbec031 bnxt_en: Disallow forced speed for 10GBaseT devices.
10GBaseT devices must autonegotiate to determine master/slave clocking.
Disallow forced speed in ethtool .set_settings() for these devices.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-11 14:58:44 -04:00
Hariprasad Shenai
ebf4dc2b1b cxgb4: Stop Rx Queues before freeing it up
Stop all Ethernet RX Queues before freeing up various Ingress/Egress
Queues, etc. We were seeing cases of Ingress Queues not getting serviced
during the shutdown process, leading to Ingress Paths jamming up through
the chip and blocking the shutdown effort itself.

One such case involved the Firmware sending a "Flush Token" through the
ULP-TX -> ULP-RX path for an Ethernet TX Queue being freed in order to
make sure there weren't any remaining TX Work Requests in the pipeline.
But the return path was stalled by Ingress Data unable to be delivered to
the Host because those Ingress Queues were no longer being serviced.

Based on original work by Casey Leedom <leedom@chelsio.com>

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-11 12:21:39 -04:00
Phil Reid
734e00fa02 net: stmmac: socfpga: Ensure emac bit set in System Manager for PTP
When using the PTP FPGA-to-HPS clock source for the stmmac module,
the appropriate bit in the System Manager FPGA Interface Group register
needs to be set. This is not set by the bootloader setup when the
HPS emac pins are being used for this emac module.

This allows the PTP clock to be sourced from the FPGA and also connects
the PTP pps and ext trig signals to the stmmac PTP hardware.

Patch proposed by Phil Collins.

Signed-off-by: Phil Reid <preid@electromag.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-10 23:44:21 -04:00
Sergei Shtylyov
49dd48dafe sh_eth: re-enable E-MAC interrupts in sh_eth_set_ringparam()
The E-MAC interrupts are left disabled when the ring parameters are changed
via 'ethtool'. In order to fix this, it's enough to call sh_eth_dev_init()
with 'true' instead of 'false' for the second argument (which conveniently
allows us to remove the following code re-enabling E-DMAC interrupts and
reception).

Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-09 21:27:04 -04:00
David S. Miller
ae95d71261 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net 2016-04-09 17:41:41 -04:00
John Allen
498cd8e495 ibmvnic: Enable use of multiple tx/rx scrqs
Enables the use of multiple transmit and receive scrqs allowing the ibmvnic
driver to take advantage of multiqueue functionality. To achieve this, the
driver must implement the process of negotiating the maximum number of
queues allowed by the server. Initially, the driver will attempt to login
with the maximum number of tx and rx queues supported by the server. If
the server fails to allocate the requested number of scrqs, it will return
partial success in the login response. In this case, we must reinitiate
the login process from the request capabilities stage and attempt to login
requesting fewer scrqs.

Signed-off-by: John Allen <jallen@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-09 00:02:41 -04:00
Jiri Pirko
9efc8f655c mlxsw: reg: Fix SBPM register name
Fix a copy&paste error and state the name of the SBPM register correctly.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:38:43 -04:00
Jiri Pirko
497e8592c6 mlxsw: reg: Share direction enum between SBPR, SBCM, SBPM
Same field, same values, so share the same enum.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:38:43 -04:00
Jiri Pirko
b2f10571b9 mlxsw: Do not pass around driver_priv directly
Instead of that, pass mlxsw_core and use a helper to get driver priv
from driver code. Looks much cleaner that way.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:38:42 -04:00
Jiri Pirko
307c2431ab mlxsw: Pass mlxsw_core as a param of mlxsw_core_skb_transmit*
Instead of passing around driver priv, pass struct mlxsw_core *
directly.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:38:42 -04:00
Jiri Pirko
932762b69a mlxsw: Move devlink port registration into common core code
Remove devlink port reg/unreg from spectrum and switchx2 code and rather
do the common work in core. That also ensures code separation where
devlink is only used in core.c.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:38:42 -04:00
Jakub Kicinski
cc7c033330 nfp: allow ring size reconfiguration at runtime
Since most of the required changes have already been made for
changing the MTU at runtime, let's reuse them for ring size changes
as well.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:26:06 -04:00
Jakub Kicinski
a98cb25812 nfp: pass ring count as function parameter
Soon ring resize will call these functions with values different
from the current configuration, so we need to explicitly pass the
ring count as a parameter.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:26:06 -04:00
Jakub Kicinski
36a857e4f2 nfp: convert .ndo_change_mtu() to prepare/commit paradigm
When changing the MTU on a running device, first allocate new rings
and buffers and, once that succeeds, proceed with changing the MTU.

Allocation of new rings is not really necessary for this
operation - it's done to keep the code simple and because
the size of the extra ring memory is quite small compared to
the size of the buffers.

The operation can still fail midway through if FW communication
times out.  In that case we retry with the old MTU (rings).
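
In pseudo-form the prepare/commit flow is (sketch with hypothetical
helper names, not the driver's actual functions):

    static int example_change_mtu(struct net_device *dev, int new_mtu)
    {
            void *new_rings;

            /* prepare: allocate everything that can fail up front */
            new_rings = prepare_rings(dev, new_mtu);     /* hypothetical */
            if (!new_rings)
                    return -ENOMEM;        /* old configuration untouched */

            /* commit: from here on we only swap state; if FW communication
             * times out midway, retry the commit with the old MTU/rings.
             */
            dev->mtu = new_mtu;
            commit_rings(dev, new_rings);                /* hypothetical */
            return 0;
    }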

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:26:06 -04:00
Jakub Kicinski
30d2117191 nfp: propagate list buffer size in struct rx_ring
The free list buffer size needs to be propagated to a few functions
as a parameter and added to struct nfp_net_rx_ring, since soon
some of the functions will be reused to manage rings with
buffers of a size different from nn->fl_bufsz.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:26:05 -04:00
Jakub Kicinski
aba52df80b nfp: sync ring state during FW reconfiguration
FW reconfiguration in .ndo_open()/.ndo_stop() should reset/
restore queue state.  Since we need IRQs to be disabled when
filling rings on RX path we have to move disable_irq() from
.ndo_open() all the way up to IRQ allocation.

nfp_net_start_vec() becomes trivial now so it's inlined.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:26:05 -04:00
Jakub Kicinski
1cd0cfc498 nfp: slice .ndo_open() and .ndo_stop() up
Divide .ndo_open() and .ndo_stop() into logical, callable
chunks.  No functional changes.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:26:05 -04:00
Jakub Kicinski
ca40feab8f nfp: move filling ring information to FW config
nfp_net_[rt]x_ring_{alloc,free} should only allocate or free
ring resources without touching the device.  Move setting
parameters in the BAR to separate functions.  This will make
it possible to reuse alloc/free functions to allocate new
rings while the device is running.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:26:05 -04:00
Jakub Kicinski
114bdef0be nfp: preallocate RX buffers early in .ndo_open
We want .ndo_open() to have the following structure:
 - allocate resources;
 - configure HW/FW;
 - enable the device from the stack's perspective.
Therefore filling RX rings needs to be moved to the beginning
of .ndo_open().

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:26:05 -04:00
Jakub Kicinski
1934680f55 nfp: reorganize initial filling of RX rings
Separate allocation of buffers from giving them to the FW;
thanks to this it will be possible to move allocation
earlier on the .ndo_open() path and reuse buffers during
runtime reconfiguration.

Similar to the TX side, clean up the spill of functionality
from flush into freeing the ring.  Unlike on the TX side,
RX ring reset does not free buffers from the ring.
Ring reset means only that the FW pointers are zeroed and
buffers on the ring must be placed in positions [0, cnt - 1).

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:26:04 -04:00
Jakub Kicinski
827deea9bc nfp: cleanup tx ring flush and rename to reset
Since we never used flush without freeing the ring later,
the functionality of the two operations is mixed.
Rename flush to ring reset and move into it all the things
which have to be done after the FW ring state is cleared.
While at it, do some clean-ups.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:26:04 -04:00
Jakub Kicinski
73725d9dfd nfp: allocate ring SW structs dynamically
To be able to switch rings more easily on config changes
allocate them dynamically, separately from nfp_net structure.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:26:04 -04:00
Jakub Kicinski
d79737c25e nfp: make *x_ring_init do all the init
The nfp_net_[rt]x_ring_init functions used to be called from the
probe path only and some of their functionality was spilled into
the call site.  In order to reuse them for ring reconfiguration
we need them to do all the init.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:26:04 -04:00
Jakub Kicinski
0afbfb183b nfp: break up nfp_net_{alloc|free}_rings
nfp_net_{alloc|free}_rings contained a strange mix of allocations
and vector initialization.  Remove it, declare vector init as
a separate function and handle allocations explicitly.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:26:04 -04:00
Jakub Kicinski
0ba40af963 nfp: move link state interrupt request/free calls
We need to be able to disable the link state interrupt when
the device is brought down.  We used to just free the IRQ
at the beginning of .ndo_stop().  As we now move towards
more ordered .ndo_open()/.ndo_stop() paths, LSC allocation
should be placed in the "allocate resources" section.

Since the IRQ can't be freed early in .ndo_stop(), it is
disabled instead.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:26:03 -04:00
Jakub Kicinski
ff1b68ab2d nfp: correct RX buffer length calculation
When calculating the RX buffer length we need to account
for up to 2 VLAN tags.  Rounding up to 1k is a relic of
a distant past and can be removed.  While at it also remove
a trivial print statement.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-08 15:26:03 -04:00
Mark Rustad
10ef00fe53 ixgbe: Bump version number
Update ixgbe version number.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 17:17:32 -07:00
Mark Rustad
f572b2c4c8 ixgbe: Add KR backplane support for x550em_a
Add support for x550em_a-based KR backplane devices.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 17:14:06 -07:00
Mark Rustad
200157c2e3 ixgbe: Add support for SGMII backplane interface
Add support for an SGMII backplane interface.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 17:10:29 -07:00
Mark Rustad
2d40cd1720 ixgbe: Add support for SFPs with retimer
Add support for SFPs with an external retimer.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 17:06:54 -07:00
Mark Rustad
e84db72727 ixgbe: Introduce function to control MDIO speed
Move code that controls MDIO speed into a new function because
there will be more MACs that need the control.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 16:57:45 -07:00
Mark Rustad
537cc5df4f ixgbe: Read and parse NW_MNG_IF_SEL register
Read the IXGBE_NW_MNG_IF_SEL register and use it to set interface
attributes.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 16:36:47 -07:00
Mark Rustad
c898fe2804 ixgbe: Read and set instance id
Read the instance number from EEPROM and save it for later use.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 16:33:11 -07:00
Mark Rustad
d31afc8f5c ixgbe: Use new methods for PHY access
Now x550em_a devices will use a new method for PHY access that will
get the firmware token for each access.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 16:23:58 -07:00
Mark Rustad
49425dfc74 ixgbe: Add support for x550em_a 10G MAC type
Add support for x550em_a 10G MAC type to the ixgbe driver. The new
MAC includes new firmware commands that need to be used to control
PHY and IOSF access, so that support is also added. The interface
supported is a native SFP+ interface.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 16:15:35 -07:00
Mark Rustad
9a5c27e6ef ixgbe: Use method pointer to access IOSF devices
Provide method pointers and use them to access IOSF-attached
devices. A new MAC will introduce a new access method.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 16:08:08 -07:00
Mark Rustad
207969b94c ixgbe: Add definitions for x550em_a 10G MAC
Add definitions for a x550em_a 10G MAC device with a native SFP
interface.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 16:04:29 -07:00
Mark Rustad
a711ad89a8 ixgbe: Add support for single-port X550 device
Add support for a single-port X550 device.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 15:58:52 -07:00
Alexander Duyck
8220bbc12d ixgbe/ixgbevf: Add support for bulk free in Tx cleanup & cleanup boolean logic
This patch enables bulk free in Tx cleanup for ixgbevf and cleans up the
boolean logic in the polling routines for ixgbe and ixgbevf in the hopes of
avoiding any mix-ups similar to what occurred with i40e and i40evf.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 15:55:25 -07:00
Mark Rustad
af74190176 ixgbe: Take manageability semaphore for firmware commands
We need to take the manageability semaphore when issuing firmware
commands to avoid problems. With this in place, the semaphore is
no longer taken in the ixgbe_set_fw_drv_ver_generic function, since
it will now always be taken by the ixgbe_host_interface_command
function.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 15:51:27 -07:00
Mark Rustad
5cffde309c ixgbe: Clean up interface for firmware commands
Clean up the interface for issuing firmware commands to use a
void * instead of a u32 *. This eliminates a number of casts.
Also clean up ixgbe_host_interface_command in a few other ways,
eliminating comparisons with 0, redundant parens and minor
formatting issues.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 15:46:11 -07:00
Mark Rustad
73457165d7 ixgbe: Correct length check for round up
The function ixgbe_host_interface_command actually uses a buffer
that is a multiple of the word size to do its business, but only
checks against the actual length passed in.  This means that on
read operations it could be possible to modify locations beyond
the length passed in.  Change the check to round up in the same
way, just to avoid any possible hazard.
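
A minimal sketch of the check described above, with hypothetical names
(the real code operates on the driver's own buffer and length limits):

    /* Illustrative only: validate the rounded-up length, since the
     * command buffer is consumed in whole 32-bit words.
     */
    static int hic_length_ok(unsigned int length, unsigned int max_len)
    {
            unsigned int rounded = (length + 3) & ~3u; /* round up to a word multiple */

            return rounded != 0 && rounded <= max_len;
    }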

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 15:30:16 -07:00
Mark Rustad
3775b814d5 ixgbe: Change the lan_id and func fields to a u8 to avoid casts
Since the lan_id and func fields only ever hold small values, make
them u8 to avoid casts used to silence warnings.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 15:26:31 -07:00
Mark Rustad
832ac59214 ixgbe: Delete some unused register definitions
I noticed the SRAMREL registers are not referenced for any device,
so delete the definitions.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-07 15:17:52 -07:00
David S. Miller
94ab1ea94c Merge branch '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:

====================
1GbE Intel Wired LAN Driver Updates 2016-04-06

This series contains updates to e1000, e1000e, igb and Kconfig.

Alex fixes igb where we were casting the MAC address as __beXX and then
passing it into le32_to_cpu, when we could simply cast as __lexx to
maintain consistency since it is already little endian.  Then enabled
bulk free in transmit cleanup for igb.

John Holland enables igb to pickup the MAC address from a device tree
blob when CONFIG_OF has been enabled.

Doron Shikmoni fixes a bug in the output of "ethtool -m ethX" where
the data byte appeared duplicated.

Stefan fixes up e1000 and e1000e ethtool offline tests which were calling
dev_close(), which causes IFF_UP to be cleared and removes the interface
routes and some addresses, so use ndo_stop() instead.

Jiri Benc cleans up some old links in the Kconfig for Intel drivers where
we referred to a URL which is no longer valid.  I am so glad Jiri has the
time in his day to spend clicking on and testing all the URL links in
the kernel.

Arika Chen reverts the addition of an 'rtnl_unlock()' which had no matching
'rtnl_lock()' call before it.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-07 11:50:30 -04:00
Arika Chen
d99e366fc9 Revert "igb: Fix a deadlock in igb_sriov_reinit"
This reverts commit 3eb14ea8d9 ("igb: Fix a deadlock in
igb_sriov_reinit").
It is the same as commit f468adc944 ("igb: missing rtnl_unlock in
igb_sriov_reinit()").
There is no rtnl_lock() in igb_resume before, so rtnl_unlock() will
cause a deadlock.

Signed-off-by: Arika Chen <arika.chen@huawei.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 21:02:11 -07:00
Jiri Benc
5bd0c0202a net: intel: remove dead links
The Kconfig for Intel NICs references two different URLs for the "Adapter
& Driver ID Guide". Neither of those two links works. The current URL seems
to be
http://www.intel.com/content/www/us/en/support/network-and-i-o/ethernet-products/000005584.html
but given it's apparently constantly changing, there's no point in having it
in the help text.

Just keep a generic pointer to http://support.intel.com. Hopefully, this one
will have a longer life. It still works, at least.

Furthermore, remove a link to "the latest Intel PRO/100 network driver for
Linux", this has no place in the mainline kernel and the latest Linux driver
it offers is from 2006, anyway.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 20:54:45 -07:00
Mitch Williams
ba6cc7f6f1 i40evf: properly handle VLAN features
Correctly set the VLAN feature flags after setting the rest of the
netdev flags. And don't set them in hw_features, because these can't be
controlled by the VF driver.

Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 20:27:26 -07:00
Harshitha Ramamurthy
47c46778e1 i40e/i40evf: Bump patch from 1.5.2 to 1.5.5
Signed-off-by: Harshitha Ramamurthy <harshitha.ramamurthy@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 19:12:48 -07:00
Kiran Patil
17a035be95 i40e: Input set mask constants for RSS, flow director, and flex bytes
Add defines for input set mask (RSS, flow director, flexible payload),
including defines specific to IPv6.

Change-ID: Ie95ef7d0916a4d6ca011c194283f959774c8dce9
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 19:09:13 -07:00
Shannon Nelson
bab2fb60dc i40e: Move NVM event wait check to NVM code
The logic that checks AQ events for NVM done events is better kept
in nvm.c with the rest of the nvmupdate handling code.

Change-ID: I2ea58980df8ecaa3726b28a37bff3dfcb8df03dc
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 19:00:42 -07:00
Mitch Williams
585954f8b8 i40e: Add RSS configuration to virtual channel
Add opcodes and structures to support RSS configuration by PF driver on
behalf of the VF drivers. This reduces complexity in the VF driver and
allows us to support future hardware designs without modifying the VF
driver.

Change-ID: I8c75765c630eacb71f95967f1109a198542593ac
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 18:55:18 -07:00
Shannon Nelson
437f82a229 i40e: Move NVM variable out of AQ struct
The NVM update status info should stay collected together, not
spread across different structs.

Change-ID: Ic16f9e9fd79945d865bb7226184c889884585025
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 18:45:11 -07:00
Shannon Nelson
14c5f5d264 i40e: Restrict VF poll mode to only single function mode devices
The VFs can request their queues to be set up into polling mode, rather
than interrupt mode, which works well for supporting things like DPDK,
but this should not be available when working in a multi-function
support device.

Change-ID: Id36792e4e7422db8f2033336507211f68f14ff6f
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 18:38:55 -07:00
Anjali Singhai Jain
c3bbbd2002 i40e: Patch to support trusted VF
This patch adds a hook to support changing a VF from not-trusted
to trusted and vice-versa. Fixed the wrappers and function prototype.
Changed the dmesg to reflect the current state better. This patch also
disables turning on/off trusted VF in MFP mode.

Change-ID: Ibcd910935c01f0be1f3fdd6d427230291ee92ebe
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 18:31:06 -07:00
Jesse Brandeburg
1f15d66712 i40e/i40evf: Faster RX via avoiding FCoE
As it turns out, calling into other files from the hot path hurts
performance a lot.  In this case the majority of the time we
call "check FCoE" and the packet is *not* FCoE, but this call
was taking 5% of our total cycles spent on receive.

Change-ID: I080552c26e7060bc7b78504dc2763f6f0b3d8c76
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 18:26:23 -07:00
Jesse Brandeburg
84b079928a i40e/i40evf: Drop unused tx_ring argument
Some of the tx_ring arguments can be deleted since they are not used.

Change-ID: I99275b0f191d7f63ec2f05061919904940c36f31
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 18:15:17 -07:00
Jesse Brandeburg
d1bd743b5b i40e/i40evf: Move stack var deeper
A local variable could be moved down into the context where it is used.

Change-ID: I9caba9e1eacf921037077f2665cbce83fd8e95d6
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 18:10:57 -07:00
Akeem G Abodunrin
30728c5bdf i40e: Move HW flush
This patch moves the HW flush routine to the end of the reset flow,
after the completion of writing to the device VFLR registers - the
benefit is to avoid problems in the passthrough routines.

Change-ID: Ieb56866f21895e6c1fc514b7328c3df79807a57c
Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 18:06:22 -07:00
Shannon Nelson
89dd05512b i40e: Leave debug_mask cleared at init
Don't set our internal debug_mask at startup unless we get a specific
signal to do so from the debug module parameter.

This should take care of the issue with all the device capabilities getting
printed even when we hadn't asked for the debug info.

Change-ID: I7fbc6bd8b11ed9b0631ec018ff36015a04100b6c
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 18:02:43 -07:00
Deepthi Kavalur
453e16e8e8 i40e: Inserting a HW capability display info
Display MSIx vector count for HW capabilities.

Change-ID: I4b41e9b50360cf660e7fbcb85b9390fedcf313b1
Signed-off-by: Deepthi Kavalur <deepthi.kavalur@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 17:54:14 -07:00
Stefan Assmann
1f2f83f838 e1000: call ndo_stop() instead of dev_close() when running offline selftest
Calling dev_close() causes IFF_UP to be cleared which will remove the
interface's routes and some addresses. That's probably not what the user
intended when running the offline selftest. Besides, this does not happen
if the interface is brought down before the test, so the current
behaviour is inconsistent.
Instead call the net_device_ops ndo_stop function directly and avoid
touching IFF_UP at all.

Signed-off-by: Stefan Assmann <sassmann@kpanic.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 14:47:55 -07:00
Ido Schimmel
d81a6bdb87 mlxsw: spectrum: Add IEEE 802.1Qbb PFC support
Implement the appropriate DCB ops and allow a user to configure certain
traffic classes as lossless.

The operation configures PFC for both the egress (respecting PFC frames)
and ingress (sending PFC frames) parts of the port.

At egress, when a PFC frame is received for a PFC-enabled priority,
all the priorities mapped to the same TC are stopped.

At ingress, the priority group (PG) buffers to which the enabled PFC
priorities are mapped are configured to be lossless. PFC frames will be
transmitted when the Xoff threshold is crossed.

The user-supplied delay parameter is used to determine the PG's size
according to the following formula:

PG_SIZE = PG_SIZE_LOSSY + delay * CELL_FACTOR + MTU

In the worst case scenario the delay will be made up of packets that
are all of size CELL_SIZE + 1, which means each packet will require
almost twice its true size when buffered in the switch. We therefore
multiply this value by the "cell factor", which is close to 2.

Another MTU is added in case the transmitting host already started
transmitting a maximum length frame when the PFC packet was received.
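
As a minimal sketch of the sizing rule above (illustrative names only,
not the driver's actual identifiers):

    /* Illustrative only: lossless PG size, in bytes, per the formula above. */
    static unsigned int pfc_pg_size(unsigned int pg_size_lossy,
                                    unsigned int delay, unsigned int mtu)
    {
            const unsigned int cell_factor = 2; /* worst-case ~2x inflation */

            return pg_size_lossy + delay * cell_factor + mtu;
    }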

As with PAUSE enabled ports, when the port's MTU is changed both the
PGs' size and threshold are adjusted accordingly.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:20 -04:00
Ido Schimmel
34dba0a59d mlxsw: reg: Introduce per priority counters
We are going to add support for PFC as part of DCB ops, which requires us
to report the number of PFC frames sent and received per priority.

Add per priority counters in order to report number of PFC frames sent
and received per priority.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:20 -04:00
Ido Schimmel
9f7ec052b7 mlxsw: spectrum: Add support for PAUSE frames
When a packet ingresses the switch it's placed in its assigned priority
group (PG) buffer in the port's headroom buffer while it goes through
the switch's pipeline. After going through the pipeline - which
determines its egress port(s) and traffic class - it's moved to the
switch's shared buffer awaiting transmission.

However, some packets are not eligible to enter the shared buffer due to
exceeded quotas or insufficient space. Marking their associated PGs as
lossless will cause the packets to accumulate in the PG buffer. Another
reason for packet accumulation is complicated pipelines (e.g.
involving a lot of ACLs).

To prevent packets from being dropped a user can enable PAUSE frames on
the port. This will mark all the active PGs as lossless and set their
size according to the maximum delay, as it's not configured by the user.

                         +----------------+   +
                         |                |   |
                         |                |   |
                         |                |   |
                         |                |   |
                         |                |   |
                         |                |   | Delay
                         |                |   |
                         |                |   |
                         |                |   |
                         |                |   |
                         |                |   |
    Xon/Xoff threshold   +----------------+   +
                         |                |   |
                         |                |   | 2 * MTU
                         |                |   |
                         +----------------+   +

The delay (612 [Cells]) was calculated according to worst-case scenario
involving maximum MTU and 100m cables.

After marking the PGs as lossless the device is configured to respect
incoming PAUSE frames (Rx PAUSE) and generate PAUSE frames (Tx PAUSE)
according to user's settings.

Whenever the port's headroom configuration changes we take into account
the PAUSE configuration, so that we correctly set the PG's type (lossy /
lossless), size and threshold. This can happen when:

a) The port's MTU changes, as it directly affects the PG's size.

b) A PG is created following user configuration, by binding a priority
to it.

Note that the relevant SUPPORTED flags were already mistakenly set by
the driver before this commit.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:19 -04:00
Ido Schimmel
155f9de2e0 mlxsw: reg: Add lossless settings for PBMC register
When configuring PAUSE frames and PFC we'll need to configure the
Xon/Xoff threshold for the priority group (PG) buffers.

Add the Xon/Xoff threshold fields to the PBMC register so that we can
configure these when needed.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:19 -04:00
Ido Schimmel
6f253d8381 mlxsw: reg: Add Port Flow Control Configuration register
Add the Port Flow Control Configuration (PFCC) register, which
configures both flow control and Priority-based Flow Control (PFC).

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:19 -04:00
Ido Schimmel
cc7cf51758 mlxsw: spectrum: Allow setting maximum rate for a TC
Allow a user to set maximum rate for a particular TC using DCB ops.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:19 -04:00
Ido Schimmel
8e8dfe9fdf mlxsw: spectrum: Add IEEE 802.1Qaz ETS support
Implement the appropriate DCB ops and allow a user to configure:
	* Priority to traffic class (TC) mapping with a total of 8
	  supported TCs
	* Transmission selection algorithm (TSA) for each TC and the
	  corresponding weights in case of weighted round robin (WRR)

As previously explained, we treat the priority group (PG) buffer in the
port's headroom as the ingress counterpart of the egress TC. Therefore,
when a certain priority to TC mapping is configured, we also configure
the port's headroom buffer.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:18 -04:00
Ido Schimmel
f00817df2b mlxsw: spectrum: Introduce support for Data Center Bridging (DCB)
Introduce basic infrastructure for DCB and add the missing ops in
following patches.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:18 -04:00
Ido Schimmel
90183b980d mlxsw: spectrum: Initialize egress scheduling
Before introducing support for DCB ops we should first make sure we
initialize the relevant parts in the device correctly. Specifically, the
egress scheduling.

The device supports a superset of the 802.1Qaz standard with 4 hierarchy
levels that can be linked to each other in multiple ways and with
different transmission selection algorithms (TSA) employed between them.

However, since we only intend to support the 802.1Qaz standard we
flatten the hierarchies and let the user configure via DCB ops the TSA
and max rate shaper at the subgroup hierarchy (see figure below) and the
mapping from switch priority to traffic class. By default, all switch
priorities are mapped to traffic class 0, strict priority is employed
and max shaper is disabled.

Default configuration:

         switch priority 0      ...         switch priority 7
                 +                                  +
                 |                                  |
                 +----------------------------------+
                 |
              +--v--+                          +-----+
Traffic Class |     |                          |     |
  Hierarchy   | TC0 |           ...            | TC7 |
              |     |                          |     |
              +--+--+                          +--+--+
                 |                                |
              +--v--+                          +--v--+
  Subgroup    | SG0 |                          | SG7 |
  Hierarchy   |     |                          |     |
              +-----+                          +-----+
              | TSA |                          | TSA |
              +-----+           ...            +-----+
              | MAX |                          | MAX |
              +--+--+                          +--+--+
                 |                                |
                 +---------------+----------------+
                                 |
                              +--v--+
                      Group   |     |
                    Hierarchy | GR0 |
                              |     |
                              +--+--+
                                 |
                              +--v--+
                      Port    |     |
                    Hierarchy | PR0 |
                              |     |
                              +-----+

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:18 -04:00
Ido Schimmel
2c63a555e8 mlxsw: reg: Add QoS Switch Traffic Class Table register
As part of DCB ops we'll have to configure the priority to traffic class
mapping of a port.

Add the QoS Switch Traffic Class Table (QTCT) register, which configures
the mapping between the packet switch priority and traffic class on the
transmit port.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:18 -04:00
Ido Schimmel
b9b7cee405 mlxsw: reg: Add QoS ETS Element Configuration register
We are going to introduce support for DCB, so we need to be able to
configure the traffic selection algorithm (TSA) used by each traffic
class (TC), as well as the bandwidth percentage allocated to each TC in
case of ETS.

Add the QoS ETS Element Configuration register, which controls the
above parameters.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:17 -04:00
Ido Schimmel
d6b7c13b01 mlxsw: spectrum: Set port's shared buffer size to 0
In addition to the priority group (PG) buffers in the headroom, the
device enables the allocation of headroom shared buffer, which can
be shared between different PGs.

However, we are not going to use the headroom shared buffer and instead
allow the user to use its size for PGs or the switch's shared buffer.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:17 -04:00
Ido Schimmel
7ad7cd6113 mlxsw: reg: Use correct PBMC register length
The last field of the PBMC register is at offset 0x64 and its size is
0x8, so the correct register length is 0x6C bytes.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:17 -04:00
Ido Schimmel
ff6551ec0c mlxsw: spectrum: Correctly configure headroom size
When packets ingress the switch they are assigned a switch priority and
directed to the corresponding priority group (PG) buffer in the port's
headroom buffer.

Since we now map all switch priorities to priority group 0 (PG0) by
default, there is no need to allocate the other priority groups during
initialization. The only exception is PG9, which is used for control
traffic.

At minimum, the PG should be able to store the currently classified
packet (pipeline latency isn't 0) and also the packets arriving during
the classification time. However, an incoming packet will not be
buffered if there is no available MTU-sized buffer space for storing it.

The buffer needed to accommodate pipeline latency is variable and
needs to take into account both the current link speed and current
latency of the pipeline, which is time-dependent. Testing showed that
setting the PG's size to twice the current MTU is optimal.

Since PG9 is used strictly for control packets and not subject to flow
control, we are not going to resize it according to user configuration,
so we simply set it according to the worst case scenario, which is twice the
maximum MTU.

In any case, later patches in the series will allow a user to direct
lossless flows to PGs other than PG0 and set their size to accommodate
the round-trip propagation delay.

The above change also requires us to resize the PG buffer whenever the
port's MTU is changed.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:17 -04:00
Ido Schimmel
1a1984490f mlxsw: spectrum: Add bytes to cells helper
Buffers in the switch store packets in units called buffer cells. Add a
helper to convert from bytes to cells, so that the actual number of
cells required (the result is rounded up) is returned.

Also, drop the SB (shared buffer) acronym from the BYTES_PER_CELL macro,
as this unit is also used in the ports' buffers and not only the
switch's shared buffer.
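
A minimal sketch of such a round-up conversion (BYTES_PER_CELL here is an
assumed value for illustration, not necessarily the hardware's cell size):

    #define BYTES_PER_CELL 96 /* assumed cell size, illustration only */

    /* Convert bytes to cells, rounding up. */
    static unsigned int bytes_to_cells(unsigned int bytes)
    {
            return (bytes + BYTES_PER_CELL - 1) / BYTES_PER_CELL;
    }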

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:16 -04:00
Ido Schimmel
dd6cb0f9fd mlxsw: spectrum: Map all switch priorities to priority group 0
During transmission, the skb's priority is used to map the skb to a
traffic class, where the idea is to group priorities with similar
characteristics (e.g. lossy, lossless) to the same traffic class. By
default, all priorities are mapped to traffic class 0.

In the device, we model the skb's priority as the switch priority, which
is assigned to a packet according to its PCP value and ingress port
(untagged packets are assigned the port's default switch priority - 0).

At ingress, the packet is directed to a priority group (PG) buffer in
the port's headroom buffer according to the packet's switch priority and
switch priority to buffer mapping.

While it's possible to configure the egress mapping between skb's
priority (switch priority) and traffic class, there is no mechanism to
configure the ingress mapping to a PG.

In order to keep things simple and since grouping certain priorities into
a traffic class at egress also implies they should be grouped the same
at ingress, treat a PG as the ingress counterpart of an egress traffic
class.

Having established the above, during initialization map all the switch
priorities to PG0 in accordance with the Linux defaults for traffic
class mapping.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:16 -04:00
Ido Schimmel
b98ff151b6 mlxsw: reg: Add Port Prio To Buffer register
When packets ingress the switch they are assigned a switch priority
number that dictates the packet's priority group (PG) buffer in the
port's headroom buffer.

Add the Port Prio To Buffer (PPTB) register, which configures the switch
priority to PG mapping.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:24:16 -04:00
Stefan Assmann
d5ea45da1f e1000e: call ndo_stop() instead of dev_close() when running offline selftest
Calling dev_close() causes IFF_UP to be cleared which will remove the
interface's routes and some addresses. That's probably not what the user
intended when running the offline selftest. Besides, this does not happen
if the interface is brought down before the test, so the current
behaviour is inconsistent.
Instead call the net_device_ops ndo_stop function directly and avoid
touching IFF_UP at all.

Signed-off-by: Stefan Assmann <sassmann@kpanic.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 14:05:24 -07:00
David S. Miller
92b6d35fac Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:

====================
40GbE Intel Wired LAN Driver Updates 2016-04-05

This series contains updates to i40e and i40evf only.

Colin Ian King cleaned up a redundant NULL check which was found by static
analysis.

Anjali enables geneve receive offload for XL710/X710 devices.

Mitch cleans up an unused variable in i40e_vc_get_vf_resources_msg().
Fixed the driver to actually be able to adjust VLAN tagging features
through ethtool, as expected.  Fixed a problem where VF resets would
get lost by the PF, preventing the VF driver from initializing.  Also
put users' minds at ease by lowering some message levels, since many of
these conditions can happen any time VFs are enabled or disabled and
are not really indicative of fatal problems, unless they happen
continuously.

Shannon disables the link polling to lessen the admin queue traffic
especially since the link event mask usage has been fixed recently.

Alex Duyck fixes the i40e and i40evf drivers to correctly update
checksums for frames up to 16776960 in length which should be more than
large enough for all possible TSO frames in the near future.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 17:03:35 -04:00
Giuseppe CAVALLARO
52f95bbfcf stmmac: fix adjust link call in case of a switch is attached
While initializing the phy, the stmmac driver sets the
PHY_IGNORE_INTERRUPT flag so the PAL won't call the adjust hook
that is needed, on some platforms, e.g. STi, to invoke the glue.

The patch allows the PAL to poll stmmac_adjust_link just one time
in case a switch is attached, setting the PHY_IGNORE_INTERRUPT flag
afterwards.
Moving this kind of logic inside adjust_link, it makes sense to
anticipate the check for EEE, which will never be initialized in this
scenario.

Reported-by: Gabriel Fernandez <gabriel.fernandez@linaro.org>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Tested-by: Gabriel Fernandez <gabriel.fernandez@linaro.org>
Cc: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 16:34:03 -04:00
Doron Shikmoni
efea95d45e igb: Garbled output for "ethtool -m"
Garbled output for "ethtool -m ethX", in igb-driven NICs with a module /
plug-in EEPROM (i.e. SFP information). Each output data byte appears
duplicated.

In igb_ethtool.c, igb_get_module_eeprom() is reading the EEPROM via i2c;
the eeprom offset for each word that's read via igb_read_phy_reg_i2c()
was passed in #words, whereas it needs to be a byte offset.
This patch fixes the bug.
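
A minimal sketch of the offset arithmetic being fixed (hypothetical
helper name; u16 is the usual kernel typedef):

    /* The i-th 16-bit word of the module EEPROM starts at byte offset
     * first_byte + 2 * i; the broken code passed the word index where a
     * byte offset was expected.
     */
    static u16 module_word_byte_offset(u16 first_byte, int i)
    {
            return first_byte + 2 * i;
    }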

Signed-off-by: Doron Shikmoni <doron.shikmoni@gmail.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 13:24:38 -07:00
Hariprasad Shenai
8a21ec4e0a cxgb4/cxgb4vf: Deprecate module parameter dflt_msg_enable
The message level can be set through ethtool, so deprecate the module
parameter which is used to set the same.

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 16:17:25 -04:00
John Holland
806ffb1d50 igb: allow setting MAC address on i211 using a device tree blob
The Intel i211 LOM PCIe Ethernet controllers' iNVM operates as an OTP
and has no external EEPROM interface [1]. The following allows the
driver to pick up the MAC address from a device tree blob when CONFIG_OF
has been enabled.

[1]
http://www.intel.com/content/www/us/en/embedded/products/networking/i211-ethernet-controller-datasheet.html

Signed-off-by: John Holland <jotihojr@gmail.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 12:38:36 -07:00
Alexander Duyck
7f0ba84560 igb: Add support for bulk Tx cleanup & cleanup boolean logic
This patch enables bulk free in Tx cleanup for igb and cleans up the
boolean logic in the polling routines for igb in the hopes of avoiding
any mix-ups similar to what occurred with i40e and i40evf.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 12:26:36 -07:00
Alexander Duyck
415cd2a645 igb: Fix sparse warning about passing __beXX into leXX_to_cpup
We were casting the addr as __beXX and then passing it into le32_to_cpu
because the device expects the MAC address to be in network order even
though the register set is little endian.  Instead of casting it as __beXX
we can just cast it as __leXX in order to maintain consistency since the
region of memory is already in little endian order as far as we are
concerned.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-06 12:16:07 -07:00
Hariprasad Shenai
529927f952 cxgb4: Add pci device id for chelsio t520-cr adapter
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-06 14:21:20 -04:00
Alexander Duyck
24d41e5e2c i40e/i40evf: Fix TSO checksum pseudo-header adjustment
With IPv4 and IPv6 now using the same format for checksums based on the
length of the frame, we need to update the i40e and i40evf drivers so that
they correctly account for lengths greater than or equal to 64K.

With this patch the driver should now correctly update checksums for frames
up to 16776960 in length which should be more than large enough for all
possible TSO frames in the near future.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 20:34:51 -07:00
Avinash Dayanand
066439ce79 i40e/i40evf: Bump patch from 1.5.1 to 1.5.2
Signed-off-by: Avinash Dayanand <avinash.dayanand@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 20:34:46 -07:00
Shannon Nelson
867a79e37e i40e: Request PHY media event at reset time
Add the Media Not Available flag to the link event mask.  It seems
that event comes first if you have a DA cable pulled out, but there's no
follow-up event for Link Down; if you're not looking for MEDIA_NA you will
get no event, even though there's now no Link.

Change-ID: cb3340a2849805bb881f64f6f2ae810eef46eba7
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 20:33:55 -07:00
Mitch Williams
18b7af57d9 i40e: Lower some message levels
These conditions can happen any time VFs are enabled or disabled and are
not really indicative of fatal problems unless they happen continuously.

Lower the log level so that people don't get scared.

Change-ID: I1ceb4adbd10d03cbeed54d1f5b7f20d60328351d
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 20:33:49 -07:00
Avinash Dayanand
16badc3469 i40e: Fix for supported link modes in 10GBaseT PHY's
100baseT/Full is now a listed and supported link mode for 10GBaseT PHYs.
This is a fix to list all the supported link modes of 10GBaseT PHYs.

Change-ID: If2be3212ef0fef85fd5d6e4550c7783de2f915e9
Signed-off-by: Avinash Dayanand <avinash.dayanand@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 20:33:43 -07:00
Catherine Sullivan
539a379c50 i40evf: Fix get_rss_aq
We were passing in the seed where we should just be passing false,
because we want the VSI table, not the PF table.

Change-ID: I9b633ab06eb59468087f0c0af8539857e99f9495
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 20:33:38 -07:00
Shannon Nelson
8c806b676d i40e: Disable link polling
Periodic link polling was added when the link events were found not to be
trustworthy.  This was the case early on, but was likely because the link
event mask was being used incorrectly.  As this has been fixed in recent
code, we can disable the link polling to lessen the AQ traffic.

Change-ID: Id890b5ee3c2d04381fc76ffa434777644f5d8eb0
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 20:33:33 -07:00
Mitch Williams
22ead37f8a i40evf: Add longer wait after remove module
Upon module removal, wait a little longer after requesting a reset before
checking to see if the firmware responded. This change prevents double
resets when the firmware is busy.

Change-ID: Ieedc988ee82fac1f32a074bf4d9e4dba426bfa58
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 20:33:27 -07:00
Mitch Williams
7369ca8745 i40e: Make VF resets more reliable
Clear the VFLR bit immediately after triggering a reset instead of
waiting until after cleanup is complete. Make sure to trigger a reset
every time, not just if the PF is up.

These changes fix a problem where VF resets would get lost by the PF,
preventing the VF driver from initializing.

Change-ID: I5945cf2884095b7b0554867c64df8617e71d9d29
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 20:33:19 -07:00
Catherine Sullivan
d6bf58c2e8 i40e: Add new device ID for X722
The new device ID is 0x37D3 and it should follow the same flows and
branding string as for 0x37D0.

Change-ID: Ia5ad4a1910268c4666a3fd46a7afffbec55b4fc2
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 20:33:14 -07:00
Mitch Williams
c4445aedfe i40evf: Fix VLAN features
Users of ethtool were being given the mistaken impression that this
driver was able to change its VLAN tagging features, and were
disappointed that this was not actually the case. Implement
ndo_fix_features method so that we can adjust these flags as needed to
avoid false impressions.

Change-ID: I08584f103a4fa73d6a4128d472e4ef44dcfda57f
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 20:32:48 -07:00
Mitch Williams
442b25e455 i40e: Remove unused variable
This variable is vestigial, a remnant of the primordial code from which
this driver spawned. We can safely remove it.

Change-ID: I24e0fe338e7c7c50d27dc5515564f33caefbb93a
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 19:03:07 -07:00
Anjali Singhai Jain
3845ccea34 i40e: Enable Geneve offload for FW API ver > 1.4 for XL710/X710 devices
This patch enables the capability for XL710/X710 devices with FW API
versions higher than 1.4 to do Geneve Rx offload.

Change-ID: I9a8f87772c48d7d67dc85e3701d2e0b845034c0b
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 18:58:14 -07:00
Colin King
afb8ece432 i40e: remove redundant check on vsi->active_vlans
active_vlans is an unsigned long array, hence a null check on this
array is superfluous and can be removed.

Detected with static analysis by smatch:

drivers/net/ethernet/intel/i40e/i40e_debugfs.c:386
  i40e_dbg_dump_vsi_seid() warn: this array is probably
  non-NULL. 'vsi->active_vlans'

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 18:52:14 -07:00
Thomas Falcon
9be02cdfa6 ibmvnic: enable RX checksum offload
Enable RX Checksum offload feature in the ibmvnic driver.

Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Cc: John Allen <jallen@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-05 19:51:38 -04:00
Thomas Falcon
ad7775dc7b ibmvnic: map L2/L3/L4 header descriptors to firmware
Allow the VNIC driver to provide descriptors containing
L2/L3/L4 headers to firmware.  This feature is needed
for greater hardware compatibility and enablement of checksum
and TCP offloading features.

A new function is included for the hypervisor call,
H_SEND_SUBCRQ_INDIRECT, allowing a DMA-mapped array of SCRQ
descriptor elements to be sent to the VNIC server.

These additions will help fully enable checksum offloading as
well as other features as they are included later.

Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Cc: John Allen <jallen@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-05 19:51:37 -04:00
Petri Gynther
7ee4062562 net: bcmgenet: cleanup for dmadesc_set()
dmadesc_set() is used for setting the Tx buffer DMA address, length,
and status bits on a Tx ring descriptor when a frame is being Tx'ed.

Always set the Tx buffer DMA address first, before updating the length
and status bits, i.e. giving the Tx descriptor to the hardware.

The reason this is a cleanup rather than a fix is that the hardware
won't transmit anything from a Tx ring until the TDMA producer index
has been incremented. As long as the dmadesc_set() writes complete
before the TDMA producer index write, life is good.

Signed-off-by: Petri Gynther <pgynther@google.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-05 19:33:52 -04:00
Petri Gynther
824ba60357 net: bcmgenet: cleanup for bcmgenet_xmit_frag()
Add frag_size = skb_frag_size(frag) and use it when needed.

Signed-off-by: Petri Gynther <pgynther@google.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-05 19:33:51 -04:00
Petri Gynther
f5a9ec20b3 net: bcmgenet: cleanup for bcmgenet_xmit()
1. Readability: Move nr_frags assignment a few lines down in order
   to bundle index -> ring -> txq calculations together.
2. Readability: Add parentheses around nr_frags + 1.
3. Minor fix: Stop the Tx queue and throw the error message only if
   the Tx queue hasn't already been stopped.

Signed-off-by: Petri Gynther <pgynther@google.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-05 19:33:51 -04:00
Alexander Duyck
a4605fef71 e1000: Double Tx descriptors needed check for 82544
The 82544 has code that adds one additional descriptor per data buffer.
However we weren't taking that into account when determining the descriptors
needed for the next transmit at the end of the xmit_frame path.

This change takes that into account by doubling the number of descriptors
needed for the 82544 so that we can avoid a potential issue where we could
hang the Tx ring by loading frames with xmit_more enabled and then stopping
the ring without writing the tail.

In addition it adds a few more descriptors to account for some additional
workarounds that have been added over time.
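
A minimal sketch of the worst-case reservation, with hypothetical names:

    /* Illustrative only: on 82544 each data buffer may consume an extra
     * descriptor, so the stop-queue check reserves twice the per-buffer
     * count, plus a little slack for the other workarounds.
     */
    static int tx_descs_needed(int nr_buffers, int is_82544)
    {
            int per_buffer = is_82544 ? 2 : 1;

            return nr_buffers * per_buffer + 2; /* + slack */
    }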

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 15:05:51 -07:00
Alexander Duyck
847a1d6796 e1000: Do not overestimate descriptor counts in Tx pre-check
The current code path is capable of grossly overestimating the number of
descriptors needed to transmit a new frame.  This specifically occurs if
the skb contains a number of 4K pages.  The issue is that the logic for
determining the descriptors needed is ((S) >> (X)) + 1.  When X is 12 it
means that we were indicating that we required 2 descriptors for each 4K
page when we only needed one.

This change corrects this by adding (1 << (X)) - 1 to the S value
instead of adding 1 after the fact.  This way we get an accurate
descriptor count as we are essentially doing a DIV_ROUND_UP().
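
As an illustration of the rounding (hypothetical macro name; with X = 12 a
4096-byte page now needs exactly one descriptor instead of two):

    /* Descriptors needed for a buffer of S bytes when one descriptor
     * carries at most (1 << X) bytes.
     *
     *   old: (S >> X) + 1             -> S = 4096, X = 12: 2 descriptors
     *   new: (S + (1 << X) - 1) >> X  -> S = 4096, X = 12: 1 descriptor
     */
    #define TX_DESC_COUNT(S, X) (((S) + (1 << (X)) - 1) >> (X))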

Reported-by: Ivan Suzdal <isuzdal@mirantis.com>
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 14:59:05 -07:00
Jesse Brandeburg
8e2cc0e67f i40e: fix errant PCIe bandwidth message
There was an error introduced with commit 3fced53507 ("i40e: X722 is
on the IOSF bus and does not report the PCI bus info"), where code was
added but the enabling flag is never set.

CC: Anjali Singhai Jain <anjali.singhai@intel.com>
CC: Stefan Assman <sassman@redhat.com>
Fixes: 3fced53507 ("i40e: X722 is on the IOSF bus ...")
Reported-by: Steve Best <sbest@redhat.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-05 14:52:02 -07:00
David S. Miller
e43d15c8d3 Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:

====================
40GbE Intel Wired LAN Driver Updates 2016-04-05

This series contains updates to i40e and i40evf only.

Stefan converts dev_close() to ndo_stop() for ethtool offline self test,
since dev_close() causes IFF_UP to be cleared which will remove the
interface routes and addresses.

Alex bumps up the size of the transmit data buffer to 12K rather than 8K,
which provides a gain in throughput and a reduction in overhead for
putting together the frame.  Fixed an issue in the polling routines where
we were using bitwise operators to avoid the side effects of the
logical operators.  Then added support for bulk transmit clean for skbs.

Jesse fixed a sparse issue in the type casting in the transmit code and
fixed i40e_aq_set_phy_debug() to use i40e_status as a return code.

Catherine cleans up duplicated code.

Shannon fixed the cleaning up of the interrupt handling to clean up the
IRQs only if we actually got them set up.  Also fixed up the error
scenarios where we were trying to remove a non-existent timer or
worktask, which causes the kernel heartburn.

Mitch changes the notification of resets to the reset interrupt handler,
instead of the actual reset initiation code.  This allows the VFs to get
properly notified for all resets, including resets initiated by different
PFs on the same physical device.  Also moved the clearing of the VFLR bit
to after reset processing, instead of before, which could lead to double
resets on VF init.  Fixed a code comment to match the actual function name.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-05 16:26:31 -04:00
Michael Chan
29c262fed4 bnxt_en: Improve ethtool .get_settings().
If autoneg is off, we should always report the speed and duplex settings
even if the link is down so the user knows the current settings.  The
unknown speed and duplex should only be used for autoneg when link is
down.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-05 16:20:48 -04:00
Michael Chan
9d9cee08fc bnxt_en: Check for valid forced speed during ethtool -s.
Check that the forced speed is a valid speed supported by firmware.
If not supported, return -EINVAL.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-05 16:20:48 -04:00
Michael Chan
4bb13abf20 bnxt_en: Add unsupported SFP+ module warnings.
Add the PORT_CONN_NOT_ALLOWED async event handling logic.  The driver
will print an appropriate warning to reflect the SFP+ module enforcement
policy done in the firmware.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-05 16:20:47 -04:00
Michael Chan
25be862370 bnxt_en: Set async event bits when registering with the firmware.
Currently, the driver only sets bit 0 of the async_event_fwd fields.
To be compatible with the latest spec, we need to set the
appropriate event bits handled by the driver.  We should be handling
link change and PF driver unload events, so these 2 bits should be
set.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-05 16:20:47 -04:00
Michael Chan
72b34f04e0 bnxt_en: Add get_eee() and set_eee() ethtool support.
Allow users to get|set EEE parameters.

v2: Added comment for preserving the tx_lpi_timer value in get_eee.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-05 16:20:47 -04:00
Michael Chan
939f7f0ca4 bnxt_en: Add EEE setup code.
1. Add bnxt_hwrm_set_eee() function to setup EEE firmware parameters based
on the bp->eee settings.
2. The new function bnxt_eee_config_ok() will check if EEE parameters need
to be modified due to autoneg changes.
3. bnxt_hwrm_set_link() gains a new parameter to update EEE.  If the
parameter is set, it will call bnxt_hwrm_set_eee().

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-05 16:20:46 -04:00
Michael Chan
170ce01301 bnxt_en: Add basic EEE support.
Get EEE capability and the initial EEE settings from firmware.
Add "EEE is active | not active" to link up dmesg.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-05 16:20:45 -04:00
Michael Chan
c9ee9516c1 bnxt_en: Improve flow control autoneg with Firmware 1.2.1 interface.
Make use of the new AUTONEG_PAUSE bit in the new interface to better
control autoneg flow control settings, independent of RX and TX
advertisement settings.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-05 16:20:45 -04:00