After commit d75b1ade56 ("net: less interrupt masking in NAPI"), a driver's
NAPI poll routine is expected to return the exact budget value if it wants
to be re-called.
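A minimal sketch of the expected poll pattern (the function names are
placeholders, not this driver's code):

    static int foo_poll(struct napi_struct *napi, int budget)
    {
            int work_done = foo_process_rx_ring(napi, budget);

            if (work_done < budget)
                    napi_complete(napi);    /* done; interrupts can be re-enabled */

            /* returning exactly 'budget' tells the core to poll again */
            return work_done;
    }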
Signed-off-by: Shahed Shaikh <shahed.shaikh@qlogic.com>
Fixes: d75b1ade56 ("net: less interrupt masking in NAPI")
Signed-off-by: David S. Miller <davem@davemloft.net>
The RSS support requires enablement based on the features reported by
the hardware. The setting of this flag is missing. Add support to
set the RSS enablement flag based on the reported hardware features.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The number of traffic classes reported by the hardware is zero-based
so increment the value returned to get an actual count.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch corrects the faulty expression used when writing the
bit-pattern from the software buffer to the hardware registers.
Signed-off-by: Sanjeev Sharma <Sanjeev_Sharma@mentor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit de966c5928 ("net/mlx4_core: Support more than 64 VFs") was meant to
allow up to 126 VFs. However, because MLX4_MFUNC_MAX was left too low, requesting
more than 80 VFs resulted in memory corruptions (and Oopses). In addition, the
number of slaves was left too high.
This commit fixes these issues.
Fixes: de966c5928 ("net/mlx4_core: Support more than 64 VFs")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch fixes a bug where vnet_skb_shape() didn't set the already-selected
queue mapping when a packet copy was required, resulting in the wrong queue
index being used for stops/starts, hung TX queues and watchdog timeouts
under heavy load.
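A hypothetical one-line illustration of carrying the selected queue over to
the copy (not the actual patch hunk):

    skb_set_queue_mapping(nskb, skb_get_queue_mapping(skb));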
Signed-off-by: David L Stevens <david.stevens@oracle.com>
Acked-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently qlge_update_hw_vlan_features() will always first put the
interface down, then update features and then bring it up again. But it
is possible to hit this code while the adapter is down, and this causes an
unpaired call to napi_disable(), which will get stuck.
This patch fixes it by skipping these down/up actions if the interface
is already down.
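A rough sketch of the guard, assuming the usual qlge context (names are
illustrative):

    if (!netif_running(qdev->ndev))
            return status;  /* interface is down; skip the down/up cycle */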
Fixes: a45adbe8d3 ("qlge: Enhance nested VLAN (Q-in-Q) handling.")
Cc: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch fixes the following kernel crash,
WARNING: CPU: 2 PID: 0 at net/ipv4/tcp_input.c:3079 tcp_clean_rtx_queue+0x658/0x80c()
Call trace:
[<fffffe0000096b7c>] dump_backtrace+0x0/0x184
[<fffffe0000096d10>] show_stack+0x10/0x1c
[<fffffe0000685ea0>] dump_stack+0x74/0x98
[<fffffe00000b44e0>] warn_slowpath_common+0x88/0xb0
[<fffffe00000b461c>] warn_slowpath_null+0x14/0x20
[<fffffe00005b5c1c>] tcp_clean_rtx_queue+0x654/0x80c
[<fffffe00005b6228>] tcp_ack+0x454/0x688
[<fffffe00005b6ca8>] tcp_rcv_established+0x4a4/0x62c
[<fffffe00005bf4b4>] tcp_v4_do_rcv+0x16c/0x350
[<fffffe00005c225c>] tcp_v4_rcv+0x8e8/0x904
[<fffffe000059d470>] ip_local_deliver_finish+0x100/0x26c
[<fffffe000059dad8>] ip_local_deliver+0xac/0xc4
[<fffffe000059d6c4>] ip_rcv_finish+0xe8/0x328
[<fffffe000059dd3c>] ip_rcv+0x24c/0x38c
[<fffffe0000563950>] __netif_receive_skb_core+0x29c/0x7c8
[<fffffe0000563ea4>] __netif_receive_skb+0x28/0x7c
[<fffffe0000563f54>] netif_receive_skb_internal+0x5c/0xe0
[<fffffe0000564810>] napi_gro_receive+0xb4/0x110
[<fffffe0000482a2c>] xgene_enet_process_ring+0x144/0x338
[<fffffe0000482d18>] xgene_enet_napi+0x1c/0x50
[<fffffe0000565454>] net_rx_action+0x154/0x228
[<fffffe00000b804c>] __do_softirq+0x110/0x28c
[<fffffe00000b8424>] irq_exit+0x8c/0xc0
[<fffffe0000093898>] handle_IRQ+0x44/0xa8
[<fffffe000009032c>] gic_handle_irq+0x38/0x7c
[...]
The software writes poison data into descriptor bytes[15:8]; upon receiving the
interrupt, if those bytes have been overwritten by the hardware with valid data,
the software also reads bytes[7:0] and executes the receive/TX completion logic.
If the CPU executes these two reads out of order, bytes[7:0] may still hold stale
data, causing the kernel panic. We have to enforce the ordering of the reads, so
this patch introduces a read memory barrier between them.
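A simplified sketch of the barrier placement (the helper names are assumptions,
not the driver's actual code):

    if (desc_slot_is_empty(raw_desc))       /* bytes[15:8] still hold poison */
            break;

    rmb();  /* order the reads: bytes[15:8] before bytes[7:0] */

    /* now it is safe to read bytes[7:0] and run the completion logic */
    process_completion(raw_desc);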
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: Keyur Chudgar <kchudgar@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When a skb has multiple vlans and it is CHECKSUM_PARTIAL,
ixgbevf_tx_csum() fails to get the network protocol and checksum related
descriptor fields are not configured correctly because skb->protocol
doesn't show the L3 protocol in this case.
Use first->protocol instead of skb->protocol to get the proper network
protocol.
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
When a skb has multiple vlans and it is CHECKSUM_PARTIAL,
ixgbe_tx_csum() fails to get the network protocol and checksum related
descriptor fields are not configured correctly because skb->protocol
doesn't show the L3 protocol in this case.
Use vlan_get_protocol() to get the proper network protocol.
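A minimal sketch of how vlan_get_protocol() is used to pick the right checksum
context (descriptor programming details omitted):

    __be16 protocol = vlan_get_protocol(skb);

    switch (protocol) {
    case htons(ETH_P_IP):
            /* program IPv4 + TCP/UDP checksum offload fields */
            break;
    case htons(ETH_P_IPV6):
            /* program IPv6 + TCP/UDP checksum offload fields */
            break;
    }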
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
When a skb has multiple vlans and it is CHECKSUM_PARTIAL,
igbvf_tx_csum() fails to get the network protocol and checksum related
descriptor fields are not configured correctly because skb->protocol
doesn't show the L3 protocol in this case.
Use vlan_get_protocol() to get the proper network protocol.
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
A recent patch tried to work around a valid warning for the use of a
deprecated interface by blindly changing from the old
pcmcia_request_exclusive_irq() interface to pcmcia_request_irq().
This driver has an interrupt handler that is not currently aware
of shared interrupts, but can be easily converted to be.
At the moment, the driver reads the interrupt status register
repeatedly until it contains only zeroes in the interesting bits,
and handles each bit individually.
This patch adds the missing part of returning IRQ_NONE in case none
of the bits are set to start with, so we can move on to the next
interrupt source.
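A simplified sketch of the shared-interrupt pattern described above (register
access is abstracted behind a hypothetical read_irq_status() helper):

    static irqreturn_t mace_interrupt(int irq, void *dev_id)
    {
            struct net_device *dev = dev_id;
            u8 status = read_irq_status(dev);       /* hypothetical helper */

            if (!status)
                    return IRQ_NONE;        /* not ours; let the next handler run */

            do {
                    /* handle each asserted status bit individually ... */
                    status = read_irq_status(dev);
            } while (status);

            return IRQ_HANDLED;
    }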
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 5f5316fcd0 ("am2150: Update nmclan_cs.c to use update PCMCIA API")
Signed-off-by: David S. Miller <davem@davemloft.net>
The ni65 and lance ethernet drivers manually program the ISA DMA
controller that is only available on x86 PCs and a few compatible
systems. Trying to build it on ARM results in this error:
ni65.c: In function 'ni65_probe1':
ni65.c:496:62: error: 'DMA1_STAT_REG' undeclared (first use in this function)
((inb(DMA1_STAT_REG) >> 4) & 0x0f)
^
ni65.c:496:62: note: each undeclared identifier is reported only once for each function it appears in
ni65.c:497:63: error: 'DMA2_STAT_REG' undeclared (first use in this function)
| (inb(DMA2_STAT_REG) & 0xf0);
The DMA1_STAT_REG and DMA2_STAT_REG registers are only defined for
alpha, mips, parisc, powerpc and x86, although it is not clear
which subarchitectures actually have them at the correct location.
This patch for now just disables it for ARM, to avoid randconfig
build errors. We could also decide to limit it to the set of
architectures on which it does compile, but that might look more
deliberate than guessing based on where the drivers build.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
The cs89x0 driver can be built as either an ISA driver or a platform
driver; the choice is controlled by the CS89x0_PLATFORM Kconfig
symbol. Building the ISA driver on a system that does not have
a way to map I/O ports fails with this error:
drivers/built-in.o: In function `cs89x0_ioport_probe.constprop.1':
:(.init.text+0x4794): undefined reference to `ioport_map'
:(.init.text+0x4830): undefined reference to `ioport_unmap'
This changes the Kconfig logic to take that option away and
always force building the platform variant of this driver if
CONFIG_HAS_IOPORT_MAP is not set. This is the only correct
choice in this case, and it avoids the build error.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
When alloc_netdev fails we currently return NULL to the caller, but the probe
drivers never check for NULL. This patch changes the NULL return to an error
pointer. The function description is amended to reflect what may be
returned.
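A minimal sketch of the change, assuming an alloc_netdev()-based allocation
path:

    dev = alloc_netdev(sizeof(*priv), "eth%d", NET_NAME_UNKNOWN, ether_setup);
    if (!dev)
            return ERR_PTR(-ENOMEM);        /* was: return NULL */

    /* probe callers now use IS_ERR()/PTR_ERR() instead of a NULL check */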
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
With commit d75b1ade56 ("net: less interrupt masking in NAPI"), napi
repoll is done only when work_done == budget. When we are in busy_poll we
return 0 in napi_poll; we should return budget.
Signed-off-by: Govindarajulu Varadarajan <_govind@gmx.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- Use the return value of dma_map_single(), rather than calling
virt_to_page() separately
- Check for mapping failure
- Call dma_unmap_single() rather than dma_sync_single_for_cpu()
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
dma_map_single() may fail if an IOMMU or swiotlb is in use, so
we need to check for this.
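A minimal sketch of the check, assuming a typical TX mapping path:

    dma_addr_t addr = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);

    if (dma_mapping_error(dev, addr)) {
            dev_kfree_skb_any(skb);         /* drop the packet on failure */
            return NETDEV_TX_OK;
    }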
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently we try to clear EDRRR and EDTRR and immediately continue to
free buffers. This is unsafe because:
- In general, register writes are not serialised with DMA, so we still
have to wait for DMA to complete somehow
- The R8A7790 (R-Car H2) manual states that the TX running flag cannot
be cleared by writing to EDTRR
- The same manual states that clearing the RX running flag only stops
RX DMA at the next packet boundary
I applied this patch to the driver to detect DMA writes to freed
buffers:
> --- a/drivers/net/ethernet/renesas/sh_eth.c
> +++ b/drivers/net/ethernet/renesas/sh_eth.c
> @@ -1098,7 +1098,14 @@ static void sh_eth_ring_free(struct net_device *ndev)
>          /* Free Rx skb ringbuffer */
>          if (mdp->rx_skbuff) {
>                  for (i = 0; i < mdp->num_rx_ring; i++)
> +                        memcpy(mdp->rx_skbuff[i]->data,
> +                               "Hello, world", 12);
> +                msleep(100);
> +                for (i = 0; i < mdp->num_rx_ring; i++) {
> +                        WARN_ON(memcmp(mdp->rx_skbuff[i]->data,
> +                                       "Hello, world", 12));
>                          dev_kfree_skb(mdp->rx_skbuff[i]);
> +                }
>          }
>          kfree(mdp->rx_skbuff);
>          mdp->rx_skbuff = NULL;
then ran the loop:
while ethtool -G eth0 rx 128 ; ethtool -G eth0 rx 64; do echo -n .; done
and 'ping -f' toward the sh_eth port from another machine. The
warning fired several times a minute.
To fix these issues:
- Deactivate all TX descriptors rather than writing to EDTRR
- As there seems to be no way of telling when RX DMA is stopped,
perform a soft reset to ensure that both DMA engines are stopped
- To reduce the possibility of the reset truncating a transmitted
frame, disable egress and wait a reasonable time to reach a
packet boundary before resetting
- Update statistics before resetting
(The 'reasonable time' does not allow for CS/CD in half-duplex
mode, but half-duplex no longer seems reasonable!)
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
If RX traffic is overflowing the FIFO or DMA ring, logging every time
this happens just makes things worse. These errors are visible in the
statistics anyway.
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 69ad0dd7af
Author: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Date: Mon May 19 13:59:59 2014 -0300
net: mv643xx_eth: Use dma_map_single() to map the skb fragments
caused a nasty regression by removing the support for highmem skb
fragments. By using page_address() to get the address of a fragment's
page, we are assuming a lowmem page. However, such an assumption is incorrect,
as fragments can be in highmem pages, resulting in very nasty issues.
This commit fixes this by using the skb_frag_dma_map() helper,
which takes care of mapping the skb fragment properly. Additionally,
the type of mapping is now tracked, so it can be unmapped using
dma_unmap_page or dma_unmap_single when appropriate.
This commit also fixes the error path in txq_init() to release the
resources properly.
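A minimal sketch of the skb_frag_dma_map() use described above; unlike
page_address() + dma_map_single(), it handles highmem fragment pages
(descriptor field names are illustrative):

    const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

    desc->buf_ptr = skb_frag_dma_map(dev, frag, 0, skb_frag_size(frag),
                                     DMA_TO_DEVICE);
    if (dma_mapping_error(dev, desc->buf_ptr))
            goto err_unmap;         /* hypothetical error path */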
Fixes: 69ad0dd7af ("net: mv643xx_eth: Use dma_map_single() to map the skb fragments")
Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In order to stop the RX path accessing the RX ring while it's being
stopped or resized, we clear the interrupt mask (EESIPR) and then call
free_irq() or synchronize_irq(). This is insufficient because the
interrupt handler or NAPI poller may set EESIPR again after we clear
it. Also, in sh_eth_set_ringparam() we currently don't disable NAPI
polling at all.
I could easily trigger a crash by running the loop:
while ethtool -G eth0 rx 128 && ethtool -G eth0 rx 64; do echo -n .; done
and 'ping -f' toward the sh_eth port from another machine.
To fix this:
- Add a software flag (irq_enabled) to signal whether interrupts
should be enabled
- In the interrupt handler, if the flag is clear then clear EESIPR
and return
- In the NAPI poller, if the flag is clear then don't set EESIPR
- Set the flag before enabling interrupts in sh_eth_dev_init() and
sh_eth_set_ringparam()
- Clear the flag and serialise with the interrupt and NAPI
handlers before clearing EESIPR in sh_eth_close() and
sh_eth_set_ringparam()
After this, I could run the loop for 100,000 iterations successfully.
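A rough sketch of the interrupt-handler side of the flag (field names follow
the sh_eth driver loosely and are partly assumed):

    static irqreturn_t sh_eth_interrupt(int irq, void *netdev)
    {
            struct net_device *ndev = netdev;
            struct sh_eth_private *mdp = netdev_priv(ndev);

            spin_lock(&mdp->lock);
            if (!mdp->irq_enabled) {
                    sh_eth_write(ndev, 0, EESIPR);  /* mask everything again */
                    spin_unlock(&mdp->lock);
                    return IRQ_HANDLED;
            }
            /* ... normal interrupt handling ... */
            spin_unlock(&mdp->lock);
            return IRQ_HANDLED;
    }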
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
If the device is down then no packet buffers should be allocated.
We also must not touch its registers as it may be powered off.
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
We must only ever stop TX queues when they are full or the net device
is not 'ready' so far as the net core, and specifically the watchdog,
is concerned. Otherwise, the watchdog may fire *immediately* if no
packets have been added to the queue in the last 5 seconds.
What's more, sh_eth_tx_timeout() will likely crash if called while
we're resizing the TX ring.
I could easily trigger this by running the loop:
while ethtool -G eth0 rx 128 && ethtool -G eth0 rx 64; do echo -n .; done
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
If an skb to be transmitted is shorter than the minimum Ethernet frame
length, we currently set the DMA descriptor length to the minimum but
do not add zero-padding. This could result in leaking sensitive
data. We also pass different lengths to dma_map_single() and
dma_unmap_single().
Use skb_padto() to pad properly, before calling dma_map_single().
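A minimal sketch of the padding fix (the mapping details are illustrative):

    if (skb_padto(skb, ETH_ZLEN))
            return NETDEV_TX_OK;    /* skb is already freed on failure */

    /* the buffer now holds zero padding up to the minimum frame length,
     * so the same (padded) length can be mapped, described to the hardware
     * and later unmapped
     */
    dma_addr = dma_map_single(&ndev->dev, skb->data,
                              max_t(u32, skb->len, ETH_ZLEN), DMA_TO_DEVICE);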
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
In Dual EMAC, the default VLANs are used to segregate Rx packets between
the ports, so adding the same default VLAN to the switch will affect the
normal packet transfers. So return an error on addition of the dual EMAC
default VLANs.
Even if the EMAC 0 default port VLAN is added to EMAC 1, it will break
the dual EMAC port separation.
Fixes: d9ba8f9e62 ("driver: net: ethernet: cpsw: dual emac interface implementation")
Cc: <stable@vger.kernel.org> # v3.9+
Reported-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
NAPI poll logic now enforces that a poller returns exactly the budget
when it wants to be called again.
If a driver limits TX completion, it has to return budget as well when
the limit is hit, not the number of received packets.
Reported-and-tested-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Fixes: d75b1ade56 ("net: less interrupt masking in NAPI")
Cc: Manish Chopra <manish.chopra@qlogic.com>
Acked-by: Manish Chopra <manish.chopra@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
With the commit d75b1ade56 ("net: less interrupt masking in NAPI") napi repoll
is done only when work_done == budget. When we are in busy_poll we return 0 in
napi_poll. We should return budget.
Signed-off-by: Govindarajulu Varadarajan <_govind@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Updated hardware documentation shows the Rx flow control settings were
moved from the Rx queue operation mode register to a new Rx queue flow
control register. The old flow control settings are now reserved areas
of the Rx queue operation mode register. Update the code to use the new
register.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
"sp->desc[i]" has 25 characters. "dev->name" has 15 characters. If we
used all 15 characters then the sprintf() would overflow.
I changed the "sprintf(sp->name, "%s Neterion %s"" to snprintf(), as
well, even though it can't overflow just to be consistent.
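A small illustration of the bounded variant (the format string shown is
illustrative):

    snprintf(sp->desc[i], sizeof(sp->desc[i]),
             "%s:MSI-X-%d-TX", dev->name, i);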
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
IRQs should only get activated when there is nothing to poll in the
queue any more, and not after every poll.
Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
napi should get registered before the netdev and not after.
Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
The driver connects and disconnects the PHY device whenever the
net device is brought up and down. The ethtool get_settings,
set_settings and nway_reset operations will dereference a null
or dangling pointer if called while it is down.
I think it would be preferable to keep the PHY connected, but there
may be good reasons not to.
As an immediate fix for this bug:
- Set the phydev pointer to NULL after disconnecting the PHY
- Change those three operations to return -ENODEV while the PHY is
not connected
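A minimal sketch of the -ENODEV guard in one of the ethtool operations (the
details follow the sh_eth driver loosely and are partly assumed):

    static int sh_eth_get_settings(struct net_device *ndev,
                                   struct ethtool_cmd *ecmd)
    {
            struct sh_eth_private *mdp = netdev_priv(ndev);
            unsigned long flags;
            int ret;

            if (!mdp->phydev)
                    return -ENODEV;         /* PHY is only connected while up */

            spin_lock_irqsave(&mdp->lock, flags);
            ret = phy_ethtool_gset(mdp->phydev, ecmd);
            spin_unlock_irqrestore(&mdp->lock, flags);
            return ret;
    }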
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently net_device_ops::set_rx_mode is only implemented for
chips with a TSU (multiple address table). However we do need
to turn the PRM (promiscuous) flag on and off for other chips.
- Remove the unlikely() from the TSU functions that we may safely
call for chips without a TSU
- Make setting of the MCT flag conditional on the tsu capability flag
- Rename sh_eth_set_multicast_list() to sh_eth_set_rx_mode() and plumb
it into both net_device_ops structures
- Remove the previously-unreachable branch in sh_eth_rx_mode() that
would otherwise reset the flags to defaults for non-TSU chips
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
On dm816x we have two emac controllers with separate memory
areas.
Cc: Brian Hutchinson <b.hutchman@gmail.com>
Cc: Felipe Balbi <balbi@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some devices like dm816x have the MDIO registers within the first EMAC
instance address space. Let's fix the issue by allowing an optional
second IO range for the EMAC control register area to be passed.
Cc: Brian Hutchinson <b.hutchman@gmail.com>
Cc: Felipe Balbi <balbi@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Looks like the phy_id is never set up beyond getting the phandle.
Note that we can remove the ifdef for phy_node as there is a stub
for of_phy_connect() if CONFIG_OF is not set.
Cc: Brian Hutchinson <b.hutchman@gmail.com>
Cc: Felipe Balbi <balbi@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We only use clk_get() to get the frequency; the rest is done by
the runtime PM calls. Let's free the clock too.
Cc: Brian Hutchinson <b.hutchman@gmail.com>
Cc: Felipe Balbi <balbi@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 3ba9738134 ("net: ethernet: davinci_emac: add pm_runtime support")
added support for runtime PM, but it causes issues on omap3 related devices
that actually gate the clocks:
Unhandled fault: external abort on non-linefetch (0x1008)
...
[<c04160f0>] (emac_dev_getnetstats) from [<c04d6a3c>] (dev_get_stats+0x78/0xc8)
[<c04d6a3c>] (dev_get_stats) from [<c04e9ccc>] (rtnl_fill_ifinfo+0x3b8/0x938)
[<c04e9ccc>] (rtnl_fill_ifinfo) from [<c04eade4>] (rtmsg_ifinfo+0x68/0xd8)
[<c04eade4>] (rtmsg_ifinfo) from [<c04dd35c>] (register_netdevice+0x3a0/0x4ec)
[<c04dd35c>] (register_netdevice) from [<c04dd4bc>] (register_netdev+0x14/0x24)
[<c04dd4bc>] (register_netdev) from [<c041755c>] (davinci_emac_probe+0x408/0x5c8)
[<c041755c>] (davinci_emac_probe) from [<c0396d78>] (platform_drv_probe+0x48/0xa4)
Let's fix it by moving the pm_runtime_get() call earlier, and also adding it to
emac_dev_getnetstats(). Also note that we want to use pm_runtime_get_sync(),
as we don't want deferred_resume to happen. And let's also check the
return value of pm_runtime_get_sync(), as noted by Felipe Balbi <balbi@ti.com>.
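A rough sketch of the stats path with the added pm_runtime handling (names are
based on davinci_emac but simplified):

    static struct net_device_stats *emac_dev_getnetstats(struct net_device *ndev)
    {
            struct emac_priv *priv = netdev_priv(ndev);
            int ret;

            ret = pm_runtime_get_sync(&priv->pdev->dev);
            if (ret < 0) {
                    pm_runtime_put_noidle(&priv->pdev->dev);
                    return &ndev->stats;    /* clocks may be gated; skip reads */
            }

            /* ... read and accumulate the hardware statistics registers ... */

            pm_runtime_put(&priv->pdev->dev);
            return &ndev->stats;
    }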
Cc: Brian Hutchinson <b.hutchman@gmail.com>
Acked-by: Mark A. Greer <mgreer@animalcreek.com>
Reviewed-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On davinci_emac, we have pulse interrupts. This means that we need to
clear the EOI bits when disabling interrupts as otherwise the interrupts
keep happening. And we also need to not clear the EOI bits again when
enabling the interrupts as otherwise we will get tons of:
unexpected IRQ trap at vector 00
These errors almost certainly mean that the omap-intc.c is signaling
a spurious interrupt with the reserved irq 127 as we've seen earlier
on omap3.
Let's fix the issue by clearing the EOI bits when disabling the
interrupts. Let's also keep the comment for "Rx Threshold and Misc
interrupts are not enabled" for both enable and disable so people
are aware of this when potentially adding more support.
Note that eventually we should handle the RX and TX interrupts
separately like cpsw is now doing. However, so far I have not seen
any issues with this based on my testing, so it seems to behave a
little different compared to the cpsw that had a similar issue.
Cc: Brian Hutchinson <b.hutchman@gmail.com>
Reviewed-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Except for VXLAN steering rules, all offloads should work as they were
under plain DMFS mode. Fix that by enabling all the offloads under
DMFS-A0 mode, except for VXLAN steering rules.
Fixes: d57febe1a4 "net/mlx4: Add A0 hybrid steering"
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch fixes double kfree() calls in init_rx_ring(), which caused a
static checker warning.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Byungho An <bh74.an@samsung.com>
Signed-off-by: Kukjin Kim <kgene@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
When the MAC address is provided in the device tree file, the
condition is true and the kernel crashes due to a NULL dereference.
Signed-off-by: Girish K.S <ks.giri@samsung.com>
Signed-off-by: Byungho An <bh74.an@samsung.com>
Signed-off-by: Kukjin Kim <kgene@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
commit b284fbe3b3 ("sh_eth: Fix access to TRSCER register") wanted
to add a .trscer_err_mask value to the R-Car Gen2 family-specific data
structure (r8a779x_data), but it was accidentally added to the
SH7724-specific data structure (sh7724_data).
Presumably this happened due to a patch conflict with commit
d407bc0203 ("sh-eth: Set fdr_value of R-Car SoCs"), which added
another field at the same position.
Move the field setting to fix this.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Fixes: b284fbe3b3 ("sh_eth: Fix access to TRSCER register")
Signed-off-by: David S. Miller <davem@davemloft.net>
While adding a VLAN in dual EMAC mode, only specific ports should be
subscribed to the VLAN; otherwise it leads to switching mode, and if
both ports are connected to the same switch, cpsw will hang as it
creates a network loop. Fix this by adding only the specific ports in
the dual EMAC case.
Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Other tunnels like GRE break while VxLAN offloads are enabled in Skyhawk-R. To
avoid this, we should restrict offload features on a per-packet basis in such
conditions.
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
synchronize_irq() can sleep waiting for pending IRQ handlers, so the driver
should release the tp->lock spinlock before invoking synchronize_irq().
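A minimal sketch of the resulting ordering (the tg3 locking details are
simplified):

    spin_unlock_bh(&tp->lock);
    synchronize_irq(tp->pdev->irq); /* may sleep; no spinlock held here */
    spin_lock_bh(&tp->lock);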
Reported-by: Peter Hurley <peter@hurleysoftware.com>
Tested-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Prashant Sreedharan <prashant@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently tg3_reset_task() uses only tp->lock for synchronizing with code
paths like tg3_open() etc. But since tp->lock is released before doing
synchronize_irq(), rtnl_lock should be taken in tg3_reset_task() to
synchronize it with other code paths.
Reported-by: Peter Hurley <peter@hurleysoftware.com>
Tested-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Prashant Sreedharan <prashant@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is to avoid the race between tg3_timer() and the execution paths
which do not invoke tg3_timer_stop() and release tp->lock before
calling synchronize_irq().
Reported-by: Peter Hurley <peter@hurleysoftware.com>
Tested-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Prashant Sreedharan <prashant@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Adds the FCoE config option I40E_FCOE, so that FCoE can be enabled
as needed but is otherwise disabled by default.
This also eliminates multiple FCoE config checks; there is now just
one config check, for CONFIG_I40E_FCOE.
I40E FCoE support was added in the 3.17 kernel, so this patch
should be applied to the stable 3.17 kernel as well.
CC: <stable@vger.kernel.org>
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>