Commit Graph

23 Commits

Author SHA1 Message Date
Geert Uytterhoeven
3b76b3c37b net/ethernet: NET_CALXEDA_XGMAC should depend on HAS_DMA
If NO_DMA=y:

drivers/built-in.o: In function `xgmac_xmit':
drivers/net/ethernet/calxeda/xgmac.c:1102: undefined reference to `dma_mapping_error'

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-05-11 16:28:23 -07:00
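For context, a minimal sketch (not the driver's exact code) of the kind of call that creates the HAS_DMA dependency: xgmac_xmit maps the skb for transmit and must check the result with dma_mapping_error(), which is only available on architectures that provide the DMA mapping API. The helper name below is illustrative.

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/skbuff.h>

/* Illustrative only: map an skb's linear data for transmit and check the result. */
static int example_map_for_tx(struct device *dev, struct sk_buff *skb,
                              dma_addr_t *paddr)
{
    *paddr = dma_map_single(dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE);
    if (dma_mapping_error(dev, *paddr))
        return -ENOMEM;    /* this helper is what fails to link when NO_DMA=y */
    return 0;
}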
Dan Carpenter
cf62cb72d6 net: calxedaxgmac: fix condition in xgmac_set_features()
The "changed" variable should be a 64 bit type, otherwise it can't store
all the features.  The way the code is now the test for whether
NETIF_F_RXCSUM changed is always false and we return immediately.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-04-25 03:50:17 -04:00
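A minimal sketch of the pattern the fix restores, assuming the usual .ndo_set_features shape; the point is that the difference between the old and new feature masks must be held in netdev_features_t (a 64-bit type), not a narrower integer, or feature bits can be lost. The function name is a placeholder.

#include <linux/netdevice.h>

static int example_set_features(struct net_device *dev, netdev_features_t features)
{
    /* netdev_features_t is 64 bits wide; a 32-bit "changed" would truncate it. */
    netdev_features_t changed = dev->features ^ features;

    if (!(changed & NETIF_F_RXCSUM))
        return 0;

    /* ... reprogram receive checksum offload in the hardware here ... */
    return 0;
}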
Fabio Estevam
c132cf56f1 xgmac: Remove unneeded PM_OPS definitions
The SIMPLE_DEV_PM_OPS macro handles the !CONFIG_PM_SLEEP case nicely, so there is no
need to define PM_OPS separately for the CONFIG_PM_SLEEP and !CONFIG_PM_SLEEP cases.

Remove the unneeded definitions.

Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-04-16 16:30:51 -04:00
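A hedged sketch of the pattern this relies on: SIMPLE_DEV_PM_OPS only references the callbacks when CONFIG_PM_SLEEP is set, so a single definition serves both configurations. Names are placeholders, not the xgmac driver's.

#include <linux/pm.h>
#include <linux/platform_device.h>

#ifdef CONFIG_PM_SLEEP
static int example_suspend(struct device *dev)
{
    /* quiesce the device */
    return 0;
}

static int example_resume(struct device *dev)
{
    /* bring the device back up */
    return 0;
}
#endif

/* One definition is enough: without CONFIG_PM_SLEEP the macro expands to empty ops. */
static SIMPLE_DEV_PM_OPS(example_pm_ops, example_suspend, example_resume);

static struct platform_driver example_driver = {
    .driver = {
        .name = "example",
        .pm   = &example_pm_ops,
    },
};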
Rob Herring
e6c3827dcf net: calxedaxgmac: Wake-on-LAN fixes
WOL is broken because the magic packet status bit is being set rather
than the enable bit. The PMT interrupt is also not getting serviced: it
is enabled on the global interrupt as well, but the global interrupt
handler does not clear it, and the global interrupt has higher
priority. This fixes both issues to get WOL working.

There's still a problem with receive after resume, but at least now we
can wake up.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-03-29 15:29:35 -04:00
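A rough sketch of the distinction behind the fix, using hypothetical register and bit names (the real xgmac defines differ): the enable bit arms magic-packet wake-up, while the status bit only reports that such a packet arrived, so setting the latter does nothing to enable WOL.

#include <linux/io.h>

/* Hypothetical register layout, for illustration only. */
#define EXAMPLE_PMT_REG           0x02c
#define EXAMPLE_PMT_MAGIC_EN      (1 << 1)   /* arms wake-on-magic-packet */
#define EXAMPLE_PMT_MAGIC_STATUS  (1 << 5)   /* read-only: a magic packet arrived */

static void example_enable_wol(void __iomem *ioaddr)
{
    u32 pmt = readl(ioaddr + EXAMPLE_PMT_REG);

    /* The bug was setting the status bit here instead of the enable bit. */
    pmt |= EXAMPLE_PMT_MAGIC_EN;
    writel(pmt, ioaddr + EXAMPLE_PMT_REG);
}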
Rob Herring
dc574f1d52 net: calxedaxgmac: fix rx ring handling when OOM
If skb allocation for the rx ring fails repeatedly, we can reach a point
where the ring is empty. In this condition, the driver is out of sync with
the h/w. While this has always been possible, the removal of the skb
recycling seems to have made triggering this problem easier.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-03-29 15:29:35 -04:00
David S. Miller
f1e7b73acc Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Bring in the 'net' tree so that we can get some ipv4/ipv6 bug
fixes that some net-next work will build upon.

Signed-off-by: David S. Miller <davem@davemloft.net>
2013-01-29 15:32:13 -05:00
Rob Herring
d6fb3be544 net: calxedaxgmac: throw away overrun frames
The xgmac driver assumes 1 frame per descriptor. If a frame larger than
the descriptor's buffer size is received, the frame will spill over into
the next descriptor. So check for received frames that span more than one
descriptor and discard them. This prevents a crash if we receive erroneous
large packets.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-01-18 14:14:52 -05:00
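A hedged sketch of the check described above, with hypothetical helpers (the real driver decodes first/last-segment bits from the descriptor status word): a frame that does not both start and end in the same descriptor has spilled over and should be discarded.

#include <linux/types.h>

/* Hypothetical status-bit positions, for illustration only. */
static bool example_desc_first_seg(u32 status) { return status & (1 << 9); }
static bool example_desc_last_seg(u32 status)  { return status & (1 << 8); }

static bool example_frame_fits_one_desc(u32 status)
{
    /* With one frame per descriptor, both bits must be set on the same
     * descriptor; anything else means the frame spans descriptors. */
    return example_desc_first_seg(status) && example_desc_last_seg(status);
}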
Jiri Pirko
15c6ff3bc0 net: remove unnecessary NET_ADDR_RANDOM "bitclean"
NET_ADDR_SET is set in dev_set_mac_address(), so there is no need to alter
the dev->addr_assign_type value in drivers.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-01-03 22:37:36 -08:00
Rob Herring
ef468d2347 net: calxedaxgmac: ip align receive buffers
On gcc 4.7, we will get alignment traps in the ip stack if we don't align
the ip headers on receive. The h/w can support this, so use ip aligned
allocations.

Cut down the unnecessary padding on the allocation. The buffer can start on
any byte alignment, but the size including the beginning offset must be
8-byte aligned. So the h/w buffer size must include the NET_IP_ALIGN offset.

Thanks to Eric Dumazet for the initial patch highlighting the padding issues.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-11-07 03:51:14 -05:00
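A minimal sketch of the allocation rule described above, under the stated constraint that the length programmed into the rx descriptor, including the initial offset, must be a multiple of 8; netdev_alloc_skb_ip_align() reserves NET_IP_ALIGN bytes so the IP header lands on a natural boundary. Names are illustrative.

#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static struct sk_buff *example_alloc_rx_skb(struct net_device *dev,
                                            unsigned int bufsz,
                                            unsigned int *dma_len)
{
    /* Reserve NET_IP_ALIGN bytes so the IP header is naturally aligned. */
    struct sk_buff *skb = netdev_alloc_skb_ip_align(dev, bufsz);

    /* The h/w buffer length must cover the NET_IP_ALIGN offset and be
     * rounded up to a multiple of 8. */
    *dma_len = ALIGN(bufsz + NET_IP_ALIGN, 8);

    return skb;
}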
Rob Herring
97a3a9a67b net: calxedaxgmac: rework transmit ring handling
Only generate tx interrupts once every (ring size / 4) descriptors. Move the
netif_stop_queue call to the end of the xmit function rather than
checking at the beginning.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-11-07 03:51:14 -05:00
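A hedged sketch of the interrupt-coalescing idea, assuming a hypothetical running tx index: only every (ring size / 4)th descriptor requests a completion interrupt, and the queue is stopped at the end of xmit only when the ring is actually short on space.

#include <linux/types.h>

/* Illustrative only: decide whether this tx descriptor should request an
 * interrupt on completion. */
static bool example_want_tx_irq(unsigned int tx_index, unsigned int ring_size)
{
    return (tx_index % (ring_size / 4)) == 0;
}

/* At the end of ndo_start_xmit one would then do something like:
 *     if (ring_space_left < MAX_SKB_FRAGS + 1)
 *         netif_stop_queue(dev);
 * instead of checking for a full ring at the start of the function. */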
Rob Herring
9169963d80 net: calxedaxgmac: drop some unnecessary register writes
The interrupts have already been cleared, so we don't need to clear them
again. Also, we could miss interrupts if they are cleared here but the
packet is not processed.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-11-07 03:51:14 -05:00
Rob Herring
0ec6d343f7 net: calxedaxgmac: use raw i/o accessors in rx and tx paths
The standard readl/writel accessors involve a spinlock and cache sync
operation on ARM platforms with an outer cache. Only DMA triggering
accesses need this, so use the raw variants instead in the critical paths.

The relaxed variants would be more appropriate, but don't exist on all
arches.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-11-07 03:51:13 -05:00
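A small sketch of the accessor split described above, with hypothetical register offsets: __raw_writel() skips the barrier and outer-cache sync that writel() performs on ARM, which is fine for ordinary register accesses in the hot path, while the write that triggers DMA keeps the full-strength accessor so in-memory descriptors are visible to the device first.

#include <linux/io.h>

#define EXAMPLE_STATUS_REG   0x1000   /* hypothetical offsets */
#define EXAMPLE_TX_POLL_REG  0x1004

static void example_hot_path(void __iomem *ioaddr, u32 val)
{
    /* Plain register write: no barrier or L2 sync needed. */
    __raw_writel(val, ioaddr + EXAMPLE_STATUS_REG);

    /* DMA-triggering write: keep writel() so buffers and descriptors in
     * memory are observable by the device before it starts fetching. */
    writel(1, ioaddr + EXAMPLE_TX_POLL_REG);
}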
Rob Herring
b821bd8e5a net: calxedaxgmac: remove explicit rx dma buffer polling
New received frames will trigger the rx DMA to poll the DMA descriptors,
so there is no need to tell the h/w to poll. We also want to enable
dropping frames from the fifo when there is no buffer.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-11-07 03:51:13 -05:00
Rob Herring
0aefa8ecd8 net: calxedaxgmac: enable operate on 2nd frame mode
Enable the tx dma to start reading the next frame while sending the current
frame.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-11-07 03:51:13 -05:00
Eric Dumazet
acb600def2 net: remove skb recycling
Over time, the skb recycling infrastructure got little interest and
many bugs. Generic rx path skb allocation now uses page
fragments for efficient GRO / TCP coalescing, and recycling
a tx skb for the rx path is not worth the pain.

The last identified bug is that fat skbs can be recycled
and we can end up using high order pages after a few iterations.

With help from Maxime Bizon, who pointed out that commit
87151b8689 (net: allow pskb_expand_head() to get maximum tailroom)
introduced this regression for recycled skbs.

Instead of fixing this bug, let's remove skb recycling.

Drivers wanting really hot skbs should use build_skb() anyway,
to allocate and populate the sk_buff right before netif_receive_skb().

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Maxime Bizon <mbizon@freebox.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-10-07 00:40:54 -04:00
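For reference, a hedged sketch of the build_skb() pattern recommended above for hot rx paths: the driver hands in a buffer the hardware already filled, and build_skb() wraps it in an sk_buff without copying, right before netif_receive_skb(). Sizes and names here are illustrative.

#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void example_deliver(struct net_device *dev, void *buf,
                            unsigned int frag_size, unsigned int pkt_len)
{
    /* Wrap an already-filled buffer; frag_size must include the room for
     * struct skb_shared_info that was set aside at allocation time. */
    struct sk_buff *skb = build_skb(buf, frag_size);

    if (!skb)
        return;

    skb_put(skb, pkt_len);
    skb->protocol = eth_type_trans(skb, dev);
    netif_receive_skb(skb);
}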
Rob Herring
f62a23a7cb net: calxedaxgmac: enable rx cut-thru mode
Enabling RX cut-thru mode yields better performance as received frames
start getting written to memory before a whole frame is received.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-07-10 23:25:47 -07:00
Rob Herring
e36ce6eb2b net: calxedaxgmac: set outstanding AXI bus transactions to 8
Increase the number of outstanding read and write AXI transactions from 1
to 8 for better performance.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-07-10 23:25:47 -07:00
Rob Herring
7c4009192e net: calxedaxgmac: fix hang on rx refill
Fix intermittent hangs in xgmac_rx_refill. If a ring buffer entry already
had an skb allocated, then xgmac_rx_refill would get stuck in a loop. This
can happen on an rx error when we just leave the skb allocated to the entry.

[ 7884.510000] INFO: rcu_preempt detected stall on CPU 0 (t=727315 jiffies)
[ 7884.510000] [<c0010a59>] (unwind_backtrace+0x1/0x98) from [<c006fd93>] (__rcu_pending+0x11b/0x2c4)
[ 7884.510000] [<c006fd93>] (__rcu_pending+0x11b/0x2c4) from [<c0070b95>] (rcu_check_callbacks+0xed/0x1a8)
[ 7884.510000] [<c0070b95>] (rcu_check_callbacks+0xed/0x1a8) from [<c0036abb>] (update_process_times+0x2b/0x48)
[ 7884.510000] [<c0036abb>] (update_process_times+0x2b/0x48) from [<c004e8fd>] (tick_sched_timer+0x51/0x94)
[ 7884.510000] [<c004e8fd>] (tick_sched_timer+0x51/0x94) from [<c0045527>] (__run_hrtimer+0x4f/0x1e8)
[ 7884.510000] [<c0045527>] (__run_hrtimer+0x4f/0x1e8) from [<c0046003>] (hrtimer_interrupt+0xd7/0x1e4)
[ 7884.510000] [<c0046003>] (hrtimer_interrupt+0xd7/0x1e4) from [<c00101d3>] (twd_handler+0x17/0x24)
[ 7884.510000] [<c00101d3>] (twd_handler+0x17/0x24) from [<c006be39>] (handle_percpu_devid_irq+0x59/0x114)
[ 7884.510000] [<c006be39>] (handle_percpu_devid_irq+0x59/0x114) from [<c0069aab>] (generic_handle_irq+0x17/0x2c)
[ 7884.510000] [<c0069aab>] (generic_handle_irq+0x17/0x2c) from [<c000cc8d>] (handle_IRQ+0x35/0x7c)
[ 7884.510000] [<c000cc8d>] (handle_IRQ+0x35/0x7c) from [<c033b153>] (__irq_svc+0x33/0xb8)
[ 7884.510000] [<c033b153>] (__irq_svc+0x33/0xb8) from [<c0244b06>] (xgmac_rx_refill+0x3a/0x140)
[ 7884.510000] [<c0244b06>] (xgmac_rx_refill+0x3a/0x140) from [<c02458ed>] (xgmac_poll+0x265/0x3bc)
[ 7884.510000] [<c02458ed>] (xgmac_poll+0x265/0x3bc) from [<c029fcbf>] (net_rx_action+0xc3/0x200)
[ 7884.510000] [<c029fcbf>] (net_rx_action+0xc3/0x200) from [<c0030cab>] (__do_softirq+0xa3/0x1bc)

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-07-10 23:25:47 -07:00
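A minimal sketch of the refill-loop shape the fix implies, with a hypothetical ring structure: a slot that still owns an skb (for instance after an rx error) must be skipped and the head advanced anyway, otherwise the loop re-tests the same slot forever, which is the stall captured in the trace above.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical ring state, for illustration only. */
struct example_ring {
    struct sk_buff **rx_skbuff;
    unsigned int    rx_head;    /* next slot to refill */
    unsigned int    rx_tail;    /* next slot the h/w will complete */
    unsigned int    ring_size;
};

static unsigned int example_ring_space(const struct example_ring *r)
{
    return (r->rx_tail + r->ring_size - r->rx_head - 1) % r->ring_size;
}

static void example_rx_refill(struct net_device *dev, struct example_ring *r)
{
    while (example_ring_space(r) > 0) {
        unsigned int entry = r->rx_head;

        if (!r->rx_skbuff[entry]) {
            struct sk_buff *skb = netdev_alloc_skb_ip_align(dev, 1536);

            if (!skb)
                break;
            r->rx_skbuff[entry] = skb;
            /* ... dma_map_single() the buffer and publish the descriptor ... */
        }

        /* Advance even when the slot already had an skb, so the loop
         * cannot spin on a single entry. */
        r->rx_head = (r->rx_head + 1) % r->ring_size;
    }
}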
Rob Herring
eb5e1b29a5 net: calxedaxgmac: fix net timeout recovery
Fix net tx watchdog timeout recovery. The descriptor ring was reset,
but the DMA engine was not reset to the beginning of the ring.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-07-10 23:25:47 -07:00
Danny Kukawka
7ce5d22219 net: use eth_hw_addr_random() and reset addr_assign_type
Use eth_hw_addr_random() instead of calling random_ether_addr()
to set addr_assign_type correctly to NET_ADDR_RANDOM.

Reset the state to NET_ADDR_PERM as soon as the MAC gets
changed via .ndo_set_mac_address.

v2: adapt to renamed eth_hw_addr_random()

Signed-off-by: Danny Kukawka <danny.kukawka@bisect.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-02-15 15:34:17 -05:00
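For reference, a hedged sketch of the substitution the patch describes: eth_hw_addr_random() both fills dev->dev_addr with a random address and marks addr_assign_type as NET_ADDR_RANDOM, so drivers no longer set the flag by hand. The function name is a placeholder.

#include <linux/etherdevice.h>

static void example_init_mac(struct net_device *dev)
{
    if (!is_valid_ether_addr(dev->dev_addr))
        /* Fills dev->dev_addr and sets addr_assign_type = NET_ADDR_RANDOM,
         * replacing the old random_ether_addr() + manual flag dance. */
        eth_hw_addr_random(dev);
}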
stephen hemminger
bd601cc464 xgmac: cleanups
Make a local function static and make ethtool_ops const.
Compile tested only.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-01-05 13:23:00 -05:00
Heiko Carstens
65cb5df51a net: calxeda xgmac ethernet driver add missing HAS_IOMEM dependency
Fix allyesconfig build on architectures without IOMEM:

drivers/net/ethernet/calxeda/xgmac.c:1800:2:
  error: implicit declaration of function 'iounmap' [-Werror=implicit-function-declaration]

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Rob Herring <rob.herring@calxeda.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-12-27 13:01:44 -05:00
Rob Herring
85c10f2828 net: add calxeda xgmac ethernet driver
Add support for the XGMAC 10Gb ethernet device in the Calxeda Highbank
SOC.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-11-29 01:15:24 -05:00