For backward compatibility, we still keep the deprecated form, and warn
users who still use it, like this:
drivers/block/drbd/drbd_bitmap.c: In function ‘bm_page_io_async’:
drivers/block/drbd/drbd_bitmap.c:973:3: warning: ‘kmap_atomic_deprecated’ is deprecated (declared at /home/wangcong/linux-2.6/include/linux/highmem.h:124)
drivers/block/drbd/drbd_bitmap.c:977:3: warning: ‘kunmap_atomic_deprecated’ is deprecated (declared at /home/wangcong/linux-2.6/include/linux/highmem.h:144)
Thanks to Nick Bowler for the cpp trick!
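For reference, a minimal sketch of the kind of cpp trick involved (hypothetical
names, not the actual highmem.h code): a variadic macro counts its arguments and
dispatches one-argument callers to the new function and two-argument callers to
the deprecated one, so only old-style callers see the warning above:

/* New one-argument form and the deprecated two-argument form. */
static void *do_map(void *page) { return page; }
__attribute__((deprecated))
static void *do_map_deprecated(void *page, int type) { (void)type; return page; }

/* Dispatch on argument count: one arg -> new form, two args -> old form. */
#define PICK_MAP(_1, _2, name, ...) name
#define my_map(...) PICK_MAP(__VA_ARGS__, do_map_deprecated, do_map)(__VA_ARGS__)

int main(void)
{
	char page[64];

	my_map(page);		/* compiles quietly */
	my_map(page, 0);	/* triggers the deprecation warning at build time */
	return 0;
}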
Cc: Cesar Eduardo Barros <cesarb@cesarb.net>
Cc: Nick Bowler <nbowler@elliptictech.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Cong Wang <amwang@redhat.com>
Add support for the watchdog integrated into the SMSC SCH5627 and
SCH5636 Super I/O chips. Since the watchdog is part of the hwmon logical
device and thus shares ioports with it, the watchdog driver is integrated
into the existing hwmon drivers for these chips.
Note that this version of the watchdog support for sch56xx Super I/O chips
implements the watchdog chardev interface itself, rather than relying on
the recently added watchdog core / watchdog_dev. This is done because some
needed functionality is currently missing from watchdog_dev; as soon as
this functionality is added (which is being discussed on the linux-watchdog
mailing list), I'll convert this driver over to using watchdog_dev.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
[guenter.roeck@ericsson.com: Added missing linux/slab.h include]
Signed-off-by: Guenter Roeck <guenter.roeck@ericsson.com>
Adds multitouch support for the Gametel Android game controller.
The multitouch events are emulated by the Gametel device. Each physical button
is configured to generate a MT event on a specific coordinate. This seems to be
the only way for us to support Android games that don't support HID gamepads.
It is possible to inject MT events at Android level, but this requires root on
the phone.
Signed-off-by: Andreas Nielsen <eas@svep.se>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Since commit 299b0767 ("ipv6: Fix IPsec slowpath fragmentation problem"),
in ip6_append_data(), after the call to skb_put(skb, fraglen + dst_exthdrlen)
skb->len includes dst_exthdrlen, and we never subtract dst_exthdrlen afterwards.
This makes fraggap > 0 in the next iteration of the while loop and makes the
size of the skb incorrect.
Fix this by reserving headroom for dst_exthdrlen instead.
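Roughly, the distinction the fix relies on (hypothetical sizes, not the actual
ip6_append_data() code) is that skb_reserve() on a fresh skb only creates
headroom and leaves skb->len alone, while only skb_put() adds to skb->len:

	skb = alloc_skb(hh_len + dst_exthdrlen + fraglen, GFP_ATOMIC);
	if (!skb)
		return -ENOMEM;

	/* Headroom for the dest options header: data/tail move forward,
	 * skb->len stays 0, so the later fraggap math is not skewed. */
	skb_reserve(skb, hh_len + dst_exthdrlen);

	/* Payload: this, and only this, is counted in skb->len. */
	skb_put(skb, fraglen);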
Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pull core/locking changes for v3.4 from Ingo Molnar
* 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
futex: Simplify return logic
futex: Cover all PI opcodes with cmpxchg enabled check
Pull core/iommu changes for v3.4 from Ingo Molnar
* 'core-iommu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/iommu/intel: Increase the number of iommus supported to MAX_IO_APICS
x86/iommu/intel: Fix identity mapping for sandy bridge
* branch 'dcache-word-accesses':
vfs: use 'unsigned long' accesses for dcache name comparison and hashing
This does the name hashing and lookup using word-sized accesses when
that is efficient, namely on x86 (although any little-endian machine
with good unaligned accesses would do).
It does very much depend on little-endian logic, but it's a very hot
couple of functions under some real loads, and this patch improves the
performance of __d_lookup_rcu() and link_path_walk() by up to about 30%,
giving a 10% improvement on some very pathname-heavy benchmarks.
Because we do make unaligned accesses past the filename, the
optimization is disabled when CONFIG_DEBUG_PAGEALLOC is active, and we
effectively depend on the fact that on x86 we don't really ever have the
last page of usable RAM followed immediately by any IO memory (due to
ACPI tables, BIOS buffer areas etc).
Some of the bit operations we do are a bit "subtle". It's commented,
but you do need to really think about the code. Or just consider it
black magic.
Thanks to people on G+ for some of the optimized bit tricks.
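One of those tricks, sketched in plain C under the same assumptions
(little-endian, cheap word-sized loads); the constants and names are
illustrative, not the exact fs/dcache.c code — detect a zero byte in a
word without looking at the bytes one at a time:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Classic "has a zero byte" trick: subtracting 0x01 from every byte only
 * sets a byte's high bit (while the original high bit was clear) if that
 * byte was 0x00. */
static int word_has_zero_byte(uint64_t w)
{
	return ((w - 0x0101010101010101ULL) & ~w & 0x8080808080808080ULL) != 0;
}

int main(void)
{
	const char name[8] = "dentry";	/* short name, NUL-padded */
	uint64_t w;

	memcpy(&w, name, sizeof(w));	/* one word-sized load of the name */
	printf("%d\n", word_has_zero_byte(w));	/* prints 1 */
	return 0;
}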
For some odd historical reason, the final mixing round for the dentry
cache hash table lookup had an insane "xor with big constant" logic. In
two places.
The big constant that is being xor'ed is GOLDEN_RATIO_PRIME, which is a
fairly random-looking number that is designed to be *multiplied* with so
that the bits get spread out over a whole long-word.
But xor'ing with it is insane. It doesn't really even change the hash -
it really only shifts the hash around in the hash table. To make
matters worse, the insane big constant is different on 32-bit and 64-bit
builds, even though the name hash bits we use are always 32-bit (and the
bits from the pointer we mix in effectively are too).
It's all total voodoo programming, in other words.
Now, some testing and analysis of the hash chains shows that the rest of
the hash function seems to be fairly good. It does pick the right bits
of the parent dentry pointer, for example, and while it's generally a
bad idea to use an xor to mix down the upper bits (because if there is a
repeating pattern, the xor can cause "destructive interference"), it
seems to not have been a disaster.
For example, replacing the hash with the normal "hash_long()" code (that
uses the GOLDEN_RATIO_PRIME constant correctly, btw) actually just makes
the hash worse. The hand-picked hash knew which bits of the pointer had
the highest entropy, and hash_long() ends up mixing bits less optimally
at least in some trivial tests.
So the hash function overall seems fine, it just has that really odd
"shift result around by a constant xor".
So get rid of the silly xor, and replace the down-mixing of the bits
with an add instead of an xor that tends to not have the same kind of
destructive interference issues. Some stats on the resulting hash
chains shows that they look statistically identical before and after,
but the code is simpler and no longer makes you go "WTF?".
Also, the incoming hash really is just "unsigned int", not a long, and
there's no real point to worry about the high 26 bits of the dentry
pointer for the 64-bit case, because they are all going to be identical
anyway.
So also change the hashing to be done in the more natural 'unsigned int'
that is the real size of the actual hashed data anyway.
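The resulting shape of the mix, as a rough sketch (illustrative only, not the
exact fs/dcache.c code): fold the parent pointer into the 32-bit name hash with
an add, shift to pick reasonable pointer bits, and do no final xor with any
big constant:

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: the shift stands in for "use the high-entropy bits
 * of the pointer"; the points are the add (not xor) and the absence of a
 * final xor-with-constant step. */
static unsigned int d_hash_sketch(const void *parent, unsigned int name_hash)
{
	unsigned int hash = name_hash + (unsigned int)((uintptr_t)parent >> 4);

	return hash & 1023;	/* index into a 1024-bucket table */
}

int main(void)
{
	int anchor;

	printf("bucket %u\n", d_hash_sketch(&anchor, 0x12345678u));
	return 0;
}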
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fixes:
drivers/built-in.o: In function `qmi_wwan_bind_shared':
qmi_wwan.c:(.text+0x25b686): undefined reference to `usb_cdc_wdm_register'
make[1]: *** [.tmp_vmlinux1] Error 1
Reported-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Acked-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds support for TI's CPSW driver.
The three-port switch gigabit ethernet subsystem provides ethernet packet
communication and can be configured as an ethernet switch. It supports
10/100/1000 Mbps.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Sriramakrishnan A G <srk@ti.com>
Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The TI CPSW ethernet switch has a built-in address lookup engine. This patch adds
the code necessary for programming this module.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Prevent a race condition between commands on the comm channel. It
happened while unloading the driver, when switching from event to
polling mode: the VF got a completion on the last command before
switching to polling mode, but the toggle was not changed.
After the fix, the VF will not write the next command before the
toggle is updated.
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
Liang Zheng (lzheng@redhat.com) found that, in the following topology,
bonding does not send an IGMP report when we trigger a fail-over of the bond.
eth0--
      |-- bond0 -- br0
eth1--
modprobe bonding mode=1 miimon=100 resend_igmp=10
ifconfig bond0 up
ifenslave bond0 eth0 eth1
brctl addbr br0
ifconfig br0 192.168.100.2/24 up
brctl addif br0 bond0
Add 192.168.100.2 (br0) into a multicast group, like 224.10.10.10,
then trigger a fail-over in bonding.
You can see that the "resend_igmp" parameter does not work.
The reason is that when we add br0 into a multicast group,
it does not propagate multicast knowledge down to its ports.
If we choose to propagate multicast knowledge down to all ports of the
bridge, then we have to track every change made to the bridge and keep
a backup for all ports. That is hard to track, I think.
Instead, I chose to modify bonding to send IGMP reports for its master.
Changelog:
V2: correct comments
V3: move this check into bond_resend_igmp_join_requests()
V4: only send igmp reports if bond is enslaved to a bridge
Signed-off-by: Weiping Pan <panweiping3@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a compatible string for the MPC5125 FEC. The FEC on MPC5125 additionally
supports the RMII PHY interface. Configure the controller/PHY interface type
according to the optional phy-connection-type property in the ethernet
node. This property should be either "rmii" or "mii".
Signed-off-by: Vladimir Ermakov <vooon341@gmail.com>
Signed-off-by: Anatolij Gustschin <agust@denx.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matt Evans spotted that the x86 bpf_jit was incorrectly handling negative
constant offsets in the BPF_S_LDX_B_MSH instruction.
We need to abort JIT compilation like we do in common_load so that the
filter uses the interpreter code and can call __load_pointer().
Reference: http://lists.openwall.net/netdev/2011/07/19/11
Thanks to Indan Zupancic for bringing this issue back up.
Reported-by: Matt Evans <matt@ozlabs.org>
Reported-by: Indan Zupancic <indan@nul.nu>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As suggested by Ben, this adds a clarification on the usage of
CHECKSUM_UNNECESSARY on the outgoing path. Also add the usage
description of NETIF_F_FCOE_CRC and CHECKSUM_UNNECESSARY
for the kernel FCoE protocol driver.
This is a follow-up to the following:
http://patchwork.ozlabs.org/patch/147315/
Signed-off-by: Yi Zou <yi.zou@intel.com>
Cc: Ben Hutchings <bhutchings@solarflare.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: www.Open-FCoE.org <devel@open-fcoe.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix a bug when using 'ethtool -K ethx tx off' to turn off tx ip checksum:
FCoE CRC offload should not be impacted. The skb_checksum_help() is needed
for the ip checksum only if it's not FCoE traffic, regardless of ethtool
toggling the tx ip checksum on or off. Instead of using CHECKSUM_PARTIAL, we
will use CHECKSUM_UNNECESSARY as a proper indication to avoid software ip
checksumming on FCoE frames.
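The resulting behaviour, as a bare sketch (not the literal diff; skb and
features are ambient here): software checksumming is only attempted for
CHECKSUM_PARTIAL skbs, so an FCoE frame marked CHECKSUM_UNNECESSARY is left
alone no matter how the ethtool tx-checksum flag is set:

	if (skb->ip_summed == CHECKSUM_PARTIAL) {
		/* The stack asked us to finish the IP checksum; if the
		 * device feature is off, fall back to software. */
		if (!(features & NETIF_F_ALL_CSUM) && skb_checksum_help(skb))
			goto out_drop;
	}
	/* CHECKSUM_UNNECESSARY (e.g. FCoE frames): nothing to do here;
	 * the FCoE CRC is offloaded separately via NETIF_F_FCOE_CRC. */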
Ref. to original discussion thread:
http://patchwork.ozlabs.org/patch/146567/
CC: "James E.J. Bottomley" <JBottomley@parallels.com>
CC: Robert Love <robert.w.love@intel.com>
Signed-off-by: Yi Zou <yi.zou@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is related to fixing the bug of dropping FCoE frames when disabling tx ip
checksum via 'ethtool -K ethx tx off'. The FCoE protocol stack driver would
use CHECKSUM_UNNECESSARY on the tx path instead of CHECKSUM_PARTIAL (as
indicated in patch 2/2 of this series). To do so, netif_needs_gso() has to be
changed here to not do GSO for CHECKSUM_UNNECESSARY as well as CHECKSUM_PARTIAL.
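A paraphrased sketch of what the check ends up looking like (not the literal
netdevice.h hunk): GSO is only forced when the device cannot segment the skb,
or when the checksum state is neither CHECKSUM_PARTIAL nor CHECKSUM_UNNECESSARY:

static inline int needs_gso_sketch(struct sk_buff *skb,
				   netdev_features_t features)
{
	return skb_is_gso(skb) &&
	       (!skb_gso_ok(skb, features) ||
		unlikely(skb->ip_summed != CHECKSUM_PARTIAL &&
			 skb->ip_summed != CHECKSUM_UNNECESSARY));
}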
Ref. to original discussion thread:
http://patchwork.ozlabs.org/patch/146567/
Signed-off-by: Yi Zou <yi.zou@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch allows us to avoid a Tx hang when SR-IOV is enabled. This hang
can be triggered by sending small packets at a rate that triggers Rx missed
errors from the adapter while the internal Tx switch and at least one VF
are enabled.
This was all due to the fact that under heavy stress the Rx FIFO never
drained below the flow control high water mark. This resulted in the Tx
FIFO being head of line blocked due to the fact that it relies on the flow
control high water mark to determine when it is acceptable for the Tx to
place a packet in the Rx FIFO.
The resolution for this is to set the FCRTH value to the RXPBSIZE - 32 so
that even if the ring is almost completely full we can still place Tx
packets on the Rx ring and drop incoming Rx traffic if we do not have
sufficient space available in the Rx FIFO.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of dropping the packet, we keep the skb buffer, and return
NETDEV_TX_BUSY to let the upper layer retry the send. This will not cause
an endless loop, because the host is taking data away from the ring buffer,
and we have called stop_queue before returning NETDEV_TX_BUSY.
The stop_queue was called in the function netvsc_send() in file
netvsc.c, then it returns to rndis_filter_send(), which returns to
netvsc_start_xmit() in file netvsc_drv.c. So the NETDEV_TX_BUSY is
indeed returned AFTER queue is stopped.
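The pattern, as a bare sketch (hypothetical ring-space helper, not the actual
netvsc code): stop the queue before reporting busy, so NETDEV_TX_BUSY cannot
turn into a busy retry loop:

static netdev_tx_t sketch_start_xmit(struct sk_buff *skb,
				     struct net_device *ndev)
{
	if (!ring_has_room(ndev)) {	/* hypothetical space check */
		netif_stop_queue(ndev);	/* no more xmits until woken */
		return NETDEV_TX_BUSY;	/* core requeues this skb */
	}

	/* ... post skb to the ring; the completion path calls
	 * netif_wake_queue() once space is available again ... */
	return NETDEV_TX_OK;
}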
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: K. Y. Srinivasan <kys@microsoft.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Resolve namespace issues when FCoE or DCB is not enabled.
The issue is that with certain configurations we end up with namespace
problems. A simple example:
ixgbe_main.c
- defines func A()
- uses func A()
ixgbe_fcoe.c
- uses func A()
ixgbe.h
- has prototype for func A()
For the default (FCoE included) all is good. But when it isn't included,
the namespace checker complains that func A() could be static.
To resolve this, create an ixgbe_lib file to contain the functions used
by DCB/FCoE and their helper functions so that they are always in the
namespace whether or not DCB/FCoE is enabled.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Remove an unnecessary #define and use memcpy
instead of a loop to copy an ethernet address.
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
No need for yet another #define for this.
Convert NODE_ADDRESS_SIZE use to ETH_ALEN and remove #define.
Use memcpy instead of a loop to copy an address.
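The shape of the conversion, sketched (the loop stands in for whatever the
driver did with its private size macro):

	/* Before: driver-private length macro and a byte-by-byte loop. */
	for (i = 0; i < NODE_ADDRESS_SIZE; i++)
		dev->dev_addr[i] = addr[i];

	/* After: the standard constant and a plain memcpy. */
	memcpy(dev->dev_addr, addr, ETH_ALEN);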
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
With increasing receive window sizes, but the speed of light not improving
that much, the out-of-order queue can contain a huge number of skbs, waiting
to be moved to the receive queue when the missing packets fill the holes.
Some devices happen to use fat skbs (truesize of 4096 + sizeof(struct
sk_buff)) to store regular (MTU <= 1500) frames. This makes it highly
probable that sk_rmem_alloc hits the sk_rcvbuf limit, which can be 4 Mbytes
in many cases.
When the limit is hit, the TCP stack calls tcp_collapse_ofo_queue(), a true
latency killer and CPU cache blower.
Attempting the coalescing each time we add a frame to the ofo queue keeps
memory use tight and in many cases avoids the tcp_collapse() later.
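A rough sketch of the coalescing attempt (hypothetical helper, simplified
bookkeeping; the real code also adjusts truesize and sk_rmem_alloc): if the
new segment's payload fits in the tail skb's tailroom, copy it there instead
of queueing another skb:

static bool ofo_try_coalesce_sketch(struct sk_buff *tail, struct sk_buff *skb)
{
	int len = skb->len;

	if (!tail || skb_headlen(skb) != len || skb_tailroom(tail) < len)
		return false;

	memcpy(skb_put(tail, len), skb->data, len);
	TCP_SKB_CB(tail)->end_seq = TCP_SKB_CB(skb)->end_seq;
	/* caller frees @skb and accounts the truesize delta */
	return true;
}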
Tested on various wireless setups (b43, ath9k, ...) known to use big skb
truesize, this patch removed the "packets collapsed in receive queue due
to low socket buffer" I had before.
This also reduced average memory used by tcp sockets.
With help from Neal Cardwell.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: H.K. Jerry Chu <hkchu@google.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>