I found another crash when deleting lots of virtual stations
in a congested environment. I think the problem is that
ieee80211_mlme_notify_scan_completed() could call
ieee80211_restart_sta_timer() for a stopped interface
that was about to be deleted.
With the following patch I am unable to reproduce the
crash.
Signed-off-by: Ben Greear <greearb@candelatech.com>
[move check, also make the same change in mesh]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
If a P2P device wdev is removed while it has a scan, then the
scan completion might crash later as it is already freed by
that time. To avoid the crash, always check the scan completion
when the P2P device is being removed for some reason. If the
driver already canceled it, don't warn and just free it; otherwise
warn and leak it to avoid later crashes.
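Roughly, the removal-time check looks like the sketch below; the scan_req fields are real cfg80211 state, but the "driver already canceled" flag and the cleanup calls are illustrative assumptions, not the actual patch:

    if (rdev->scan_req && rdev->scan_req->wdev == wdev) {
        if (driver_cancelled_scan) {
            /* the driver already gave the scan up: just free it */
            kfree(rdev->scan_req);
        } else {
            /* otherwise warn and deliberately leak the request,
             * which is safer than a later use-after-free
             */
            WARN_ON(1);
        }
        rdev->scan_req = NULL;
    }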
In order to do this, locking needs to be changed away from the
rdev mutex (which can't always be guaranteed). For now, use
the sched_scan_mtx instead, I'll rename it to just scan_mtx in
a later patch.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
The virtual monitor interface has a locking issue: it calls
into the channel context code with the iflist mutex held,
which isn't allowed since it is usually acquired the other
way around. The mutex is still required for the interface
iteration, but need not be held across the channel calls.
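The reordering is roughly the following; ieee80211_vif_use_channel() and the iflist mutex are real mac80211 names, but the surrounding monitor-interface code is paraphrased rather than quoted:

    mutex_lock(&local->iflist_mtx);
    /* ... walk/update the interface list under the mutex ... */
    mutex_unlock(&local->iflist_mtx);

    /* call into the channel context code only after dropping iflist_mtx,
     * since the two locks are normally taken in the opposite order
     */
    ret = ieee80211_vif_use_channel(sdata, &chandef,
                                    IEEE80211_CHANCTX_EXCLUSIVE);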
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Arend reported a crash in tracing if the driver returns an
ERR_PTR() value from the add_virtual_intf() callback. This
is due to the tracing then still attempting to dereference
the "pointer"; fix this by using IS_ERR_OR_NULL().
Reported-by: Arend van Spriel <arend@broadcom.com>
Tested-by: Arend van Spriel <arend@broadcom.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Acked-by: Larry Finger <Larry.Finger@lwfinger.net>
Cc: stable@vger.kernel.org
Signed-off-by: John W. Linville <linville@tuxdriver.com>
curr_cmd points to the command that is being processed or is
waiting for its command response from firmware. If the function
shutdown happens to occur at this time, we should cancel the cmd
timer and put the command back into the free queue.
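A minimal sketch of that shutdown handling; the adapter fields and helpers are real mwifiex names, but the exact call sequence here is an assumption rather than the actual diff:

    if (adapter->curr_cmd) {
        /* stop the command-response timer for the in-flight command */
        del_timer_sync(&adapter->cmd_timer);
        /* return the command node to the free queue instead of
         * leaving it dangling across the function shutdown
         */
        mwifiex_insert_cmd_to_free_q(adapter, adapter->curr_cmd);
        adapter->curr_cmd = NULL;
    }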
Cc: <stable@vger.kernel.org> # 3.8
Tested-by: Marco Cesarano <marco@marvell.com>
Signed-off-by: Bing Zhao <bzhao@marvell.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
During rmmod mwifiex_sdio processing, the FUNC_SHUTDOWN command is
sent to firmware. Firmware expects only FUNC_INIT once the WLAN
function has been shut down.
Any command pending in the command queue should be ignored and
freed.
Cc: <stable@vger.kernel.org> # 3.8
Tested-by: Daniel Drake <dsd@laptop.org>
Tested-by: Marco Cesarano <marco@marvell.com>
Signed-off-by: Bing Zhao <bzhao@marvell.com>
Signed-off-by: Amitkumar Karwar <akarwar@marvell.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Running the following script repeatedly on XO-4 with SD8787
produces command timeout and system lockup.
insmod mwifiex_sdio.ko
sleep 1
ifconfig eth0 up
iwlist eth0 scan &
sleep 0.5
rmmod mwifiex_sdio
mwifiex_send_cmd_async() is called for sync as well as async
commands. (mwifiex_send_cmd_sync() internally calls it for
sync commands.)
"adapter->cmd_queued" gets filled inside the mwifiex_send_cmd_async()
routine for both types of commands, but it is used only for sync
commands in mwifiex_wait_queue_complete(). This could lead to a
race when two threads queue a sync command and another
sync/async command simultaneously.
Get rid of the global variable and pass the command node as a
parameter to mwifiex_wait_queue_complete() to fix the problem.
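The shape of the change is roughly as below; mwifiex_wait_queue_complete() taking the command node comes straight from the description above, while returning the node from the async path through an extra pointer argument is only an illustrative assumption:

    /* hypothetical sketch of the sync path after the change */
    struct cmd_ctrl_node *cmd_node;
    int ret;

    ret = mwifiex_send_cmd_async(priv, cmd_no, cmd_action, cmd_oid,
                                 data_buf, &cmd_node);   /* assumed out-param */
    if (ret)
        return ret;

    /* wait on the node this thread actually queued, not on the shared
     * adapter->cmd_queued pointer that another thread may have
     * overwritten in the meantime
     */
    return mwifiex_wait_queue_complete(priv->adapter, cmd_node);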
Cc: <stable@vger.kernel.org> # 3.8
Reported-by: Daniel Drake <dsd@laptop.org>
Tested-by: Daniel Drake <dsd@laptop.org>
Tested-by: Marco Cesarano <marco@marvell.com>
Signed-off-by: Amitkumar Karwar <akarwar@marvell.com>
Signed-off-by: Bing Zhao <bzhao@marvell.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
The beacon and multicast-buffer queues are managed by the beacon
tasklet, and the generic tx path hang check does not help in any way
here. Running it on those queues anyway can introduce some race
conditions leading to unnecessary chip resets.
Cc: stable@vger.kernel.org
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
The commit 'ath9k_hw: fix calibration issues on chainmask that don't
include chain 0' changed the hardware chainmask to the chip chainmask
for the duration of the calibration, but the revert to user
configuration in the reset path runs too early.
That causes some issues with limiting the number of antennas (including
spurious failure in hardware-generated packets).
Fix this by reverting the chainmask after the essential parts of the
calibration that need the workaround, and before NF calibration is run.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Reported-by: Wojciech Dubowik <Wojciech.Dubowik@neratec.com>
Tested-by: Wojciech Dubowik <Wojciech.Dubowik@neratec.com>
Cc: stable@vger.kernel.org
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Since v3.9-rc1 the kernel has basic support for Ralink WiSoC. The config symbols
are named slightly differently than before. Fix rt2x00 to match the new
symbols.
The commit causing this breakage is:
commit ae2b5bb657
Author: John Crispin <blogic@openwrt.org>
Date: Sun Jan 20 22:05:30 2013 +0100
MIPS: ralink: adds Kbuild files
Signed-off-by: John Crispin <blogic@openwrt.org>
Acked-by: Gertjan van Wingerde <gwingerde@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Merge tag 'nfc-fixes-3.9-1' of git://git.kernel.org/pub/scm/linux/kernel/git/sameo/nfc-fixes
Samuel Ortiz <sameo@linux.intel.com> says:
This is the first NFC pull request for 3.9 fixes
With this one we have:
- A fix for properly decreasing socket ack log.
- A cleanup of timers and scheduled work upon NFC device removal.
- A monitoring socket cleanup round from llcp_socket_release.
- A proper error report to pending sockets upon NFC device removal.
Signed-off-by: John W. Linville <linville@tuxdriver.com>
If a P2P Device interface receives an unhandled action
frame, we attempt to return it. This crashes because it
doesn't have a channel context. Fix the crash by using
status->band and properly marking the return frame as an
off-channel frame.
Reported-by: Ilan Peer <ilan.peer@intel.com>
Reviewed-by: Ilan Peer <ilan.peer@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Whenever an adapter is removed we must clean all the local structures,
especially the timers and scheduled work. Otherwise those asynchronous
threads will eventually try to access the freed nfc_dev pointer if an LLCP
link is up.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
This is really difficult to test with real NFC devices, but without
this fix an LLCP server will eventually refuse new connections.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
In the odd case that while updating information from a beacon,
a BSS was found that is part of a hidden group, we drop the
new information. In this case, however, we leak the IE buffer
from the update, and erroneously update the entry's timestamp
so it will never time out. Fix both these issues.
Cc: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
There is a NETDEV_ENTRY that was incorrectly assigned with
WIPHY_ASSIGN; fix it.
Signed-off-by: Vladimir Kondratiev <qca_vkondrat@qca.qualcomm.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
If there are keys left during station removal, then a
synchronize_net() will be done (for each key, I have a
patch to address this for 3.10); otherwise it won't be
done at all, which causes issues because the station
could be used for TX while it's being removed from the
driver -- that might confuse the driver.
Fix this by always doing synchronize_net() if no key
was present any more.
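The core of the fix is just a conditional barrier in the station removal path; the flag name here is illustrative:

    /* a synchronize_net() already happened per removed key; only when
     * no key was left do we still need one before telling the driver
     * to drop the station, so no CPU can be mid-TX through it
     */
    if (!had_key)
        synchronize_net();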
Cc: stable@vger.kernel.org
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
John W. Linville says:
====================
This time just passing along a big batch of fixes from Johannes...
For the mac80211 bits:
"Here I have fixes from Ben Greear for stray work items when deleting
interfaces, another idle handling fix from Felix, a fix from Marco for a
mesh PS buffering crash, and I have a fix for the VHT MCS calculation in
association request frames and more nl80211 feature advertising removal
as well as a workaround to increase the dump size if the SKB overhead is
too large. For 3.10 I already have a complete fix queued, but that also
requires (simple) userspace changes."
And for the iwlwifi bits:
"The patches from Dor fix a bunch of calibration issues in the new MVM
driver, and Emmanuel has a number of fixes there as well. Also, we
decided to disable 8k A-MSDU by default, so that's in there. My own
patches are addressing an issue we found with the new devices but that
seems to also exist on older ones, the DMA writeback the devices do can
be delayed and cause issues. The fix is unfortunately relatively large
and depends on two other changes (to not be hugely conflicting), but I
think it's still worth it at this point."
As Johannes says, it is a bit large. But I hope it is still early
enough in the cycle to make that worthwhile.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The SLIPORT_SEMAPHORE register shadowed in the
config-space may not reflect the correct POST stage after
an EEH reset in BE2/3; it may return FW_READY state even though
FW is not ready. This causes the driver to prematurely
poll the FW mailbox and fail.
For BE2/3 use the CSR-BAR/0xac instead.
Reported-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ben Hutchings says:
====================
Fix regressions introduced by the last set of fixes (sorry):
1. Potential deadlock when disabling TX queues.
2. RX was broken on architectures other than x86 and powerpc.
I still expect to send one more bug fix for 3.9, but as it sometimes
takes days to reproduce the bug it's going to take a couple of weeks of
testing to be confident that it's really fixed.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
RX DMA buffers start at an offset of EFX_PAGE_IP_ALIGN bytes from the
start of a cache line. This offset obviously needs to be included in
the virtual address, but this was missed in commit b590ace09d
('sfc: Fix efx_rx_buf_offset() in the presence of swiotlb') since
EFX_PAGE_IP_ALIGN is equal to 0 on both x86 and powerpc.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
efx_device_detach_sync() locks all TX queues before marking the device
detached and thus disabling further TX scheduling. But it can still
be interrupted by TX completions which then result in TX scheduling in
soft interrupt context. This will deadlock when it tries to acquire
a TX queue lock that efx_device_detach_sync() already acquired.
To avoid deadlock, we must use netif_tx_{,un}lock_bh().
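A sketch of efx_device_detach_sync() using the BH-disabling variants; the function and the net-device pointer are real sfc names, but the body is paraphrased from the description rather than copied from the driver:

    static inline void efx_device_detach_sync(struct efx_nic *efx)
    {
        struct net_device *dev = efx->net_dev;

        /* Freeze TX with BHs disabled so a TX-completion softirq on
         * this CPU cannot try to take a queue lock we already hold.
         */
        netif_tx_lock_bh(dev);
        netif_device_detach(dev);
        netif_tx_unlock_bh(dev);
    }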
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
While a PCI card faces EEH errors, a reset (usually a hot reset) is
expected to recover from them. After the EEH core finishes
the reset, the driver callback (be_eeh_reset) is called and waits
for the firmware to complete POST successfully. The original code
would return with an error as soon as it detected a failure during
the POST stage, which is not enough.
This patch makes the driver (be_eeh_reset) wait until the firmware
completes POST or a timeout occurs, instead of returning an error
immediately upon detecting a POST failure. It also improves the
reliability of the driver's EEH functionality.
Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Acked-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When a router forwards a packet that contains the IPv4 timestamp option,
if there is no space left in the option for the router to add its own
timestamp, then the router increments the Overflow value in the option.
However, if the addresses of the routers are prespecified in the option,
then the overflow condition cannot happen: the option is structured so
that each prespecified router has a place to write its timestamp. Other
routers do not add a timestamp, so there will never be a lack of space.
This fix ensures that the Overflow value in the IPv4 timestamp option is
not incremented when the addresses of the routers are prespecified, even
if the Pointer value is greater than the Length value.
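In terms of the option layout (byte 3 holds the 4-bit Overflow count above the 4-bit flags), the intended behaviour is roughly the following sketch, not the literal ip_options.c diff:

    /* optptr[1] = Length, optptr[2] = Pointer, optptr[3] = Overflow|Flags */
    if (optptr[2] > optptr[1]) {                  /* no room left for a timestamp */
        if ((optptr[3] & 0xF) != IPOPT_TS_PRESPEC) {
            /* only non-prespecified routers can run out of space,
             * so only then is the Overflow count incremented
             */
            if ((optptr[3] >> 4) < 15)
                optptr[3] += 0x10;
        }
    }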
Signed-off-by: David Ward <david.ward@ll.mit.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
We should use time_after_eq() to get a maximum latency of two ticks,
instead of three.
Bug added in commit 24f8b2385 (net: increase receive packet quantum)
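In net_rx_action() terms the check becomes roughly the following (paraphrased, with a two-jiffy limit set at entry):

    unsigned long time_limit = jiffies + 2;

    /* time_after() only triggers once jiffies has advanced *past*
     * time_limit, i.e. after up to three ticks; time_after_eq()
     * stops polling as soon as two ticks have elapsed.
     */
    if (unlikely(budget <= 0 || time_after_eq(jiffies, time_limit)))
        goto softnet_break;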
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix new kernel-doc warnings in net/core/dev.c:
Warning(net/core/dev.c:4788): No description found for parameter 'new_carrier'
Warning(net/core/dev.c:4788): Excess function parameter 'new_carries' description in 'dev_change_carrier'
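The warning simply means the kernel-doc comment names a parameter ('new_carries') that the function does not have; the corrected comment looks like this:

    /**
     * dev_change_carrier - Change device carrier
     * @dev: device
     * @new_carrier: new value
     *
     * Change device carrier
     */
    int dev_change_carrier(struct net_device *dev, bool new_carrier)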
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
We should reset the nf state bound to the skb, as ipip/ipgre do.
If not, the conntrack/NAT info bound to the original packet may continually
redirect the packet to the vxlan interface, causing a routing loop.
This is the scenario:

      VTEP               VXLAN Gateway
     /----\          /-----------------\
     |    |          |                 |
     | vx +----------+ vx --NAT-> eth0 +--> Internet
     |    |          |                 |
     \----/          \-----------------/
When any packets come from the internet to the VTEP, lots of
garbage packets come out of the gateway's vxlan interface, but none are actually
sent to the physical interface, because they are redirected back to the vxlan
interface by the NAT rule in the postrouting chain, and dmesg complains:
Mar 1 21:52:53 debian kernel: [ 8802.997699] Dead loop on virtual device vxlan0, fix it urgently!
Mar 1 21:52:54 debian kernel: [ 8804.004907] Dead loop on virtual device vxlan0, fix it urgently!
Mar 1 21:52:55 debian kernel: [ 8805.012189] Dead loop on virtual device vxlan0, fix it urgently!
Mar 1 21:52:56 debian kernel: [ 8806.020593] Dead loop on virtual device vxlan0, fix it urgently!
This patch fixes the problem.
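The fix amounts to clearing the netfilter state on the skb before the encapsulated packet re-enters the IP stack, the same way ipip/ipgre do; the exact placement inside the vxlan transmit path is assumed here:

    /* forget the inner packet's conntrack/NAT verdict so the
     * postrouting NAT rule cannot steer the encapsulated packet
     * back into the vxlan device and create a loop
     */
    nf_reset(skb);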
Signed-off-by: Zang MingJie <zealot0630@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
QFQ+ can select for service only 'eligible' aggregates, i.e.,
aggregates that would have started to be served also in the emulated
ideal system. As a consequence, for QFQ+ to be work conserving, at
least one of the active aggregates must be eligible when it is time to
choose the next aggregate to serve.
The set of eligible aggregates is updated through the function
qfq_update_eligible(), which does guarantee that, after its
invocation, at least one of the active aggregates is eligible.
Because of this property, this function is invoked in
qfq_deactivate_agg() to guarantee that at least one of the active
aggregates is still eligible after an aggregate has been deactivated.
In particular, the critical case is when there are other active
aggregates, but the aggregate being deactivated happens to be the only
one eligible.
However, this precaution is not needed for QFQ+ to be work conserving,
because update_eligible() is always invoked also at the beginning of
qfq_choose_next_agg(). This patch removes the additional invocation of
update_eligible() in qfq_deactivate_agg().
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Reviewed-by: Fabio Checconi <fchecconi@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
By definition of (the algorithm of) QFQ+, the system virtual time must
be pushed up only if there is no 'eligible' aggregate, i.e. no
aggregate that would have started to be served also in the ideal
system emulated by QFQ+. QFQ+ serves only eligible aggregates, hence
the aggregate currently in service is eligible. As a consequence, to
decide whether there is no eligible aggregate, QFQ+ must also check
whether there is no aggregate in service.
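The condition change is small; the field names below are the ones used in sch_qfq.c, but the helper call is illustrative:

    /* push up the system virtual time V only when there is neither an
     * eligible aggregate (ER bitmap empty) nor one currently in service
     */
    if (!q->bitmaps[ER] && !q->in_serv_agg)
        qfq_push_virtual_time_up(q);    /* illustrative helper name */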
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Reviewed-by: Fabio Checconi <fchecconi@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Aggregate budgets are computed so as to guarantee that, after an
aggregate has been selected for service, that aggregate has enough
budget to serve at least one maximum-size packet for the classes it
contains. For this reason, after a new aggregate has been selected
for service, its next packet is immediately dequeued, without any
further control.
The maximum packet size for a class, lmax, can be changed through
qfq_change_class(). In case the user sets lmax to a lower value than
the size of some of the still-to-arrive packets, QFQ+ will
automatically push up lmax as it enqueues these packets. This
automatic push up is likely to happen with TSO/GSO.
In any case, if lmax is assigned a lower value than the size of some
of the packets already enqueued for the class, then the following
problem may occur: the size of the next packet to dequeue for the
class may happen to be larger than lmax, after the aggregate to which
the class belongs has been just selected for service. In this case,
even the budget of the aggregate, which is an unsigned value, may be
lower than the size of the next packet to dequeue. After dequeueing
this packet and subtracting its size from the budget, the latter would
wrap around.
This fix prevents the budget from wrapping around after any packet
dequeue.
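One way to express the guard (a sketch, not the literal upstream diff):

    /* agg->budget is unsigned; subtracting a packet larger than the
     * remaining budget would wrap around to a huge value, so clamp
     * the subtraction instead
     */
    if (unlikely(len > agg->budget))
        agg->budget = 0;
    else
        agg->budget -= len;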
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Reviewed-by: Fabio Checconi <fchecconi@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If no aggregate is in service, then the function qfq_dequeue() does
not dequeue any packet. For this reason, to guarantee QFQ+ to be work
conserving, a just-activated aggregate must be set as in service
immediately if it happens to be the only active aggregate.
This is done by the function qfq_enqueue().
Unfortunately, the function qfq_add_to_agg(), used to add a class to
an aggregate, does not perform this important additional operation.
In particular, if: 1) qfq_add_to_agg() is invoked to complete the move
of a class from a source aggregate, becoming, for this move, inactive,
to a destination aggregate, becoming instead active, and 2) the
destination aggregate becomes the only active aggregate, then this
aggregate is nevertheless not set as in service. QFQ+ then remains in a
non-work-conserving state until a new invocation of qfq_enqueue()
recovers the situation.
This fix solves the problem by moving the logic for setting an
aggregate as in service directly into the function qfq_activate_agg().
Hence, from whatever point qfq_activate_agg() is invoked, QFQ+
remains work conserving. Since the more complex logic of this new
version of qfq_activate_agg() is not needed in qfq_dequeue() to
reschedule an aggregate that exhausts its budget, that aggregate
is now rescheduled there by directly invoking the functions needed.
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Reviewed-by: Fabio Checconi <fchecconi@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Between two invocations of make_eligible, the system virtual time may
happen to grow enough that, in its binary representation, a bit with
higher order than 31 flips. This happens especially with
TSO/GSO. Before this fix, the mask used in make_eligible was computed
as (1UL<<index_of_last_flipped_bit)-1, whose value is well defined on
a 64-bit architecture, because index_of_last_flipped_bit <= 63, but is
in general undefined on a 32-bit architecture if index_of_last_flipped_bit > 31.
The fix just replaces 1UL with 1ULL.
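A small user-space illustration of why the width of the constant matters (the variable value is made up; only the shift width is the point):

    #include <stdio.h>

    int main(void)
    {
        unsigned int index_of_last_flipped_bit = 35;  /* a bit above 31 flipped */

        /* On a 32-bit architecture "unsigned long" is 32 bits wide, so
         * (1UL << 35) is undefined behaviour; 1ULL guarantees the shift
         * happens in a type that is at least 64 bits wide everywhere.
         */
        unsigned long long mask =
                (1ULL << index_of_last_flipped_bit) - 1;

        printf("mask = 0x%llx\n", mask);
        return 0;
    }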
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Reviewed-by: Fabio Checconi <fchecconi@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
QFQ+ schedules the active aggregates in a group using a bucket list
(one list per group). The bucket in which each aggregate is inserted
depends on the aggregate's timestamps, and the number
of buckets in a group is enough to accommodate the possible (range of)
values of the timestamps of all the aggregates in the group. For this
property to hold, timestamps must however be computed correctly. One
necessary condition for computing timestamps correctly is that the
number of bits dequeued for each aggregate, while the aggregate is in
service, does not exceed the maximum budget budgetmax assigned to the
aggregate.
For each aggregate, budgetmax is proportional to the number of classes
in the aggregate. If the number of classes of the aggregate is
decreased through qfq_change_class(), then budgetmax is decreased
automatically as well. Problems may occur if the aggregate is in
service when budgetmax is decreased, because the current remaining
budget of the aggregate and/or the service already received by the
aggregate may happen to be larger than the new value of budgetmax. In
this case, when the aggregate is eventually deselected and its
timestamps are updated, the aggregate may happen to have received an
amount of service larger than budgetmax. This may cause the aggregate
to be assigned a higher virtual finish time than the maximum
acceptable value for the last bucket in the bucket list of the group.
This fix introduces a cap that addresses this issue.
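The cap can be expressed as bounding the service accounted to the aggregate when it is deselected; the field names match sch_qfq.c, but the exact expression is paraphrased rather than quoted:

    /* never account more service than the (possibly just reduced)
     * budgetmax, so the virtual finish time computed from it stays
     * within the range the group's bucket list can represent
     */
    u32 service_received = min(agg->budgetmax,
                               agg->initial_budget - agg->budget);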
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Reviewed-by: Fabio Checconi <fchecconi@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
DTR/RTS need to be raised, regardless of the open() mode, but not
if the port has already been shut down.
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: David S. Miller <davem@davemloft.net>