In order for the list handling in net_rx_action() to be
correct, drivers must follow certain rules as stated by
this comment in net_rx_action():
/* Drivers must not modify the NAPI state if they
* consume the entire weight. In such cases this code
* still "owns" the NAPI instance and therefore can
* move the instance around on the list at-will.
*/
A few drivers do not do this because they mix the budget checks
with reading hardware state, resulting in crashes like the one
reported by takano@axe-inc.co.jp.
BNX2 and TG3 are taken care of here, SKY2 fix is from Stephen
Hemminger.
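For reference, a minimal sketch of a poll handler that obeys this
rule (the foo_* names are made up, and napi_complete() stands in
for whatever the poll-completion helper is called in the kernel
version at hand):

static int foo_poll(struct napi_struct *napi, int budget)
{
	struct foo_private *fp = container_of(napi, struct foo_private, napi);
	int work_done;

	/* Process up to 'budget' received packets. */
	work_done = foo_rx(fp, budget);

	/*
	 * Only when less than the full weight was consumed may the
	 * driver change the NAPI state and re-enable its interrupt;
	 * otherwise net_rx_action() still owns the instance and
	 * handles rescheduling itself.
	 */
	if (work_done < budget) {
		napi_complete(napi);
		foo_enable_rx_irq(fp);
	}

	return work_done;
}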
Signed-off-by: David S. Miller <davem@davemloft.net>
This addition of lost_retrans_low to tcp_sock might be unnecessary;
it's not clear how often the lost_retrans worker is executed when
there is no work to do.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
The detection implemented with lost_retrans must also work when the
fastpath is taken, even though most of the queue is then skipped,
including (very likely) the retransmitted skbs we're interested in.
This problem appeared when the hints were added, which removed the
need to always walk the whole write queue from the head.
Therefore the decision to enter the lost_retrans worker loop must be
separated from the sacktag processing more than was necessary
before.
It turns out to be problematic to optimize the worker loop very
heavily because the ack_seqs of the skbs may have a number of
discontinuity points. Maybe an approach similar to the one currently
implemented could be attempted, but that is becoming more and more
complex because the trend is towards less skb walking in the sacktag
marker. Instead, a simple work-until-all-retransmitted-skbs-have-
been-processed approach is tried here.
Maybe the after(highest_sack_end_seq, tp->high_seq) check is not
sufficiently accurate and causes entry too often in no-work-to-do
cases. Since that's not known, I've separated the solution to that
from this patch.
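To illustrate, a simplified sketch of the kind of separated worker
entry check being discussed (the helper call and the exact set of
conditions are illustrative, not the literal code):

	/* Decide on lost_retrans work independently of the sacktag
	 * fastpath: enter whenever a SACK block advanced past high_seq
	 * and past the point up to which retransmissions have already
	 * been checked. */
	if (tp->retrans_out &&
	    after(highest_sack_end_seq, tp->high_seq) &&
	    after(highest_sack_end_seq, tp->lost_retrans_low))
		tcp_mark_lost_retrans(sk, highest_sack_end_seq);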
Noticed because of a report about a related problem from TAKANO
Ryousei <takano@axe-inc.co.jp>. He also provided a patch for that
part of the problem. This patch includes a solution to it (though
this patch has to use a somewhat different placement). TAKANO's
description and patch are available here:
http://marc.info/?l=linux-netdev&m=119149311913288&w=2
...In short, TAKANO's problem is that the end_seq the loop is using
is not necessarily the largest SACK block's end_seq, because the
current ACK may still contain higher SACK blocks that are processed
later by the loop.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
Both sacked_out and fackets_out are directly known from the how
parameter. Since fackets_out is accurate, there's no need for
recounting (sacked_out was previously counted in the loop
unnecessarily anyway).
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is necessary for an upcoming DSACK bugfix. It reduces the
length of sacktag, which is not a very sad thing at all... :-)
Notice that out-of-memory now has to be handled at the caller's
place.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is a step on the way towards future splitting of that function.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
This condition (plain R) can arise at least in a recovery that is
triggered after tcp_undo_loss. There isn't any reason why such skbs
should not be marked as lost; not marking them makes the in_flight
estimator return too large values.
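As an illustration, the marking in question amounts to roughly the
following (simplified; the real change lives in the scoreboard
updating code):

	/* A retransmitted skb (plain R, i.e. TCPCB_SACKED_RETRANS set
	 * without TCPCB_LOST) must be counted as lost as well, or
	 * in_flight comes out too large. */
	if (!(TCP_SKB_CB(skb)->sacked & (TCPCB_SACKED_ACKED | TCPCB_LOST))) {
		TCP_SKB_CB(skb)->sacked |= TCPCB_LOST;
		tp->lost_out += tcp_skb_pcount(skb);
	}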
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
I was reading tcp_enter_loss while looking for Cedric's bug and
noticed that the bytes_acked adjustment is missing from the FRTO
side. Since bytes_acked will only be used in tcp_cong_avoid, I think
it's safe to assume the RTO would be spurious. During FRTO cwnd will
not be controlled by tcp_cong_avoid, and if FRTO calls for
conventional recovery, cwnd is adjusted and the result of the wrong
assumption is cleared from bytes_acked. If the RTO was in fact
spurious, we did normal ABC already and can continue without any
additional adjustments.
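For illustration, the Appropriate Byte Counting bookkeeping in
question looks roughly like this (field and flag names follow the
TCP code of that era, the surrounding context is simplified):

	/* Credit newly acknowledged bytes for ABC so tcp_cong_avoid()
	 * can grow cwnd correctly if the RTO turns out to have been
	 * spurious. */
	if (flag & FLAG_DATA_ACKED)
		tp->bytes_acked += ack - prior_snd_una;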
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tested with Malta; inflates malta_defconfig by 3932 bytes. Ideally there
should be additional configuration to allow getting rid of this overhead
but that would be too much complexity at this stage of the release cycle.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Loading ELF binaries based on the section table is totally wrong.
This still leaves the other fat bug of referencing symbols in an
executable unfixed, so people had better not run strip on their
binaries ...
As an added bonus the new loader is also 23 lines shorter.
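For context, a loader should walk the program headers (the PT_LOAD
segments) rather than the section table; roughly like this
(simplified sketch, error handling omitted, the helper name is made
up):

#include <linux/elf.h>

static void foo_map_segments(const Elf32_Ehdr *ehdr)
{
	const Elf32_Phdr *phdr = (const void *)ehdr + ehdr->e_phoff;
	int i;

	for (i = 0; i < ehdr->e_phnum; i++, phdr++) {
		if (phdr->p_type != PT_LOAD)
			continue;
		/* Copy p_filesz bytes from file offset p_offset to
		 * p_vaddr and zero the remaining p_memsz - p_filesz
		 * bytes (BSS). */
	}
}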
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
So far /proc/cpuinfo has been the only user, but a human-readable
processor name is useful in more places than just proc.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Dump the generated code for the clear/copy page calls, as is already
done for the TLB fault handlers. Useful for debugging.
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
It's only used in arch/mips/cobalt/reset.c.
Signed-off-by: Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
They're only used in arch/mips/cobalt/console.c.
Signed-off-by: Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
These PCI definitions are only used in arch/mips/pci/fixup-cobalt.c.
Signed-off-by: Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add Cobalt Qube series front LED support, registered as a platform device.
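As a rough sketch of what such a registration looks like (device
name and resource address are placeholders, not the real Cobalt
values):

static struct resource cobalt_led_resource = {
	.start	= 0x1c000000,	/* placeholder LED port address */
	.end	= 0x1c000000,
	.flags	= IORESOURCE_MEM,
};

static struct platform_device cobalt_led_device = {
	.name		= "cobalt-qube-leds",
	.id		= -1,
	.num_resources	= 1,
	.resource	= &cobalt_led_resource,
};

static int __init cobalt_led_add(void)
{
	return platform_device_register(&cobalt_led_device);
}
device_initcall(cobalt_led_add);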
Signed-off-by: Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
On MP configurations it's highly dubious what this code will actually
affect since blasting away cachelines may or may not do the right
thing wrt. cache coherency.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-msym32, and previously the strategy of telling the compiler to
generate 64-bit code but the assembler to put it into a 32-bit ELF,
was initially a hack to get around the lack of proper 64-bit
binutils support and later turned into a neat optimization with
significant code size savings. But it's really just an optimization,
so there is nothing wrong with just dropping the option (and
whatever else goes along with it, I forgot all the nasty details) on
the floor if, due to a vintage compiler, it can't be supported.
Signed-off-by: Franck Bui-Huu <fbuihuu@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The kernel currently only supports broadcasting of the timer interrupt
from a single timer, not multicasting into two multicast groups of
processors. So the implemented mechanism for SMTC works by broadcasting
the cp0 compare interrupt on VPE 0 and ignoring it on any additional VPEs.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This replaces the MIPS-specific to_tm function with the generic
rtc_time_to_tm function. The big difference between the two functions is
that rtc_time_to_tm uses epoch 70 while to_tm uses 1970, so the result of
rtc_time_to_tm needs to be fixed up.
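For example, the fixup callers end up doing is roughly the following
(hedged sketch: rtc_time_to_tm() reports tm_year relative to 1900,
so 1970 shows up as 70, whereas to_tm stored the absolute year):

#include <linux/rtc.h>

static void example_read_time(unsigned long secs)
{
	struct rtc_time tm;

	rtc_time_to_tm(secs, &tm);
	tm.tm_year += 1900;	/* restore the absolute year to_tm used */
}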
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
It's hard to follow who is pointing what to where and why, so it's
simply getting in the way of the time code renovation.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>