Fix possible NULL (sc->dfs_detector) pointer dereference.
Detected by Smatch:
drivers/net/wireless/ath/ath9k/dfs_debug.c:67 read_file_dfs()
error: we previously assumed 'sc->dfs_detector' could be null (see line 47)
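One straightforward shape for such a guard, sketched below (the surrounding context is illustrative, not the exact driver code):

/* Illustrative sketch: bail out early if the detector was never allocated. */
static ssize_t read_file_dfs(struct file *file, char __user *user_buf,
			     size_t count, loff_t *ppos)
{
	struct ath_softc *sc = file->private_data;

	if (!sc->dfs_detector)
		return -EINVAL;	/* DFS detector not available here */

	/* ... safe to dereference sc->dfs_detector from here on ... */
	return 0;
}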
Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Use the correct width enums when setting up
radar_detect_widths for DFS.
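For reference, radar_detect_widths is a bitmap of nl80211_chan_width values; a sketch of a correct setup (the exact mask and combination fields here are illustrative):

/* Illustrative: build radar_detect_widths from NL80211_CHAN_WIDTH_* enums,
 * not from channel-type values; interface limits omitted for brevity. */
static const struct ieee80211_iface_combination if_comb = {
	.num_different_channels = 1,
	.max_interfaces = 1,
	.radar_detect_widths = BIT(NL80211_CHAN_WIDTH_20_NOHT) |
			       BIT(NL80211_CHAN_WIDTH_20) |
			       BIT(NL80211_CHAN_WIDTH_40),
};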
Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Since:
commit 36323f817a
Author: Thomas Huehn <thomas@net.t-labs.tu-berlin.de>
Date: Mon Jul 23 21:33:42 2012 +0200
mac80211: move TX station pointer and restructure TX
we do not pass the sta pointer to rt2x00queue_create_tx_descriptor_ht(),
hence we do not correctly set the station WCID and AMPDU density parameters.
Cc: stable@vger.kernel.org # 3.7+
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Acked-by: Gertjan van Wingerde <gwingerde@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Fix to return -ENODEV in the unknown model error handling
case instead of 0, as done elsewhere in this function.
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Acked-by: Dan Williams <dcbw@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
If we do a zero-size allocation then it will oops. Also, we can't be
sure the user passes us a NUL-terminated string, so I've added a
terminator.
This code can only be triggered by root.
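A sketch of the pattern the fix follows (function and buffer names are illustrative, not the actual driver code):

/* Illustrative sketch: reject empty writes and always NUL-terminate user input. */
static ssize_t example_debugfs_write(struct file *file, const char __user *ubuf,
				     size_t count, loff_t *ppos)
{
	char *buf;

	if (count == 0)
		return 0;			/* avoid the zero-size allocation */

	buf = kmalloc(count + 1, GFP_KERNEL);	/* +1 for the terminator */
	if (!buf)
		return -ENOMEM;

	if (copy_from_user(buf, ubuf, count)) {
		kfree(buf);
		return -EFAULT;
	}
	buf[count] = '\0';			/* user data may not be NUL terminated */

	/* ... parse buf ... */

	kfree(buf);
	return count;
}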
Reported-by: Nico Golde <nico@ngolde.de>
Reported-by: Fabian Yamaguchi <fabs@goesec.de>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Dan Williams <dcbw@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
The kmalloc of efuse_word can return NULL, in which case the error exit
path frees elements of efuse_word even though it was never allocated.
Instead, don't free the elements of efuse_word if the kmalloc failed.
Also, the kmalloc of any of the arrays in efuse_word[] can fail, leaving
undefined contents in the remaining elements and causing problems when
those elements are freed later on. So kzalloc efuse_word to ensure the
kfree on the remaining elements won't cause breakage.
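A sketch of the allocation/cleanup pattern described above (sizes and names are illustrative):

/* Illustrative sketch: a kzalloc'd outer array starts with every slot NULL,
 * so the error path can kfree() all slots unconditionally. */
static int example_read_efuse(unsigned int units, size_t word_size)
{
	u8 **efuse_word;
	unsigned int i;
	int ret = -ENOMEM;

	efuse_word = kzalloc(units * sizeof(u8 *), GFP_KERNEL);
	if (!efuse_word)
		return -ENOMEM;		/* nothing else to free here */

	for (i = 0; i < units; i++) {
		efuse_word[i] = kzalloc(word_size, GFP_KERNEL);
		if (!efuse_word[i])
			goto out;	/* remaining slots are NULL; kfree(NULL) is a no-op */
	}

	/* ... use efuse_word ... */
	ret = 0;
out:
	for (i = 0; i < units; i++)
		kfree(efuse_word[i]);
	kfree(efuse_word);
	return ret;
}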
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
While parsing TLVs, return failure if the number of remaining bytes
is less than the current TLV length. This avoids an invalid memory
access.
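A minimal sketch of the bounds check (struct and field names here are illustrative):

/* Illustrative TLV walk: stop if the advertised TLV length would run past
 * the bytes actually left in the buffer. */
while (bytes_left >= sizeof(struct mwifiex_ie_types_header)) {
	tlv = (struct mwifiex_ie_types_header *)pos;
	tlv_len = le16_to_cpu(tlv->len);

	if (bytes_left < sizeof(*tlv) + tlv_len)
		return -1;	/* truncated/corrupt TLV; don't read past the buffer */

	/* ... process this TLV ... */
	pos += sizeof(*tlv) + tlv_len;
	bytes_left -= sizeof(*tlv) + tlv_len;
}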
Signed-off-by: Amitkumar Karwar <akarwar@marvell.com>
Signed-off-by: Bing Zhao <bzhao@marvell.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
As tlv_buf_len is decremented at the end of the loop, we may have
accessed invalid memory in the last iteration.
Modify the while condition and add a break statement at the
beginning of the loop to fix the problem.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Amitkumar Karwar <akarwar@marvell.com>
Signed-off-by: Bing Zhao <bzhao@marvell.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
With "while (length)" check we may end up in accessing invalid
memory in last iteration.
This patch makes sure that tlv length is not less than the length
of structure mwifiex_power_group when min/max power is calculated.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Amitkumar Karwar <akarwar@marvell.com>
Signed-off-by: Bing Zhao <bzhao@marvell.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
The __le16 to u16 conversion is missing for "pg_tlv_hdr->length"
in mwifiex_get_power_level(). This creates a problem on big-endian
machines.
It is resolved by changing the definition of the structure
and making the required endianness changes.
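A sketch of the kind of change involved (struct layout simplified and illustrative):

/* Illustrative: declare the on-wire field as __le16 and convert before use. */
struct example_power_group_tlv {
	__le16 type;
	__le16 length;			/* was a plain u16 before the fix */
} __packed;

static u16 power_group_len(const struct example_power_group_tlv *pg_tlv_hdr)
{
	/* le16_to_cpu() is a no-op on little endian, a byte swap on big endian */
	return le16_to_cpu(pg_tlv_hdr->length);
}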
Signed-off-by: Amitkumar Karwar <akarwar@marvell.com>
Signed-off-by: Bing Zhao <bzhao@marvell.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Before we loop for the next iteration we adjust the buffer pointer and
"resp_len":
curr += (tlv_len + sizeof(tlv_hdr->header));
resp_len -= (tlv_len + sizeof(tlv_hdr->header));
If "resp_len" goes negative, it wraps around and counts as a huge
positive value.
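A sketch of the guarded loop (types and names illustrative):

/* Illustrative: verify there is room for this TLV before advancing, so the
 * remaining-length counter can never underflow and wrap to a huge value. */
while (resp_len >= sizeof(tlv_hdr->header)) {
	tlv_len = le16_to_cpu(tlv_hdr->header.len);

	if (resp_len < tlv_len + sizeof(tlv_hdr->header))
		break;		/* not enough bytes left for what the TLV claims */

	/* ... handle this TLV ... */
	curr += (tlv_len + sizeof(tlv_hdr->header));
	resp_len -= (tlv_len + sizeof(tlv_hdr->header));
	tlv_hdr = (void *)curr;
}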
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Amitkumar Karwar <akarwar@marvell.com>
Signed-off-by: Bing Zhao <bzhao@marvell.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
There is a typo in the struct member name used on assignment when
rtlphy->current_chan_bw == HT_CHANNEL_WIDTH_20_40 is checked: the bound
check uses pwrgroup_ht40, but the assignment mistakenly uses pwrgroup_ht20.
Signed-off-by: Felipe Pena <felipensp@gmail.com>
Acked-by: Larry Finger <Larry.Finger@lwfinger.net>
Cc: stable@vger.kernel.org [3.0+]
Signed-off-by: John W. Linville <linville@tuxdriver.com>
In the rt2800_config_channel_rf53xx() function the member default_power1 is
checked against the bound limit, but default_power2 is used instead.
Signed-off-by: Felipe Pena <felipensp@gmail.com>
Acked-by: Gertjan van Wingerde <gwingerde@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
prandom fixes/improvements
====================
It would be great if you could still consider this series, which fixes and
improves prandom for 3.13. We have sent it to netdev as prandom() originally
came from net/core/utils.c and networking is its main user. For a detailed
description, please see the individual patches.
For patch 3 in this series, there will be a minor merge conflict with the
random tree that is for 3.13. See below how to resolve it.
====
Hannes says: on merge with the random tree I would suggest resolving the
conflict in drivers/char/random.c like this:
	if (r->entropy_total > 128) {
		r->initialized = 1;
		r->entropy_total = 0;
		if (r == &nonblocking_pool) {
			prandom_reseed_late();
			pr_notice("random: %s pool is initialized\n",
				  r->name);
		}
	}
So it won't generate a warning if DEBUG_RANDOM_BOOT gets activated.
====
Patch 1 should probably also go to -stable.
Set tested on 32 and 64 bit machines.
Thanks a lot!
Ref. original discussion: http://patchwork.ozlabs.org/patch/289951/
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
We generated a battery of 100 test cases from the GSL taus113 implementation
and compared the results from a particular seed and a particular
iteration with our implementation in the kernel. We have verified on
32 and 64 bit machines that our taus113 kernel implementation gives the
same results as the GSL taus113 implementation:
[ 0.147370] prandom: seed boundary self test passed
[ 0.148078] prandom: 100 self tests passed
This is a Kconfig option that is disabled by default, just like the
crc32 init selftests, in order not to unnecessarily slow down the boot process.
We also refactored out prandom_seed_very_weak() as it's now used in
multiple places in order to reduce redundant code.
GSL code we used for generating test cases:
	int i, j;
	srand(time(NULL));
	for (i = 0; i < 100; ++i) {
		int iteration = 500 + (rand() % 500);
		gsl_rng_default_seed = rand() + 1;
		gsl_rng *r = gsl_rng_alloc(gsl_rng_taus113);
		printf("\t{ %lu, ", gsl_rng_default_seed);
		for (j = 0; j < iteration - 1; ++j)
			gsl_rng_get(r);
		printf("%u, %lu },\n", iteration, gsl_rng_get(r));
		gsl_rng_free(r);
	}
Joint work with Hannes Frederic Sowa.
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since we use prandom*() functions quite often in networking code,
e.g. in UDP port selection, netfilter code, etc., upgrade the PRNG
from Pierre L'Ecuyer's original paper "Maximally Equidistributed
Combined Tausworthe Generators", Mathematics of Computation, 65,
213 (1996), 203--213 to the version published in his errata paper [1].
The Tausworthe generator is a maximally-equidistributed generator,
that is fast and has good statistical properties [1].
The version presented there upgrades the 3 state LFSR to a 4 state
LFSR with increased periodicity from about 2^88 to 2^113. The
algorithm is presented in [1] by the very same author who also
designed the original algorithm in [2].
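A sketch of the resulting per-step update, using the shift/mask parameters from [1] (struct and function names here are illustrative, not necessarily the in-tree ones):

#define TAUSWORTHE(s, a, b, c, d) \
	((((s) & (c)) << (d)) ^ ((((s) << (a)) ^ (s)) >> (b)))

struct taus113_state { u32 s1, s2, s3, s4; };

static u32 taus113_next(struct taus113_state *st)
{
	st->s1 = TAUSWORTHE(st->s1,  6U, 13U, 4294967294U, 18U);
	st->s2 = TAUSWORTHE(st->s2,  2U, 27U, 4294967288U,  2U);
	st->s3 = TAUSWORTHE(st->s3, 13U, 21U, 4294967280U,  7U);
	st->s4 = TAUSWORTHE(st->s4,  3U, 12U, 4294967168U, 13U);

	/* combined output of the four LFSR components */
	return st->s1 ^ st->s2 ^ st->s3 ^ st->s4;
}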
Also, by increasing the state, we make it a bit harder for attackers
to "guess" the PRNG's internal state. See also the discussion in [3].
Now, as we use this sort of weak initialization discussed in [3]
only between core_initcall() and late_initcall() time [*] for
prandom32*() users, namely in prandom_init(), it is less relevant
from late_initcall() onwards as we overwrite seeds through
prandom_reseed() anyway with a seed source of higher entropy, that
is, get_random_bytes(). In other words, an exhaustive key search of
96 bits would be needed. With the help of this patch, this state
search increases further to 128 bits. Initialization needs
to make sure that s1 > 1, s2 > 7, s3 > 15, s4 > 127.
The taus88 and taus113 algorithms are also part of GSL. I added a test
case in the next patch to verify the internal behaviour of this patch
against GSL, and ran tests with the dieharder 3.31.1 RNG test suite:
$ dieharder -g 052 -a -m 10 -s 1 -S 4137730333 #taus88
$ dieharder -g 054 -a -m 10 -s 1 -S 4137730333 #taus113
With this seed configuration, in order to compare both, we get
the following differences:
algorithm                  taus88     taus113
rands/second [**]          1.61e+08   1.37e+08
sts_serial(4, 1st run)     WEAK       PASSED
sts_serial(9, 2nd run)     WEAK       PASSED
rgb_lagged_sum(31)         WEAK       PASSED
We took out the diehard_sums test as, according to the authors, it is
considered broken and unusable [4]. Despite that and the slight
decrease in performance (which is acceptable), taus113 here passes
all 113 tests (only rgb_minimum_distance_5 in WEAK, the rest PASSED).
In general, taus/taus113 is considered "very good" by the authors
of dieharder [5].
The papers [1][2] state that a single warm-up step is sufficient:
running quicktaus once on each state ensures proper initialization
of ~s_{0}.
Our selection of (s) according to Table 1 of [1] row 1 holds the
condition L - k <= r - s, that is,
(32 32 32 32) - (31 29 28 25) <= (25 27 15 22) - (18 2 7 13)
with r = k - q and q = (6 2 13 3) as also stated by the paper.
So according to [2] we are safe with one round of quicktaus for
initialization. However, we decided to include the warm-up phase
of the PRNG as done in GSL in every case as a safety net. We also
use the warm-up phase to make the output of the RNG easier to
verify against the GSL output.
In prandom_init(), we also mix random_get_entropy() into the seed, just
as drivers/char/random.c does: jiffies ^ random_get_entropy().
random_get_entropy() is get_cycles(). XOR is entropy preserving, so
it is fine if it is not implemented by some architectures.
Note, this PRNG is *not* used for cryptography in the kernel, but
rather as a fast PRNG for various randomizations i.e. in the
networking code, or elsewhere for debugging purposes, for example.
[*]: In order to generate some "sort of pseudo-randomness", since
get_random_bytes() is not yet available for us, we use jiffies and
initialize states s1 - s3 with a simple linear congruential generator
(LCG), that is x <- x * 69069, and derive s2 and s3 from the 32-bit
initialization of s1. So the above quote from [3] accounts only
for the time from core to late initcall, not afterwards.
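A sketch of that weak boot-time seeding ([*] above), with the LCG expansion and lower-bound clamping written out; names are illustrative, reusing the taus113_state sketched earlier, and the kernel's __seed() helper plays the clamping role in the real code:

#define LCG(x)	((x) * 69069U)	/* classic "super-duper" LCG multiplier */

/* clamp to the generator's lower bound */
static u32 seed_clamp(u32 x, u32 m)
{
	return (x < m) ? x + m : x;
}

static void prandom_init_sketch(struct taus113_state *st, u32 i)
{
	u32 x = i ^ jiffies ^ random_get_entropy();

	st->s1 = seed_clamp(LCG(x),      2U);	/* s1 > 1   */
	st->s2 = seed_clamp(LCG(st->s1), 8U);	/* s2 > 7   */
	st->s3 = seed_clamp(LCG(st->s2), 16U);	/* s3 > 15  */
	st->s4 = seed_clamp(LCG(st->s3), 128U);	/* s4 > 127 */
}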
[**] Single threaded run on MacBook Air w/ Intel Core i5-3317U
[1] http://www.iro.umontreal.ca/~lecuyer/myftp/papers/tausme2.ps
[2] http://www.iro.umontreal.ca/~lecuyer/myftp/papers/tausme.ps
[3] http://thread.gmane.org/gmane.comp.encryption.general/12103/
[4] http://code.google.com/p/dieharder/source/browse/trunk/libdieharder/diehard_sums.c?spec=svn490&r=490#20
[5] http://www.phy.duke.edu/~rgb/General/dieharder.php
Joint work with Hannes Frederic Sowa.
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
struct rnd_state got mistakenly pulled into a uapi header. It is not
used anywhere from there and does not belong there either!
Commit 5960164fde ("lib/random32: export pseudo-random number
generator for modules"), the last commit on rnd_state before it
got moved to uapi, says:
This patch moves the definition of struct rnd_state and the inline
__seed() function to linux/random.h. It renames the static __random32()
function to prandom32() and exports it for use in modules.
Hence, the structure was moved from lib/random32.c to linux/random.h
so that it can be used within modules (FCoE-related code in this
case), but not from user space. However, it seems to have been
mistakenly moved to a uapi header through the uapi script. Since no-one
should make use of it from the linux headers, move the structure back
to the kernel for internal use, so that it can be modified on demand.
Joint work with Hannes Frederic Sowa.
Cc: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
The Tausworthe PRNG is initialized at late_initcall time. At that time the
entropy pool serving get_random_bytes is not filled sufficiently. This
patch adds an additional reseeding step as soon as the nonblocking pool
gets marked as initialized.
On some machines it might be possible that late_initcall gets called after
the pool has been initialized. In this situation we won't reseed again.
(A call to prandom_seed_late blocks later invocations of early reseed
attempts.)
Joint work with Daniel Borkmann.
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
The current Tausworthe PRNG is never reseeded with truly random data after
the first attempt in late_initcall. As this PRNG is used for some critical
random data, e.g. UDP port randomization, we should do better and reseed
the PRNG once in a while with truly random data from get_random_bytes().
When we reseed with prandom_seed we now also make sure to throw the first
output away. This is sufficient for the reseeding procedure.
The delay calculation is based on a proposal from Eric Dumazet.
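Sketched, the reseed step then amounts to the following (illustrative only; the timer/delay plumbing follows Eric Dumazet's proposal and is omitted here, and the in-tree helpers may differ):

struct taus113_state_sketch { u32 s1, s2, s3, s4; };

/* clamp to the generator's lower bound, cf. the kernel's __seed() helper */
static u32 seed_clamp(u32 x, u32 m)
{
	return (x < m) ? x + m : x;
}

/* per-step update as sketched earlier in this series */
u32 taus113_next(struct taus113_state_sketch *st);

static void prandom_reseed_sketch(struct taus113_state_sketch *st)
{
	u32 entropy[4];

	get_random_bytes(entropy, sizeof(entropy));

	st->s1 = seed_clamp(entropy[0],   2U);
	st->s2 = seed_clamp(entropy[1],   8U);
	st->s3 = seed_clamp(entropy[2],  16U);
	st->s4 = seed_clamp(entropy[3], 128U);

	taus113_next(st);	/* throw the first output away after reseeding */
}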
Joint work with Daniel Borkmann.
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
For properly initialising the Tausworthe generator [1], we have
a strict seeding requirement, that is, s1 > 1, s2 > 7, s3 > 15.
Commit 697f8d0348 ("random32: seeding improvement") introduced
a __seed() function that imposes boundary checks proposed by the
errata paper [2] to properly ensure above conditions.
However, we're off by one, as the function is implemented as:
"return (x < m) ? x + m : x;", and called with __seed(X, 1),
__seed(X, 7), __seed(X, 15). Thus, an unwanted seed of 1, 7, 15
would be possible, whereas the lower boundary should actually
be at least 2, 8, 16, just as GSL does. Fix this, as otherwise
an initialization with an unwanted seed could have the effect
that Tausworthe's PRNG properties cannot be ensured.
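To make the off-by-one concrete (the helper as quoted above; the call sites are sketched):

/* __seed() as quoted above: bump x into range only when strictly below m. */
static u32 __seed(u32 x, u32 m)
{
	return (x < m) ? x + m : x;
}

static void seed_state_sketch(u32 *s1, u32 *s2, u32 *s3, u32 x)
{
	/* __seed(x, 1/7/15) can still yield 1, 7 or 15 (e.g. __seed(1, 1) == 1),
	 * violating s1 > 1, s2 > 7, s3 > 15.  Using 2, 8, 16 as the lower
	 * bounds guarantees the conditions hold. */
	*s1 = __seed(x, 2U);
	*s2 = __seed(x, 8U);
	*s3 = __seed(x, 16U);
}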
Note that this PRNG is *not* used for cryptography in the kernel.
[1] http://www.iro.umontreal.ca/~lecuyer/myftp/papers/tausme.ps
[2] http://www.iro.umontreal.ca/~lecuyer/myftp/papers/tausme2.ps
Joint work with Hannes Frederic Sowa.
Fixes: 697f8d0348 ("random32: seeding improvement")
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
We were not checking if we successfully opened the counters, i.e. if
sys_perf_event_open() worked. When it doesn't, as happens in this test,
we were continuing anyway and then segfaulting when trying to access the
file descriptor array, which at that point had been freed in the
perf_evlist__open() error path:
[root@ssdandy ~]# perf test -v 19
19: Test software clock events have valid period values :
--- start ---
Segmentation fault (core dumped)
[root@ssdandy ~]#
Do the check and bail out instead.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-6qy8ljkn0e9hm7bh7keo5z68@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
According to both POSIX.1-2008 and the Linux Programmer's Manual, the
mmap() syscall shouldn't return the undocumented ENOSYS; this change
replaces that errno with the more appropriate ENODEV and EACCES.
Signed-off-by: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
We cannot scan two chips on imx23 and imx28:
imx23: the Ready-Busy1 line is not connected on some boards.
imx28: we do not set the pinctrl for Ready-Busy1.
So we only scan two chips on imx6.
Signed-off-by: Huang Shijie <b32955@freescale.com>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Add the missing platform_set_drvdata() in xtsonic_probe(), otherwise
calling platform_get_drvdata() in xtsonic_device_remove() may
return NULL.
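The usual probe/remove pairing, sketched (driver specifics elided; example_priv is a hypothetical private struct):

/* Illustrative: probe() must store the net_device so remove() can fetch it back. */
static int example_probe(struct platform_device *pdev)
{
	struct net_device *ndev = alloc_etherdev(sizeof(struct example_priv));

	if (!ndev)
		return -ENOMEM;

	platform_set_drvdata(pdev, ndev);	/* the call that was missing */
	/* ... map resources, register_netdev(), etc. ... */
	return 0;
}

static int example_remove(struct platform_device *pdev)
{
	struct net_device *ndev = platform_get_drvdata(pdev);	/* NULL without the call above */

	unregister_netdev(ndev);
	free_netdev(ndev);
	return 0;
}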
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add the missing platform_set_drvdata() in mace_probe(), otherwise
calling platform_get_drvdata() in mac_mace_device_remove() may
return NULL.
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add the missing platform_set_drvdata() in arc_emac_probe(), otherwise
calling platform_get_drvdata() in arc_emac_remove() may return NULL.
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
Code move only; no logic changes. In preparation for the mmap based
output option in the next patch.
Signed-off-by: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1383884605-30968-2-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
write() returns a 'ssize_t' not an 'int'.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1383906470-21002-1-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
If the given sort keys are all elided there'll be no output except for the
overhead column - actually the TUI shows a noisy output. In this case
it'd be better to show the sort keys rather than elide them.
Before:
$ perf report -s comm -c perf
(...)
# Overhead
# ........
#
100.00%
After:
$ perf report -s comm -c perf
(...)
# Overhead Command
# ........ .......
#
100.00% perf
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1383900822-14609-1-git-send-email-namhyung@kernel.org
[ Use curly braces around multi-line statements, as requested by Ingo Molnar ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Several tools (top, kvm) don't need to be called back to process each of
the synthesized records, instead relying on the machine__process_event
function to change the per-machine data structures that represent
threads and mmaps, so provide a way to ask for this common idiom.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-pusqibp8n3c4ynegd1frn4zd@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Further simplifications will be done in a following patch, as most tools
don't use the callback, instead using just the canned
machine__process_event one.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-r1m0vuuj3cat4bampno9yc8d@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
When perf_event_attr.mmap_data is set the kernel will generate
PERF_RECORD_MMAP events when non-exec (data, SysV mem) mmaps are
created, so we need to synthesize from /proc/pid/maps for existing
threads, as we do for exec mmaps.
Right now just 'perf record' does it, but any other tool that uses
perf_event__synthesize_thread(s|map) can request it.
Reported-by: Don Zickus <dzickus@redhat.com>
Tested-by: Don Zickus <dzickus@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Bill Gray <bgray@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joe Mario <jmario@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Fowles <rfowles@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-ihwzraikx23ian9txinogvv2@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Most uses of the evsel constructor are followed by a call to
perf_evlist__add with an index of evlist->nr_entries, so rename
the current constructor to perf_evsel__new_idx and remove the need
for passing the index in the common case.
We still need the new_idx variant because the way groups are handled,
with evsel->nr_members holding the number of entries in an evlist,
partitioning the evlist into sublists inside a single linked list.
This asks for a clarifying refactoring, but for now simplify the non
parser cases, so that tool writers don't have to bother with evsel idx
setting.
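Roughly, the common case then becomes a thin wrapper (sketch, not necessarily the exact tree contents):

/* Illustrative: keep the index-taking constructor, wrap the common case. */
struct perf_evsel *perf_evsel__new_idx(struct perf_event_attr *attr, int idx);

static inline struct perf_evsel *perf_evsel__new(struct perf_event_attr *attr)
{
	return perf_evsel__new_idx(attr, 0);
}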
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-zy9tskx6jqm2rmw7468zze2a@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Each call to tui_progress__update() would forcibly refresh the entire
screen. This is somewhat inefficient and causes noticeable flickering
during the startup of perf-report, especially on large/slow terminals.
It looks like the force-refresh in tui_progress__update() serves no
purpose other than to clear the screen so that the progress bar of a
previous operation does not subsume that of a subsequent operation. But
we can do just that in a much more efficient manner by clearing only the
region that a previous progress bar may have occupied before repainting
the new progress bar. Then the force-refresh could be removed with no
change in visuals.
This patch disables the slow force-refresh in tui_progress__update() and
instead calls SLsmg_fill_region() on the entire area that the progress
bar may occupy before repainting it. This change makes the startup of
perf-report much faster and appear much "smoother".
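A sketch of the approach (the S-Lang calls are real; the layout, colors and row math here are illustrative):

/* Illustrative: blank the bar's whole area first, then repaint only the
 * filled part -- no full-screen refresh required. */
static void tui_progress__update_sketch(u64 curr, u64 total, int y)
{
	int bar_width = SLtt_Screen_Cols - 2;
	int bar = total ? (int)((bar_width * curr) / total) : 0;

	SLsmg_set_color(0);
	SLsmg_fill_region(y, 0, 3, SLtt_Screen_Cols, ' ');	/* clear any previous bar */
	SLsmg_draw_box(y, 0, 3, SLtt_Screen_Cols);
	SLsmg_set_color(1);					/* some highlighted colorset */
	SLsmg_fill_region(y + 1, 1, 1, bar, ' ');		/* paint the progress */
	SLsmg_refresh();
}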
It turns out that this was a big bottleneck in the startup speed of
perf-report -- with this patch, perf-report starts up ~2x faster (1.1s
vs 0.55s) on my machines. (These numbers were measured by running "time
perf report" on an 8MB perf.data and pressing 'q' immediately.)
Signed-off-by: Patrick Palka <patrick@parcs.ath.cx>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1382747149-9716-1-git-send-email-patrick@parcs.ath.cx
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This reverts commit 8eba18428a.
uv_trace() is not used by anything, nor is uv_trace_nmi_func, nor
uv_trace_func.
That's not how we do instrumentation code in the kernel: we add
tracepoints, printk()s, etc. so that everyone, not just those with
magic kernel modules, can debug a system.
So remove this unused (and misguided) piece of code.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Mike Travis <travis@sgi.com>
Cc: Dimitri Sivanich <sivanich@sgi.com>
Cc: Hedi Berriche <hedi@sgi.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Jason Wessel <jason.wessel@windriver.com>
Link: http://lkml.kernel.org/n/tip-tumfBffmr4jmnt8Gyxanoblg@git.kernel.org
The dev variable is never assigned after being initialised.
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Problem reported by Avneesh Pant <avneesh.pant@oracle.com>:
It looks like we are triggering a bug in RDMA CM/UCM interaction.
The bug specifically hits when we have an incoming connection
request and the connecting process dies BEFORE the passive end of
the connection can process the request i.e. it does not call
rdma_get_cm_event() to retrieve the initial connection event. We
were able to triage this further and have some additional
information now.
In the example below when P1 dies after issuing a connect request
as the CM id is being destroyed all outstanding connects (to P2)
are sent a reject message. We see this reject message being
received on the passive end and the appropriate CM ID created for
the initial connection message being retrieved in cm_match_req().
The problem is in the ucma_event_handler() code when this reject
message is delivered to it and the initial connect message itself
HAS NOT been delivered to the client. In fact the client has not
even called rdma_get_cm_event() at this stage, so we haven't
allocated a new ctx in ucma_get_event() and updated the new
connection CM_ID to point to the new UCMA context.
This results in the reject message not being dropped in
ucma_event_handler() for the new connection request as the
(if (!ctx->uid)) block is skipped since the ctx it refers to is
the listen CM id context which does have a valid UID associated
with it (I believe the new CMID for the connection initially
uses the listen CMID -> context when it is created in
cma_new_conn_id). Thus the assumption that new events for a
connection can get dropped in ucma_event_handler() is incorrect
IF the initial connect request has not been retrieved in the
first case. We end up getting a CM Reject event on the listen CM
ID and our upper layer code asserts (in fact this event does not
even have the listen_id set, as that only gets set up by librdmacm
for connect requests).
The solution is to verify that the cm_id being reported in the event
is the same as the cm_id referenced by the ucma context. A mismatch
indicates that the ucma context corresponds to the listen. This fix
was validated by using a modified version of librdmacm that was able
to verify the problem and see that the reject message was indeed
dropped after this patch was applied.
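In code terms the check described above amounts to something like this (a sketch; drop_or_requeue_event() is a hypothetical stand-in for however the event is then handled):

/* Illustrative sketch: an event whose cm_id does not match the cm_id stored in
 * the ucma context arrived via the listen context, i.e. it belongs to a
 * connection request user space has not retrieved yet, so it must not be
 * treated as an event on the listen id. */
if (ctx->cm_id != cm_id)
	return drop_or_requeue_event(ctx, cm_id, event);	/* hypothetical helper */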
Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Some common Xen drivers, like balloon.c, call pfn_to_mfn and mfn_to_pfn
even for autotranslate guests, expecting the argument back.
The following commit broke these drivers by changing the behavior of
pfn_to_mfn and mfn_to_pfn:
commit 4a19138c65
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu Oct 17 16:22:27 2013 +0000
arm/xen,arm64/xen: introduce p2m
They now return INVALID_P2M_ENTRY if Linux doesn't actually know the mfn
backing a pfn or the pfn corresponding to an mfn.
Fix the regression by switching to the old behavior.
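Sketched, the restored behaviour looks like this (lookup_mfn() is a hypothetical stand-in for the p2m lookup):

/* Illustrative sketch: if no mfn is known for this pfn, hand the caller back
 * the pfn itself, as autotranslated callers like balloon.c expect. */
static inline unsigned long pfn_to_mfn_sketch(unsigned long pfn)
{
	unsigned long mfn = lookup_mfn(pfn);	/* hypothetical p2m lookup */

	if (mfn != INVALID_P2M_ENTRY)
		return mfn;

	return pfn;	/* old behaviour: return the argument back unchanged */
}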
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Tested-by: Ian Campbell <ian.campbell@citrix.com>
Merge tag 'imx-fixes-3.13' of git://git.linaro.org/people/shawnguo/linux-2.6 into fixes
From Shawn Guo, imx fixes for 3.13:
- A couple of imx5 and imx6 clock fixes
- Two follow-up patches for improving/fixing the commit "ARM: imx:
replace imx6q_restart()with mxc_restart()"
- One compile fix for imx6sl with randconfig
- Commits to fix pllv3 relock/power issues found in IPU/HDMI testing
* tag 'imx-fixes-3.13' of git://git.linaro.org/people/shawnguo/linux-2.6:
ARM: dts: i.MX51: Fix OTG PHY clock
ARM: imx: set up pllv3 POWER and BYPASS sequentially
ARM: imx: pllv3 needs relock in .set_rate() call
ARM: imx: add sleep for pllv3 relock
ARM: imx6q: add missing sentinel to divider table
ARM: imx: v7_cpu_resume() is needed by imx6sl build
ARM: imx: improve mxc_restart() on the SRC bit writes
ARM: imx: remove imx_src_prepare_restart() call
ARM: i.MX6q: fix the wrong parent of can_root clock
The ST-internal Nomadik mailing list is going down. Remove
it from the MAINTAINERS file.
Cc: Olivier CLERGEAUD <olivier.clergeaud@st.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
When building the kernel in a shell which defines GREP_OPTIONS so that
grep behavior is modified, we can break the generation of the syscalls
table like so:
__SYSCALL_COMMON(^[[01;31m^[[K0^[[m^[[K, sys_read, sys_read)
__SYSCALL_COMMON(^[[01;31m^[[K1^[[m^[[K, sys_write, sys_write)
__SYSCALL_COMMON(^[[01;31m^[[K1^[[m^[[K0, sys_mprotect, sys_mprotect) ...
This is just the initial breakage, later we barf when generating
modules.
In this case, GREP_OPTIONS contains "--color=always", which adds the shell
color markup and completely fudges the headers under ...generated/asm/.
Fix that by unexporting the GREP_OPTIONS variable for the whole kernel
build, as we tend to use grep in a bunch of places.
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Michal Marek <mmarek@suse.cz>
sparse complains about the enter/exit_syscall_files[] variables being
dereferenced with rcu_dereference_sched(). The fields need to be
annotated with __rcu.
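The annotation sparse wants, sketched (the struct name here is illustrative):

/* Illustrative: pointers accessed via rcu_dereference_sched() carry __rcu. */
struct trace_array_sketch {
	struct ftrace_event_file __rcu *enter_syscall_files[NR_syscalls];
	struct ftrace_event_file __rcu *exit_syscall_files[NR_syscalls];
};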
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Cache block invalidation is removing an entry from the cache without
writing it back. Cache blocks can be invalidated via the
'invalidate_cblocks' message, which takes an arbitrary number of cblock
ranges:
invalidate_cblocks [<cblock>|<cblock begin>-<cblock end>]*
E.g.
dmsetup message my_cache 0 invalidate_cblocks 2345 3456-4567 5678-6789
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Implement policy_remove_cblock() and add remove_cblock method to the mq
policy. These methods will be used by the following cache block
invalidation patch which adds the 'invalidate_cblocks' message to the
cache core.
Also, update some comments in dm-cache-policy.h
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Rather than storing the cblock in each cache entry, we allocate all
entries in an array and infer the cblock from the entry position.
Saves 4 bytes of memory per cache block. In addition, this gives us an
easy way of looking up cache entries by cblock.
We no longer need to keep an explicit bitset to track which cblocks
have been allocated. And no searching is needed to find free cblocks.
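The lookup then reduces to pointer arithmetic, roughly as below (names illustrative, assuming the existing to_cblock()/from_cblock() converters):

/* Illustrative: with all entries in one array, the cblock is simply the
 * entry's index, and looking an entry up by cblock is plain indexing. */
struct entry_pool_sketch {
	struct entry *entries;		/* one slot per cache block */
	dm_cblock_t nr_entries;
};

static dm_cblock_t infer_cblock(struct entry_pool_sketch *ep, struct entry *e)
{
	return to_cblock(e - ep->entries);
}

static struct entry *entry_at(struct entry_pool_sketch *ep, dm_cblock_t cblock)
{
	return ep->entries + from_cblock(cblock);
}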
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Need to check the version to verify on-disk metadata is supported.
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
"Passthrough" is a dm-cache operating mode (like writethrough or
writeback) which is intended to be used when the cache contents are not
known to be coherent with the origin device. It behaves as follows:
* All reads are served from the origin device (all reads miss the cache)
* All writes are forwarded to the origin device; additionally, write
hits cause cache block invalidates
This mode decouples cache coherency checks from cache device creation,
largely to avoid having to perform coherency checks while booting. Boot
scripts can create cache devices in passthrough mode and put them into
service (mount cached filesystems, for example) without having to worry
about coherency. Coherency that exists is maintained, although the
cache will gradually cool as writes take place.
Later, applications can perform coherency checks, the nature of which
will depend on the type of the underlying storage. If coherency can be
verified, the cache device can be transitioned to writethrough or
writeback mode while still warm; otherwise, the cache contents can be
discarded prior to transitioning to the desired operating mode.
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Morgan Mears <Morgan.Mears@netapp.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Allow a cache to shrink if the blocks being removed from the cache are
not dirty.
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>