Avoids this warning:
arch/powerpc/boot/gunzip_util.c:118:9: warning: comparison of distinct pointer types lacks a cast
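For context, this class of warning typically comes from a min()/max()-style
macro that compares pointers to its two temporaries purely to enforce that
both arguments have the same type; the usual fix is an explicit-type variant
such as min_t(). A standalone illustration (not the actual gunzip_util.c
hunk; the macro and variables here are made up for the example):

  #include <stdio.h>

  /* Same trick the kernel's min() uses: the pointer comparison only exists
   * so gcc warns when x and y have different types. */
  #define min(x, y) ({                    \
          typeof(x) _x = (x);             \
          typeof(y) _y = (y);             \
          (void) (&_x == &_y);            \
          _x < _y ? _x : _y; })

  #define min_t(type, x, y) min((type)(x), (type)(y))

  int main(void)
  {
          unsigned int avail = 16;        /* e.g. an avail_out style counter */
          int wanted = 32;

          /* min(avail, wanted) would emit "comparison of distinct pointer
           * types lacks a cast"; casting both operands to one type does not. */
          int len = min_t(int, avail, wanted);

          printf("%d\n", len);
          return 0;
  }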
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The PowerNV platform is capable of capturing a host memory region when the
system crashes (because of a host/firmware failure). We have a new OPAL API
to register/unregister a memory region to be captured when the system
crashes.
This patch adds support for the new API. Also, during boot we register the
kernel log buffer and unregister it before doing kexec.
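A minimal sketch of the boot-time registration described here, assuming the
OPAL call is opal_register_dump_region(id, addr, size), a region id such as
OPAL_DUMP_REGION_LOG_BUF, and the log_buf_addr_get()/log_buf_len_get()
helpers added in the printk patch; not the exact upstream code:

  /* Sketch only, names per the assumptions above. */
  static void __init opal_dump_region_init(void)
  {
          void *addr = log_buf_addr_get();
          uint64_t size = log_buf_len_get();
          int rc;

          /* Ask OPAL to capture the kernel log buffer if the system crashes. */
          rc = opal_register_dump_region(OPAL_DUMP_REGION_LOG_BUF,
                                         __pa(addr), size);
          if (rc)
                  pr_warn("DUMP: Failed to register kernel log buffer. rc = %d\n", rc);
  }

The matching unregister call for the same region id would then be made from
the kexec shutdown path, as described above.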
Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Platforms like IBM Power Systems support service processor assisted dump.
It provides an interface to add memory regions to be captured when the
system crashes.
During initialization/runtime we can add kernel memory regions to be
collected.
Presently we don't have a way to get the log buffer base address and size.
This patch adds support to return the log buffer address and size.
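The accessors are roughly the following (kernel/printk/printk.c), simply
exposing the existing log_buf/log_buf_len variables:

  /* Return the address of the kernel log buffer. */
  char *log_buf_addr_get(void)
  {
          return log_buf;
  }

  /* Return the size of the kernel log buffer. */
  u32 log_buf_len_get(void)
  {
          return log_buf_len;
  }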
Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
We have been a bit slack about updating the CPU_FTRS_POSSIBLE and
CPU_FTRS_ALWAYS masks. When we added POWER8, and also POWER8E, we forgot
to update the ALWAYS mask. And when we added POWER8_DD1 we forgot to
update both the POSSIBLE and ALWAYS masks.
Luckily this hasn't caused any actual bugs AFAICS. Failing to update the
ALWAYS mask just forgoes a potential optimisation opportunity. Failing
to update the POSSIBLE mask for POWER8_DD1 is also OK because it only
removes a bit rather than adding any.
Regardless they should all be in both masks so as to avoid any future
bugs when the set of ALWAYS/POSSIBLE bits changes, or the masks
themselves change.
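To make the bookkeeping concrete: POSSIBLE is the bitwise OR of every
supported CPU's feature mask and ALWAYS is the bitwise AND, so every new
CPU mask has to appear in both. A toy, compilable illustration with made-up
feature bits (not the real cputable.h values):

  #include <stdio.h>

  #define FTR_A 0x1u
  #define FTR_B 0x2u

  /* Made-up per-CPU masks for the example; say DD1 lacks FTR_B. */
  #define CPU_FTRS_POWER8     (FTR_A | FTR_B)
  #define CPU_FTRS_POWER8E    (FTR_A | FTR_B)
  #define CPU_FTRS_POWER8_DD1 (FTR_A)

  /* Union of features we might see, intersection of features we always have. */
  #define CPU_FTRS_POSSIBLE (CPU_FTRS_POWER8 | CPU_FTRS_POWER8E | CPU_FTRS_POWER8_DD1)
  #define CPU_FTRS_ALWAYS   (CPU_FTRS_POWER8 & CPU_FTRS_POWER8E & CPU_FTRS_POWER8_DD1)

  int main(void)
  {
          printf("POSSIBLE=%#x ALWAYS=%#x\n", CPU_FTRS_POSSIBLE, CPU_FTRS_ALWAYS);
          return 0;
  }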
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Michael Neuling <mikey@neuling.org>
Acked-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This patch disables the branch target address CAM, which under specific
circumstances may cause the processor to skip execution of 1-4
instructions. This fixes IBM Erratum #47.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
When we use full hotplug to recover from EEH errors, PCI buses can be
involved. In that case, the child devices of the involved PCI buses can't
be attached to their IOMMU group properly, which is caused by commit
3f28c5a ("powerpc/powernv: Reduce multi-hit of iommu_add_device()").
When adding the PCI devices of the newly created PCI buses to the system,
the IOMMU group is expected to be added in (C). (A) fails to bind the
IOMMU group because bus->is_added is false. (B) fails because the device
doesn't have a bound IOMMU table yet. bus->is_added is set to true at the
end of (C) and pdev->is_added is set to true at (D).
pcibios_add_pci_devices()
    pci_scan_bridge()
        pci_scan_child_bus()
            pci_scan_slot()
                pci_scan_single_device()
                    pci_scan_device()
                    pci_device_add()
                        pcibios_add_device()           A: Ignore
                        device_add()                   B: Ignore
            pcibios_fixup_bus()
                pcibios_setup_bus_devices()
                    pcibios_setup_device()             C: Hit
    pcibios_finish_adding_to_bus()
        pci_bus_add_devices()
            pci_bus_add_device()                       D: Add device
If the parent PCI bus isn't involved in hotplug, the IOMMU group is
expected to be bound in (B). (A) should fail as the sysfs entries aren't
populated.
The patch fixes the issue by reverting commit 3f28c5a and removing the
WARN_ON() in iommu_add_device() so the function can be called even if the
specified device already has an associated IOMMU group.
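A hedged sketch of the resulting iommu_add_device() behaviour
(arch/powerpc/kernel/iommu.c): skip quietly when the device is already in a
group instead of WARN_ON()ing, so the function can safely be called from
both paths. Not the exact upstream hunk:

  /* Sketch, not the exact upstream change. */
  int iommu_add_device(struct device *dev)
  {
          struct iommu_table *tbl;

          if (!device_is_registered(dev))
                  return -ENOENT;

          if (dev->iommu_group)
                  return 0;       /* already bound on an earlier pass, nothing to do */

          tbl = get_iommu_table_base(dev);
          if (!tbl || !tbl->it_group)
                  return 0;       /* no IOMMU table bound yet, e.g. case (A)/(B) above */

          return iommu_group_add_device(tbl->it_group, dev);
  }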
Cc: <stable@vger.kernel.org> # 3.16+
Reported-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Acked-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
As with the previous commit, which described why we need to add a barrier
to arch_spin_is_locked(), we have a similar problem with
spin_unlock_wait().
We need a barrier on entry to ensure any spinlock we have previously
taken is visibly locked prior to the load of lock->slock.
It's also not clear if spin_unlock_wait() is intended to have ACQUIRE
semantics. For now be conservative and add a barrier on exit to give it
ACQUIRE semantics.
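A conservative sketch of the change for arch/powerpc (not necessarily the
exact diff):

  /* Sketch: barrier on entry and, conservatively, on exit. */
  static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
  {
          smp_mb();       /* make any lock we hold visibly locked before loading lock->slock */
          while (arch_spin_is_locked(lock))
                  cpu_relax();
          smp_mb();       /* conservatively give the caller ACQUIRE-like semantics */
  }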
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The kernel defines the function spin_is_locked(), which can be used to
check if a spinlock is currently locked.
Using spin_is_locked() on a lock you don't hold is obviously racy. That
is, even though you may observe that the lock is unlocked, it may become
locked at any time.
There is (at least) one exception to that, which is if two locks are
used as a pair, and the holder of each checks the status of the other
before doing any update.
Assuming *A and *B are two locks, and *COUNTER is a shared non-atomic
value:
The first CPU does:
  spin_lock(*A)
  if spin_is_locked(*B)
      # nothing
  else
      smp_mb()
      LOAD r = *COUNTER
      r++
      STORE *COUNTER = r
  spin_unlock(*A)
And the second CPU does:
  spin_lock(*B)
  if spin_is_locked(*A)
      # nothing
  else
      smp_mb()
      LOAD r = *COUNTER
      r++
      STORE *COUNTER = r
  spin_unlock(*B)
Although this is a strange locking construct, it should work.
It seems to be understood, but not documented, that spin_is_locked() is
not a memory barrier, so in the examples above and below the caller
inserts its own memory barrier before acting on the result of
spin_is_locked().
For now we assume spin_is_locked() is implemented as below, and we break
it out in our examples:
  bool spin_is_locked(*LOCK) {
      LOAD l = *LOCK
      return l.locked
  }
Our intuition is that there should be no problem even if the two code
sequences run simultaneously such as:
  CPU 0                         CPU 1
  ==================================================
  spin_lock(*A)                 spin_lock(*B)
  LOAD b = *B                   LOAD a = *A
  if b.locked # true            if a.locked # true
  # nothing                     # nothing
  spin_unlock(*A)               spin_unlock(*B)
If one CPU gets the lock before the other then it will do the update and
the other CPU will back off:
  CPU 0                         CPU 1
  ==================================================
  spin_lock(*A)
  LOAD b = *B
                                spin_lock(*B)
  if b.locked # false           LOAD a = *A
  else                          if a.locked # true
      smp_mb()                  # nothing
      LOAD r1 = *COUNTER        spin_unlock(*B)
      r1++
      STORE *COUNTER = r1
  spin_unlock(*A)
However in reality spin_lock() itself is not indivisible. On powerpc we
implement it as a load-and-reserve and store-conditional.
Ignoring the retry logic for the lost reservation case, it boils down to:
  spin_lock(*LOCK) {
      LOAD l = *LOCK
      l.locked = true
      STORE *LOCK = l
      ACQUIRE_BARRIER
  }
The ACQUIRE_BARRIER is required to give spin_lock() ACQUIRE semantics as
defined in memory-barriers.txt:
This acts as a one-way permeable barrier. It guarantees that all
memory operations after the ACQUIRE operation will appear to happen
after the ACQUIRE operation with respect to the other components of
the system.
On modern powerpc systems we use lwsync for ACQUIRE_BARRIER. lwsync is
also known as "lightweight sync", or "sync 1".
As described in Power ISA v2.07 section B.2.1.1, in this scenario the
lwsync is not the barrier itself. It instead causes the LOAD of *LOCK to
act as the barrier, preventing any loads or stores in the locked region
from occurring prior to the load of *LOCK.
Whether this behaviour is in accordance with the definition of ACQUIRE
semantics in memory-barriers.txt is open to discussion; we may switch to
a different barrier in future.
What this means in practice is that the following can occur:
  CPU 0                         CPU 1
  ==================================================
  LOAD a = *A                   LOAD b = *B
  a.locked = true               b.locked = true
  LOAD b = *B                   LOAD a = *A
  STORE *A = a                  STORE *B = b
  if b.locked # false           if a.locked # false
  else                          else
      smp_mb()                      smp_mb()
      LOAD r1 = *COUNTER            LOAD r2 = *COUNTER
      r1++                          r2++
      STORE *COUNTER = r1
                                    STORE *COUNTER = r2   # Lost update
  spin_unlock(*A)               spin_unlock(*B)
That is, the load of *B can occur prior to the store that makes *A
visibly locked. And similarly for CPU 1. The result is both CPUs hold
their lock and believe the other lock is unlocked.
The easiest fix for this is to add a full memory barrier to the start of
spin_is_locked(), so adding to our previous definition would give us:
  bool spin_is_locked(*LOCK) {
      smp_mb()
      LOAD l = *LOCK
      return l.locked
  }
The new barrier orders the store to the lock we are locking vs the load
of the other lock:
  CPU 0                         CPU 1
  ==================================================
  LOAD a = *A                   LOAD b = *B
  a.locked = true               b.locked = true
  STORE *A = a                  STORE *B = b
  smp_mb()                      smp_mb()
  LOAD b = *B                   LOAD a = *A
  if b.locked # true            if a.locked # true
  # nothing                     # nothing
  spin_unlock(*A)               spin_unlock(*B)
Although the above example is theoretical, there is code similar to this
example in sem_lock() in ipc/sem.c. This commit, in addition to the next
commit, appears to be a fix for crashes we are seeing in that code, where
we believe this race happens in practice.
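Concretely, on powerpc the fix is roughly the following
(arch/powerpc/include/asm/spinlock.h):

  /* Roughly the change described above, not necessarily the exact diff. */
  static inline int arch_spin_is_locked(arch_spinlock_t *lock)
  {
          smp_mb();       /* order our own lock's STORE vs. the LOAD of the other lock */
          return !arch_spin_value_unlocked(*lock);
  }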
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Once again, we see
arch/powerpc/kernel/exceptions-64s.S: Assembler messages:
arch/powerpc/kernel/exceptions-64s.S:865: Error: attempt to move .org backwards
arch/powerpc/kernel/exceptions-64s.S:866: Error: attempt to move .org backwards
arch/powerpc/kernel/exceptions-64s.S:890: Error: attempt to move .org backwards
when compiling ppc:allmodconfig.
This time the problem has been caused by commit 0869b6fd20
("powerpc/book3s: Add basic infrastructure to handle HMI in Linux"),
which adds functions hmi_exception_early and hmi_exception_after_realmode
into a critical (size-limited) code area, even though that does not appear
to be necessary.
Move those functions to a non-critical area of the file.
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
__early_init_mmu() does some things that are really only needed by the
boot cpu. On FSL booke, this includes calling
memblock_enforce_memory_limit(), which is labelled __init. Secondary
cpu init code can't be __init as that would break CPU hotplug.
While it's probably a bug that memblock_enforce_memory_limit() isn't
__init_memblock instead, there's no reason why we should be doing this
stuff for secondary cpus in the first place.
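A sketch of the shape of the fix (arch/powerpc/mm/tlb_nohash.c) with a
hypothetical helper name: keep the per-cpu setup common, and do the
boot-cpu-only work, including the __init memblock call, only from
early_init_mmu():

  /* Sketch with hypothetical naming, not the exact diff. */
  static void early_init_this_mmu(void)
  {
          /* per-cpu MMU setup shared by the boot and secondary cpus */
  }

  void __init early_init_mmu(void)
  {
          early_init_this_mmu();
          /* boot-cpu-only work; calling __init code is fine from here */
          memblock_enforce_memory_limit(linear_map_top);
  }

  void early_init_mmu_secondary(void)
  {
          early_init_this_mmu();  /* no __init calls: keeps CPU hotplug working */
  }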
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/olof/chrome-platform
Pull chrome platform updates from Olof Johansson:
"Updates to the Chromebook/box platform drivers:
- a bugfix to pstore registration that makes it also work on
non-Google systems
- addition of new shipped Chromebooks (later models have more probing
through ACPI so the need for these updates will be less over time).
- A couple of minor coding style updates"
* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/olof/chrome-platform:
platform/chrome: chromeos_laptop - Add a limit for deferred retries
platform/chrome: Add support for the acer c720p touchscreen.
platform/chrome: pstore: fix dmi table to match all chrome systems
platform/chrome: coding style fixes
platform/chrome: chromeos_laptop - Add Toshiba CB35 Touch
platform/chrome: chromeos_laptop - Add Dell Chromebook 11 touch
platform/chrome: chromeos_laptop - Add HP Chromebook 14
platform/chrome: chromeos_laptop - Add support for Acer C720
Merge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC fixes from Olof Johansson:
- a short branch of OMAP fixes that we didn't merge before the window
opened.
- a small cleanup that sorts the rk3288 dts entries properly
- a build fix due to a reference to a removed DT node on exynos
* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
ARM: dts: exynos5420: remove disp_pd
ARM: EXYNOS: Fix suspend/resume sequences
ARM: dts: Fix the sort ordering of EHCI and HSIC in rk3288.dtsi
ARM: OMAP3: Fix coding style problems in arch/arm/mach-omap2/control.c
ARM: OMAP3: Fix choice of omap3_restore_es function in OMAP34XX rev3.1.2 case.
ARM: OMAP2+: clock: allow omap2_dpll_round_rate() to round to next-lowest rate
Pull nouveau drm updates from Ben Skeggs:
"Apologies for not getting this done in time for Dave's drm-next merge
window. As he mentioned, a pre-existing bug reared its head a lot
more obviously after this lot of changes. It took quite a bit of time
to track it down. In any case, Dave suggested I try my luck by
sending directly to you this time.
Overview:
- more code for Tegra GK20A from NVIDIA - probing, reclocking
- better fix for Kepler GPUs that have the graphics engine powered
off on startup, method courtesy of info provided by NVIDIA
- unhardcoding of a bunch of graphics engine setup on
Fermi/Kepler/Maxwell, will hopefully solve some issues people have
noticed on higher-end models
- support for "Zero Bandwidth Clear" on Fermi/Kepler/Maxwell, needs
userspace support in general, but some lucky apps will benefit
automagically
- reviewed/exposed the full object APIs to userspace (finally), giving
it access to perfctrs, ZBC controls, various events. More to come
in the future.
- various other fixes"
Acked-by: Dave Airlie <airlied@redhat.com>
* 'linux-3.17' of git://anongit.freedesktop.org/git/nouveau/linux-2.6: (87 commits)
drm/nouveau: expose the full object/event interfaces to userspace
drm/nouveau: fix headless mode
drm/nouveau: hide sysfs pstate file behind an option again
drm/nv50/disp: shhh compiler
drm/gf100-/gr: implement the proper SetShaderExceptions method
drm/gf100-/gr: remove some broken ltc bashing, for now
drm/gf100-/gr: unhardcode attribute cb config
drm/gf100-/gr: fetch tpcs-per-ppc info on startup
drm/gf100-/gr: unhardcode pagepool config
drm/gf100-/gr: unhardcode bundle cb config
drm/gf100-/gr: improve initial context patch list helpers
drm/gf100-/gr: add support for zero bandwidth clear
drm/nouveau/ltc: add zbc drivers
drm/nouveau/ltc: s/ltcg/ltc/ + cleanup
drm/nouveau: use ram info from nvif_device
drm/nouveau/disp: implement nvif event sources for vblank/connector notifiers
drm/nouveau/disp: allow user direct access to channel control registers
drm/nouveau/disp: audit and version display classes
drm/nouveau/disp: audit and version SCANOUTPOS method
drm/nv50-/disp: audit and version PIOR_PWR method
...
Merge tag 'trace-ipi-tracepoints' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull IPI tracepoints for ARM from Steven Rostedt:
"Nicolas Pitre added generic tracepoints for tracing IPIs and updated
the arm and arm64 architectures. It required some minor updates to
the generic tracepoint system, so it had to wait for me to implement
them"
* tag 'trace-ipi-tracepoints' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
ARM64: add IPI tracepoints
ARM: add IPI tracepoints
tracepoint: add generic tracepoint definitions for IPI tracing
tracing: Do not do anything special with tracepoint_string when tracing is disabled
Merge tag 'trace-fixes-3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull trace file read iterator fixes from Steven Rostedt:
"This contains a fix for two long standing bugs. Both of which are
rarely ever hit, and requires the user to do something that users
rarely do. It took a few special test cases to even trigger this bug,
and one of them was just one test in the process of finishing up as
another one started.
Both bugs have to do with the ring buffer iterator rb_iter_peek(), but
one is more indirect than the other.
The first bug fix is simply an increase in the safety net loop counter.
The counter makes sure that the rb_iter_peek() only iterates the
number of times we expect it can, and no more. Well, there was one
way it could iterate one more than we expected, and that caused the
ring buffer to shutdown with a nasty warning. The fix was simply to
up that counter by one.
The other bug has to do with rb_iter_reset() (called by
rb_iter_peek()). This happens when a user reads both the trace_pipe
and trace files. The trace_pipe is a consuming read and does not use
the ring buffer iterator, but the trace file is not a consuming read
and does use the ring buffer iterator. When the trace file is being
read, if it detects that a consuming read occurred, it resets the
iterator and starts over. But the reset code that does this
(rb_iter_reset()), checks if the reader_page is linked to the ring
buffer or not, and will look into the ring buffer itself if it is not.
This is wrong, as it should always try to read the reader page first.
Not to mention, the code that looked into the ring buffer did it
wrong, and used the header_page "read" offset to start reading on that
page. That offset is bogus for pages in the writable ring buffer, and
was corrupting the iterator, and it would start returning bogus
events"
* tag 'trace-fixes-3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
ring-buffer: Always reset iterator to reader page
ring-buffer: Up rb_iter_peek() loop count to 3
Pull namespace updates from Eric Biederman:
"This is a bunch of small changes built against 3.16-rc6. The most
significant change for users is the first patch which makes setns
drmatically faster by removing unneded rcu handling.
The next chunk of changes are so that "mount -o remount,.." will not
allow the user namespace root to drop flags on a mount set by the
system-wide root. Aka, this forces read-only mounts to stay read-only,
no-dev mounts to stay no-dev, no-suid mounts to stay no-suid, no-exec
mounts to stay no-exec, and it prevents unprivileged users from messing
with a mount's atime settings. I have included my test case as the
last patch in this series so people performing backports can verify
this change works correctly.
The next change fixes a bug in NFS that was discovered while auditing
nsproxy users for the first optimization. Today you can oops the
kernel by reading /proc/fs/nfsfs/{servers,volumes} if you are clever
with pid namespaces. I rebased and fixed the build of the
!CONFIG_NFS_FS case yesterday when a build bot caught my typo. Given
that no one, to my knowledge, bases anything on my tree, fixing the typo
in place seems more responsible than requiring a typo-fix to be
backported as well.
The last change is a small semantic cleanup introducing
/proc/thread-self and pointing /proc/mounts and /proc/net at it. This
prevents several kinds of problematic corner cases. It is a
user-visible change, so it has a minute chance of causing regressions;
the changes to /proc/mounts and /proc/net are individual one-line
commits that can be trivially reverted. Unfortunately I lost and
could not find the email of the original reporter so he is not
credited. From at least one perspective this change to /proc/net is a
regression fix to allow pthread /proc/net uses that were broken by
the introduction of the network namespace"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
proc: Point /proc/mounts at /proc/thread-self/mounts instead of /proc/self/mounts
proc: Point /proc/net at /proc/thread-self/net instead of /proc/self/net
proc: Implement /proc/thread-self to point at the directory of the current thread
proc: Have net show up under /proc/<tgid>/task/<tid>
NFS: Fix /proc/fs/nfsfs/servers and /proc/fs/nfsfs/volumes
mnt: Add tests for unprivileged remount cases that have found to be faulty
mnt: Change the default remount atime from relatime to the existing value
mnt: Correct permission checks in do_remount
mnt: Move the test for MNT_LOCK_READONLY from change_mount_flags into do_remount
mnt: Only change user settable mount flags in remount
namespaces: Use task_lock and not rcu to protect nsproxy
Pull SElinux fixes from Paul Moore:
"Two small patches to fix a couple of build warnings in SELinux and
NetLabel. The patches are obvious enough that I don't think any
additional explanation is necessary, but it basically boils down to
the usual: I was stupid, and these patches fix some of the stupid.
Both patches were posted earlier this week to the SELinux list, and
that is where they sat as I didn't think they were noteworthy enough
to go upstream at this point in time, but DaveM would rather see them
upstream now so who am I to argue. As the patches are both very
small"
* 'stable-3.17' of git://git.infradead.org/users/pcmoore/selinux:
selinux: remove unused variabled in the netport, netnode, and netif caches
netlabel: fix the netlbl_catmap_setlong() dummy function
Pull nfsd updates from Bruce Fields:
"This includes a major rewrite of the NFSv4 state code, which has
always depended on a single mutex. As an example, open creates are no
longer serialized, fixing a performance regression on NFSv3->NFSv4
upgrades. Thanks to Jeff, Trond, and Benny, and to Christoph for
review.
Also some RDMA fixes from Chuck Lever and Steve Wise, and
miscellaneous fixes from Kinglong Mee and others"
* 'for-3.17' of git://linux-nfs.org/~bfields/linux: (167 commits)
svcrdma: remove rdma_create_qp() failure recovery logic
nfsd: add some comments to the nfsd4 object definitions
nfsd: remove the client_mutex and the nfs4_lock/unlock_state wrappers
nfsd: remove nfs4_lock_state: nfs4_state_shutdown_net
nfsd: remove nfs4_lock_state: nfs4_laundromat
nfsd: Remove nfs4_lock_state(): reclaim_complete()
nfsd: Remove nfs4_lock_state(): setclientid, setclientid_confirm, renew
nfsd: Remove nfs4_lock_state(): exchange_id, create/destroy_session()
nfsd: Remove nfs4_lock_state(): nfsd4_open and nfsd4_open_confirm
nfsd: Remove nfs4_lock_state(): nfsd4_delegreturn()
nfsd: Remove nfs4_lock_state(): nfsd4_open_downgrade + nfsd4_close
nfsd: Remove nfs4_lock_state(): nfsd4_lock/locku/lockt()
nfsd: Remove nfs4_lock_state(): nfsd4_release_lockowner
nfsd: Remove nfs4_lock_state(): nfsd4_test_stateid/nfsd4_free_stateid
nfsd: Remove nfs4_lock_state(): nfs4_preprocess_stateid_op()
nfsd: remove old fault injection infrastructure
nfsd: add more granular locking to *_delegations fault injectors
nfsd: add more granular locking to forget_openowners fault injector
nfsd: add more granular locking to forget_locks fault injector
nfsd: add a list_head arg to nfsd_foreach_client_lock
...
Pull CIFS updates from Steve French:
"The most visible change in this set is the additional of multi-credit
support for SMB2/SMB3 which dramatically improves the large file i/o
performance for these dialects and significantly increases the maximum
i/o size used on the wire for SMB2/SMB3.
Also reconnection behavior after network failure is improved"
* 'for-next' of git://git.samba.org/sfrench/cifs-2.6: (35 commits)
Add worker function to set allocation size
[CIFS] Fix incorrect hex vs. decimal in some debug print statements
update CIFS TODO list
Add Pavel to contributor list in cifs AUTHORS file
Update cifs version
CIFS: Fix STATUS_CANNOT_DELETE error mapping for SMB2
CIFS: Optimize readpages in a short read case on reconnects
CIFS: Optimize cifs_user_read() in a short read case on reconnects
CIFS: Improve indentation in cifs_user_read()
CIFS: Fix possible buffer corruption in cifs_user_read()
CIFS: Count got bytes in read_into_pages()
CIFS: Use separate var for the number of bytes got in async read
CIFS: Indicate reconnect with ECONNABORTED error code
CIFS: Use multicredits for SMB 2.1/3 reads
CIFS: Fix rsize usage for sync read
CIFS: Fix rsize usage in user read
CIFS: Separate page reading from user read
CIFS: Fix rsize usage in readpages
CIFS: Separate page search from readpages
CIFS: Use multicredits for SMB 2.1/3 writes
...
No-one has yet had time to move this to debugfs as discussed during
the last merge window. Until this happens, hide the option to make
it clear it's not going to be here forever.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
We have another version of it implemented in SW; however, that version
isn't serialised with normal PGRAPH operation and can possibly clobber
the enables for another context.
This is the same method that's implemented by the NVIDIA binary driver.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
... and hope that the defaults are good enough. This was always
supposed to be a read/modify/write thing anyway, so we're writing
very wrong stuff for some boards already.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Should be the same values as before, except:
GF117 has a smaller buffer allocated, as per register setup.
GK20A now uses values from the Tegra driver, not GK104's.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Removes the need for fixed buffer indices, and allows the functions
utilising them to also be run outside of context generation.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The default ZBC table is compatible with the binary driver defaults.
Userspace will need to be updated to take full advantage of this
feature; however, some applications will see a performance boost
without updated drivers.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
One of the next commits will remove some of the class IDs, leaving only
the ones used by NVIDIA which, presumably, mark where functionality
changes actually happened.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>