__SEQ_LOCKDEP() is an expression gate for the
seqcount_LOCKNAME_t::lock member. Rename it to be about the member,
not the gate condition.
Later (PREEMPT_RT) patches will make the member available for !LOCKDEP
configs.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not
contain the information of which lock must be held when entering a write
side critical section.
Use the new seqcount_raw_spinlock_t data type, which allows a raw spinlock
to be associated with the sequence counter. This enables lockdep to verify
that the raw spinlock used for writer serialization is held when the
write side critical section is entered.
If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
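As an illustration only -- the structure and field names below are
hypothetical, not taken from this patch -- such a conversion follows this
pattern:
    struct foo {
            seqcount_raw_spinlock_t seq;    /* was: seqcount_t */
            raw_spinlock_t          lock;
            u64                     data;
    };
    static void foo_init(struct foo *foo)
    {
            raw_spin_lock_init(&foo->lock);
            /* associate the lock with the counter */
            seqcount_raw_spinlock_init(&foo->seq, &foo->lock);
    }
    static void foo_update(struct foo *foo, u64 val)
    {
            raw_spin_lock(&foo->lock);
            write_seqcount_begin(&foo->seq); /* lockdep checks foo->lock */
            foo->data = val;
            write_seqcount_end(&foo->seq);
            raw_spin_unlock(&foo->lock);
    }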
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-25-a.darwish@linutronix.de
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not
contain the information of which lock must be held when entering a write
side critical section.
Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.
If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Link: https://lkml.kernel.org/r/20200720155530.1173732-24-a.darwish@linutronix.de
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not
contain the information of which lock must be held when entering a write
side critical section.
Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.
If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-23-a.darwish@linutronix.de
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not
contain the information of which lock must be held when entering a write
side critical section.
Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.
If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-22-a.darwish@linutronix.de
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not
contain the information of which lock must be held when entering a write
side critical section.
Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.
If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
Link: https://lkml.kernel.org/r/20200720155530.1173732-21-a.darwish@linutronix.de
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not
contain the information of which lock must be held when entering a write
side critical section.
Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.
If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Song Liu <song@kernel.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-20-a.darwish@linutronix.de
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not
contain the information of which lock must be held when entering a write
side critical section.
Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.
If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-19-a.darwish@linutronix.de
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not
contain the information of which lock must be held when entering a write
side critical section.
Use the new seqcount_raw_spinlock_t data type, which allows a raw spinlock
to be associated with the sequence counter. This enables lockdep to verify
that the raw spinlock used for writer serialization is held when the
write side critical section is entered.
If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-18-a.darwish@linutronix.de
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. If the serialization primitive does
not implicitly disable preemption, preemption must be explicitly disabled
before entering the sequence counter write side critical section.
A plain seqcount_t does not contain the information of which lock must
be held when entering a write side critical section.
Use the new seqcount_spinlock_t and seqcount_mutex_t data types instead,
which allow a lock to be associated with the sequence counter. This enables
lockdep to verify that the lock used for writer serialization is held
when the write side critical section is entered.
If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-17-a.darwish@linutronix.de
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not
contain the information of which lock must be held when entering a write
side critical section.
Use the new seqcount_rwlock_t data type, which allows an rwlock to be
associated with the sequence counter. This enables lockdep to verify that
the rwlock used for writer serialization is held when the write side
critical section is entered.
If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-16-a.darwish@linutronix.de
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not
contain the information of which lock must be held when entering a write
side critical section.
Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.
If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-15-a.darwish@linutronix.de
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not
contain the information of which lock must be held when entering a write
side critical section.
Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.
If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-14-a.darwish@linutronix.de
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. If the serialization primitive does
not implicitly disable preemption, preemption must be explicitly disabled
before entering the sequence counter write side critical section.
The dma-buf reservation subsystem uses plain sequence counters to manage
updates to reservations. Writer serialization is accomplished through a
wound/wait mutex.
Acquiring a wound/wait mutex does not disable preemption, so preemption
must be disabled manually before entering, and re-enabled after leaving,
the write side critical section.
Use the newly-added seqcount_ww_mutex_t instead:
- It associates the ww_mutex with the sequence count, which enables
lockdep to validate that the write side critical section is properly
serialized.
- It removes the need to explicitly add preempt_disable/enable()
around the write side critical section because the write_begin/end()
functions for this new data type automatically do this.
If lockdep is disabled this ww_mutex lock association is compiled out
and has neither storage size nor runtime overhead.
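A rough before/after sketch of the write side (hypothetical object and
field names, not the actual dma-buf code):
    /* before: plain seqcount_t, manual preemption control */
    ww_mutex_lock(&obj->lock, ctx);
    preempt_disable();
    write_seqcount_begin(&obj->seq);
    /* ... update the reservation ... */
    write_seqcount_end(&obj->seq);
    preempt_enable();
    ww_mutex_unlock(&obj->lock);

    /* after: seqcount_ww_mutex_t, associated at init time via
     * seqcount_ww_mutex_init(&obj->seq, &obj->lock) */
    ww_mutex_lock(&obj->lock, ctx);
    write_seqcount_begin(&obj->seq);  /* disables preemption, asserts the lock */
    /* ... update the reservation ... */
    write_seqcount_end(&obj->seq);    /* re-enables preemption */
    ww_mutex_unlock(&obj->lock);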
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://lkml.kernel.org/r/20200720155530.1173732-13-a.darwish@linutronix.de
Commit 3c3b177a93 ("reservation: add support for read-only access
using rcu") introduced a sequence counter to manage updates to
reservations. Back then, the reservation object initializer
reservation_object_init() was always inlined.
Having the sequence counter initialization inlined meant that each of
the call sites would have a different lockdep class key, which would've
broken lockdep's deadlock detection. The aforementioned commit thus
introduced, and exported, a custom seqcount lockdep class key and name.
The commit 8735f16803 ("dma-buf: cleanup reservation_object_init...")
transformed the reservation object initializer to a normal non-inlined C
function. seqcount_init(), which automatically defines the seqcount
lockdep class key and must be called non-inlined, can now be safely used.
Remove the seqcount custom lockdep class key, name, and export. Use
seqcount_init() inside the dma reservation object initializer.
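For illustration (hypothetical names, not the actual dma-resv code):
seqcount_init() defines a static lockdep class key at its call site, so
with a single out-of-line initializer every object shares the same class:
    static DEFINE_WW_CLASS(obj_ww_class);

    void obj_init(struct obj *obj)
    {
            ww_mutex_init(&obj->lock, &obj_ww_class);
            /* one call site => one shared lockdep class key */
            seqcount_init(&obj->seq);
    }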
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://lkml.kernel.org/r/20200720155530.1173732-12-a.darwish@linutronix.de
Parent commit, "seqlock: Extend seqcount API with associated locks",
introduced a big number of multi-line macros that are newline-escaped
at 72 columns.
For overall cohesion, align the earlier-existing macros similarly.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-11-a.darwish@linutronix.de
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. If the serialization primitive does
not implicitly disable preemption, preemption must be explicitly disabled
before entering the write side critical section.
There is no built-in debugging mechanism to verify that the lock used
for writer serialization is held and preemption is disabled. Some usage
sites like dma-buf have explicit lockdep checks for the writer-side
lock, but this covers only a small portion of the sequence counter usage
in the kernel.
Add new sequence counter types which allow a lock to be associated with
the sequence counter at initialization time. The seqcount API functions are
extended to provide appropriate lockdep assertions depending on the
seqcount/lock type.
For sequence counters with associated locks that do not implicitly
disable preemption, preemption protection is enforced in the sequence
counter write side functions. This removes the need to explicitly add
preempt_disable/enable() around the write side critical sections: the
write_begin/end() functions for these new sequence counter types
automatically do this.
Introduce the following seqcount types with associated locks:
seqcount_spinlock_t
seqcount_raw_spinlock_t
seqcount_rwlock_t
seqcount_mutex_t
seqcount_ww_mutex_t
Extend the seqcount read and write functions to branch out to the
specific seqcount_LOCKTYPE_t implementation at compile time. This avoids
a kernel API explosion for each new seqcount_LOCKTYPE_t added. Add this
compile-time type detection logic to a new, internal seqlock header.
Document the proper seqcount_LOCKTYPE_t usage, and rationale, at
Documentation/locking/seqlock.rst.
If lockdep is disabled, this lock association is compiled out and has
neither storage size nor runtime overhead.
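A minimal usage sketch for one of the new types, seqcount_spinlock_t
(hypothetical names; the authoritative templates are in
Documentation/locking/seqlock.rst):
    static DEFINE_SPINLOCK(foo_lock);
    static seqcount_spinlock_t foo_seq =
            SEQCNT_SPINLOCK_ZERO(foo_seq, &foo_lock);
    static u64 foo_a, foo_b;

    static void foo_update(u64 a, u64 b)
    {
            spin_lock(&foo_lock);
            write_seqcount_begin(&foo_seq); /* lockdep: foo_lock must be held */
            foo_a = a;
            foo_b = b;
            write_seqcount_end(&foo_seq);
            spin_unlock(&foo_lock);
    }

    static u64 foo_sum(void)
    {
            unsigned int seq;
            u64 a, b;

            do {
                    seq = read_seqcount_begin(&foo_seq);
                    a = foo_a;
                    b = foo_b;
            } while (read_seqcount_retry(&foo_seq, seq));

            return a + b;
    }
Because a spinlock already disables preemption, write_seqcount_begin()
here only adds the lockdep assertion; for the mutex and ww_mutex variants
it additionally disables preemption, as described above.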
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-10-a.darwish@linutronix.de
Preemption must be disabled before entering a sequence count write side
critical section. Otherwise, a seqcount reader can preempt the write side
section and spin for the entire scheduler tick. If that
reader belongs to a real-time scheduling class, it can spin forever and
the kernel will livelock.
Assert through lockdep that preemption is disabled for seqcount writers.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-9-a.darwish@linutronix.de
Asserting that preemption is enabled or disabled is a critical sanity
check. Developers are usually reluctant to add such a check in a
fastpath as reading the preemption count can be costly.
Extend the lockdep API with macros asserting that preemption is disabled
or enabled. If lockdep is disabled, or if the underlying architecture
does not support kernel preemption, this assert has no runtime overhead.
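For example, such an assert can be dropped into a fast path like this
(hypothetical per-CPU counter; the names are made up):
    static DEFINE_PER_CPU(unsigned long, foo_events);

    static void foo_account_event(void)
    {
            /* no overhead if lockdep is off or the kernel is not preemptible */
            lockdep_assert_preemption_disabled();
            __this_cpu_inc(foo_events);
    }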
References: f54bb2ec02 ("locking/lockdep: Add IRQs disabled/enabled assertion APIs: ...")
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-8-a.darwish@linutronix.de
raw_seqcount_begin() has the same code as raw_read_seqcount(), except
that it masks the sequence counter's LSB before returning it to the
caller. The LSB is masked so that read_seqcount_retry() can fail if the
counter is odd -- without the overhead of an extra branching instruction.
Implement raw_seqcount_begin() in terms of raw_read_seqcount() instead of
duplicating its code.
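Roughly, the reworked helper then becomes (simplified sketch):
    static inline unsigned raw_seqcount_begin(const seqcount_t *s)
    {
            /*
             * If the counter is odd, let read_seqcount_retry() fail
             * by masking the LSB off the value handed back.
             */
            return raw_read_seqcount(s) & ~1;
    }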
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-7-a.darwish@linutronix.de
seqlock.h is now included in the kernel's RST documentation, but only a
small number of the exported seqlock.h functions are kernel-doc annotated.
Add kernel-doc for all seqlock.h exported APIs.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-6-a.darwish@linutronix.de
The seqlock.h seqcount_t and seqlock_t API definitions are presented in
the chronological order of their development rather than the order that
makes most sense to readers. This makes it hard to follow and understand
the header file code.
Group and reorder all of the exported seqlock.h functions according to
their function.
First, group together the seqcount_t standard read path functions:
- __read_seqcount_begin()
- raw_read_seqcount_begin()
- read_seqcount_begin()
since each function is implemented exactly in terms of the one above
it. Then, group the special-case seqcount_t readers on their own as:
- raw_read_seqcount()
- raw_seqcount_begin()
since the only difference between the two functions is that the second
one masks the sequence counter LSB while the first one does not. Note
that raw_seqcount_begin() can actually be implemented in terms of
raw_read_seqcount(), which will be done in a follow-up commit.
Then, group the seqcount_t write path functions, instead of injecting
unrelated seqcount_t latch functions between them, and order them as:
- raw_write_seqcount_begin()
- raw_write_seqcount_end()
- write_seqcount_begin_nested()
- write_seqcount_begin()
- write_seqcount_end()
- raw_write_seqcount_barrier()
- write_seqcount_invalidate()
which is the expected natural order. This also isolates the seqcount_t
latch functions into their own area, at the end of the sequence counters
section, and before jumping to the next one: sequential locks
(seqlock_t).
Do a similar grouping and reordering for seqlock_t "locking" readers vs.
the "conditionally locking or lockless" ones.
No implementation code was changed in any of the reordering above.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-5-a.darwish@linutronix.de
The seqcount_t latch reader example at the raw_write_seqcount_latch()
kernel-doc comment ends the latch read section with a manual smp memory
barrier and sequence counter comparison.
This is technically correct, but it is suboptimal: read_seqcount_retry()
already contains the same logic of an smp memory barrier and sequence
counter comparison.
End the latch read critical section example with read_seqcount_retry().
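The example's reader loop then looks roughly like this (data_query() and
the latch structure are the documentation example's placeholders):
    struct entry *latch_query(struct latch_struct *latch, ...)
    {
            struct entry *entry;
            unsigned int seq, idx;

            do {
                    seq = raw_read_seqcount_latch(&latch->seq);
                    idx = seq & 0x01;
                    entry = data_query(latch->data[idx], ...);
            } while (read_seqcount_retry(&latch->seq, seq));

            return entry;
    }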
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-4-a.darwish@linutronix.de
Align the code samples and note sections inside kernel-doc comments with
tabs. This way they can be properly parsed and rendered by Sphinx. It
also makes the code samples easier to read from text editors.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-3-a.darwish@linutronix.de
Proper documentation for the design and usage of sequence counters and
sequential locks does not exist. Complete the seqlock.h documentation as
follows:
- Divide all documentation on a seqcount_t vs. seqlock_t basis. The
description for both mechanisms was intermingled, which is incorrect
since the usage constraints for each type are vastly different.
- Add an introductory paragraph describing the internal design of, and
rationale for, sequence counters.
- Document seqcount_t writer non-preemptibility requirement, which was
not previously documented anywhere, and provide a clear rationale.
- Provide template code for seqcount_t and seqlock_t initialization
and reader/writer critical sections.
- Recommend using seqlock_t by default. It implicitly handles the
serialization and non-preemptibility requirements of writers; a short
seqlock_t sketch is included after this list.
At seqlock.h:
- Remove references to brlocks as they've long been removed from the
kernel.
- Remove references to gcc-3.x since the kernel's minimum supported
gcc version is 4.9.
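As a sketch of the seqlock_t template recommended above (illustrative,
not a verbatim copy of the new document):
    static DEFINE_SEQLOCK(foo_seqlock);
    unsigned int seq;

    /* writer: no extra serialization or preemption control needed */
    write_seqlock(&foo_seqlock);
    /* ... [[write-side critical section]] ... */
    write_sequnlock(&foo_seqlock);

    /* reader: retries if it raced with a writer */
    do {
            seq = read_seqbegin(&foo_seqlock);
            /* ... [[read-side critical section]] ... */
    } while (read_seqretry(&foo_seqlock, seq));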
References: 0f6ed63b17 ("no need to keep brlock macros anymore...")
References: 6ec4476ac8 ("Raise gcc version requirement to 4.9")
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-2-a.darwish@linutronix.de
This patch breaks a header loop involving qspinlock_types.h.
The issue is that qspinlock_types.h includes atomic.h, which then
eventually includes kernel.h, which can lead back to the original
file via spinlock_types.h.
As ATOMIC_INIT is now defined by linux/types.h, there is no longer
any need to include atomic.h from qspinlock_types.h. This also
allows the CONFIG_PARAVIRT hack to be removed since it was trying
to prevent exactly this loop.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Link: https://lkml.kernel.org/r/20200729123316.GC7047@gondor.apana.org.au
This patch moves ATOMIC_INIT from asm/atomic.h into linux/types.h.
This allows users of atomic_t to use ATOMIC_INIT without having to
include atomic.h, since doing so may lead to header loops.
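For example, a header-only user can now carry a statically initialized
atomic_t with nothing but the types.h dependency (illustrative snippet,
assumed names):
    #include <linux/types.h>        /* atomic_t and, now, ATOMIC_INIT */

    static atomic_t foo_ref = ATOMIC_INIT(1);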
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Link: https://lkml.kernel.org/r/20200729123105.GB7047@gondor.apana.org.au
Currently lockdep_types.h includes list.h without actually using any
of its macros or functions. All it needs are the type definitions
which were moved into types.h long ago. This potentially causes
inclusion loops because both are included by many core header
files.
This patch moves the list.h inclusion into lockdep.h. Note that
we could probably remove it completely, but that could result in compile
failures for any users that neither include list.h directly nor happen to
pull it in via some other header file.
Reported-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Petr Mladek <pmladek@suse.com>
Link: https://lkml.kernel.org/r/20200716063649.GA23065@gondor.apana.org.au
Though the number of lock acquisitions is tracked as an unsigned long, it
is passed as the divisor to div_s64(), which interprets it as an s32,
giving nonsense values with more than 2 billion acquisitions. E.g.
acquisitions holdtime-min holdtime-max holdtime-total holdtime-avg
-------------------------------------------------------------------------
2350439395 0.07 353.38 649647067.36 0.-32
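A sketch of the underlying problem (not the actual lock_stat code; the
count is taken from the example above):
    /*
     * An acquisition count above S32_MAX does not fit in div_s64()'s
     * s32 divisor; it wraps to a negative value and the computed
     * average becomes garbage (the "0.-32" above).
     */
    unsigned long acquisitions = 2350439395UL;
    u64 total = 649647067ULL;
    s64 avg = div_s64(total, acquisitions);     /* divisor truncated to s32 */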
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20200725185110.11588-1-chris@chris-wilson.co.uk
Merge tag 'pci-v5.8-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci into master
Pull PCI fixes from Bjorn Helgaas:
- Reject invalid IRQ 0 command line argument for virtio_mmio because
IRQ 0 now generates warnings (Bjorn Helgaas)
- Revert "PCI/PM: Assume ports without DLL Link Active train links in
100 ms", which broke nouveau (Bjorn Helgaas)
* tag 'pci-v5.8-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci:
Revert "PCI/PM: Assume ports without DLL Link Active train links in 100 ms"
virtio-mmio: Reject invalid IRQ 0 command line argument
Merge tag 'nfsd-5.8-2' of git://linux-nfs.org/~bfields/linux into master
Pull nfsd fix from Bruce Fields:
"Just one fix for a NULL dereference if someone happens to read
/proc/fs/nfsd/client/../state at the wrong moment"
* tag 'nfsd-5.8-2' of git://linux-nfs.org/~bfields/linux:
nfsd4: fix NULL dereference in nfsd/clients display code
Merge misc fixes from Andrew Morton:
"Subsystems affected by this patch series: mm/pagemap, mm/shmem,
mm/hotfixes, mm/memcg, mm/hugetlb, mailmap, squashfs, scripts,
io-mapping, MAINTAINERS, and gdb"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
scripts/gdb: fix lx-symbols 'gdb.error' while loading modules
MAINTAINERS: add KCOV section
io-mapping: indicate mapping failure
scripts/decode_stacktrace: strip basepath from all paths
squashfs: fix length field overlap check in metadata reading
mailmap: add entry for Mike Rapoport
khugepaged: fix null-pointer dereference due to race
mm/hugetlb: avoid hardcoding while checking if cma is enabled
mm: memcg/slab: fix memory leak at non-root kmem_cache destroy
mm/memcg: fix refcount error while moving and swapping
mm/memcontrol: fix OOPS inside mem_cgroup_get_nr_swap_pages()
mm: initialize return of vm_insert_pages
vfs/xattr: mm/shmem: kernfs: release simple xattr entry in a right way
mm/mmap.c: close race between munmap() and expand_upwards()/downwards()
Pull xtensa csum regression fix from Al Viro:
"Max Filippov caught a breakage introduced in xtensa this cycle
by the csum_and_copy_..._user() series.
Cut'n'paste from the wrong source - the check that belongs
in csum_and_copy_to_user() ended up both there and in
csum_and_copy_from_user()"
* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
xtensa: fix access check in csum_and_copy_from_user
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux into master
Pull arm64 fix from Will Deacon:
"Fix compat vDSO build flags for recent versions of clang to tell it
where to find the assembler"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: vdso32: Fix '--prefix=' value for newer versions of clang
Merge tag 'for-5.8-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux into master
Pull btrfs fixes from David Sterba:
"A few resouce leak fixes from recent patches, all are stable material.
The problems have been observed during testing or have a reproducer"
* tag 'for-5.8-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
btrfs: fix mount failure caused by race with umount
btrfs: fix page leaks after failure to lock page for delalloc
btrfs: qgroup: fix data leak caused by race between writeback and truncate
btrfs: fix double free on ulist after backref resolution failure
Merge tag 'zonefs-5.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/zonefs into master
Pull zonefs fixes from Damien Le Moal:
"Two fixes, the first one to remove compilation warnings and the second
to avoid potentially inefficient allocation of BIOs for direct writes
into sequential zones"
* tag 'zonefs-5.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/zonefs:
zonefs: count pages after truncating the iterator
zonefs: Fix compilation warning
Merge tag 'io_uring-5.8-2020-07-24' of git://git.kernel.dk/linux-block into master
Pull io_uring fixes from Jens Axboe:
- Fix discrepancy in how sqe->flags are treated for a few requests,
this makes it consistent (Daniele)
- Ensure that poll driven retry works with double waitqueue poll users
- Fix a missing io_req_init_async() (Pavel)
* tag 'io_uring-5.8-2020-07-24' of git://git.kernel.dk/linux-block:
io_uring: missed req_init_async() for IOSQE_ASYNC
io_uring: always allow drain/link/hardlink/async sqe flags
io_uring: ensure double poll additions work with both request types
Merge tag 'iommu-fix-v5.8-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu into master
Pull iommu fix from Joerg Roedel:
"Fix a NULL-ptr dereference in the QCOM IOMMU driver"
* tag 'iommu-fix-v5.8-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
iommu/qcom: Use domain rather than dev as tlb cookie
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma into master
Pull rdma fixes from Jason Gunthorpe:
"One merge window regression, some corruption bugs in HNS and a few
more syzkaller fixes:
- Two long standing syzkaller races
- Fix incorrect HW configuration in HNS
- Restore accidentally dropped locking in IB CM
- Fix ODP prefetch bug added in the big rework several versions ago"
* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
RDMA/mlx5: Prevent prefetch from racing with implicit destruction
RDMA/cm: Protect access to remote_sidr_table
RDMA/core: Fix race in rdma_alloc_commit_uobject()
RDMA/hns: Fix wrong PBL offset when VA is not aligned to PAGE_SIZE
RDMA/hns: Fix wrong assignment of lp_pktn_ini in QPC
RDMA/mlx5: Use xa_lock_irq when access to SRQ table
Merge tag 'for-5.8/dm-fixes-3' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm into master
Pull device mapper fix from Mike Snitzer:
"A stable fix for DM integrity target's integrity recalculation that
gets skipped when resuming a device. This is a fix for a previous
stable@ fix"
* tag 'for-5.8/dm-fixes-3' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
dm integrity: fix integrity recalculation that is improperly skipped
Pull i2c fixes from Wolfram Sang:
"Again some driver bugfixes and some documentation fixes"
* 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
i2c: i2c-qcom-geni: Fix DMA transfer race
i2c: rcar: always clear ICSAR to avoid side effects
MAINTAINERS: i2c: at91: handover maintenance to Codrin Ciubotariu
i2c: drop duplicated word in the header file
i2c: cadence: Clear HOLD bit at correct time in Rx path
Revert "i2c: cadence: Fix the hold bit setting"
Merge tag 'drm-fixes-2020-07-24' of git://anongit.freedesktop.org/drm/drm into master
Pull drm fixes from Dave Airlie:
"Quiet fixes, I may have a single regression fix follow up to this for
nouveau, but it might be next week; Ben was testing it a bit more.
Otherwise two amdgpu fixes, one lima and one sun4i:
amdgpu:
- Fix crash when overclocking VegaM
- Fix possible crash when editing dpm levels
sun4i:
- Fix inverted HPD result; fixes an earlier fix
lima:
- fix timeout during reset"
* tag 'drm-fixes-2020-07-24' of git://anongit.freedesktop.org/drm/drm:
drm/amdgpu: Fix NULL dereference in dpm sysfs handlers
drm/amd/powerplay: fix a crash when overclocking Vega M
drm/lima: fix wait pp reset timeout
drm: sun4i: hdmi: Fix inverted HPD result