linux/kernel/locking
Linus Torvalds 720261cfc7 bcachefs changes for 6.11-rc1 (version 2)
Additional fixes on top of the original 6.11 pull request:
 - undefined behaviour fixes, originally noted as breaking userspace LTO
   builds
 - fix a spurious warning in fsck_err, reported by Marcin
 - fix an integer overflow on trans->nr_updates, also reported by Marcin;
   this broke during deletion of highly fragmented indirect extents
 - add comments for lockdep functions
 

Merge tag 'bcachefs-2024-07-18.2' of https://evilpiepirate.org/git/bcachefs

Pull bcachefs updates from Kent Overstreet:

 - Metadata version 1.8: Stripe sectors accounting, BCH_DATA_unstriped

   This splits out the accounting of dirty sectors and stripe sectors in
   alloc keys; this lets us see stripe buckets that still have unstriped
   data in them.

   This is needed for ensuring that erasure coding is working correctly,
   as well as completing stripe creation after a crash.
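
   As a rough sketch of the split (the struct and field names here are
   illustrative, not the exact on-disk format), an alloc key now
   carries separate counters:

       #include <linux/types.h>

       struct alloc_key_sketch {         /* hypothetical layout */
               __u8    gen;              /* bucket generation */
               __u8    data_type;        /* BCH_DATA_* */
               __u32   dirty_sectors;    /* unstriped dirty data */
               __u32   stripe_sectors;   /* data owned by a stripe */
               __u32   cached_sectors;
       };

   A bucket with nonzero dirty_sectors while it belongs to a stripe is
   the BCH_DATA_unstriped case: stripe creation still has to complete
   (or be redone after a crash) before that data is redundant.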

 - Metadata version 1.9: Disk accounting rewrite

   The previous disk accounting scheme relied heavily on percpu
   counters that were additionally sharded per outstanding journal
   buffer; it was fast, but neither extensible nor scalable, and it
   meant that every accounting counter was recorded in every journal
   entry.

   The new disk accounting scheme stores accounting as normal btree
   keys; updates are deltas until they are flushed by the btree write
   buffer.

   This means there is no practical limit on the number of counters,
   and the new tagged union key format is easy to extend.

   We now have counters for compression type/ratio, per-snapshot-id
   usage, per-btree-id usage, and pending rebalance work.
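
   A sketch of the key format idea (every name below is made up for
   illustration; the real format lives in the bcachefs disk accounting
   code):

       #include <linux/types.h>

       enum acct_type {                  /* the tag: a new counter type
                                           * is just a new enum value */
               ACCT_nr_inodes,
               ACCT_compression,         /* per compression type/ratio */
               ACCT_snapshot,            /* per-snapshot-id usage */
               ACCT_btree,               /* per-btree-id usage */
               ACCT_rebalance_work,
       };

       struct acct_pos {                 /* encoded into the key's position */
               __u8 type;
               union {
                       struct { __u8  type;        } compression;
                       struct { __u32 snapshot_id; } snapshot;
                       struct { __u8  btree_id;    } btree;
               };
       };

   Counter updates are journalled as deltas and coalesced by the btree
   write buffer, so a hot counter does not cost a full btree update per
   transaction.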

 - Self healing on read IO/checksum error

   Data is now automatically rewritten if we get a read error followed
   by a successful retry.
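
   In pseudocode terms (these helpers are hypothetical, not the actual
   bcachefs read path):

       #include <linux/errno.h>

       struct extent { unsigned nr_replicas; };

       int read_from_replica(struct extent *, unsigned, void *);
       void rewrite_extent(struct extent *, void *);  /* self heal */

       int read_extent(struct extent *e, void *buf)
       {
               for (unsigned i = 0; i < e->nr_replicas; i++) {
                       if (!read_from_replica(e, i, buf)) {
                               if (i)  /* an earlier copy failed or was
                                        * corrupt: rewrite from this one */
                                       rewrite_extent(e, buf);
                               return 0;
                       }
               }
               return -EIO;            /* every replica failed */
       }

   Checksum errors are treated like read errors here: a copy that reads
   back bad but has a good sibling gets rewritten.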

 - Mount API conversion (thanks to Thomas Bertschinger)
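
   This moves bcachefs onto the kernel's fs_context API; a minimal
   sketch of the pattern (the bch2_* names below are placeholders, and
   a real conversion parses many more options):

       #include <linux/fs_context.h>
       #include <linux/fs_parser.h>

       static const struct fs_parameter_spec bch2_param_specs[] = {
               fsparam_flag("degraded", 0),    /* example option */
               {}
       };

       static int bch2_parse_param(struct fs_context *fc,
                                   struct fs_parameter *param)
       {
               struct fs_parse_result result;
               int opt = fs_parse(fc, bch2_param_specs, param, &result);

               /* stash the parsed option in fc->fs_private */
               return opt < 0 ? opt : 0;
       }

       static const struct fs_context_operations bch2_context_ops = {
               .parse_param    = bch2_parse_param,
               /* .get_tree, .reconfigure, .free, ... */
       };

       static int bch2_init_fs_context(struct fs_context *fc)
       {
               /* hooked up via file_system_type::init_fs_context */
               fc->ops = &bch2_context_ops;
               return 0;
       }

   With the legacy ->mount() path, by contrast, all options arrived as
   a single string for the filesystem to parse at mount time.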

 - Better lockdep coverage

   Previously, btree node locks were tracked individually by lockdep,
   like any other lock. But we may take _many_ btree node locks
   simultaneously, so we easily blow through the limit of 48 locks that
   lockdep can track, leading to lockdep turning itself off.

   Tracking each btree node lock individually isn't really necessary
   since we have our own cycle detector for deadlock avoidance and
   centralized tracking of btree node locks, so we now have a single
   lockdep_map in btree_trans for "any btree nodes are locked".
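
   A sketch of the wiring (the btree_trans side is paraphrased, but
   lockdep_init_map(), lock_map_acquire() and lock_map_release() are
   the real lockdep primitives):

       #include <linux/lockdep.h>

       static struct lock_class_key btree_trans_key;

       struct btree_trans_sketch {
               struct lockdep_map dep_map;     /* covers all node locks */
               unsigned nodes_locked;
       };

       static void trans_init(struct btree_trans_sketch *trans)
       {
               lockdep_init_map(&trans->dep_map, "btree_trans",
                                &btree_trans_key, 0);
       }

       static void trans_node_lock(struct btree_trans_sketch *trans)
       {
               if (!trans->nodes_locked++)     /* first node lock */
                       lock_map_acquire(&trans->dep_map);
               /* actual node locking is checked by our own cycle
                * detector, not by lockdep */
       }

       static void trans_unlock_all(struct btree_trans_sketch *trans)
       {
               if (trans->nodes_locked) {
                       trans->nodes_locked = 0;
                       lock_map_release(&trans->dep_map);
               }
       }

   Lockdep then sees one held-lock entry per transaction instead of one
   per btree node, which can no longer overflow its 48-entry held-lock
   table.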

 - Some more small incremental work towards online check_allocations

 - Lots more debugging improvements

 - Fixes, including:
    - undefined behaviour fixes, originally noted as breaking userspace
      LTO builds
    - fix a spurious warning in fsck_err, reported by Marcin
    - fix an integer overflow on trans->nr_updates, also reported by
      Marcin; this broke during deletion of highly fragmented indirect
      extents

* tag 'bcachefs-2024-07-18.2' of https://evilpiepirate.org/git/bcachefs: (120 commits)
  lockdep: Add comments for lockdep_set_no{validate,track}_class()
  bcachefs: Fix integer overflow on trans->nr_updates
  bcachefs: silence silly kdoc warning
  bcachefs: Fix fsck warning about btree_trans not passed to fsck error
  bcachefs: Add an error message for insufficient rw journal devs
  bcachefs: varint: Avoid left-shift of a negative value
  bcachefs: darray: Don't pass NULL to memcpy()
  bcachefs: Kill bch2_assert_btree_nodes_not_locked()
  bcachefs: Rename BCH_WRITE_DONE -> BCH_WRITE_SUBMITTED
  bcachefs: __bch2_read(): call trans_begin() on every loop iter
  bcachefs: show none if label is not set
  bcachefs: drop packed, aligned from bkey_inode_buf
  bcachefs: btree node scan: fall back to comparing by journal seq
  bcachefs: Add lockdep support for btree node locks
  lockdep: lockdep_set_notrack_class()
  bcachefs: Improve copygc_wait_to_text()
  bcachefs: Convert clock code to u64s
  bcachefs: Improve startup message
  bcachefs: Self healing on read IO error
  bcachefs: Make read_only a mount option again, but hidden
  ...
2024-07-18 17:27:43 -07:00
irqflag-debug.c
lock_events_list.h
lock_events.c         locking/debug: Fix debugfs API return value checks to use IS_ERR()  2023-10-03 10:11:25 +02:00
lock_events.h         locking/qspinlock: Always evaluate lockevent* non-event parameter once  2024-03-21 20:45:17 +01:00
lockdep_internals.h
lockdep_proc.c        locking/lockdep: Fix string sizing bug that triggers a format-truncation compiler-warning  2023-10-12 20:37:59 +02:00
lockdep_states.h
lockdep.c             bcachefs changes for 6.11-rc1 (version 2)  2024-07-18 17:27:43 -07:00
locktorture.c         locktorture: Add MODULE_DESCRIPTION()  2024-05-30 15:31:45 -07:00
Makefile              lockdep: allow instrumenting lockdep.c with KMSAN  2022-12-11 18:12:11 -08:00
mcs_spinlock.h
mutex-debug.c         locking/mutex: Introduce devm_mutex_init()  2024-04-11 17:34:41 +01:00
mutex.c               locking/mutex: Document that mutex_unlock() is non-atomic  2023-12-01 11:27:43 +01:00
mutex.h
osq_lock.c            locking/osq_lock: Clarify osq_wait_next()  2023-12-30 10:25:51 -08:00
percpu-rwsem.c        locking/percpu-rwsem: Trigger contention tracepoints only if contended  2024-02-28 13:10:29 +01:00
qrwlock.c             locking: Add __lockfunc to slow path functions  2022-08-19 19:47:51 +02:00
qspinlock_paravirt.h  locking/pvqspinlock: Use try_cmpxchg() in qspinlock_paravirt.h  2024-04-12 11:42:39 +02:00
qspinlock_stat.h
qspinlock.c           x86/xen: remove deprecated xen_nopvspin boot parameter  2024-07-11 16:33:51 +02:00
rtmutex_api.c         locking/rtmutex: Fix task->pi_waiters integrity  2023-07-17 13:59:10 +02:00
rtmutex_common.h      locking/rtmutex: Fix task->pi_waiters integrity  2023-07-17 13:59:10 +02:00
rtmutex.c             locking/rtmutex: Use try_cmpxchg_relaxed() in mark_rt_mutex_waiters()  2024-03-01 13:02:05 +01:00
rwbase_rt.c           locking/rtmutex: Add a lockdep assert to catch potential nested blocking  2023-09-20 09:31:14 +02:00
rwsem.c               locking/rwsem: Add __always_inline annotation to __down_write_common() and inlined callers  2024-07-09 13:26:26 +02:00
semaphore.c           locking: Add __sched to semaphore functions  2022-09-15 16:14:03 +02:00
spinlock_debug.c      sched.h: move pid helpers to pid.h  2023-12-20 19:26:31 -05:00
spinlock_rt.c         locking/rtmutex: Add a lockdep assert to catch potential nested blocking  2023-09-20 09:31:14 +02:00
spinlock.c            locking/local_lock: Add local nested BH locking infrastructure.  2024-06-24 16:41:22 -07:00
test-ww_mutex.c       locking/ww_mutex/test: Make sure we bail out instead of livelock  2023-09-22 09:43:41 +02:00
ww_mutex.h            locking/rtmutex: Fix task->pi_waiters integrity  2023-07-17 13:59:10 +02:00
ww_rt_mutex.c         locking/rtmutex: Avoid unconditional slowpath for DEBUG_RT_MUTEXES  2023-09-20 09:31:11 +02:00