Split up bch2_journal_write() to simplify locking:
- bch2_journal_write_pick_flush(), which needs j->lock
- bch2_journal_write_prep(), which operates on the journal buffer to be
written and will need the upcoming buf_lock for synchronization with
the btree write buffer flush path
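The resulting shape, roughly sketched (the journal_buf lookup and the
error handling here are paraphrased from context, not copied from the
patch):

    void bch2_journal_write(struct closure *cl)
    {
        struct journal *j = container_of(cl, struct journal, io);
        struct journal_buf *w = journal_last_unwritten_buf(j);
        int ret;

        spin_lock(&j->lock);
        ret = bch2_journal_write_pick_flush(j, w);  /* needs j->lock */
        spin_unlock(&j->lock);
        if (ret)
            goto err;

        /*
         * Operates only on the buffer being written - this is what the
         * upcoming buf_lock will serialize against the btree write
         * buffer flush path:
         */
        ret = bch2_journal_write_prep(j, w);
        if (ret)
            goto err;

        /* checksum, compress and submit the write, as before */
        return;
    err:
        bch2_fatal_error(container_of(j, struct bch_fs, journal));
    }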
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We avoid using standard error codes: private, per-callsite error codes
make debugging easier.
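For context, these follow the existing errcode.h pattern - roughly (an
illustrative subset; the full list is generated by an x-macro):

    #define BCH_ERRCODES()                       \
        x(ENOSPC,  ENOSPC_disk_reservation)      \
        x(ENOSPC,  ENOSPC_bucket_alloc)

    enum bch_errcode {
        BCH_ERR_START = 2048,
    #define x(class, err) BCH_ERR_##err,
        BCH_ERRCODES()
    #undef x
        BCH_ERR_MAX
    };

Private codes are mapped back to their standard class (here ENOSPC)
with bch2_err_class() before they escape to outside callers.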
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
BCACHEFS_DEBUG_TRANSACTIONS is useful, but it's too expensive to have on
by default - and it hasn't been coming up in bug reports.
Turn it off by default until we figure out a way to make it cheaper.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
BTREE_INSERT_NOJOURNAL is primarily used for a performance optimization
related to inode updates and fsync - document it.
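A paraphrased sketch of the sort of comment being added (the flag value
and exact wording are illustrative - see the patch for the real text):

    enum btree_insert_flags {
        /*
         * Don't journal this btree update: primarily a performance
         * optimization for inode updates in the fsync path, where the
         * update itself doesn't need to be persisted via the journal
         * for the caller's consistency guarantees.
         */
        BTREE_INSERT_NOJOURNAL = 1 << 3,
    };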
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
rebalance_work entries may refer to entries in the extents btree, which
is a snapshots btree, or to entries in the reflink btree, which is not.
Hence rebalance_work keys may use the snapshot field but it's not
required to be nonzero - add a new btree flag to reflect this.
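Sketched against the existing btree flags (bit values illustrative):

    enum btree_id_flags {
        BTREE_ID_EXTENTS        = 1 << 0, /* keys are extents */
        BTREE_ID_SNAPSHOTS      = 1 << 1, /* snapshot field required nonzero */
        BTREE_ID_SNAPSHOT_FIELD = 1 << 2, /* new: field used, may be zero */
    };

Key validation then accepts a zero snapshot field on btrees flagged
with BTREE_ID_SNAPSHOT_FIELD but not BTREE_ID_SNAPSHOTS.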
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When we don't find anything in the journal that we'd like to use and
are forced to use whatever we can find, that entry will have been a
JSET_NO_FLUSH entry with a garbage last_seq value, since the field is
not normally used in no-flush entries.
Initialize it to something sane, for bch2_fs_journal_start().
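A minimal sketch of the idea (journal-replay field access paraphrased):

    /*
     * The entry we're keeping was a no-flush write, so its last_seq is
     * garbage - substitute its own seq so bch2_fs_journal_start() sees
     * something sane:
     */
    if (JSET_NO_FLUSH(&i->j))
        i->j.last_seq = i->j.seq;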
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Delete the useless check for inum == 0; we'll return -ENOENT without it,
which is what we want.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- the fsck_err() check for the filesystem being clean was incorrect,
causing us to always fail to delete unlinked inodes
- if a snapshot had been taken, the unlinked inode needs to be
propagated to snapshot leaves so the unlink can happen there - fixed.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The bucket_offset field of bch_backpointer is a 40-bit bitfield, but the
bch2_backpointer_swab() helper uses swab32. This leads to inconsistency
when an on-disk fs is accessed from an opposite endian machine.
As it turns out, we already have an internal swab40() helper that is
used from the bch_alloc_v4 swab callback. Lift it into the backpointers
header file and use it consistently in both places.
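For reference, the helper is a straightforward five-byte swap, roughly:

    static inline u64 swab40(u64 x)
    {
        return  ((x & 0x00000000ffULL) << 32)|
                ((x & 0x000000ff00ULL) << 16)|
                ((x & 0x0000ff0000ULL) >>  0)|
                ((x & 0x00ff000000ULL) >> 16)|
                ((x & 0xff00000000ULL) >> 32);
    }

bch2_backpointer_swab() then does
bp.v->bucket_offset = swab40((u64) bp.v->bucket_offset);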
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
A simple test to populate a filesystem on one CPU architecture and
fsck on an arch of the opposite byte order produces errors related
to the fragmentation LRU. This occurs because the 64-bit
fragmentation_lru field is not byte-order swapped when reads detect
that the on-disk/bset key values were written in the opposite byte
order from the current CPU.
Update the bch2_alloc_v4 swab callback to handle fragmentation_lru
as is done for other multi-byte fields. This doesn't affect existing
filesystems when accessed by CPUs of the same endianness because the
->swab() callback is only called when the bset flags indicate an
endianness mismatch between the CPU and on-disk data.
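Abridged sketch of the callback with the fix (other fields elided):

    void bch2_alloc_v4_swab(struct bkey_s k)
    {
        struct bch_alloc_v4 *a = bkey_s_to_alloc_v4(k).v;

        a->journal_seq       = swab64(a->journal_seq);
        /* ... other multi-byte fields, swabbed as before ... */
        a->fragmentation_lru = swab64(a->fragmentation_lru); /* the fix */
    }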
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The bcachefs folio writeback code includes a bio full check as well
as a fixed size check to determine when to split off and submit
writeback I/O. The inclusive check of the latter against the limit
means that writeback can submit slightly prematurely. This is not a
functional problem, but it results in unnecessarily split I/Os and the
extent merging needed to stitch them back together afterwards.
This can be observed with a buffered write sized exactly to the
current maximum value (1MB) and with key_merging_disabled=1. The
latter prevents the merge triggered by the second (split-off) write,
such that a subsequent check of the extent list shows a 1020k extent
followed by a contiguous 4k extent.
The purpose for the fixed size check is also undocumented and
somewhat obscure. Lift this check into a new helper that wraps the
bio check, fix the comparison logic, and add a comment to document
the purpose and how we might improve on this in the future.
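A sketch of the helper's shape (the name and the placement of the 1MB
constant are my guesses; the real patch also carries the new comment):

    static bool bch_writepage_io_full(struct bch_writepage_state *w,
                                      unsigned len)
    {
        /*
         * The 1MB cap is arbitrary/historical - something to revisit.
         * Note the exclusive comparison: a write of exactly the cap
         * no longer gets split prematurely.
         */
        return bio_full(&w->io->op.wbio.bio, len) ||
               w->io->op.wbio.bio.bi_iter.bi_size + len > 1U << 20;
    }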
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Guenter Roeck reports a lockdep splat and DEBUG_OBJECTS_WORK related
warning when bch2_copygc_thread() initializes its rhashtable. The
lockdep splat relates to a warning print caused by the fact that the
rhashtable exists on the stack but is not annotated as such. This is
something that could be addressed by INIT_WORK_ONSTACK(), but
rhashtable doesn't expose that control and it probably isn't worth the
churn for just one user. Instead, dynamically allocate the
buckets_in_flight structure and avoid the splat that way.
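The shape of the fix, sketched (error handling abridged; rhashtable's
embedded resize work item is what the on-stack warning fires on):

    struct buckets_in_flight *buckets =
        kzalloc(sizeof(*buckets), GFP_KERNEL);

    if (!buckets)
        return -ENOMEM;

    ret = rhashtable_init(&buckets->table, &bch_move_bucket_params);

with a matching kfree() when the copygc thread exits.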
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
A recent bug report uncovered a scenario where a filesystem never
runs with freespace_initialized, and therefore the user observes
significantly degraded write performance by virtue of running the
early bucket allocator. The associated bug aside, the primary cause
of the performance drop in this particular instance is that the
early bucket allocator does not update the allocation cursor. This
means that every allocation walks the alloc btree from the first
bucket of the associated device looking for a bucket marked as free
space.
Update the early allocator code to set the alloc cursor to the last
processed position in the tree, similar to how the freelist
allocator behaves. With the alloc_cursor being updated, the retry
logic also needs to be updated to restart from the beginning of the
device when a free bucket is not available between the cursor and
the end of the device. Track the restart position in a first_bucket
variable to make the code a bit more easily readable and consistent
with the freelist allocator.
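Condensed sketch of the resulting flow in bch2_bucket_alloc_early()
(the btree scan itself is elided):

    u64 first_bucket = ca->mi.first_bucket;
    u64 alloc_start  = max_t(u64, first_bucket, ca->alloc_cursor);
    u64 alloc_cursor = alloc_start;
    struct open_bucket *ob = NULL;
again:
    /*
     * Scan the alloc btree from alloc_cursor for a free bucket,
     * advancing alloc_cursor past each bucket we inspect:
     */

    ca->alloc_cursor = alloc_cursor;    /* remember for next time */

    if (!ob && alloc_start > first_bucket) {
        /*
         * Nothing free between the cursor and the end of the device -
         * retry once from the first bucket:
         */
        alloc_cursor = alloc_start = first_bucket;
        goto again;
    }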
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bcachefs had a transient bug where freespace_initialized was not
properly being set, which led to unexpected use of the early bucket
allocator at runtime. This issue has been fixed, but the existence
of it uncovered a coherency issue in the early bucket allocation
code that is somewhat related to how uncached iterators deal with
the key cache.
The problem itself manifests as occasional failure of generic/113
due to corruption, often seen as a duplicate backpointer or multiple
data types per-bucket error. The immediate cause of the error is a
racing bucket allocation along the lines of the following sequence:
- Task 1 selects key A in bch2_bucket_alloc_early() and schedules.
- Task 2 selects the same key A, but proceeds to complete the
allocation and associated I/O, after which it releases the
open_bucket.
- Task 1 resumes with key A, but does not recognize the bucket is
now allocated because the open_bucket was removed from the hash
when it was released in the previous step.
This generally shouldn't happen because the allocating task updates
the alloc btree key before releasing the bucket. This is not
sufficient in this particular instance, however, because an uncached
iterator for a cached btree doesn't actually lock the key cache slot
when no key exists for a given slot in the cache. Thus the fact that
the allocation side updates the cached key means that multiple
uncached iters can stumble across the same alloc key and duplicate
the bucket allocation as described above.
This is something that probably needs a longer term fix in the
iterator code. As a short term fix, close the race through explicit
use of a cached iterator for likely allocation candidates. We don't
want to scan the btree with a cached iterator because that would
unnecessarily pollute the cache. This mitigates cache pollution by
primarily scanning the tree with an uncached iterator, but closes
the race by creating a key cache entry for any prospective slot
prior to the bucket allocation attempt (also similar to how
_alloc_freelist() works via try_alloc_bucket()). This survives many
iterations of generic/113 on a kernel hacked to always use the early
bucket allocator.
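Sketch of the shape of the fix inside the scan loop (the helpers marked
hypothetical are mine; the iterator flag is the existing one):

    struct btree_iter citer;
    struct bkey_s_c ck;

    /*
     * k.k->p is a likely-free candidate found with the uncached scan
     * iterator; re-read it through the key cache, which creates (and
     * locks) a key cache entry for the slot before we commit to it:
     */
    ck = bch2_bkey_get_iter(trans, &citer, BTREE_ID_alloc, k.k->p,
                            BTREE_ITER_CACHED);
    ret = bkey_err(ck);
    if (!ret && alloc_data_type_is_free(ck))      /* hypothetical */
        ob = try_alloc_bucket_at(trans, &citer);  /* hypothetical */
    bch2_trans_iter_exit(trans, &citer);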
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The SRCU read lock that btree_trans takes exists to make it safe for
bch2_trans_relock() to deref pointers to btree nodes/key cache items we
don't have locked, but as a side effect it blocks reclaim from freeing
those items.
Thus, it's important to not hold it for too long: we need to
differentiate between bch2_trans_unlock() calls that will be only for a
short duration, and ones that will be for an unbounded duration.
This introduces bch2_trans_unlock_long(), to be used mainly by the data
move paths.
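The new helper is essentially (assuming bch2_trans_srcu_unlock() as the
underlying primitive):

    void bch2_trans_unlock_long(struct btree_trans *trans)
    {
        bch2_trans_unlock(trans);

        /*
         * Also drop the SRCU read lock, so reclaim isn't blocked from
         * freeing btree nodes/key cache entries while we're parked for
         * an unbounded time; it's retaken on the next trans use.
         */
        bch2_trans_srcu_unlock(trans);
    }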
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
gcc 10 seems to complain about array bounds in situations where gcc 11
does not - curious.
This unfortunately requires adding some casts for now; we may
investigate getting rid of our __u64 _data[] VLA in a future patch so
that our start[0] members can be VLAs.
Reported-by: John Stoffel <john@stoffel.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We should only be downgrading locks on success - otherwise, our
transaction restarts won't be getting the correct locks and we'll
livelock.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This patch adds a superblock error counter for every distinct fsck
error; this means that when analyzing filesystems out in the wild we'll
be able to see what sorts of inconsistencies are being found and repaired,
and hence what bugs to look for.
Errors validating bkeys are not yet considered distinct fsck errors, but
this patch adds a new helper, bkey_fsck_err(), in order to add distinct
error types for them as well.
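Call sites then end up looking something like this (using a
hypothetical conditional variant; the signature and error-id name are
my guess at the shape, not copied from the patch):

    bkey_fsck_err_on(bkey_val_bytes(k.k) < sizeof(struct bch_alloc_v4),
                     c, err, alloc_v4_val_size_bad,
                     "bad val size: %zu", bkey_val_bytes(k.k));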
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add a new superblock section to keep counts of errors seen since
filesystem creation: we'll be adding counters for every distinct fsck
error.
The new superblock section has entries of the form [ id, count,
time_of_last_error ]; this is intended to let us see what errors are
occurring - and getting fixed - via show-super output.
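The section is roughly of this shape (the packing of id and count into
a single __le64 is illustrative):

    struct bch_sb_field_errors {
        struct bch_sb_field field;
        struct bch_sb_field_error_entry {
            __le64          v;              /* error id + count, packed */
            __le64          last_error_time;
        }                   entries[];
    };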
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We now track IO errors per device since filesystem creation.
IO error counts can be viewed in sysfs, or with the 'bcachefs
show-super' command.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This fixes a use after free - mi is dangling after the resize call.
Additionally, resizing the device's member info section was useless - we
were attempting to preallocate the space required before adding it to
the filesystem superblock, but there are other sections that we should
have been preallocating as well for that to work.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This fixes an incorrect memcpy() in the recent members_v2 code - a
members_v1 member is BCH_MEMBER_V1_BYTES, not sizeof(struct bch_member).
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This adds a new btree, rebalance_work, to eliminate scanning required
for finding extents that need work done on them in the background - i.e.
for the background_target and background_compression options.
rebalance_work is a bitset btree, where a KEY_TYPE_set corresponds to an
extent in the extents or reflink btree at the same pos.
A new extent field is added, bch_extent_rebalance, which indicates that
this extent has work that needs to be done in the background - and which
options to use. This allows per-inode options to be propagated to
indirect extents - at least in some circumstances. In this patch,
changing IO options on a file will not propagate the new options to
indirect extents pointed to by that file.
Updating (setting/clearing) the rebalance_work btree is done by the
extent trigger, which looks at the bch_extent_rebalance field.
Scanning is still required after changing IO path options - either just
for a given inode, or for the whole filesystem. We indicate that
scanning is required by adding a KEY_TYPE_cookie key to the
rebalance_work btree: the cookie is a counter, so that we can detect
that scanning is still required when an option has been flipped midway
through an existing scan.
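Condensed sketch of the trigger-side update (helper names approximate):

    static int extent_rebalance_trigger(struct btree_trans *trans,
                                        struct bkey_s_c old,
                                        struct bkey_s_c new)
    {
        bool old_need = bch2_bkey_rebalance_opts(old) != NULL;
        bool new_need = bch2_bkey_rebalance_opts(new) != NULL;

        if (old_need == new_need)
            return 0;

        /*
         * Mirror the extent's position into the rebalance_work bitset
         * btree: KEY_TYPE_set while background work is needed, deleted
         * once it isn't:
         */
        return bch2_btree_bit_mod(trans, BTREE_ID_rebalance_work,
                                  new.k->p, new_need);
    }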
Future possible work:
- Propagate options to indirect extents when being changed
- Add other IO path options - nr_replicas, ec, to rebalance_work so
they can be applied in the background when they change
- Add a counter, for bcachefs fs usage output, showing the pending
amount of rebalance work: we'll probably want to do this after the
disk space accounting rewrite (moving it to a new btree)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
data_progress_list is gone - it was redundant with moving_context_list.
The upcoming rebalance rewrite will use two different move_stats
objects with the same moving_context, depending on whether it's
scanning or using the rebalance_work btree - this patch plumbs stats
around a bit differently so that will work.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
btree_trans and moving_context are used together, and having the
moving_context own the transaction object reduces some plumbing.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Since compression options now include compression level, proper
validation is a bit more involved.
This adds bch2_compression_opt_valid(), and plumbs it around
appropriately.
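Roughly (assuming the existing bch2_compression_decode() helper):

    static inline bool bch2_compression_opt_valid(unsigned v)
    {
        struct bch_compression_opt opt = bch2_compression_decode(v);

        /*
         * The type must be a known algorithm, and a nonzero level only
         * makes sense when a compression type is actually set:
         */
        return opt.type < BCH_COMPRESSION_OPT_NR &&
               !(!opt.type && opt.level);
    }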
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The write path may (rarely) see an encoded (checksummed) extent that
exceeds encoded_extent_max - this can happen when we're moving an
existing extent that was not checksummed, but was given a checksum by
bch2_write_rechecksum().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We're going to be using bch2_target_to_text() ->
bch2_disk_path_to_text() from bch2_bkey_ptrs_to_text() and
bch2_bkey_ptrs_invalid(), which can be called in any context.
This patch adds the actual label to bch_disk_group_cpu so that it can be
used by bch2_disk_path_to_text, and splits out bch2_disk_path_to_text()
into two variants - like the previous patch, one for when we have a
running filesystem and another for when we only have a superblock.
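The resulting pair of prototypes, approximately:

    /* running filesystem variant - uses in-memory disk groups: */
    void bch2_disk_path_to_text(struct printbuf *, struct bch_fs *, unsigned);

    /* superblock-only variant, callable before the fs is up: */
    void bch2_disk_path_to_text_sb(struct printbuf *, struct bch_sb *, unsigned);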
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>