This fixes the replicas_write_errors test: the patch
"bcachefs: mark journal replicas before journal write submission"
partially fixed replicas marking for the journal, but it broke the case
where one replica failed - this patch re-adds marking after the journal
write completes, when we know how many replicas succeeded.
Additionally, we do not consider it a fsck error when the very last
journal entry is not correctly marked, since there is an inherent race
there.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Split out a new file from recovery.c for managing the list of keys we
read from the journal: before journal replay finishes the btree iterator
code needs to be able to iterate over and return keys from the journal
as well, so there's a fair bit of code here.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Split out a new file for bch_sb_field_members - we'll likely want to
move more code here in the future.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
fs-io.c is too big - time for some reorganization
- fs-dio.c: direct io
- fs-pagecache.c: pagecache data structures (bch_folio), utility code
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
On old kernels, kmalloc() may return an allocation that's not naturally
aligned - this resulted in a bug where we allocated a bio with not
enough biovecs. Fix this by using buf_pages().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This fixes a bug where we were already passing bkey_invalid_flags
around, but treating the parameter as just read/write - so the compat
code wasn't being run correctly.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
A while back, we changed bkey_format generation to ensure that the packed
representation could never represent fields larger than the unpacked
representation.
This was to ensure that bkey_packed_successor() always gave a sensible
result, but in the current code bkey_packed_successor() is only used in
a debug assertion - not for anything important.
This kills the requirement that we've gotten rid of those weird bkey
formats, and instead changes the assertion to check if we're dealing
with an old weird bkey format.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add error messages when we fail to lookup an inode, and also add a few
missing bch2_err_class() calls.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We've observed significant lock thrashing on fstests generic/083 in
fallocate, due to dropping and retaking btree locks when checking the
pagecache for data.
This adds a nonblocking mode to bch2_clamp_data_hole(), where we only
use folio_trylock(), so it can be used safely while btree locks are
held - we then only have to drop btree locks as a fallback, on actual
lock contention.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This fixes should_restart_for_topology_repair() - previously it was
returning false if the btree io path had already selected topology
repair to run, even if it hadn't run yet.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
UBSAN was complaining about a shift by 64 in set_inc_field().
This only happened when the value being shifted was 0, so in theory
should be harmless - a shift by 64 (or register width) should logically
give a result of 0, but CPUs will in practice leave the input unchanged
when the number of bits to shift by wraps - and since our input here is
0, the output is still what we want.
But, it's still undefined behaviour and we need our UBSAN output to be
clean, so it needs to be fixed.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- add a to_text() method for bkey_format
- convert bch2_bkey_format_validate() to modern error message style,
where we pass a printbuf for the error string instead of returning a
static string
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add a new bitset btree for inodes pending deletion; this means we no
longer have to scan the full inodes btree after an unclean shutdown.
Specifically, this adds:
- a trigger to update the deleted_inodes btree based on changes to the
inodes btree (see the sketch after this list)
- a new recovery pass
- and check_inodes is now only a fsck pass.
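A minimal sketch of the trigger's logic (the helper names and the
unlinked-state parameters here are hypothetical, for illustration only):

  /* Sketch only: keep BTREE_ID_deleted_inodes in sync with each
   * inode's unlinked state; set_bitset_key()/clear_bitset_key() are
   * hypothetical helpers, not this patch's actual interface. */
  static int deleted_inodes_update(struct btree_trans *trans,
                                   struct bpos inode_pos,
                                   bool was_unlinked, bool now_unlinked)
  {
          if (was_unlinked == now_unlinked)
                  return 0;

          return now_unlinked
                  ? set_bitset_key(trans, BTREE_ID_deleted_inodes, inode_pos)
                  : clear_bitset_key(trans, BTREE_ID_deleted_inodes, inode_pos);
  }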
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
A number of smallish fixes for overlapping extent repair, and (part of)
a new unit test. This fixes all the issues turned up by bhzhu203, in his
filesystem image from running mongodb + snapshots.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We want to ensure that fsck actually fixed all the errors it found - the
second fsck run should be clean.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
clang had a few more warnings about enum conversion, and also didn't
like the opts.c initializer.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This refactoring centralizes defining per-btree properties.
bch2_key_types_allowed was also about to overflow a u32, so expand that
to a u64.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This adds bch2_run_explicit_recovery_pass(), for rewinding recovery and
explicitly running a specific recovery pass - this is a more general
replacement for how we were running topology repair before.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This uses bch2_run_explicit_recovery_pass() for when fsck detects that
we need to re-run dead snapshots cleanup, and makes
dead snapshot cleanup more like a normal recovery pass.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The write buffer mechanism journals keys twice in certain
situations. A key is always journaled on write buffer insertion, and
is potentially journaled again if a write buffer flush falls into
either of the slow btree insert paths. This has shown to cause
journal recovery ordering problems in the event of an untimely
crash.
For example, consider if a key is inserted into index 0 of a write
buffer, the active write buffer switches to index 1, the key is
deleted in index 1, and then index 0 is flushed. If the original key
is rejournaled in the btree update from the index 0 flush, the (now
deleted) key is journaled in a seq buffer ahead of the latest
version of key (which was journaled when the key was deleted in
index 1). If the fs crashes while this is still observable in the
log, recovery sees the key from the btree update after the delete
key from the write buffer insert, which is the incorrect order. This
problem is occasionally reproduced by generic/388 and generally
manifests as one or more backpointer entry inconsistencies.
To avoid this problem, never rejournal write buffered key updates to
the associated btree. Instead, use prejournaled key updates to pass
the journal seq of the write buffer insert down to the btree insert,
which updates the btree leaf pin to reflect the seq of the key.
Note that tracking the seq is required instead of just using
NOJOURNAL here because otherwise we lose protection of the write
buffer pin when the buffer is flushed, which means the key can fall
off the tail of the on-disk journal before the btree leaf is flushed
and lead to similar recovery inconsistencies.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Introduce support for prejournaled key updates. This allows a
transaction to commit an update for a key that already exists (and
is pinned) in the journal. This is required for btree write buffer
updates as the current scheme of journaling both on write buffer
insertion and write buffer (slow path) flush is unsafe in certain
crash recovery scenarios.
Create a small trans update wrapper to pass along the seq where the
key resides into the btree_insert_entry. From there, trans commit
passes the seq into the btree insert path where it is used to manage
the journal pin for the associated btree leaf.
Note that this patch only introduces the underlying mechanism and
otherwise includes no functional changes.
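As a self-contained model of the idea (illustrative types and names,
not bcachefs's actual interface):

  #include <stdint.h>
  #include <stdio.h>

  /* Model: an update entry optionally carries the seq at which the key
   * was already journaled. */
  struct insert_entry {
          const char *key;
          uint64_t    journal_seq;  /* nonzero: already journaled at this seq */
  };

  /* Commit-path decision: a prejournaled key updates the btree leaf's
   * journal pin to the stored seq and is not journaled a second time. */
  static void commit_entry(const struct insert_entry *i, uint64_t commit_seq)
  {
          uint64_t pin_seq = i->journal_seq ? i->journal_seq : commit_seq;

          printf("%s: pin leaf at seq %llu (%s)\n", i->key,
                 (unsigned long long) pin_seq,
                 i->journal_seq ? "prejournaled" : "journaled now");
  }

  int main(void)
  {
          struct insert_entry wb  = { "write buffered key", 42 };
          struct insert_entry reg = { "regular key",         0 };

          commit_entry(&wb,  100);  /* write buffer flush reuses seq 42 */
          commit_entry(&reg, 100);  /* normal update journaled at 100 */
          return 0;
  }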
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
There is only one other caller so eliminate some boilerplate.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This is in preparation to support prejournaled keys. We want the
ability to optionally pass a seq stored in the btree update rather
than the seq of the committing transaction.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We commonly use no_data_io mode when debugging filesystem metadata
dumps, where data checksum/compression errors are expected and
unimportant - this patch suppresses these.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We currently don't track whether snapshot cleanup still needs to finish
(aside from running a full fsck), so it shouldn't be a fsck error yet -
fsck -n after fsck has successfully completed shouldn't error.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Delete a redundant bch2_snapshot_is_ancestor() check, and convert some
assertions to debug assertions.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Make the overlapping extent check/repair code more self-contained.
This is prep work for hopefully reducing key_visible_in_snapshot() usage
here as well, and also includes a nice performance optimization to not
check ref_visible2() unless the extents potentially overlap.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This changes the main part of check_extents(), that checks the extent
against the corresponding inode, to not use key_visible_in_snapshot().
key_visible_in_snapshot() has to iterate over the list of ancestor
overwrites repeatedly calling bch2_snapshot_is_ancestor(), so this is a
significant performance improvement.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
More prep work for reducing key_visible_in_snapshot() usage - this
rearranges how KEY_TYPE_whiteout keys are handled, so that they can be
marked off in inode_walker->inode->seen_this_pos.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We only want to synthesize an inode for the current snapshot ID for
non-whiteouts - this refactoring lets us call walk_inode() earlier and clean
up some control flow.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Minor refactoring/dead code deletion, prep work for reworking
check_extent() to avoid key_visible_in_snapshot().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This improves the repair path for overlapping extents - we now verify
that we find in the btree the overlapping extents that the algorithm
detected, and fail the fsck run with a more useful error if it doesn't
match.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Prep work for changing check_extent() to avoid
key_visible_in_snapshot() - this adds the state to track whether an
inode has seen an extent at this pos.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Further optimization for bch2_snapshot_is_ancestor(). We add a small
inline bitmap to snapshot_t, which indicates which of the next 128
snapshot IDs are ancestors of the current id - eliminating the last few
iterations of the loop in bch2_snapshot_is_ancestor().
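A rough standalone model of the bitmap lookup (the 128-ID window comes
from the description above; field names and the simplified walk are
assumptions - the real code also uses the skiplist):

  #include <stdbool.h>
  #include <stdint.h>

  /* Bit (a - id - 1) of is_ancestor says whether snapshot a, within
   * the next 128 IDs above id, is an ancestor of id. */
  struct snapshot_node {
          uint32_t parent;          /* 0 = no parent */
          uint64_t is_ancestor[2];  /* covers IDs id+1 .. id+128 */
  };

  static bool test_bit_u64(const uint64_t *map, unsigned bit)
  {
          return (map[bit / 64] >> (bit % 64)) & 1;
  }

  /* Assumes ancestors always have higher IDs than their descendants. */
  static bool is_ancestor(const struct snapshot_node *tbl,
                          uint32_t id, uint32_t ancestor)
  {
          while (id && ancestor - id > 128)
                  id = tbl[id].parent;  /* real code hops via the skiplist */

          return id == ancestor ||
                 (id && test_bit_u64(tbl[id].is_ancestor, ancestor - id - 1));
  }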
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Mark these caches as reclaimable, so that available memory is correctly
reported when there is a lot of cached inodes.
Note that more work is needed - you should add __GFP_RECLAIMABLE to some
of the kmalloc calls, so that they are allocated from the "kmalloc-rcl-*"
caches.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This allows including a compression level when specifying a compression
type, e.g.
compression=zstd:15
Values from 1 through 15 indicate compression levels, 0 or unspecified
indicates the default.
For LZ4, values 3-15 specify that the HC algorithm should be used.
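A standalone sketch of parsing such an option string (names and limits
here are illustrative; only the "type[:level]" syntax is from this
patch):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  struct compression_opt { char type[8]; unsigned level; };

  /* Parse "type[:level]", e.g. "zstd:15"; level 0/absent = default. */
  static int parse_compression(const char *s, struct compression_opt *opt)
  {
          const char *colon = strchr(s, ':');
          size_t n = colon ? (size_t) (colon - s) : strlen(s);

          if (n >= sizeof(opt->type))
                  return -1;
          memcpy(opt->type, s, n);
          opt->type[n] = '\0';

          opt->level = colon ? strtoul(colon + 1, NULL, 10) : 0;
          return opt->level <= 15 ? 0 : -1;
  }

  int main(void)
  {
          struct compression_opt o;

          if (!parse_compression("zstd:15", &o))
                  printf("type=%s level=%u\n", o.type, o.level);
          return 0;
  }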
Note that for compatibility, extents themselves only include the
compression type, not the compression level. This means that specifying
the same compression algorithm but different compression levels for the
compression and background_compression options will have no effect.
XXX: perhaps we could add a warning for this
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Before, it was parsed as a bool but internally it was really an enum:
this lets us pass in all the possible values.
But we special case the option parsing: no supplied value is parsed as
FSCK_FIX_yes, to match the previous behaviour.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This switches the generic radix tree for the in-memory table of snapshot
nodes to a simple rcu array. This means we have to add new locking to
deal with reallocations, but is faster than traversing the radix tree.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
In userspace, we want to be able to switch to buffered IO when we're
dealing with an image on a filesystem/device that doesn't support the
blocksize the filesystem was formatted with.
This plumbs through !opts.direct_io -> FMODE_BUFFERED, which will be
supported by the shim version of blkdev_get_by_path() in -tools, and it
adds a fallback to disable direct IO and retry for userspace.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Previously, fallocate would only check the state of the extents btree
when determining if we need to create a reservation.
But the page cache might already have dirty data or a disk reservation.
This changes __bchfs_fallocate() to call bch2_seek_pagecache_hole() to
check for this.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
With "bcachefs: Snapshot depth, skiplist fields", we now can't run data
move operations until after bch2_check_snapshots() is complete.
Ideally we'd have the copygc (and rebalance) threads wait until
c->curr_recovery_pass has advanced, but the waitlist handling is tricky
- so for now, move starting copygc back to read_write_late().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
fixes
./include/linux/stddef.h:8:14: error: positional initialization of field in ‘struct’ declared with ‘designated_init’ attribute [-Werror=designated-init]
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This extends KEY_TYPE_snapshot to include some new fields:
- depth, to indicate depth of this particular node from the root
- skip[3], skiplist entries for quickly walking back up to the root
These are to improve bch2_snapshot_is_ancestor(), making it O(ln(n))
instead of O(n) in the snapshot tree depth.
Skiplist nodes are picked at random from the set of ancestor nodes, not
some fixed fraction.
This introduces bcachefs_metadata_version 1.1, snapshot_skiplists.
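A standalone sketch of how such a walk can work (skip[3] and parent
follow the description above; the rest is an illustrative assumption):

  #include <stdbool.h>
  #include <stdint.h>

  struct snapshot {
          uint32_t parent;   /* 0 = no parent; ancestors have higher IDs */
          uint32_t skip[3];  /* ancestor IDs, roughly increasing depth */
  };

  static bool is_ancestor(const struct snapshot *tbl,
                          uint32_t id, uint32_t ancestor)
  {
          while (id && id < ancestor) {
                  const struct snapshot *s = &tbl[id];
                  uint32_t next = s->parent;
                  int i;

                  /* take the furthest skip hop that doesn't overshoot */
                  for (i = 2; i >= 0; i--)
                          if (s->skip[i] && s->skip[i] <= ancestor) {
                                  next = s->skip[i];
                                  break;
                          }
                  id = next;
          }
          return id == ancestor;
  }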
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Now that we've got forward compatibility sorted out, we should be doing
more frequent version upgrades in the future.
To avoid having to run a full fsck for every version upgrade, this
improves the BCH_METADATA_VERSIONS() table to explicitly specify a
bitmask of recovery passes to run when upgrading to or past a given
version.
This means we can also delete PASS_UPGRADE().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We're not supposed to block if BTREE_INSERT_JOURNAL_RECLAIM && watermark
!= BCH_WATERMARK_reclaim.
This should really be a separate BTREE_INSERT_NONBLOCK flag - add some
comments to that effect; it's not important for this patch.
btree write buffer flush depends on this behaviour though - the first
loop tries to flush sequentially, which doesn't free up space in the
journal optimally. If that can't proceed we bail out and flush in
journal order - that won't work if we're blocked instead of returning an
error.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This introduces major/minor versioning to the superblock version number.
Major version number changes indicate incompatible releases; we can move
forward to a new major version number, but not backwards. Minor version
numbers indicate compatible changes - these add features, but can still
be mounted and used by old versions.
With the recent patches that make it possible to roll out new btrees and
key types without breaking compatibility, we should be able to roll out
most new features without incompatible changes.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We've been seeing assertions pop that indicate the btree node cache or
key cache have dirty items when we just did a clean shutdown.
Add some more assertions so we can catch this when we're dirtying items.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We were freeing open buckets on the writepoint list, but forgetting to
take them off the writepoint list - whoops
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
As discussed on list, bcachefs is going to be marked as experimental for
a few releases, until the inevitable tide of new bug reports subsides.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Recovery and fsck have many different passes/jobs to do, which always
run in the same order - but not all of them run all the time. Some are
for fsck, some for unclean shutdown, some for version upgrades.
This adds some new structure: a defined list of recovery passes that we
can run in a loop, as well as consolidating the log messages.
The main benefit is consolidating the "should run this recovery pass"
logic, as well as cleaning up the "this recovery pass has finished"
state; instead of having a bunch of ad-hoc state bits in c->flags, we've
now got c->curr_recovery_pass.
By consolidating the "should run this recovery pass" logic, in the
future on disk format upgrades will be able to say "upgrading to this
version requires x passes to run", instead of forcing all of fsck to
run.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
For the upcoming enumeration of recovery passes, we need all recovery
passes to be called the same way - including journal replay.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This folds bch2_bucket_gens_read() into bch2_alloc_read(), doing the
version check there.
This is prep work for enumerating all recovery passes: we need some
cleanup first to make calling all the recovery passes consistent.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We need to always call bch2_replicas_gc_end() after we've called
bch2_replicas_gc_start(), else we leave state around that needs to be
cleaned up.
Partial fix for: https://github.com/koverstreet/bcachefs/issues/560
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The version_upgrade parameter is now an enum, not a bool, and it's
persistent in the superblock:
- compatible (default): upgrade to the latest compatible version
- incompatible: upgrade to latest incompatible version
- none
Currently all upgrades are incompatible upgrades, but the next release
will introduce major:minor versions.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Version upgrades are not atomic operations: when we do a version upgrade
we need to update the superblock before we start using new features, and
then when the upgrade completes we need to update the superblock again.
This adds a new superblock field so we can detect and handle incomplete
version upgrades.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Now that we have distinct error codes for different memory allocation
failures, the early init log messages are no longer needed.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
As part of the forward compatibility patch series, we need to allow for
new key types without complaining loudly when running an old version.
This patch changes the flags parameter of bkey_invalid to an enum, and
adds a new flag to indicate we're being called from the transaction
commit path.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- endianness fixes
- mark some things static
- fix a few __percpu annotations
- fix silent enum conversions
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This changes bch_sb_field_ops lookup to match how bkey_ops now works;
for an unknown field type we return an empty ops struct.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This adds a new helper for lookups bkey_ops for a given key type, which
returns a null bkey_ops for unknown key types; various bkey_ops users
are tweaked as well to handle unknown key types.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We need to allow filesystems with metadata from newer versions to be
mountable and usable by older versions.
This patch enables us to roll out new btrees without a new major version
number; we can now handle btree roots for unknown btree types.
The unknown btree roots will be retained, and fsck (including
backpointers) will check them, the same as other btree types.
We add a dynamic array for the extra, unknown btree roots, in addition
to the fixed size btree root array, and add new helpers for looking up
btree roots.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
A crash immediately after device removal can result in an
unmountable filesystem due to recovery failure. The following
command reliably reproduces on a multi-device fs:
bcachefs device remove <dev> && xfs_io -xc shutdown <mnt>
The post-crash mount fails with an error similar to the following,
reported by fsck:
invalid journal entry dev_usage at offset 7994/8034 seq 12: bad dev, fixing
This refers to a device usage entry in the journal that refers to
the index of the just removed device. Recovery considers this an
invalid entry and fails to proceed.
Device usage entries are added to journal buffer writes via
bch2_journal_write() -> bch2_journal_super_entries_add_common(),
which means any journal buffer write has content that refers to
member devices at the time of the journal write.
The device remove sequence already removes metadata references to
the device being removed. It then flushes any pins that refer to the
device, clears replica entries, removes the in-memory device object
and lastly updates the superblock to reflect that the device is no
longer present. The problem is that any journal writes that occur
during this sequence will include a dev usage entry so long as the
device is present. To avoid this problem, we can flush the journal
once more after the device entry is removed from the in-core
structures, but before the superblock is updated to fully remove the
device on-disk.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
A simple device evacuate, remove, add test loop with concurrent
shutdowns occasionally reproduces a problem where the filesystem
fails to mount. The mount failure occurs because the filesystem was
uncleanly shut down, yet no member device is marked for journal data
in the superblock. An fsck detects the problem, restores the mark
and allows the mount to proceed without further consistency issues.
The reason for the lack of journal data marks is the gc mechanism
invoked via bch2_journal_flush_device_pins() runs while the journal
happens to be empty. This results in garbage collection of all journal
replicas entries. Once the updated replicas table is written to the
superblock, the filesystem is put in a transiently unrecoverable state
until further journal data is written, because journal recovery expects
to find at least one marked journal device whenever the filesystem is
not otherwise marked clean (i.e. as on clean unmount).
To fix this problem, update the journal replicas gc algorithm to always
mark currently active journal replicas entries by writing to the
journal. This ensures that only entries for devices that are no longer
used for journaling are garbage collected, not just those that don't
happen to currently hold journal data. This preserves the journal
recovery invariant above and avoids putting the fs into a transiently
unrecoverable state.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This adds a new helper for checking if an on-disk version is compatible
with the running version of bcachefs - prep work for introducing
major:minor version numbers.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Now that we have journal watermarks and alloc watermarks unified,
BTREE_INSERT_USE_RESERVE is redundant and can be deleted.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This fixes a null ptr deref in bch2_free_pending_node_rewrites() when
the list head wasn't initialized.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This unifies JOURNAL_WATERMARK with BCH_WATERMARK; we're working towards
specifying watermarks once in the transaction commit path.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add another watermark for journal reclaim - this is needed for the next
patches, that unify BCH_WATERMARK with JOURNAL_WATERMARK.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This adds the extent entry for extents that rebalance needs to do
something with.
We're adding this ahead of the main rebalance_work patchset, because
adding new extent entries can't be done in a forwards-compatible way.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We now have 20 bits for the btree ID in the on disk format - sufficient
for 1 million distinct btrees.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Some refactoring, prep work for algorithm improvements related to
snapshots.
We need to add a bitmap to the list of inodes for "seen this snapshot";
for this bitmap to correctly be available, we'll need to gather the list
of inodes first, and later look up the inode for a given snapshot.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bch2_bkey_make_mut() now takes the bkey_s_c by reference and points it
at the new, mutable key.
This helps in some fsck paths that may have multiple repair operations
on the same key.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Excessive inlining may (on some versions of gcc?) cause excessive stack
usage; this turns off some inlining in bch2_check_alloc_info.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We now print out the full previous extent we were overlapping with, to aid in
debugging and searching through the journal.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When we return errors outside of bcachefs, we need to return a standard
error code - fix this for BCH_ERR_fsck.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bch2_bucket_backpointer_mod() doesn't need to update the alloc key, we
can exit the alloc iter earlier.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add two new helpers for printing error messages with __func__ and
bch2_err_str():
- bch_err_fn
- bch_err_msg
Also kill the old error strings in the recovery path, which were causing
us to incorrectly report memory allocation failures - they're not needed
anymore.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
As with the previous patch, we generally can't hold btree locks while
copying to userspace, as that may incur a page fault and require
mmap_lock.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We can't be holding btree_trans_lock while copying to user space, which
might incur a page fault. To fix this, convert it to a seqmutex so we
can unlock/relock.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We weren't correctly checking the freespace btree - it's an extents
btree, which means we need to iterate over each bucket in a freespace
extent.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The calculation for number of nodes to allocate in
bch2_btree_update_start() was incorrect - this fixes a BUG_ON() on the
small nodes test.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This adds a new helper for getting a pointer's durability irrespective
of the device state, and uses it in the data update path.
This fixes a bug where we do a data update but request 0 replicas to be
allocated, because the replica being rewritten is on a device marked as
failed.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- We may need to drop btree locks before taking the writepoint_lock, as
is done in other places.
- We should be using open_bucket_free_unused(), so that we don't waste
space.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Since we currently don't have a good fault injection library,
bch2_btree_insert_node() was randomly injecting faults based on
local_clock().
At the very least this should have been a debug mode only thing, but
this is a brittle method so let's just delete it.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
btree write buffer flush is only invoked from contexts that already hold
a write ref, and checking if we're still RW could cause us to fail to
completely flush the write buffer when shutting down.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
On Mon, 29 May 2023, Mikulas Patocka wrote:
> The oops happens in set_btree_iter_dontneed and it is caused by the fact
> that iter->path is NULL. The code in try_alloc_bucket is buggy because it
> sets "struct btree_iter iter = { NULL };" and then jumps to the "err"
> label that tries to dereference values in "iter".
Here I'm sending a patch for it.
From: Mikulas Patocka <mpatocka@redhat.com>
The function try_alloc_bucket sets the variable "iter" to NULL and then
(on various error conditions) jumps to the label "err". On the "err"
label, it calls "set_btree_iter_dontneed" that tries to dereference
"iter->trans" and "iter->path".
So, we get an oops on error condition.
This patch fixes the crash by testing that iter.trans and iter.path are
non-zero before calling set_btree_iter_dontneed.
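The shape of the fix, sketched (details may differ from the actual
patch):

  err:
          /* iter was initialized to { NULL }; only tear it down if the
           * error path was reached after it was actually set up */
          if (iter.trans && iter.path)
                  set_btree_iter_dontneed(&iter);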
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
d_prune_aliases() may call bch2_evict_inode(), which needs
c->vfs_inodes_lock.
Fix this by always calling igrab() before putting the inodes onto our
disposal list, and then calling d_prune_aliases() with
c->vfs_inodes_lock dropped.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
If a bcachefs filesystem is configured with a background device
(disk group), rebalance will relocate data to this device in the
background by checking extent keys for whether they currently reside
in the specified target. For keys that do not, rebalance performs a
read/write cycle to allow the write path to properly relocate data.
If the background target is not usable (read-only, for example),
however, the write path doesn't actually move data to another
device. Instead, rebalance spins indefinitely reading and rewriting
the same data over and over to the same device. If the background
target is made available again, the rebalance picks this up,
relocates the data, and eventually terminates.
To avoid this spinning behavior, update the rebalance background
target logic to not only check whether the extent is not in the
target, but whether the target is actually usable as well. If not,
then don't mark the key for rewrite.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We have one caller that cycles the rcu lock solely for this call
(via target_rw_devs()), and we'd like to add another. Simplify
things by pushing the rcu lock down into bch2_target_to_mask(),
similar to how bch2_dev_in_target() works.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We have bch2_sb_disk_groups_to_text() to dump disk group labels, but
no good information on device group membership at runtime. Add
bch2_disk_groups_to_text() and an associated 'disk_groups' sysfs
file to print group and device relationships.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The error message here dated from when backpointers could be stored in
alloc keys; now, we should always print the full key.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Using drop_locks_do() ensures that every unlock() is paired with a
relock(), with proper error checking.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- getline() output includes a newline; without stripping that we were
just looping
- Make the prompt clearer
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add two new helpers for allocating memory with btree locks held: The
idea is to first try the allocation with GFP_NOWAIT|__GFP_NOWARN, then
if that fails - unlock, retry with GFP_KERNEL, and then call
trans_relock().
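Sketched roughly, assuming the helper looks something like this (names
and exact error handling are illustrative):

  static void *trans_kmalloc_relock(struct btree_trans *trans, size_t size)
  {
          void *p = kmalloc(size, GFP_NOWAIT|__GFP_NOWARN);
          int ret;

          if (p)
                  return p;

          bch2_trans_unlock(trans);

          p = kmalloc(size, GFP_KERNEL);
          if (!p)
                  return ERR_PTR(-ENOMEM);

          ret = bch2_trans_relock(trans);
          if (ret) {
                  kfree(p);
                  return ERR_PTR(ret);  /* transaction restart */
          }
          return p;
  }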
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The promote path had a BUG_ON() for unknown error type, which we're now
seeing: change it to a WARN_ON() - because we're curious what this is -
and otherwise handle it in the normal error path.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
GFP_NOFS doesn't ever make sense. If we're allocating memory it should
be GFP_NOWAIT if btree locks are held, GFP_KERNEL otherwise.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When allocating memory, gfp flags should generally be
- GFP_NOWAIT|__GFP_NOWARN if btree locks are held
- GFP_NOFS if in the IO path or otherwise holding resources needed for
IO submission
- GFP_KERNEL otherwise
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add a new helper for the common pattern of (sketched below):
- trans_unlock()
- do something
- trans_relock()
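The helper might look roughly like this statement-expression macro (a
sketch, not necessarily the exact definition):

  /* Unlock around _do, then relock; return the first nonzero error
   * from either the expression or the relock. */
  #define drop_locks_do(_trans, _do)                              \
  ({                                                              \
          bch2_trans_unlock(_trans);                              \
          int _ret = (_do);                                       \
                                                                  \
          _ret ?: bch2_trans_relock(_trans);                      \
  })

Call sites can then wrap the blocking work in a single expression and
get the unlock/relock pairing, with error checking, for free.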
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
GFP_NOIO dates from the bcache days, when we operated under the block
layer. Now, GFP_NOFS is more appropriate, so switch all GFP_NOIO uses to
GFP_NOFS.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Fix a bug where bch2_btree_node_get() might call bch2_trans_unlock() (in
fill) without calling bch2_trans_relock(); this is a bug when it's done
in the core btree code.
Also, tweak bch2_btree_node_mem_alloc() to drop btree locks before doing
a blocking memory allocation.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We've been using __GFP_NOFAIL for allocating struct bch_folio, our
private per-folio state.
However, that struct is variable size - it holds state for each sector
in the folio, and folios can be quite large now, which means it's
possible for bch_folio to be larger than PAGE_SIZE now.
__GFP_NOFAIL allocations are undesirable in normal circumstances, but
particularly so at >= PAGE_SIZE, and warnings are emitted for that.
So, this patch adds proper error paths and eliminates most uses of
__GFP_NOFAIL. Also, do some more cleanup of gfp flags w.r.t. btree node
locks: we can use GFP_KERNEL, but only if we're not holding btree locks,
and if we are holding btree locks we should be using GFP_NOWAIT.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When partially overwriting an extent in an older snapshot, the existing
extent has to be split.
If the existing extent was overwritten in a different (sibling)
snapshot, we have to ensure that the split won't be visible in the
sibling snapshot.
data_update.c already has code for this,
bch2_insert_snapshot_whiteouts() - we just need to move it into
btree_update_leaf.c and change bch2_trans_update_extent() to use it as
well.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
As with previous conversions, replace -ENOENT uses with more informative
private error codes.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bch2_btree_trans_to_text() is used on btree_trans objects that are owned
by different threads - when printing out deadlock cycles - so we need a
safe version of trans_for_each_path(), else we race with seeing a
btree_path that was just allocated and not fully initialized.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bch2_fs_quota_read() could see an inode that's been deleted
(KEY_TYPE_inode_generation) - bch2_fs_quota_read_inode() needs to check
for that instead of erroring.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Fail counters need to be events, not numbers of sectors - or the
calculations the tests use for determining if we've had too many
slowpath events don't work.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We've been seeing difficult to debug "missing indirect extent" bugs,
that fsck doesn't seem to find.
One possibility is that there was a missing indirect extent, but then a
new indirect extent was created at the location of the previous indirect
extent.
This patch eliminates that possibility by always creating new indirect
extents right after the last one, at the end of the reflink btree.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add some more tests that test conventional and weighted mean
simultaneously, and with a table of values that represents events that
we'll be using this to look for so we can verify-by-eyeball that the
output looks sane.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When running in userspace, we currently don't have a real percpu
implementation available - at least in bcachefs-tools, which is where
this code is currently used in userspace.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This switches to a newer cmpxchg variant which updates @old for us on
failure, simplifying the cmpxchg loops a bit and supposedly generating
better code.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
In the conversion to atomic_t, six_lock_slowpath() ended up calling
six_lock_wakeup() in the failure path with a state variable that was
never initialized - whoops.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Since we're not generating different versions of the lock functions for
each lock type, the constant propagation we were trying to do before is
no longer useful - this is now a small code size decrease.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This deletes the crazy cast-atomic-to-unsigned-long operations,
replacing them with atomic_and() and atomic_or().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
lock->state.seq is shortly being moved out of lock->state, to kill the
dependency on atomic64; in preparation for that, we change the write
locking bit to write locked.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The next patch is going to move lock->seq out of lock->state. This
replaces six_relock() with a much simpler implementation based on
trylock.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- Expanded and revamped overview documentation in six.h, giving an
overview of all features
- docbook-comments for all external interfaces
- Rename some functions for simplicity, i.e.
six_lock_ip_type() -> six_lock_ip()
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
As suggested by Linus, this drops the six_lock_state union in favor of
raw bitmasks.
On the one hand, bitfields give more type-level structure to the code.
However, a significant amount of the code was working with
six_lock_state as a u64/atomic64_t, and the conversions from the
bitfields to the u64 were deemed a bit too out-there.
More significantly, because bitfield order is poorly defined (#ifdef
__LITTLE_ENDIAN_BITFIELD can be used, but is gross), incrementing the
sequence number would overflow into the rest of the bitfield if the
compiler didn't put the sequence number at the high end of the word.
The new code is a bit saner when we're on an architecture without real
atomic64_t support - all accesses to lock->state now go through
atomic64_*() operations.
On architectures with real atomic64_t support, we additionally use
atomic bit ops for setting/clearing individual bits.
Text size: 7467 bytes -> 4649 bytes - compilers still suck at
bitfields.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Originally, we used inlining/flattening to cause the compiler to
generate different versions of lock/trylock/relock/unlock for each lock
type - read, intent, and write. This made the individual functions
smaller and let the compiler eliminate table lookups: however, as the
code has gotten more complicated these optimizations have gotten less
worthwhile, and all the tricky inlining and dispatching made the code
less readable.
Text size: 11015 bytes -> 7467 bytes, and benchmarks show no loss of
performance.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Originally, the waiting bit was always set by trylock() on failure:
however, it's now set by __six_lock_type_slowpath(), with wait_lock held
- which is the more correct place to do it.
That made setting the waiting bit in trylock redundant, so this patch
deletes that.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The lost wakeup bug hasn't been observed in a while, and we're trying to
provoke it and determine if it still exists.
This patch removes some defenses that were added to attempt to track it
down; if it still exists, this should make it easier to see it.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
six_lock_pcpu_alloc() is an unsafe interface: it's not safe to allocate
or free the percpu reader count on an existing lock that's in use, the
only safe time to allocate percpu readers is when the lock is first
being initialized.
This patch adds a flags parameter to six_lock_init(), and instead of
six_lock_pcpu_free() we now expose six_lock_exit(), which does the same
thing but is less likely to be misused.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This moves a helper out of the bcachefs code that shouldn't have been
there, since it touches six lock internals.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We were copying the size of a struct bch_fs_usage_online to a struct
bch_fs_usage, which is 8 bytes smaller.
This adds some new helpers so we can do this correctly, and get rid of
some magic +1s too.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
With the recent bkey_ops.min_val_size addition, bkey values are
automatically extended to the size of the current version.
The check in bch2_alloc_v4_invalid() needs to be updated to take this
into account.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This deletes a bch2_trans_unlock() call from __bch2_move_data(). It was
redundant; bch2_move_extent() has the correct unlock call, and it was
buggy because when move_extent calls bch2_extent_drop_ptrs() we don't
want the transaction to be unlocked yet - this fixes a btree_iter.c
assertion.
Fixes https://github.com/koverstreet/bcachefs/issues/511.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Small performance optimization; an open coded loop is better than rep ;
movsq for small copies.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
A user hit this BUG_ON() - it's unclear how it happened, so replace it
with a fatal error that will cause us to go read only, and print out
more information.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bch2_replicas_gc_(start|end) is now only used for journal replicas
entries, which don't have bucket sector counts - so this code is
entirely dead and can be deleted.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The journal write submission path marks the associated replica
entries for journal data in journal_write_done(), which is just
after journal write bio submission. This creates a small window
where journal entries might have been written out, but the
associated replica is not marked such that recovery does not know
that the associated device contains journal data.
Move the replica marking a bit earlier in the write path such that
recovery is guaranteed to recognize that the device contains journal
data in the event of a crash.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Now that we can reliably designate and find the master subvolume out of
a tree of snapshots, we can finally make quotas work with snapshots:
That is - quotas will now _ignore_ snapshot subvolumes, and only be in
effect for the master (non snapshot) subvolume.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add two new fields to bch_subvolume:
- otime: creation time
- parent: For snapshots, this is the id of the subvolume the snapshot
was created from
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This adds a new btree which gets us a persistent per-snapshot-tree
identifier.
- BTREE_ID_snapshot_trees
- KEY_TYPE_snapshot_tree
- bch_snapshot now has a field that points to a snapshot_tree
This is going to be used to designate one snapshot ID/subvolume out of a
given tree of snapshots as the "main" subvolume, so that we can do quota
accounting in that subvolume and not the rest.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
It's safe to call bch2_trans_update with a k/v pair where the value
hasn't been filled out, as long as the key part has been and the value
is filled out by transaction commit time.
This patch folds the bch2_trans_update() call into bch2_bkey_make_mut(),
eliminating a bit of boilerplate.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
It's safe to call bch2_trans_update with a k/v pair where the value
hasn't been filled out, as long as the key part has been and the value
is filled out by transaction commit time.
This patch folds the bch2_trans_update() call into bch2_bkey_get_mut(),
eliminating a bit of boilerplate.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
It's safe to call bch2_trans_update with a k/v pair where the value
hasn't been filled out, as long as the key part has been and the value
is filled out by transaction commit time.
This patch folds the bch2_trans_update() call into bch2_bkey_alloc(),
eliminating a bit of boilerplate.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- bch2_bkey_get_mut() now handles types increasing in size, allocating
a buffer for the type's current size when necessary
- bch2_bkey_make_mut_typed()
- bch2_bkey_get_mut() now initializes the iterator, like
bch2_bkey_get_iter()
Also, refactor so that most of the code is in functions - now macros are
only used for wrappers.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
It's for doing updates - this is where it belongs, and the next patches will
be changing these helpers to use items from btree_update.h.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Introduce new helpers for a common pattern (sketched below):
bch2_trans_iter_init();
bch2_btree_iter_peek_slot();
- bch2_bkey_get_iter_type() returns -ENOENT if it doesn't find a key of
the correct type
- bch2_bkey_get_val_typed() copies the val out of the btree to a
(typically stack allocated) variable; it handles the case where the
value in the btree is smaller than the current version of the type,
zeroing out the remainder.
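For illustration, a call site changes roughly like this (the helper
signature shown is an assumption based on the description above):

  /* before: open-coded init + peek + type check */
  bch2_trans_iter_init(trans, &iter, BTREE_ID_snapshots, pos, 0);
  k = bch2_btree_iter_peek_slot(&iter);
  ret = bkey_err(k);
  if (!ret && k.k->type != KEY_TYPE_snapshot)
          ret = -ENOENT;

  /* after (assumed shape): one helper does all of the above */
  k = bch2_bkey_get_iter_type(trans, &iter, BTREE_ID_snapshots, pos, 0,
                              KEY_TYPE_snapshot);
  ret = bkey_err(k);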
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This adds a new field to bkey_ops for the minimum size of the value,
which standardizes that check and also enforces the new rule (previously
done somewhat ad-hoc) that we can extend value types by adding new
fields on to the end.
To make that work we do _not_ initialize min_val_size with sizeof,
instead we initialize it to the size of the first version of those
values.
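A standalone sketch of the rule being standardized (illustrative
types):

  #include <errno.h>
  #include <stddef.h>

  struct bkey_ops_sketch {
          /* size of the *first* version of the value - not sizeof()
           * the current struct - so old, smaller keys still validate */
          size_t min_val_size;
  };

  /* Values only ever grow by appending fields, so anything at least
   * as large as the first version is acceptable. */
  static int bkey_val_size_ok(const struct bkey_ops_sketch *ops,
                              size_t val_bytes)
  {
          return val_bytes >= ops->min_val_size ? 0 : -EINVAL;
  }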
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Change btree_update_flags to start after the last btree iterator flag,
so that we can pass both in the same flags argument.
This is needed for the upcoming bch2_bkey_get_mut() helper.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When a device is removed from a bcachefs volume, the associated
content is removed from the various btrees. The alloc tree uses the
key cache, so when keys are removed the deletes exist in cache for a
period of time until reclaim comes along and flushes outstanding
updates.
When a device is re-added to the bcachefs volume, the add process
re-adds some of these previously deleted keys. When marking device
superblock locations on device add, the keys will likely refer to
some of the same alloc keys that were just removed. The memory
triggers for these key updates are responsible for further updates,
such as bch2_mark_alloc() calling into bch2_dev_usage_update() to
update per-device usage accounting.
When a new key is added to the key cache, the trans update path also
flushes the key to the backing btree, to keep btree walks coherent.
With all of this context, if a device is removed and re-added
quickly enough such that some key deletes from the remove are still
pending a key cache flush, the trans update path can view this as
addition of a new key because the old key in the insert entry refers
to a deleted key. However the deleted cached key has not been filled
by absence of a btree key, but rather refers to an explicit deletion
of an existing key that occurred during device removal.
The trans update path adds a new update to flush the key and tags
the original (cached) update to skip running the memory triggers.
This results in running triggers on the non-cached update instead,
which in turn will perform accounting updates based on incoherent
values. For example, bch2_dev_usage_update() subtracts the old
alloc key dirty sector count in the non-cached btree key from the
newly initialized (i.e. zeroed) per device counters, leading to
underflow and accounting corruption.
There are at least a few ways to avoid this problem, the simplest of
which may be to run triggers against the cached update rather than
the non-cached update. If the key only needs to be flushed when the
key is not present in the tree, however, then this still performs an
unnecessary update. We could potentially use the cached key dirty
state to determine whether the delete is a dirty, cached update vs.
a clean cache fill, but this may require transmitting key cache
dirty state across layers, which adds complexity and seems to be of
limited value. Instead, update flush_new_cached_update() to handle
this by simply checking for the key in the btree and only perform
the flush when a backing key is not present.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This patch deletes a .key_invalid check for btree pointers that only
applies to _very_ old on disk format versions, and potentially
complicates the upgrade process.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We had some bugs with setting/using first_this_inode in the inode
walker in the dirents/xattr code.
This patch changes to not clear first_this_inode until after
initializing the new hash info.
Also, we fix an error message to not print on transaction restart, and
add a comment to related fsck error code.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
With backpointers, it's now impossible for bch2_evacuate_bucket() to be
completely reliable: it can race with an extent being partially
overwritten or split, which needs a new write buffer flush for the
backpointer to be seen.
This shouldn't be a real issue in practice; the previous patch added a
new tracepoint so we'll be able to see more easily if it is.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Move path tracepoints now include the key being moved. Also, add new
tracepoints for the start of move_extent, and evacuate_bucket.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
If a device has keys in the bucket_gens btree associated with its
buckets and is removed from a bcachefs volume, fsck will complain
about the presence of keys associated with an invalid device index.
A repair removes the associated keys and restores correctness.
Update bch2_dev_remove_alloc() to remove device related keys at
device removal time to avoid the problem.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
fsck removes bucket_gens keys for devices that do not exist in the
volume (i.e., if the device was removed). In 'fsck -n' mode, the
associated fsck_err_on() wrapper returns false to skip the key
removal. This proceeds on to the rest of the function, which
eventually segfaults on a NULL bch_dev because the device does not
exist.
Update bch2_check_bucket_gens_key() to skip out of the rest of the
function when the associated device does not exist, regardless of
running fsck in check or repair mode.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Create a small helper to translate from file offset to the
associated bch_folio_sector index in the underlying bch_folio. The
helper assumes the file offset is covered by the passed folio.
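For illustration, such a helper might reduce to (assuming 512-byte
sectors; names are hypothetical):

  #include <assert.h>
  #include <stdint.h>

  /* Map a file offset to the bch_folio_sector index within the folio's
   * private state; pos must fall inside the folio, per the contract. */
  static unsigned folio_sector_index(uint64_t folio_start, uint64_t pos)
  {
          assert(pos >= folio_start);
          return (pos - folio_start) >> 9;
  }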
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
It turns out, in rare situations we need to be passing in a disk
reservation, which will be used internally by the transaction commit
path when needed. Pass one in...
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
In __bch2_alloc_to_v4_mut(), we overrun the buffer we allocate if the
alloc key had backpointers stored in it (which we no longer support).
Fix this with a max() call.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This changes the ask_yn() function used by fsck to accept Y or N,
meaning yes or no for all errors of a given type.
With this, the user can be prompted only for distinct error types -
useful when a filesystem has lots of errors.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Some of the folio_end_*() helpers are prone to overflow of signed
64-bit types because the mapping is only limited by the max value of
loff_t and the associated helpers return the start offset of the
next folio. Therefore, a folio_end_pos() of the max allowable folio in a
mapping returns a value that overflows loff_t.
This makes it hard to rely on such values when doing folio
processing across a range of a file, as bcachefs attempts to do with
the recent folio changes. For example, generic/564 causes problems
in the buffered write path when testing writes at max boundary
conditions.
The current understanding is that the pagecache historically limited
the mapping to one less page to avoid this problem and this was
dropped with some of the folio conversions, but may be reinstated to
properly address the problem. In the meantime, update the internal
folio_end_*() helpers in bcachefs to return a u64, and all of the
associated code to use or cast to u64 to avoid overflow problems.
This allows generic/564 to pass and can be reverted back to using
loff_t if at any point the pagecache subsystem can guarantee these
boundary conditions will not overflow.
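Illustratively (helper names are assumptions), the u64 variants avoid
the signed overflow because the sum is computed and returned unsigned:

  #include <stdint.h>

  /* For the last legal folio in a mapping, start + size can exceed the
   * maximum loff_t; computing in u64 cannot overflow here. */
  static uint64_t folio_end_pos_u64(uint64_t folio_start, uint64_t folio_size)
  {
          return folio_start + folio_size;
  }

  static uint64_t folio_end_sector_u64(uint64_t folio_start, uint64_t folio_size)
  {
          return (folio_start + folio_size) >> 9;
  }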
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The buffered write path batches folio creations in the file mapping
based on the requested size of the write. Under low free space
conditions, it is possible to add a bunch of folios to the mapping
and then return a short write or -ENOSPC due to lack of space. If
this occurs on an extending write, the file size is updated based on
the amount of data successfully written to the file. If folios were
added beyond the final i_size, they may hang around until reclaimed,
truncated or encountered unexpectedly by another operation.
For example, generic/083 reproduces a sequence of events where a
short write leaves around one or more post-EOF folios on an inode, a
subsequent zero range request extends beyond i_size and overlaps
with an aforementioned folio, and __bch2_truncate_folio() happens
across it and complains.
Update __bch2_buffered_write() to keep track of the start offset of
the last folio added to the mapping for a prospective write. After
i_size is updated, check whether this offset starts beyond EOF. If
so, truncate pagecache beyond the latest EOF to clean up any folios
that don't reside at least partially within EOF upon completion of
the write.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
generic/083 occasionally reproduces a panic caused by an overflow
when accessing the bch_folio_sector array of the folio being
processed by __bch2_truncate_folio(). The immediate cause of the
overflow is that the folio offset is beyond i_size, and therefore
the sector index calculation underflows on subtraction of the folio
offset.
One cause of this is mainly observed on nocow mounts. When nocow is
enabled, fallocate performs physical block allocation (as opposed to
block reservation in cow mode), which range_has_data() then
interprets as valid data that requires partial zeroing on truncate.
Therefore, if a post-eof zero range request lands across post-eof
preallocated blocks, __bch2_truncate_folio() may actually create a
post-eof folio in order to perform zeroing. To avoid this problem,
update range_has_data() to filter out unwritten blocks from folio
creation and partial zeroing.
Even though we should never create folios beyond EOF like this, the
mere existence of such folios is not necessarily a fatal error. Fix
up the truncate code to warn about this condition and not overflow
the sector array and possibly crash the system. The addition of this
warning without the corresponding unwritten extent fix has shown
that various other fstests are able to reproduce this problem fairly
frequently, but often in ways that doesn't necessarily result in a
kernel panic or a change in user observable behavior, and therefore
the problem goes undetected.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
With large folios, it's now incidentally possible to end up with a
clean, uptodate folio in the page cache that doesn't have a bch_folio
attached, if a folio has to be split.
This patch fixes __bch2_truncate_folio() to check for this; other code
paths appear to handle it.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- X-macro-ize the bch_folio_sector_state enum: this means we can easily
generate strings, which is helpful for debugging.
- Add helpers for state transitions: folio_sector_dirty(),
folio_sector_undirty(), folio_sector_reserve()
- Add folio_sector_set(), a single helper for changing folio sector
state just so that we have a single place to instrument when we're
debugging.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This converts fs-io.c to pass folios, not pages. We're not handling
large folios yet, there's no functional changes in this patch - just a
lot of churn doing the initial type conversions.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Seeing an odd bug with page/folio state not being properly initialized,
this is to help track it down.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We're just doing cpu work here and it could take a while; a
cond_resched() is definitely needed.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This greatly expands the move_extent_fail tracepoint - now it includes
all the information we have available, including exactly why the extent
wasn't updated.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Most counters aren't in units of sectors, and the ones that are should
just be switched to bytes, for simplicity.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>