The recent work to fix data moves w.r.t. durability broke promotes,
because it caused us to bail out when the extent, minus the pointers
being dropped, still has enough pointers to satisfy the current number
of replicas.
Disable this check when we're adding cached replicas.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When overwriting and splitting existing extents, we weren't correctly
accounting for a three-way split of a compressed extent.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When we fail to allocate because of insufficient open buckets, we don't
want to retry from the full set of devices - we just want to retry in
blocking mode.
But if the retry in blocking mode fails with a different error code, we
end up squashing the -BCH_ERR_open_buckets_empty error with an error
that makes us think we won't be able to allocate (insufficient_devices)
- which is incorrect when we didn't try to allocate from the full set of
devices, and causes the write to fail.
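Roughly, the shape of the problem and fix looks like this (an
illustrative pseudo-version of the allocator path, not the actual code;
only the error names are real):

  ret = alloc_from_devs(req, NONBLOCKING);          /* restricted device list */
  if (bch2_err_matches(ret, BCH_ERR_open_buckets_empty)) {
      ret2 = alloc_from_devs(req, BLOCKING);        /* same restricted list */

      /*
       * insufficient_devices from the retry is meaningless - it never
       * saw the full set of devices - so don't let it replace ret:
       */
      if (!bch2_err_matches(ret2, BCH_ERR_insufficient_devices))
          ret = ret2;
  }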
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We need to help modprobe load architecture-specific modules so we don't
fall back to generic software implementations; this should help
performance when building as a module.
Signed-off-by: Daniel Hill <daniel@gluo.nz>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When bch2_fs_alloc() gets an error before calling
bch2_fs_btree_iter_init(), bch2_fs_btree_iter_exit() makes an invalid
memory access because btree_trans_list is uninitialized.
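A minimal sketch of the ordering issue (helpers here are illustrative;
only btree_trans_list is the real field): anything an unconditional
exit path walks has to be initialized before the first point at which
bch2_fs_alloc() can bail out.

  static struct bch_fs *example_fs_alloc(void)
  {
      struct bch_fs *c = kzalloc(sizeof(*c), GFP_KERNEL);

      if (!c)
          return NULL;

      /* initialized up front: the error path below walks this list */
      INIT_LIST_HEAD(&c->btree_trans_list);

      if (example_setup_that_can_fail(c))       /* illustrative */
          goto err;
      return c;
  err:
      example_fs_exit(c);                       /* calls bch2_fs_btree_iter_exit() */
      return NULL;
  }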
Signed-off-by: Thomas Bertschinger <tahbertschinger@gmail.com>
Fixes: 6bd68ec266 ("bcachefs: Heap allocate btree_trans")
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The ->encode_fh method is responsible for setting the amount of space
required for storing the file handle if not enough space was provided.
bch2_encode_fh() was not setting the required length in that case,
which breaks e.g. fanotify. Fix it.
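For reference, a minimal sketch of the ->encode_fh contract (this is
not bch2_encode_fh; the fid layout and type value are made up):

  struct example_fid {
      u64 inum;
      u32 generation;
  };

  static int example_encode_fh(struct inode *inode, u32 *fh, int *max_len,
                               struct inode *parent)
  {
      struct example_fid *fid = (void *) fh;
      int len = sizeof(*fid) / sizeof(u32);

      if (*max_len < len) {
          *max_len = len;              /* report the required length... */
          return FILEID_INVALID;       /* ...even when failing */
      }

      fid->inum       = inode->i_ino;
      fid->generation = inode->i_generation;
      *max_len = len;
      return 0xb1;                     /* made-up fid type */
  }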
Reported-by: Petr Vorel <pvorel@suse.cz>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
On trylock failure we were waiting for outstanding reads to complete -
but nocow locks need to be held until the whole move is finished.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Since outstanding journal buffers hold a journal pin, when flushing all
pins we need to close the current journal entry if necessary so its pin
can be released.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We could delete directories transactionally on rmdir()/unlink(), but we
don't; instead, like with regular files we wait for the VFS to call
evict().
That means that our check for directories in the deleted inodes btree is
wrong - the check should be for non-empty directories.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This fixes a bug where rebalance would loop repeatedly on the same
extents.
Signed-off-by: Daniel Hill <daniel@gluo.nz>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The internal freeze mechanism in bcachefs mostly reuses the generic
rw<->ro transition code. If the fs happens to shut down during or
after freeze, a transition back to rw can fail. This is expected,
but returning an error from the unfreeze callout prevents the
filesystem from being unfrozen.
Skip the read-write transition if the fs is shut down. This allows
the fs to unfreeze at the vfs level so writes will no longer block,
but they will still fail due to the emergency read-only state of the fs.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When creating a snapshot without specifying the source subvolume, we use
the subvolume containing the new snapshot.
Previously, this worked if the directory containing the new snapshot was
the subvolume root - but we were using the incorrect helper, and got a
subvolume ID of 0 when the parent directory wasn't the root of the
subvolume, causing an emergency read-only.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This fixes a transaction path overflow reported in the snapshot deletion
path, when moving extents to the correct snapshot.
The root of the issue is that creating/deleting a reflink pointer can
generate an unbounded number of updates, if it is allowed to reference
an unbounded number of indirect extents; to prevent this, merging of
reflink pointers has been disabled.
But there's a hole, which is that copygc/rebalance may fragment existing
extents in the course of moving them around, and if an indirect extent
becomes too fragmented we'll then become unable to delete the reflink
pointer.
The eventual solution is going to be to tweak trigger handling so that
we can process large reflink pointers incrementally when necessary, and
notice that trigger updates don't need to be run for the part of the
reflink pointer not changing. That is going to be a bigger project
though, for another patch.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
for_each_btree_key2() runs each loop iteration in a btree transaction,
and thus does not cause SRCU lock hold time problems.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Also, make bch2_extent_drop_ptrs() safer, so it works with extents and
non-extents iterators.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Recently, journal pre-reservations were removed. They were for reserving
space ahead of time in the journal for operations that are required for
journal reclaim, e.g. btree key cache flushing and interior node btree
updates.
Instead we have watermarks - only operations required for journal
reclaim are allowed when the journal is low on space, and in general
we're quite good about doing operations in the order that will free up
journal space quickest. If we're doing a journal reclaim operation out
of order, we usually do it in nonblocking mode if it's not freeing up
space at the end of the journal.
There's an exception though - interior btree node update operations
have to be BCH_WATERMARK_reclaim once they've been started, and they
can't be nonblocking. Generally this is fine because they'll only be a
very small fraction of transaction commits - but there's a catch, which
is journal replay.
Journal replay does many btree operations, but doesn't need to commit
them to the journal since they're already in the journal. So killing
off pre-reservations, plus another change that made journal replay more
efficient by initially doing the replay in sorted btree order, made it
possible for the interior update operations replay generates to fill up
and deadlock the journal.
Fix this by introducing a new check on journal space at the _start_ of
an interior update operation. This causes us to block if necessary, in
exactly the same way as we used to when interior updates took a journal
pre-reservation, but without all the expensive accounting that
pre-reservations required.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The keys being replayed by journal replay have to be synchronized with
updates by other threads that overwrite them. We rely on btree node
locks for synchronizing - but since btree write buffer updates take no
btree locks, that won't work.
Instead, simply disable using the btree write buffer until journal
replay is finished.
This fixes a rare backpointers error in the merge_torture_flakey test.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
There's no need to drop journal pins in our exit paths - the code was
trying to have everything cleaned up on any shutdown, but better to just
tweak the assertions a bit.
This fixes a bug where calling into journal reclaim in the exit path
would cause a null ptr deref.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This fixes a bug where going read-only was taking longer than it should
have, due to copygc forgetting to check kthread_should_stop().
Additionally: fix a missing is_kthread check in bch2_move_ratelimit().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This eliminates some SRCU warnings: for_each_btree_key2() runs every
loop iteration in a distinct transaction context.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
btree writes update the btree node key after every write, in order to
update sectors_written, and they also might need to drop pointers if one
of the writes failed in a replicated btree node.
But the btree node might also have had a pointer dropped while the write
was in flight, by bch2_dev_metadata_drop(), and thus there was a bug
where the btree node write would overwrite the btree node's key with what
it had at the start of the write.
Fix this by dropping pointers not currently in the btree node key.
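The shape of the fix, very roughly (bch2_bkey_drop_ptrs() is a real
helper; the rest of the names here are stand-ins):

  /*
   * Build the post-write key by filtering against the node's *current*
   * key, rather than installing the copy taken when the write started:
   */
  bch2_bkey_drop_ptrs(bkey_i_to_s(&new_key), ptr,
                      !ptr_still_in_key(&b->key, ptr));   /* stand-in check */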
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
journal_cur_seq() can legitimately be used outside of the journal lock,
where this assert can race.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The automated tests check if we've hit too many slowpath/error path
events and fail the test - if we're just shutting down, that naturally
shouldn't count.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Renamed from trace_move_extent_alloc_mem_fail, because there are other
reasons we could fail (disk space allocation failure).
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bch2_btree_update_start() calculates which nodes are going to have to be
split/rewritten, so that we know how many nodes to reserve and how deep
in the tree we have to take locks.
But btree node merges require inserting two keys into the parent node,
not just splits.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Validation was completely missing for replicas entries in the journal
(not the superblock replicas section) - we can't have replicas entries
pointing to invalid devices.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
zstd apparently lies about the size of the compression workspace it
requires; if we double it, compression succeeds.
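A sketch of the workaround using the kernel's zstd wrappers (the
allocation context and compression level here are illustrative):

  zstd_parameters params = zstd_get_params(level, 0);
  size_t bound = zstd_cctx_workspace_bound(&params.cParams);

  /* zstd's reported bound has been observed to be too small - pad it */
  void *workspace = kvmalloc(bound * 2, GFP_KERNEL);
  zstd_cctx *cctx = workspace ? zstd_init_cctx(workspace, bound * 2) : NULL;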
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bkey embeds a bpos that is misaligned on big endian; this is so that
bch2_bkey_swab() works correctly without having to differentiate between
packed and non-packed keys (a debatable design decision).
This means it can't have the __aligned() tag on big endian.
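Schematically (simplified; the real struct bpos also reverses field
order by endianness for the swab trick described above):

  struct bpos {
      __u64 inode;
      __u64 offset;
      __u32 snapshot;
  } __packed
  #if defined(__LITTLE_ENDIAN_BITFIELD)
  __aligned(4)        /* on big endian the embedded bpos is misaligned, so no attribute */
  #endif
  ;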
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The durability of an erasure coded pointer doesn't include the device's
durability; durability is the same for any extent in that stripe, so
the calculation comes only from the stripe.
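I.e., roughly (all names here are illustrative, not the real helper):

  static unsigned extent_ptr_durability_sketch(const struct ptr_info *p)
  {
      return ptr_is_in_stripe(p)
          ? stripe_durability(p)      /* identical for every extent in the stripe */
          : device_durability(p);     /* only unstriped pointers use the device */
  }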
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Previously, there was a bug where if an extent had greater durability
than required (because we needed to move a durability=1 pointer and
ended up putting it on a durability 2 device), we would submit a write
for replicas=2 - the durability of the pointer being rewritten - instead
of the number of replicas required to bring it back up to the
data_replicas option.
This, plus the allocation path sometimes allocating on a greater
durability device than requested, meant that extents could continue
having more and more replicas added as they were being rewritten.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
When allocating from devices with different durability, we might end up
with more replicas than required; this changes
bch2_alloc_sectors_start() to check for this, and drop replicas that
aren't needed to hit the number of replicas requested.
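The idea, sketched with illustrative names (this is not
bch2_alloc_sectors_start() itself): accumulate durability across the
pointers just allocated, and give back anything past the requested
replica count.

  static void drop_surplus_ptrs(struct write_point_sketch *wp)
  {
      unsigned have = 0, i = 0;

      while (i < wp->nr_allocated) {
          if (have >= wp->nr_replicas_requested) {
              put_bucket_back(&wp->ptrs[i]);            /* surplus - don't write it */
              wp->ptrs[i] = wp->ptrs[--wp->nr_allocated];
          } else {
              have += ptr_durability(&wp->ptrs[i]);
              i++;
          }
      }
  }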
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The btree iterator code overlays keys from the journal until journal
replay is finished; since we're now starting copygc/rebalance etc.
before replay is finished, this is multithreaded access and thus needs
refcounting.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Various userspace scripts/tools may expect mount entries in
/proc/mounts to reflect the device path names used to mount the
associated filesystem. bcachefs seems to normalize the device path
to the underlying device name based on the block device. This
confuses tools like fstests when the test devices might be lvm or
device-mapper based.
The default behavior for show_vfsmnt() appears to be to use the
string passed to alloc_vfsmnt(), so tweak bcachefs to copy the path
at device superblock read time and to display it via
->show_devname().
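A sketch of that approach (the super_operations hook is real; the
filesystem fields here are illustrative):

  static int example_show_devname(struct seq_file *seq, struct dentry *root)
  {
      struct example_fs *c = root->d_sb->s_fs_info;
      unsigned i;

      for (i = 0; i < c->nr_devs; i++) {
          if (i)
              seq_putc(seq, ':');
          seq_puts(seq, c->devs[i].opened_path);    /* saved at sb read time */
      }
      return 0;
  }

  static const struct super_operations example_super_ops = {
      .show_devname = example_show_devname,
  };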
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This fixes a bug where copygc would occasionally race with going
read-write and die, thinking we were read only, because it couldn't take
a ref on c->writes.
It's not necessary for copygc (or rebalance) to take write refs; they
could run with BCH_TRANS_COMMIT_nocheck_rw, but this is an easier fix
than making sure that flag is passed correctly everywhere.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
copygc no longer has to scan the buckets, so it's no longer a problem if
the number of buckets is changing while it's running.
This also fixes a bug where we forgot to restart copygc.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This adds move_ctxt_wait_event_timeout(), which can sleep for a timeout
while also issuing pending moves as reads complete.
Co-developed-by: Daniel Hill <daniel@gluo.nz>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Introduce a new helper to flush all move IOs, and use it in a few places
where we should have been.
The new helper also drops btree locks before waiting on outstanding move
writes, avoiding potential deadlocks.
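The new helper is shaped roughly like this (names are illustrative;
bch2_trans_unlock() is the real call for dropping btree locks):

  static void flush_move_ios_sketch(struct btree_trans *trans,
                                    struct moving_context *ctxt)
  {
      /* never wait on move IO while holding btree node locks */
      bch2_trans_unlock(trans);

      wait_for_outstanding_reads(ctxt);     /* illustrative */
      wait_for_outstanding_writes(ctxt);    /* illustrative */
  }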
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
We still have disk space accounting changes coming for erasure coding,
and the changes won't be as strictly backwards compatible as they
ought to be - specifically, we need to start accounting striped data
under a separate counter in bch_alloc (which describes buckets).
A fsck will suffice for upgrading/downgrading, but since erasure coding
is the most incomplete major feature of bcachefs, it still makes sense
to put it behind a separate kconfig option, so that users are fully aware.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Control flow integrity is now checking that type signatures match on
indirect function calls. That breaks closures, which embed a work_struct
in struct closure in such a way that a closure_fn may also be used as a
workqueue fn by the underlying closure code.
So we have to change closure fns to take a work_struct as their
argument - but that results in a loss of clarity, as closure fns have
different semantics from normal workqueue functions (they run owning a
ref on the closure, which must be released with continue_at() or
closure_return()).
Thus, this patch introduces the CLOSURE_CALLBACK() and closure_type()
macros, as suggested by Kees, to smooth things over a bit.
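With the new macros, a closure callback looks roughly like this (the
struct and function names below are made up; the macros are the ones
being introduced):

  struct my_op {
      struct closure cl;
      int            ret;
  };

  /* expands to: void my_op_done(struct work_struct *ws) */
  static CLOSURE_CALLBACK(my_op_done)
  {
      /* recovers the containing object (and declares 'cl') from ws */
      closure_type(op, struct my_op, cl);

      pr_info("op done: %i\n", op->ret);
      closure_return(cl);       /* the callback still owns a ref it must release */
  }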
Suggested-by: Kees Cook <keescook@chromium.org>
Cc: Coly Li <colyli@suse.de>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
In percpu reader mode, trylock() for read had a lost wakeup: on failure
to get the lock, we may have caused a writer to fail to get the lock,
because we temporarily elevated the reader count.
We need to check for waiters after decrementing the read count - not
before.
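Schematically (this is a simplified stand-in, not the six lock code;
only the ordering matters):

  struct lock_sketch {
      unsigned __percpu  *readers;
      atomic_t            write_locked;
  };

  static bool read_trylock_sketch(struct lock_sketch *l)
  {
      this_cpu_inc(*l->readers);
      smp_mb();                        /* reader count vs. writer-held check */

      if (!atomic_read(&l->write_locked))
          return true;

      this_cpu_dec(*l->readers);
      smp_mb();                        /* back off *before* checking waiters */

      /*
       * Our transient reader count may have made a writer bail and go
       * to sleep; now that we've backed off, it can succeed - wake it.
       */
      wake_blocked_writers(l);         /* illustrative */
      return false;
  }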
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
In no_data_io mode, we expect data checksums to be wrong - we don't
want to spam the log with them.
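I.e., something along the lines of (illustrative; assumes the mode is
visible as a flag on the fs):

  if (csum_bad && !c->opts.no_data_io)
      pr_err_ratelimited("bcachefs: data checksum error\n");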
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>