# SPDX-License-Identifier: GPL-2.0
#
# Copyright (c) 2000-2005 Silicon Graphics, Inc.
# All Rights Reserved.
#

kbuild: use $(src) instead of $(srctree)/$(src) for source directory

Kbuild conventionally uses $(obj)/ for generated files and $(src)/ for
checked-in source files. This is merely a convention without any
functional difference; in fact, $(obj) and $(src) are exactly the same,
as defined in scripts/Makefile.build:

    src := $(obj)

When the kernel is built in a separate output directory, $(src) does
not accurately reflect the source directory location. While Kbuild
resolves this discrepancy by specifying VPATH=$(srctree) to search for
source files, that does not cover all cases. For example, when adding a
header search path for local headers, -I$(srctree)/$(src) is typically
passed to the compiler. This introduces an inconsistency between
upstream and downstream Makefiles, because downstream Makefiles use
plain $(src) where upstream ones must write $(srctree)/$(src).

To address this inconsistency, this commit changes the semantics of
$(src) so that it always points to the directory in the source tree.
Going forward, the variables used in Makefiles have the following
meanings:

    $(obj)     - directory in the object tree
    $(src)     - directory in the source tree (changed by this commit)
    $(objtree) - the top of the kernel object tree
    $(srctree) - the top of the kernel source tree

For example, in an out-of-tree build started with "make O=build",
$(obj) for this Makefile is build/fs/xfs while $(src) is fs/xfs.
Consequently, $(srctree)/$(src) in upstream Makefiles needs to be
replaced with $(src), as the ccflags-y lines below demonstrate.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nicolas Schier <nicolas@fjasle.eu>

ccflags-y += -I $(src) # needed for trace events
ccflags-y += -I $(src)/libxfs

obj-$(CONFIG_XFS_FS) += xfs.o

# this one should be compiled first, as the tracing macros can easily blow up
xfs-y += xfs_trace.o

# build the libxfs code first
xfs-y += $(addprefix libxfs/, \
	xfs_ag.o \
	xfs_alloc.o \
	xfs_alloc_btree.o \
	xfs_attr.o \
	xfs_attr_leaf.o \
	xfs_attr_remote.o \
	xfs_bit.o \
	xfs_bmap.o \
	xfs_bmap_btree.o \
	xfs_btree.o \
	xfs_btree_staging.o \
	xfs_da_btree.o \
	xfs_defer.o \
	xfs_dir2.o \
	xfs_dir2_block.o \
	xfs_dir2_data.o \
	xfs_dir2_leaf.o \
	xfs_dir2_node.o \
	xfs_dir2_sf.o \
	xfs_dquot_buf.o \
	xfs_exchmaps.o \
	xfs_ialloc.o \
	xfs_ialloc_btree.o \
	xfs_iext_tree.o \
	xfs_inode_fork.o \
	xfs_inode_buf.o \
	xfs_inode_util.o \
	xfs_log_rlimit.o \

xfs: set up per-AG free space reservations

One unfortunate quirk of the reference count and reverse mapping btrees
is that they can expand in size when blocks are written to *other*
allocation groups if, say, one large extent becomes a lot of tiny
extents. Since we don't want to start throwing errors in the middle of
CoWing, we need to reserve some blocks to handle future expansion. The
transaction block reservation counters aren't sufficient here because
we have to have a reserve of blocks in every AG, not just somewhere in
the filesystem.

Therefore, create two per-AG block reservation pools. One feeds the
AGFL so that rmapbt expansion always succeeds, and the other feeds all
other metadata so that refcountbt expansion never fails.

Use the count of how many reserved blocks we need to have on hand to
create a virtual reservation in the AG. Through selective clamping of
the maximum length of allocation requests and of the length of the
longest free extent, we can make it look like there's less free space
in the AG unless the reservation owner is asking for blocks (see the C
sketch after this list). In other words, play some accounting tricks
in-core to make sure that we always have blocks available. On the plus
side, there's nothing to clean up if we crash, which is in contrast to
the strategy that the rough draft used (actually removing extents from
the freespace btrees).

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>

	xfs_ag_resv.o \
	xfs_parent.o \
	xfs_rmap.o \
	xfs_rmap_btree.o \
	xfs_refcount.o \
	xfs_refcount_btree.o \
	xfs_sb.o \
	xfs_symlink_remote.o \
	xfs_trans_inode.o \
	xfs_trans_resv.o \
	xfs_trans_space.o \
	xfs_types.o \
)
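
To make the "virtual reservation" idea in the per-AG reservation note
above concrete, here is a minimal C sketch. The names (ag_resv_sketch,
ag_visible_free, ag_clamp_alloc_len) are hypothetical, not the real
xfs_ag_resv.c interfaces: ordinary allocations see the free space minus
the reserve, while the reservation owner may dip into it.

    #include <stdint.h>

    /* Hypothetical per-AG accounting for the reservation trick above. */
    struct ag_resv_sketch {
        uint64_t freeblks;     /* blocks actually free in this AG */
        uint64_t resv_blocks;  /* blocks promised to btree expansion */
    };

    /* Free space as seen by an allocation request. */
    static uint64_t ag_visible_free(const struct ag_resv_sketch *ag,
                                    int is_resv_owner)
    {
        if (is_resv_owner)
            return ag->freeblks;          /* owner may use the reserve */
        if (ag->freeblks <= ag->resv_blocks)
            return 0;                     /* everything left is spoken for */
        return ag->freeblks - ag->resv_blocks;
    }

    /* Clamp a request so ordinary callers never consume the reserve. */
    static uint64_t ag_clamp_alloc_len(const struct ag_resv_sketch *ag,
                                       uint64_t wanted, int is_resv_owner)
    {
        uint64_t avail = ag_visible_free(ag, is_resv_owner);

        return wanted < avail ? wanted : avail;
    }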

# xfs_rtbitmap is shared with libxfs
xfs-$(CONFIG_XFS_RT) += $(addprefix libxfs/, \
	xfs_rtbitmap.o \
)

# highlevel code
xfs-y += xfs_aops.o \
	xfs_attr_inactive.o \
	xfs_attr_list.o \
	xfs_bmap_util.o \
	xfs_bio_io.o \
	xfs_buf.o \

xfs: test dir/attr hash when loading module

Back in the 6.2-rc1 days, Eric Whitney reported an fstests regression
in ext4 against generic/454. The cause of this test failure was the
unfortunate combination of setting an xattr name containing UTF-8
encoded emoji, an xattr hash function that accepted a char pointer with
no explicit signedness, signed type extension of those chars to an int,
and the 6.2 build tools maintainers deciding to mandate -funsigned-char
across the board. As a result, the ondisk extended attribute structures
written out by 6.1 and 6.2 were not the same.

This discrepancy, in fact, had been noticeable if a filesystem with
such an xattr were moved between any two architectures that don't
employ the same signedness of a raw "char" declaration. The only reason
anyone noticed is that x86 gcc defaults to signed, and no such
-funsigned-char update was made to e2fsprogs, so e2fsck immediately
started reporting data corruption.

After a day and a half of discussing how to handle this use case
(xattrs with bit 7 set anywhere in the name) without breaking existing
users, Linus merged his own patch and didn't tell the maintainer. None
of the ext4 developers realized this until AUTOSEL announced that the
commit had been backported to stable.

In the end, this problem could have been detected much earlier if there
had been any useful tests of the hash function(s) in use inside ext4 to
make sure that they always produce the same outputs given the same
inputs. The XFS dirent/xattr name hash takes a uint8_t*, so I don't
think it's vulnerable to this problem. However, let's avoid all this
drama by adding our own self test to check that the da hash produces
the same outputs for a static pile of inputs on various platforms (a
sketch of the hazard follows this stanza). This enables us to fix any
breakage that may result in a controlled fashion. The buffer and test
data are identical to the patches submitted to xfsprogs.

Link: https://lore.kernel.org/linux-ext4/Y8bpkm3jA3bDm3eL@debian-BULLSEYE-live-builder-AMD64/
Link: https://lore.kernel.org/linux-xfs/ZBUKCRR7xvIqPrpX@destitution/T/#md38272cc684e2c0d61494435ccbb91f022e8dee4
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Dave Chinner <dchinner@redhat.com>

	xfs_dahash_test.o \
	xfs_dir2_readdir.o \
	xfs_discard.o \
	xfs_error.o \
	xfs_exchrange.o \
	xfs_export.o \
	xfs_extent_busy.o \
	xfs_file.o \
	xfs_filestream.o \
	xfs_fsmap.o \
	xfs_fsops.o \
	xfs_globals.o \
	xfs_handle.o \
	xfs_health.o \
	xfs_icache.o \
	xfs_ioctl.o \
	xfs_iomap.o \
	xfs_iops.o \
	xfs_inode.o \
	xfs_itable.o \
	xfs_iwalk.o \
	xfs_message.o \
	xfs_mount.o \
	xfs_mru_cache.o \
	xfs_pwork.o \
	xfs_reflink.o \
	xfs_stats.o \
	xfs_super.o \
	xfs_symlink.o \
	xfs_sysfs.o \
	xfs_trans.o \
	xfs_xattr.o
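
The signedness hazard described in the "test dir/attr hash" note above
is easy to demonstrate. This is a standalone sketch, not the real
xfs_dahash_test.c: the rotate-and-xor hash is a stand-in for
xfs_da_hashname, and the point is that bytes with bit 7 set hash
differently once "char" sign-extends before mixing. A self-test with
pinned expected outputs catches exactly this kind of drift.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t rol32(uint32_t x, unsigned int n)
    {
        return (x << n) | (x >> (32 - n));
    }

    /* Hash over explicitly unsigned bytes, like the XFS da hash. */
    static uint32_t hash_u8(const uint8_t *name, int namelen)
    {
        uint32_t hash = 0;

        for (int i = 0; i < namelen; i++)
            hash = name[i] ^ rol32(hash, 7);
        return hash;
    }

    /* Same hash over plain "char"; sign-extends on signed-char ABIs. */
    static uint32_t hash_char(const char *name, int namelen)
    {
        uint32_t hash = 0;

        for (int i = 0; i < namelen; i++)
            hash = name[i] ^ rol32(hash, 7);
        return hash;
    }

    int main(void)
    {
        const char *name = "caf\xc3\xa9";   /* UTF-8 "café" */
        int len = (int)strlen(name);

        /* The two differ when char is signed; with -funsigned-char
         * they match, which is exactly the 6.1/6.2 story above. */
        printf("u8:   %08x\n", hash_u8((const uint8_t *)name, len));
        printf("char: %08x\n", hash_char(name, len));
        return 0;
    }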

# low-level transaction/log code
xfs-y += xfs_log.o \
	xfs_log_cil.o \
	xfs_bmap_item.o \
	xfs_buf_item.o \
	xfs_buf_item_recover.o \
	xfs_dquot_item_recover.o \
	xfs_exchmaps_item.o \
	xfs_extfree_item.o \
	xfs_attr_item.o \
	xfs_icreate_item.o \
	xfs_inode_item.o \
	xfs_inode_item_recover.o \
	xfs_iunlink_item.o \
	xfs_refcount_item.o \
	xfs_rmap_item.o \
	xfs_log_recover.o \
	xfs_trans_ail.o \
	xfs_trans_buf.o

# optional features
xfs-$(CONFIG_XFS_QUOTA) += xfs_dquot.o \
	xfs_dquot_item.o \
	xfs_trans_dquot.o \
	xfs_qm_syscalls.o \
	xfs_qm_bhv.o \
	xfs_qm.o \
	xfs_quotaops.o

# xfs_rtbitmap is shared with libxfs
xfs-$(CONFIG_XFS_RT) += xfs_rtalloc.o
xfs-$(CONFIG_XFS_POSIX_ACL) += xfs_acl.o
xfs-$(CONFIG_SYSCTL) += xfs_sysctl.o
xfs-$(CONFIG_COMPAT) += xfs_ioctl32.o
xfs-$(CONFIG_EXPORTFS_BLOCK_OPS) += xfs_pnfs.o

# notify failure
ifeq ($(CONFIG_MEMORY_FAILURE),y)
xfs-$(CONFIG_FS_DAX) += xfs_notify_failure.o
endif

xfs: allow queued AG intents to drain before scrubbing

When a writer thread executes a chain of log intent items, the AG
header buffer locks will cycle during a transaction roll to get from
one intent item to the next in a chain. Although scrub takes all AG
header buffer locks, this isn't sufficient to guard against scrub
checking an AG while that writer thread is in the middle of finishing
a chain, because there's no higher-level locking primitive guarding
allocation groups. When there's a collision, cross-referencing between
data structures (e.g. rmapbt and refcountbt) yields false corruption
events; if repair is running, this results in incorrect repairs, which
is catastrophic.

Fix this by adding to the perag structure a count of active intents,
and by making scrub wait until it holds both AG header buffer locks
and the intent counter reaches zero (see the sketch after the line
below).

One quirk of the drain code is that deferred bmap updates also bump
and drop the intent counter. A fundamental decision made during the
design phase of the reverse mapping feature is that updates to the
rmapbt records are always made by the same code that updates the
primary metadata. In other words, callers of bmapi functions expect
that the bmapi functions will queue deferred rmap updates.

Some parts of the reflink code queue deferred refcount (CUI) and bmap
(BUI) updates in the same head transaction, but the deferred work
manager completely finishes the CUI before the BUI work is started. As
a result, the CUI drops the intent count long before the deferred rmap
(RUI) update even has a chance to bump the intent count. The only way
to keep the intent count elevated between the CUI and RUI is for the
BUI to bump the counter until the RUI has been created.

A second quirk of the intent drain code is that deferred work items
must increment the intent counter as soon as the work item is added to
the transaction. When a BUI completes and queues an RUI, the RUI must
increment the counter before the BUI decrements it. The only way to
accomplish this is to require that the counter be bumped as soon as
the deferred work item is created in memory.

In the next patches we'll improve on this facility, but this patch
provides the basic functionality.

Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Dave Chinner <dchinner@redhat.com>

xfs-$(CONFIG_XFS_DRAIN_INTENTS) += xfs_drain.o
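
A minimal sketch of the drain primitive described above, using
hypothetical names (intent_drain, drain_bump, drain_drop, drain_wait)
rather than the real xfs_drain.c interfaces; the kernel version pairs
an atomic counter with a waitqueue instead of a mutex/condvar.

    #include <pthread.h>

    struct intent_drain {
        unsigned int    busy;   /* live log intent items in this AG */
        pthread_mutex_t lock;
        pthread_cond_t  idle;   /* signalled when busy reaches zero */
    };

    /* Called as soon as a deferred work item is created in memory. */
    static void drain_bump(struct intent_drain *dr)
    {
        pthread_mutex_lock(&dr->lock);
        dr->busy++;
        pthread_mutex_unlock(&dr->lock);
    }

    /* Called when an intent item finishes; wakes scrub on the last one. */
    static void drain_drop(struct intent_drain *dr)
    {
        pthread_mutex_lock(&dr->lock);
        if (--dr->busy == 0)
            pthread_cond_broadcast(&dr->idle);
        pthread_mutex_unlock(&dr->lock);
    }

    /* Scrub: after taking the AG header buffer locks, wait to quiesce. */
    static void drain_wait(struct intent_drain *dr)
    {
        pthread_mutex_lock(&dr->lock);
        while (dr->busy > 0)
            pthread_cond_wait(&dr->idle, &dr->lock);
        pthread_mutex_unlock(&dr->lock);
    }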

xfs: allow scrub to hook metadata updates in other writers

Certain types of filesystem metadata can only be checked by scanning
every file in the entire filesystem. Specific examples of this include
quota counts, file link counts, and reverse mappings of file extents.
Directory and parent pointer reconstruction may also fall into this
category. File scanning is much trickier than scanning AG metadata
because we have to take inode locks in the same order as the rest of
[VX]FS, we can't be holding buffer locks when we do that, and scanning
the whole filesystem takes time.

Earlier versions of the online repair patchset relied heavily on
fsfreeze as a means to quiesce the filesystem so that we could take
locks in the proper order without worrying about concurrent updates
from other writers. Reviewers of those patches opined that freezing
the entire fs to check and repair something was not sufficiently
better than unmounting to run fsck offline. I don't agree with that
100%, but the message was clear: find a way to repair things that
minimizes the quiet period where nobody can write to the filesystem.

Generally, building btree indexes online can be split into two phases:
a collection phase, where we compute the records that will be put into
the new btree; and a construction phase, where we construct the
physical btree blocks and persist them. While it's simple to hold
resource locks for the entirety of the two phases to ensure that the
new index is consistent with the rest of the system, we don't need to
hold resource locks during the collection phase if we have a means to
receive live updates of other work going on elsewhere in the system.

The goal of this patch, then, is to enable online fsck to learn about
metadata updates going on in other threads while it constructs a
shadow copy of the metadata records to verify or correct the real
metadata. To minimize the overhead when online fsck isn't running, we
use srcu notifiers because they prioritize fast access to the notifier
call chain (particularly when the chain is empty) at a cost to
configuring notifiers. Online fsck should be relatively infrequent, so
this is acceptable.

The intended usage model is fairly simple (see the sketch after the
line below). Code that modifies a metadata structure of interest
should declare an xfs_hook_chain structure in some well-defined place
and call xfs_hook_call whenever an update happens. Online fsck code
should define a struct notifier_block and use xfs_hook_add to attach
the block to the chain, along with a function to be called. This
function should synchronize with the fsck scanner to update whatever
in-memory data the scanner is collecting. When finished, xfs_hook_del
removes the notifier from the list and waits for any calls still in
progress to complete.

Originally, I selected srcu notifiers over blocking notifiers to
implement live hooks because they seemed to have fewer impacts on
scalability. The per-call cost of srcu_notifier_call_chain is higher
(19ns) than blocking_notifier_ (4ns) in the single-threaded case, but
blocking notifiers use an rwsem to stabilize the list. Cacheline
bouncing for that rwsem is costly to runtime code when there are a lot
of CPUs running regular filesystem operations. If there are no hooks
installed, this is a total waste of CPU time.

Therefore, I stuck with srcu notifiers, despite trading off
single-threaded performance for multithreaded performance. I also
wasn't thrilled with the very high teardown time for srcu notifiers,
since the caller has to wait for the next rcu grace period. This can
take a long time if there are a lot of CPUs.

Then I discovered the jump label implementation of static keys. Jump
labels use kernel code patching to replace a branch with a nop sled
when the key is disabled. IOWs, they can eliminate the overhead of
_call_chain when there are no hooks enabled. This makes blocking
notifiers competitive again -- scrub runs faster because teardown of
the chain is a lot cheaper, and runtime code only pays the rwsem
locking overhead when scrub is actually running.

With jump labels enabled, calls to empty notifier chains are elided
from the call sites when there are no hooks registered, which means
that the overhead is 0.36ns when fsck is not running. This is perfect
for most of the architectures that XFS is expected to run on (e.g.
x86, powerpc, arm64, s390x, riscv). For architectures that don't
support jump labels (e.g. m68k), the runtime overhead of checking the
static key is an atomic counter read. This isn't great, but it's still
cheaper than taking a shared rwsem.

Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>

xfs-$(CONFIG_XFS_LIVE_HOOKS) += xfs_hooks.o
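
The hook flow above can be sketched in a few lines of userspace C.
This is an illustration of the pattern only: the static key is
approximated by a plain flag, the chain is a singly linked list, and
locking is omitted, whereas the kernel uses jump labels, struct
notifier_block, and an rwsem-protected blocking notifier chain.

    #include <stdbool.h>
    #include <stddef.h>

    struct hook {
        int          (*fn)(void *ctx, const void *update);
        void         *ctx;
        struct hook  *next;
    };

    struct hook_chain {
        struct hook  *head;
        bool         enabled;   /* stand-in for the static key */
    };

    /* Scrub setup: attach a hook and enable the fast-path check. */
    static void hook_add(struct hook_chain *chain, struct hook *h)
    {
        h->next = chain->head;
        chain->head = h;
        chain->enabled = true;
    }

    /* Writer threads: nearly free when no hooks are installed. */
    static void hook_call(struct hook_chain *chain, const void *update)
    {
        struct hook *h;

        if (!chain->enabled)    /* the jump label: a nop when disabled */
            return;
        for (h = chain->head; h != NULL; h = h->next)
            h->fn(h->ctx, update);
    }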
xfs-$(CONFIG_XFS_MEMORY_BUFS) += xfs_buf_mem.o
xfs-$(CONFIG_XFS_BTREE_IN_MEM) += libxfs/xfs_btree_mem.o

# online scrub/repair
ifeq ($(CONFIG_XFS_ONLINE_SCRUB),y)

# Tracepoints like to blow up, so build that before everything else
xfs-y += $(addprefix scrub/, \
	trace.o \
	agb_bitmap.o \
	agheader.o \
	alloc.o \
	attr.o \
	bitmap.o \
	bmap.o \
	btree.o \
	common.o \
	dabtree.o \
	dir.o \
	dirtree.o \
	fscounters.o \
	health.o \
	ialloc.o \
	inode.o \
	iscan.o \
	listxattr.o \
	nlinks.o \
	parent.o \
	readdir.o \
	refcount.o \
	rmap.o \
	scrub.o \
	symlink.o \
	xfarray.o \
	xfblob.o \
	xfile.o \
)
xfs-$(CONFIG_XFS_ONLINE_SCRUB_STATS) += scrub/stats.o
xfs-$(CONFIG_XFS_RT) += $(addprefix scrub/, \
	rtbitmap.o \
	rtsummary.o \
)
xfs-$(CONFIG_XFS_QUOTA) += $(addprefix scrub/, \
	dqiterate.o \
	quota.o \
	quotacheck.o \
)

# online repair
ifeq ($(CONFIG_XFS_ONLINE_REPAIR),y)
xfs-y += $(addprefix scrub/, \
	agheader_repair.o \
	alloc_repair.o \
	attr_repair.o \
	bmap_repair.o \
	cow_repair.o \
	dir_repair.o \
	dirtree_repair.o \
	findparent.o \
	fscounters_repair.o \
	ialloc_repair.o \
	inode_repair.o \
	newbt.o \
	nlinks_repair.o \
	orphanage.o \
	parent_repair.o \

xfs: define an in-memory btree for storing refcount bag info during repairs

Create a new in-memory btree type so that we can store refcount bag
info in a much more memory-efficient and performant format. Recall
that the refcount recordset regenerator computes the new recordset
from browsing the rmap records. Let's say that the rmap records are:

    {agbno: 10, length: 40, ...}
    {agbno: 11, length: 3, ...}
    {agbno: 12, length: 20, ...}
    {agbno: 15, length: 1, ...}

It is convenient to have a data structure that can quickly tell us the
refcount for an arbitrary agbno without wasting memory. An array or a
list could do that pretty easily. Lists suck because of the pointer
overhead. xfarrays are a lot more compact, but we want to minimize
sparse holes in the xfarray to constrain memory usage. Maintaining any
kind of record order isn't needed for correctness, so I created the
"rcbag", which is shorthand for an unordered list of (excerpted)
reverse mappings.

So we add the first rmap to the rcbag, and it looks like:

    0: {agbno: 10, length: 40}

The refcount for agbno 10 is 1. Then we move on to block 11, so we add
the second rmap:

    0: {agbno: 10, length: 40}
    1: {agbno: 11, length: 3}

The refcount for agbno 11 is 2. We move on to block 12, so we add the
third:

    0: {agbno: 10, length: 40}
    1: {agbno: 11, length: 3}
    2: {agbno: 12, length: 20}

The refcount for agbnos 12 and 13 is 3. We move on to block 14, and
remove the second rmap:

    0: {agbno: 10, length: 40}
    1: NULL
    2: {agbno: 12, length: 20}

The refcount for agbno 14 is 2. We move on to block 15, and add the
last rmap. But we don't care where it is and we don't want to expand
the array, so we put it in slot 1:

    0: {agbno: 10, length: 40}
    1: {agbno: 15, length: 1}
    2: {agbno: 12, length: 20}

The refcount for block 15 is 3. Notice how order doesn't matter in
this list? That's why repair uses an unordered list, or "bag". The
data structure is not a set because it does not guarantee uniqueness.
(A toy version of the bag appears in the sketch after this stanza.)

That said, adding and removing specific items is now an O(n) operation
because we have no idea where that item might be in the list. Overall,
the runtime is O(n^2), which is bad.

I realized that I could easily refactor the btree code and reimplement
the refcount bag with an xfbtree. Adding and removing are now
O(log2 n), so the runtime is at least O(n log2 n), which is much
faster. In the end, the rcbag becomes a sorted list, but that's merely
a detail of the implementation. The repair code doesn't care.

(Note: That horrible xfs_db bmap_inflate command can be used to
exercise this sort of rcbag insanity by cranking up refcounts
quickly.)

Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>

	rcbag_btree.o \
	rcbag.o \
	reap.o \
	refcount_repair.o \
	repair.o \
	rmap_repair.o \
	symlink_repair.o \
	tempfile.o \
)
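
Here is a toy version of the original O(n) bag from the rcbag note
above, under assumed names (rmap_excerpt, rcbag_add, rcbag_advance);
the real rcbag.c and rcbag_btree.o code replaces the linear scans with
an in-memory btree so that add and remove become O(log2 n).

    #include <stdint.h>

    #define RCBAG_SLOTS 64

    struct rmap_excerpt {
        uint32_t agbno;
        uint32_t len;            /* len == 0 marks an empty slot */
    };

    struct rcbag {
        struct rmap_excerpt slots[RCBAG_SLOTS];
        unsigned int        nr;  /* live excerpts == current refcount */
    };

    /* Stash an excerpt in any free slot; order does not matter. */
    static int rcbag_add(struct rcbag *bag, uint32_t agbno, uint32_t len)
    {
        for (unsigned int i = 0; i < RCBAG_SLOTS; i++) {
            if (bag->slots[i].len == 0) {
                bag->slots[i].agbno = agbno;
                bag->slots[i].len = len;
                bag->nr++;
                return 0;
            }
        }
        return -1;               /* bag full */
    }

    /* Drop every excerpt that no longer covers agbno: O(n) per step. */
    static void rcbag_advance(struct rcbag *bag, uint32_t agbno)
    {
        for (unsigned int i = 0; i < RCBAG_SLOTS; i++) {
            struct rmap_excerpt *ex = &bag->slots[i];

            if (ex->len != 0 && ex->agbno + ex->len <= agbno) {
                ex->len = 0;
                bag->nr--;
            }
        }
    }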
xfs-$(CONFIG_XFS_RT) += $(addprefix scrub/, \
	rtbitmap_repair.o \
	rtsummary_repair.o \
)
xfs-$(CONFIG_XFS_QUOTA) += $(addprefix scrub/, \
	quota_repair.o \
	quotacheck_repair.o \
)
endif
endif