# SPDX-License-Identifier: GPL-2.0-only
config XFS_FS
tristate "XFS filesystem support"
depends on BLOCK
select EXPORTFS
select LIBCRC32C
select FS_IOMAP
help
XFS is a high-performance journaling filesystem which originated
on the SGI IRIX platform. It is completely multi-threaded; it
supports large files, large filesystems, extended attributes, and
variable block sizes; it is extent based; and it makes extensive
use of Btrees (directories, extents, free space) to aid both
performance and scalability.

Refer to the documentation at <http://oss.sgi.com/projects/xfs/>
for complete details. This implementation is on-disk compatible
with the IRIX version of XFS.

To compile this file system support as a module, choose M here: the
module will be called xfs. Be aware, however, that if the file
system of your root partition is compiled as a module, you'll need
to use an initial ramdisk (initrd) to boot.
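
As a quick illustration (the device and mountpoint here are
hypothetical), a modular build is loaded and used like any other
filesystem module:

    # modprobe xfs
    # mount -t xfs /dev/sda1 /mnt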
config XFS_SUPPORT_V4
bool "Support deprecated V4 (crc=0) format"
depends on XFS_FS
default y
help
The V4 filesystem format lacks certain features that are supported
by the V5 format, such as metadata checksumming, strengthened
metadata verification, and the ability to store timestamps past the
year 2038. Because of this, the V4 format is deprecated. All users
should upgrade by backing up their files, reformatting, and restoring
from the backup.

Administrators and users can detect a V4 filesystem by running
xfs_info against a filesystem mountpoint and checking for a string
beginning with "crc=". If the string "crc=0" is found, the
filesystem is a V4 filesystem. If no such string is found, please
upgrade xfsprogs to the latest version and try again.
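
For example, abbreviated xfs_info output for a V4 filesystem might
look like this (illustrative output only; the exact fields vary):

    # xfs_info /mnt
    meta-data=/dev/sda1    isize=256    agcount=4, agsize=163840 blks
             =             crc=0        finobt=0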

This option will become default N in September 2025. Support for the
V4 format will be removed entirely in September 2030. Distributors
can say N here to withdraw support earlier.

To continue supporting the old V4 format (crc=0), say Y.
To close off an attack surface, say N.

config XFS_SUPPORT_ASCII_CI
bool "Support deprecated case-insensitive ascii (ascii-ci=1) format"
depends on XFS_FS
default y
help
The ASCII case insensitivity filesystem feature only works correctly
on systems that have been coerced into using ISO 8859-1, and it does
not work on extended attributes. The kernel has no visibility into
the locale settings in userspace, so it corrupts UTF-8 names.
Enabling this feature makes XFS vulnerable to mixed case sensitivity
attacks. Because of this, the feature is deprecated. All users
should upgrade by backing up their files, reformatting, and restoring
from the backup.

Administrators and users can detect such a filesystem by running
xfs_info against a filesystem mountpoint and checking for a string
beginning with "ascii-ci=". If the string "ascii-ci=1" is found, the
filesystem is a case-insensitive filesystem. If no such string is
found, please upgrade xfsprogs to the latest version and try again.
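
For example, the "naming" line of xfs_info output might look like
this on a case-insensitive filesystem (illustrative output only):

    # xfs_info /mnt
    naming   =version 2    bsize=4096   ascii-ci=1, ftype=1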

This option will become default N in September 2025. Support for the
feature will be removed entirely in September 2030. Distributors
can say N here to withdraw support earlier.

To continue supporting case-insensitivity (ascii-ci=1), say Y.
To close off an attack surface, say N.

config XFS_QUOTA
bool "XFS Quota support"
depends on XFS_FS
select QUOTACTL
help
If you say Y here, you will be able to set limits for disk usage on
a per user and/or a per group basis under XFS. XFS considers quota
information as filesystem metadata and uses journaling to provide a
higher level guarantee of consistency. The on-disk data format for
quota is also compatible with the IRIX version of XFS, allowing a
filesystem to be migrated between Linux and IRIX without any need
for conversion.

If unsure, say N. More comprehensive documentation can be found in
README.quota in the xfsprogs package. XFS quota can be used either
with or without the generic quota support enabled (CONFIG_QUOTA) -
they are completely independent subsystems.
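
As a sketch (the device, mountpoint, user name, and limits are all
hypothetical), user quota accounting can be enabled at mount time
and a limit set with xfs_quota in expert mode:

    # mount -o uquota /dev/sda1 /mnt
    # xfs_quota -x -c 'limit bsoft=500m bhard=550m tanya' /mnt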
config XFS_POSIX_ACL
bool "XFS POSIX ACL support"
depends on XFS_FS
select FS_POSIX_ACL
help
POSIX Access Control Lists (ACLs) support permissions for users and
groups beyond the owner/group/world scheme.
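
For example (the user and path are hypothetical), an ACL granting an
additional user read/write access can be managed with the standard
tools:

    # setfacl -m u:alice:rw /mnt/shared/report.txt
    # getfacl /mnt/shared/report.txt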
If you don't know what Access Control Lists are, say N.
config XFS_RT
bool "XFS Realtime subvolume support"
depends on XFS_FS
help
If you say Y here you will be able to mount and use XFS filesystems
which contain a realtime subvolume. The realtime subvolume is a
separate area of disk space where only file data is stored. It was
originally designed to provide deterministic data rates suitable
for media streaming applications, but is also useful as a generic
mechanism for ensuring data and metadata/log I/Os are completely
separated. Regular file I/Os are isolated to a separate device
from all other requests, and this can be done quite transparently
to applications via the inherit-realtime directory inode flag.
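
As a sketch (the device names are hypothetical, and the 't' flag
letter for rtinherit is an assumption; check xfs_io(8)), a
filesystem with a realtime device can mark a directory so that new
files land on the realtime subvolume:

    # mkfs.xfs -r rtdev=/dev/sdb /dev/sda1
    # mount -o rtdev=/dev/sdb /dev/sda1 /mnt
    # xfs_io -c 'chattr +t' /mnt/media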
See the xfs man page in section 5 for additional information.
If unsure, say N.
config XFS_DRAIN_INTENTS
bool
select JUMP_LABEL if HAVE_ARCH_JUMP_LABEL
config XFS_LIVE_HOOKS
bool
select JUMP_LABEL if HAVE_ARCH_JUMP_LABEL
config XFS_MEMORY_BUFS
bool

config XFS_BTREE_IN_MEM
bool

config XFS_ONLINE_SCRUB
bool "XFS online metadata check support"
default n
depends on XFS_FS
depends on TMPFS && SHMEM
select XFS_LIVE_HOOKS
select XFS_DRAIN_INTENTS
select XFS_MEMORY_BUFS
help
If you say Y here you will be able to check metadata on a
mounted XFS filesystem. This feature is intended to reduce
filesystem downtime by supplementing xfs_repair. The key
advantage is that problems can be found proactively so that
they can be dealt with in a controlled manner.
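
A typical invocation (the mountpoint is hypothetical) checks the
filesystem without making any changes:

    # xfs_scrub -n /mnt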

This feature is considered EXPERIMENTAL. Use with caution!

See the xfs_scrub man page in section 8 for additional information.

If unsure, say N.

config XFS_ONLINE_SCRUB_STATS
bool "XFS online metadata check usage data collection"
default y
depends on XFS_ONLINE_SCRUB
select DEBUG_FS
help
If you say Y here, the kernel will gather usage data about
the online metadata check subsystem. This includes the number
of invocations, the outcomes, and the results of repairs, if any.
This may slow down scrub slightly due to the use of high precision
timers and the need to merge per-invocation information into the
filesystem counters.

Usage data are collected in /sys/kernel/debug/xfs/scrub.
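
For example (the "stats" file name is an assumption; only the
directory above is certain), the counters might be read with:

    # cat /sys/kernel/debug/xfs/scrub/stats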

If unsure, say N.

config XFS_ONLINE_REPAIR
bool "XFS online metadata repair support"
default n
depends on XFS_FS && XFS_ONLINE_SCRUB
select XFS_BTREE_IN_MEM
help
If you say Y here you will be able to repair metadata on a
mounted XFS filesystem. This feature is intended to reduce
filesystem downtime by fixing minor problems before they cause the
filesystem to go down. However, it requires that the filesystem be
formatted with secondary metadata, such as reverse mappings and inode
parent pointers.
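
For example (the device is hypothetical), reverse mappings are
enabled when the filesystem is formatted:

    # mkfs.xfs -m rmapbt=1 /dev/sda1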

This feature is considered EXPERIMENTAL. Use with caution!

See the xfs_scrub man page in section 8 for additional information.

If unsure, say N.

config XFS_WARN
bool "XFS Verbose Warnings"
depends on XFS_FS && !XFS_DEBUG
help
Say Y here to get an XFS build with many additional warnings.
It converts ASSERT checks to WARN, so it will log any out-of-bounds
conditions that would otherwise be missed. It is much lighter
weight than XFS_DEBUG, does not modify algorithms, and will not
cause the kernel to panic on non-fatal errors.

However, similar to XFS_DEBUG, it is only advisable to use this if
you are debugging a particular problem.
config XFS_DEBUG
bool "XFS Debugging support"
depends on XFS_FS
help
Say Y here to get an XFS build with many debugging features,
including ASSERT checks, function wrappers around macros,
and extra sanity-checking functions in various code paths.

Note that the resulting code will be HUGE and SLOW, and probably
not useful unless you are debugging a particular problem.

Say N unless you are an XFS developer, or you play one on TV.
config XFS_DEBUG_EXPENSIVE
bool "XFS expensive debugging checks"
depends on XFS_FS && XFS_DEBUG
help
Say Y here to get an XFS build with expensive debugging checks
enabled. These checks may affect performance significantly.

Note that the resulting code will be even more HUGE and SLOW than
with XFS_DEBUG alone, and probably not useful unless you are
debugging a particular problem.

Say N unless you are an XFS developer, or you play one on TV.

config XFS_ASSERT_FATAL
bool "XFS fatal asserts"
default y
depends on XFS_FS && XFS_DEBUG
help
Set the default DEBUG mode ASSERT failure behavior.

Say Y here to cause DEBUG mode ASSERT failures to result in fatal
errors that BUG() the kernel by default. If you say N, ASSERT
failures result in warnings.

This behavior can be modified at runtime via sysfs.
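
For example (the knob name is an assumption; check your kernel's
sysfs tree), fatal asserts might be toggled off at runtime with:

    # echo 0 > /sys/fs/xfs/debug/bug_on_assert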