xfs: introduce in-memory inode unlink log items
To facilitate future improvements in inode logging and improving inode cluster buffer locking order consistency, we need a new mechanism for deferring inode cluster buffer modifications during unlinked list modifications.

The unlinked inode list buffer locking is complex. The unlinked list is unordered - we add to the tail, remove from wherever the inode is in the list. Hence we might need to lock two inode buffers here (previous inode in list and the one being removed). While we can order the locking of these buffers correctly within the confines of the unlinked list, there may be other inodes that need buffer locking in the same transaction. e.g. O_TMPFILE being linked into a directory also modifies the directory inode.

Hence we need a mechanism for deferring unlinked inode list updates until a point where we know that all modifications have been made and all that remains is to lock and modify the cluster buffers.

We can do this by first observing that we serialise unlinked list modifications by holding the AGI buffer lock. IOWs, the AGI is going to be locked until the transaction commits any time we modify the unlinked list. Hence it doesn't matter when in the unlink transactions that we actually load, lock and modify the inode cluster buffer.

We add an in-memory unlinked inode log item to defer the inode cluster buffer update to transaction commit time where it can be ordered with all the other inode cluster operations that need to be done. Essentially all we need to do is record the inodes that need to have their unlinked list pointer updated in a new log item that we attach to the transaction.

This log item exists purely for the purpose of delaying the update of the unlinked list pointer until the inode cluster buffer can be locked in the correct order around the other inode cluster buffers. It plays no part in the actual commit, and there's no change to anything that is written to the log. i.e. the inode cluster buffers still have to be fully logged here (not just ordered) as log recovery depends on this to replay mods to the unlinked inode list.

Hence if we add a "precommit" hook into xfs_trans_commit() to run a "precommit" operation on these iunlink log items, we can delay the locking, modification and logging of the inode cluster buffer until after all other modifications have been made. The precommit hook requires us to sort the items that are going to be run so that we can lock precommit items in the correct order as we perform the modifications they describe.

To make this unlinked inode list processing simpler and easier to implement as a log item, we need to change the way we track the unlinked list in memory. Starting from the observation that an inode on the unlinked list is pinned in memory by the VFS, we can use the xfs_inode itself to track the unlinked list. To do this efficiently, we want the unlinked list to be a double linked list.

The problem here is that we need a list per AGI unlinked list, and there are 64 of these per AGI. The approach taken in this patchset is to shadow the AGI unlinked list heads in the perag, and link inodes by agino, hence requiring only 8 extra bytes per inode to track this state.

We can then use the agino pointers for lockless inode cache lookups to retrieve the inode. The aginos in the inode are modified only under the AGI lock, just like the cluster buffer pointers, so we don't need any extra locking here.
The i_next_unlinked field tracks the on-disk value of the unlinked list, and the i_prev_unlinked is a purely in-memory pointer that enables us to efficiently remove inodes from the middle of the list.

This results in moving a lot of the unlink modification work into the precommit operations on the unlink log item. Tracking all the unlinked inodes in the inodes themselves also gets rid of the unlinked list reference hash table that is used to track this back pointer relationship. This greatly simplifies the unlinked list modification code, and removes memory allocations in this hot path to track back pointers. This, overall, slightly reduces the CPU overhead of the unlink path.

The result of this log item means that we move all the actual manipulation of objects to be logged out of the iunlink path and into the iunlink item. This allows for future optimisation of this mechanism without needing changes to the high level unlink path, as well as making the unlink lock ordering predictable and synchronised with other operations that may require inode cluster locking.

Signed-off-by: Dave Chinner <dchinner@redhat.com>

Merge tag 'xfs-iunlink-item-5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs into xfs-5.20-mergeB
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>

* tag 'xfs-iunlink-item-5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs:
  xfs: add in-memory iunlink log item
  xfs: add log item precommit operation
  xfs: combine iunlink inode update functions
  xfs: clean up xfs_iunlink_update_inode()
  xfs: double link the unlinked inode list
  xfs: introduce xfs_iunlink_lookup
  xfs: refactor xlog_recover_process_iunlinks()
  xfs: track the iunlink list pointer in the xfs_inode
  xfs: factor the xfs_iunlink functions
  xfs: flush inode gc workqueue before clearing agi bucket
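As a quick illustration of the list scheme the message describes - a minimal userspace C sketch only, with invented names, not code from this series - each AGI has 64 unlinked buckets; each in-core inode carries a next agino that mirrors the on-disk pointer plus an in-memory-only prev agino, so removing an inode from the middle of a bucket is O(1) with no list walk:

    /*
     * Toy model of the doubly linked unlinked list (not kernel code).
     * NULLAGINO terminates a list; aginos index a toy inode cache.
     */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NULLAGINO               ((uint32_t)-1)
    #define AGI_UNLINKED_BUCKETS    64
    #define MAX_INODES              256             /* toy cache size */

    struct toy_inode {
            uint32_t next_unlinked; /* mirrors on-disk di_next_unlinked */
            uint32_t prev_unlinked; /* in-memory only back pointer */
    };

    static uint32_t bucket_head[AGI_UNLINKED_BUCKETS];      /* AGI shadow */
    static struct toy_inode icache[MAX_INODES];             /* agino -> inode */

    /* Insert at the head of the bucket, like xfs_iunlink(). */
    static void toy_iunlink(uint32_t agino)
    {
            int b = agino % AGI_UNLINKED_BUCKETS;
            uint32_t head = bucket_head[b];

            icache[agino].next_unlinked = head;
            icache[agino].prev_unlinked = NULLAGINO;
            if (head != NULLAGINO)
                    icache[head].prev_unlinked = agino;
            bucket_head[b] = agino;
    }

    /* Remove from anywhere in the bucket in O(1), like xfs_iunlink_remove(). */
    static void toy_iunlink_remove(uint32_t agino)
    {
            int b = agino % AGI_UNLINKED_BUCKETS;
            uint32_t next = icache[agino].next_unlinked;
            uint32_t prev = icache[agino].prev_unlinked;

            if (next != NULLAGINO)
                    icache[next].prev_unlinked = prev;
            if (prev != NULLAGINO)
                    icache[prev].next_unlinked = next;
            else
                    bucket_head[b] = next;          /* was the list head */
            icache[agino].next_unlinked = NULLAGINO;
            icache[agino].prev_unlinked = NULLAGINO;
    }

    int main(void)
    {
            for (int b = 0; b < AGI_UNLINKED_BUCKETS; b++)
                    bucket_head[b] = NULLAGINO;

            /* aginos 1, 65 and 129 all hash to bucket 1 */
            toy_iunlink(1);
            toy_iunlink(65);
            toy_iunlink(129);
            toy_iunlink_remove(65);         /* middle removal, no walk */
            assert(bucket_head[1] == 129 && icache[129].next_unlinked == 1);
            printf("bucket 1 head: %u\n", bucket_head[1]);
            return 0;
    }

Compiled stand-alone this prints the bucket head after a middle removal; the kernel code does the same pointer surgery, only under the AGI buffer lock and with the next pointer also logged to disk.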
commit 4613b17cc4
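For orientation before the diff, here is a small stand-alone C model of the precommit pass the series adds to xfs_trans_commit(): items are sorted on a 64-bit key (the inode number, for iunlink items) so inode cluster buffers are always locked in the same order, then each item's precommit callback runs. This is a sketch only - it substitutes qsort() over an array for the kernel's list_sort() over the transaction item list, and all names are invented; the real implementation is xfs_trans_precommit_sort()/xfs_trans_run_precommits() in fs/xfs/xfs_trans.c below.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct toy_item {
            uint64_t sort_key;              /* e.g. inode number: locking order */
            int (*precommit)(struct toy_item *item);
    };

    /* Comparator: the keys are 64 bit, the return value only 32 bit. */
    static int toy_precommit_cmp(const void *a, const void *b)
    {
            const struct toy_item *ia = a, *ib = b;

            if (ia->sort_key < ib->sort_key)
                    return -1;
            if (ia->sort_key > ib->sort_key)
                    return 1;
            return 0;
    }

    static int toy_update_cluster(struct toy_item *item)
    {
            /* the real item would lock, modify and log its cluster buffer */
            printf("precommit on item with key %llu\n",
                            (unsigned long long)item->sort_key);
            return 0;
    }

    static int toy_run_precommits(struct toy_item *items, size_t n)
    {
            /* sort first so callbacks always lock objects in the same order */
            qsort(items, n, sizeof(*items), toy_precommit_cmp);
            for (size_t i = 0; i < n; i++) {
                    int error = items[i].precommit(&items[i]);

                    if (error)
                            return error;   /* real code shuts the fs down */
            }
            return 0;
    }

    int main(void)
    {
            struct toy_item items[] = {
                    { .sort_key = 131, .precommit = toy_update_cluster },
                    { .sort_key = 7,   .precommit = toy_update_cluster },
                    { .sort_key = 42,  .precommit = toy_update_cluster },
            };

            return toy_run_precommits(items, 3);
    }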
fs/xfs/Makefile

@@ -106,6 +106,7 @@ xfs-y += xfs_log.o \
 	xfs_icreate_item.o \
 	xfs_inode_item.o \
 	xfs_inode_item_recover.o \
+	xfs_iunlink_item.o \
 	xfs_refcount_item.o \
 	xfs_rmap_item.o \
 	xfs_log_recover.o \
fs/xfs/libxfs/xfs_ag.c

@@ -194,7 +194,6 @@ xfs_free_perag(
 		XFS_IS_CORRUPT(pag->pag_mount, atomic_read(&pag->pag_ref) != 0);
 
 		cancel_delayed_work_sync(&pag->pag_blockgc_work);
-		xfs_iunlink_destroy(pag);
 		xfs_buf_hash_destroy(pag);
 
 		call_rcu(&pag->rcu_head, __xfs_free_perag);
@@ -323,10 +322,6 @@ xfs_initialize_perag(
 		if (error)
 			goto out_remove_pag;
 
-		error = xfs_iunlink_init(pag);
-		if (error)
-			goto out_hash_destroy;
-
 		/* first new pag is fully initialized */
 		if (first_initialised == NULLAGNUMBER)
 			first_initialised = index;
@@ -349,8 +344,6 @@ xfs_initialize_perag(
 	mp->m_ag_prealloc_blocks = xfs_prealloc_blocks(mp);
 	return 0;
 
-out_hash_destroy:
-	xfs_buf_hash_destroy(pag);
 out_remove_pag:
 	radix_tree_delete(&mp->m_perag_tree, index);
 out_free_pag:
@@ -362,7 +355,6 @@ out_unwind_new_pags:
 		if (!pag)
 			break;
 		xfs_buf_hash_destroy(pag);
-		xfs_iunlink_destroy(pag);
 		kmem_free(pag);
 	}
 	return error;
fs/xfs/libxfs/xfs_ag.h

@@ -103,12 +103,6 @@ struct xfs_perag {
 	/* background prealloc block trimming */
 	struct delayed_work pag_blockgc_work;
 
-	/*
-	 * Unlinked inode information. This incore information reflects
-	 * data stored in the AGI, so callers must hold the AGI buffer lock
-	 * or have some other means to control concurrency.
-	 */
-	struct rhashtable pagi_unlinked_hash;
 #endif /* __KERNEL__ */
 };
fs/xfs/libxfs/xfs_inode_buf.c

@@ -229,7 +229,8 @@ xfs_inode_from_disk(
 	ip->i_nblocks = be64_to_cpu(from->di_nblocks);
 	ip->i_extsize = be32_to_cpu(from->di_extsize);
 	ip->i_forkoff = from->di_forkoff;
 	ip->i_diflags = be16_to_cpu(from->di_flags);
+	ip->i_next_unlinked = be32_to_cpu(from->di_next_unlinked);
 
 	if (from->di_dmevmask || from->di_dmstate)
 		xfs_iflags_set(ip, XFS_IPRESERVE_DM_FIELDS);
fs/xfs/xfs_icache.c

@@ -111,6 +111,8 @@ xfs_inode_alloc(
 	INIT_WORK(&ip->i_ioend_work, xfs_end_io);
 	INIT_LIST_HEAD(&ip->i_ioend_list);
 	spin_lock_init(&ip->i_ioend_lock);
+	ip->i_next_unlinked = NULLAGINO;
+	ip->i_prev_unlinked = NULLAGINO;
 
 	return ip;
 }
@@ -912,6 +914,7 @@ reclaim:
 	ip->i_checked = 0;
 	spin_unlock(&ip->i_flags_lock);
 
+	ASSERT(!ip->i_itemp || ip->i_itemp->ili_item.li_buf == NULL);
 	xfs_iunlock(ip, XFS_ILOCK_EXCL);
 
 	XFS_STATS_INC(ip->i_mount, xs_ig_reclaims);
fs/xfs/xfs_inode.c

@@ -20,6 +20,7 @@
 #include "xfs_trans.h"
 #include "xfs_buf_item.h"
 #include "xfs_inode_item.h"
+#include "xfs_iunlink_item.h"
 #include "xfs_ialloc.h"
 #include "xfs_bmap.h"
 #include "xfs_bmap_util.h"
@@ -1801,195 +1802,69 @@ out:
  * because we must walk that list to find the inode that points to the inode
  * being removed from the unlinked hash bucket list.
  *
- * What if we modelled the unlinked list as a collection of records capturing
- * "X.next_unlinked = Y" relations? If we indexed those records on Y, we'd
- * have a fast way to look up unlinked list predecessors, which avoids the
- * slow list walk. That's exactly what we do here (in-core) with a per-AG
- * rhashtable.
+ * Hence we keep an in-memory double linked list to link each inode on an
+ * unlinked list. Because there are 64 unlinked lists per AGI, keeping pointer
+ * based lists would require having 64 list heads in the perag, one for each
+ * list. This is expensive in terms of memory (think millions of AGs) and cache
+ * misses on lookups. Instead, use the fact that inodes on the unlinked list
+ * must be referenced at the VFS level to keep them on the list and hence we
+ * have an existence guarantee for inodes on the unlinked list.
  *
- * Because this is a backref cache, we ignore operational failures since the
- * iunlink code can fall back to the slow bucket walk. The only errors that
- * should bubble out are for obviously incorrect situations.
- *
- * All users of the backref cache MUST hold the AGI buffer lock to serialize
- * access or have otherwise provided for concurrency control.
+ * Given we have an existence guarantee, we can use lockless inode cache lookups
+ * to resolve aginos to xfs inodes. This means we only need 8 bytes per inode
+ * for the double linked unlinked list, and we don't need any extra locking to
+ * keep the list safe as all manipulations are done under the AGI buffer lock.
+ * Keeping the list up to date does not require memory allocation, just finding
+ * the XFS inode and updating the next/prev unlinked list aginos.
  */
 
-/* Capture a "X.next_unlinked = Y" relationship. */
-struct xfs_iunlink {
-	struct rhash_head iu_rhash_head;
-	xfs_agino_t iu_agino;		/* X */
-	xfs_agino_t iu_next_unlinked;	/* Y */
-};
-
-/* Unlinked list predecessor lookup hashtable construction */
-static int
-xfs_iunlink_obj_cmpfn(
-	struct rhashtable_compare_arg *arg,
-	const void *obj)
-{
-	const xfs_agino_t *key = arg->key;
-	const struct xfs_iunlink *iu = obj;
-
-	if (iu->iu_next_unlinked != *key)
-		return 1;
-	return 0;
-}
-
-static const struct rhashtable_params xfs_iunlink_hash_params = {
-	.min_size		= XFS_AGI_UNLINKED_BUCKETS,
-	.key_len		= sizeof(xfs_agino_t),
-	.key_offset		= offsetof(struct xfs_iunlink,
-					   iu_next_unlinked),
-	.head_offset		= offsetof(struct xfs_iunlink, iu_rhash_head),
-	.automatic_shrinking	= true,
-	.obj_cmpfn		= xfs_iunlink_obj_cmpfn,
-};
-
 /*
- * Return X, where X.next_unlinked == @agino. Returns NULLAGINO if no such
- * relation is found.
+ * Find an inode on the unlinked list. This does not take references to the
+ * inode as we have existence guarantees by holding the AGI buffer lock and that
+ * only unlinked, referenced inodes can be on the unlinked inode list. If we
+ * don't find the inode in cache, then let the caller handle the situation.
  */
-static xfs_agino_t
-xfs_iunlink_lookup_backref(
+static struct xfs_inode *
+xfs_iunlink_lookup(
 	struct xfs_perag *pag,
 	xfs_agino_t agino)
 {
-	struct xfs_iunlink *iu;
+	struct xfs_inode *ip;
 
-	iu = rhashtable_lookup_fast(&pag->pagi_unlinked_hash, &agino,
-			xfs_iunlink_hash_params);
-	return iu ? iu->iu_agino : NULLAGINO;
-}
+	rcu_read_lock();
+	ip = radix_tree_lookup(&pag->pag_ici_root, agino);
 
-/*
- * Take ownership of an iunlink cache entry and insert it into the hash table.
- * If successful, the entry will be owned by the cache; if not, it is freed.
- * Either way, the caller does not own @iu after this call.
- */
-static int
-xfs_iunlink_insert_backref(
-	struct xfs_perag *pag,
-	struct xfs_iunlink *iu)
-{
-	int error;
-
-	error = rhashtable_insert_fast(&pag->pagi_unlinked_hash,
-			&iu->iu_rhash_head, xfs_iunlink_hash_params);
 	/*
-	 * Fail loudly if there already was an entry because that's a sign of
-	 * corruption of in-memory data. Also fail loudly if we see an error
-	 * code we didn't anticipate from the rhashtable code. Currently we
-	 * only anticipate ENOMEM.
+	 * Inode not in memory or in RCU freeing limbo should not happen.
+	 * Warn about this and let the caller handle the failure.
 	 */
-	if (error) {
-		WARN(error != -ENOMEM, "iunlink cache insert error %d", error);
-		kmem_free(iu);
+	if (WARN_ON_ONCE(!ip || !ip->i_ino)) {
+		rcu_read_unlock();
+		return NULL;
 	}
-	/*
-	 * Absorb any runtime errors that aren't a result of corruption because
-	 * this is a cache and we can always fall back to bucket list scanning.
-	 */
-	if (error != 0 && error != -EEXIST)
-		error = 0;
-	return error;
+	ASSERT(!xfs_iflags_test(ip, XFS_IRECLAIMABLE | XFS_IRECLAIM));
+	rcu_read_unlock();
+	return ip;
 }
 
-/* Remember that @prev_agino.next_unlinked = @this_agino. */
+/* Update the prev pointer of the next agino. */
 static int
-xfs_iunlink_add_backref(
+xfs_iunlink_update_backref(
 	struct xfs_perag *pag,
 	xfs_agino_t prev_agino,
-	xfs_agino_t this_agino)
+	xfs_agino_t next_agino)
 {
-	struct xfs_iunlink *iu;
+	struct xfs_inode *ip;
 
-	if (XFS_TEST_ERROR(false, pag->pag_mount, XFS_ERRTAG_IUNLINK_FALLBACK))
+	/* No update necessary if we are at the end of the list. */
+	if (next_agino == NULLAGINO)
 		return 0;
 
-	iu = kmem_zalloc(sizeof(*iu), KM_NOFS);
-	iu->iu_agino = prev_agino;
-	iu->iu_next_unlinked = this_agino;
-
-	return xfs_iunlink_insert_backref(pag, iu);
-}
-
-/*
- * Replace X.next_unlinked = @agino with X.next_unlinked = @next_unlinked.
- * If @next_unlinked is NULLAGINO, we drop the backref and exit. If there
- * wasn't any such entry then we don't bother.
- */
-static int
-xfs_iunlink_change_backref(
-	struct xfs_perag *pag,
-	xfs_agino_t agino,
-	xfs_agino_t next_unlinked)
-{
-	struct xfs_iunlink *iu;
-	int error;
-
-	/* Look up the old entry; if there wasn't one then exit. */
-	iu = rhashtable_lookup_fast(&pag->pagi_unlinked_hash, &agino,
-			xfs_iunlink_hash_params);
-	if (!iu)
-		return 0;
-
-	/*
-	 * Remove the entry. This shouldn't ever return an error, but if we
-	 * couldn't remove the old entry we don't want to add it again to the
-	 * hash table, and if the entry disappeared on us then someone's
-	 * violated the locking rules and we need to fail loudly. Either way
-	 * we cannot remove the inode because internal state is or would have
-	 * been corrupt.
-	 */
-	error = rhashtable_remove_fast(&pag->pagi_unlinked_hash,
-			&iu->iu_rhash_head, xfs_iunlink_hash_params);
-	if (error)
-		return error;
-
-	/* If there is no new next entry just free our item and return. */
-	if (next_unlinked == NULLAGINO) {
-		kmem_free(iu);
-		return 0;
-	}
-
-	/* Update the entry and re-add it to the hash table. */
-	iu->iu_next_unlinked = next_unlinked;
-	return xfs_iunlink_insert_backref(pag, iu);
-}
-
-/* Set up the in-core predecessor structures. */
-int
-xfs_iunlink_init(
-	struct xfs_perag *pag)
-{
-	return rhashtable_init(&pag->pagi_unlinked_hash,
-			&xfs_iunlink_hash_params);
-}
-
-/* Free the in-core predecessor structures. */
-static void
-xfs_iunlink_free_item(
-	void *ptr,
-	void *arg)
-{
-	struct xfs_iunlink *iu = ptr;
-	bool *freed_anything = arg;
-
-	*freed_anything = true;
-	kmem_free(iu);
-}
-
-void
-xfs_iunlink_destroy(
-	struct xfs_perag *pag)
-{
-	bool freed_anything = false;
-
-	rhashtable_free_and_destroy(&pag->pagi_unlinked_hash,
-			xfs_iunlink_free_item, &freed_anything);
-
-	ASSERT(freed_anything == false || xfs_is_shutdown(pag->pag_mount));
+	ip = xfs_iunlink_lookup(pag, next_agino);
+	if (!ip)
+		return -EFSCORRUPTED;
+	ip->i_prev_unlinked = prev_agino;
+	return 0;
 }
 
 /*
@@ -2031,88 +1906,53 @@ xfs_iunlink_update_bucket(
 	return 0;
 }
 
-/* Set an on-disk inode's next_unlinked pointer. */
-STATIC void
-xfs_iunlink_update_dinode(
+static int
+xfs_iunlink_insert_inode(
 	struct xfs_trans *tp,
 	struct xfs_perag *pag,
-	xfs_agino_t agino,
-	struct xfs_buf *ibp,
-	struct xfs_dinode *dip,
-	struct xfs_imap *imap,
-	xfs_agino_t next_agino)
+	struct xfs_buf *agibp,
+	struct xfs_inode *ip)
 {
 	struct xfs_mount *mp = tp->t_mountp;
-	int offset;
-
-	ASSERT(xfs_verify_agino_or_null(pag, next_agino));
-
-	trace_xfs_iunlink_update_dinode(mp, pag->pag_agno, agino,
-			be32_to_cpu(dip->di_next_unlinked), next_agino);
-
-	dip->di_next_unlinked = cpu_to_be32(next_agino);
-	offset = imap->im_boffset +
-			offsetof(struct xfs_dinode, di_next_unlinked);
-
-	/* need to recalc the inode CRC if appropriate */
-	xfs_dinode_calc_crc(mp, dip);
-	xfs_trans_inode_buf(tp, ibp);
-	xfs_trans_log_buf(tp, ibp, offset, offset + sizeof(xfs_agino_t) - 1);
-}
-
-/* Set an in-core inode's unlinked pointer and return the old value. */
-STATIC int
-xfs_iunlink_update_inode(
-	struct xfs_trans *tp,
-	struct xfs_inode *ip,
-	struct xfs_perag *pag,
-	xfs_agino_t next_agino,
-	xfs_agino_t *old_next_agino)
-{
-	struct xfs_mount *mp = tp->t_mountp;
-	struct xfs_dinode *dip;
-	struct xfs_buf *ibp;
-	xfs_agino_t old_value;
+	struct xfs_agi *agi = agibp->b_addr;
+	xfs_agino_t next_agino;
+	xfs_agino_t agino = XFS_INO_TO_AGINO(mp, ip->i_ino);
+	short bucket_index = agino % XFS_AGI_UNLINKED_BUCKETS;
 	int error;
 
-	ASSERT(xfs_verify_agino_or_null(pag, next_agino));
-
-	error = xfs_imap_to_bp(mp, tp, &ip->i_imap, &ibp);
-	if (error)
-		return error;
-	dip = xfs_buf_offset(ibp, ip->i_imap.im_boffset);
-
-	/* Make sure the old pointer isn't garbage. */
-	old_value = be32_to_cpu(dip->di_next_unlinked);
-	if (!xfs_verify_agino_or_null(pag, old_value)) {
-		xfs_inode_verifier_error(ip, -EFSCORRUPTED, __func__, dip,
-				sizeof(*dip), __this_address);
-		error = -EFSCORRUPTED;
-		goto out;
+	/*
+	 * Get the index into the agi hash table for the list this inode will
+	 * go on. Make sure the pointer isn't garbage and that this inode
+	 * isn't already on the list.
+	 */
+	next_agino = be32_to_cpu(agi->agi_unlinked[bucket_index]);
+	if (next_agino == agino ||
+	    !xfs_verify_agino_or_null(pag, next_agino)) {
+		xfs_buf_mark_corrupt(agibp);
+		return -EFSCORRUPTED;
 	}
 
 	/*
-	 * Since we're updating a linked list, we should never find that the
-	 * current pointer is the same as the new value, unless we're
-	 * terminating the list.
+	 * Update the prev pointer in the next inode to point back to this
+	 * inode.
 	 */
-	*old_next_agino = old_value;
-	if (old_value == next_agino) {
-		if (next_agino != NULLAGINO) {
-			xfs_inode_verifier_error(ip, -EFSCORRUPTED, __func__,
-					dip, sizeof(*dip), __this_address);
-			error = -EFSCORRUPTED;
-		}
-		goto out;
-	}
+	error = xfs_iunlink_update_backref(pag, agino, next_agino);
+	if (error)
+		return error;
 
-	/* Ok, update the new pointer. */
-	xfs_iunlink_update_dinode(tp, pag, XFS_INO_TO_AGINO(mp, ip->i_ino),
-			ibp, dip, &ip->i_imap, next_agino);
-	return 0;
-out:
-	xfs_trans_brelse(tp, ibp);
-	return error;
+	if (next_agino != NULLAGINO) {
+		/*
+		 * There is already another inode in the bucket, so point this
+		 * inode to the current head of the list.
+		 */
+		error = xfs_iunlink_log_inode(tp, ip, pag, next_agino);
+		if (error)
+			return error;
+		ip->i_next_unlinked = next_agino;
+	}
+
+	/* Point the head of the list to point to this inode. */
+	return xfs_iunlink_update_bucket(tp, pag, agibp, bucket_index, agino);
 }
 
 /*
@@ -2129,11 +1969,7 @@ xfs_iunlink(
 {
 	struct xfs_mount *mp = tp->t_mountp;
 	struct xfs_perag *pag;
-	struct xfs_agi *agi;
 	struct xfs_buf *agibp;
-	xfs_agino_t next_agino;
-	xfs_agino_t agino = XFS_INO_TO_AGINO(mp, ip->i_ino);
-	short bucket_index = agino % XFS_AGI_UNLINKED_BUCKETS;
 	int error;
 
 	ASSERT(VFS_I(ip)->i_nlink == 0);
@@ -2146,193 +1982,29 @@ xfs_iunlink(
 	error = xfs_read_agi(pag, tp, &agibp);
 	if (error)
 		goto out;
-	agi = agibp->b_addr;
 
-	/*
-	 * Get the index into the agi hash table for the list this inode will
-	 * go on. Make sure the pointer isn't garbage and that this inode
-	 * isn't already on the list.
-	 */
-	next_agino = be32_to_cpu(agi->agi_unlinked[bucket_index]);
-	if (next_agino == agino ||
-	    !xfs_verify_agino_or_null(pag, next_agino)) {
-		xfs_buf_mark_corrupt(agibp);
-		error = -EFSCORRUPTED;
-		goto out;
-	}
-
-	if (next_agino != NULLAGINO) {
-		xfs_agino_t old_agino;
-
-		/*
-		 * There is already another inode in the bucket, so point this
-		 * inode to the current head of the list.
-		 */
-		error = xfs_iunlink_update_inode(tp, ip, pag, next_agino,
-				&old_agino);
-		if (error)
-			goto out;
-		ASSERT(old_agino == NULLAGINO);
-
-		/*
-		 * agino has been unlinked, add a backref from the next inode
-		 * back to agino.
-		 */
-		error = xfs_iunlink_add_backref(pag, agino, next_agino);
-		if (error)
-			goto out;
-	}
-
-	/* Point the head of the list to point to this inode. */
-	error = xfs_iunlink_update_bucket(tp, pag, agibp, bucket_index, agino);
+	error = xfs_iunlink_insert_inode(tp, pag, agibp, ip);
 out:
 	xfs_perag_put(pag);
 	return error;
 }
 
-/* Return the imap, dinode pointer, and buffer for an inode. */
-STATIC int
-xfs_iunlink_map_ino(
-	struct xfs_trans *tp,
-	xfs_agnumber_t agno,
-	xfs_agino_t agino,
-	struct xfs_imap *imap,
-	struct xfs_dinode **dipp,
-	struct xfs_buf **bpp)
-{
-	struct xfs_mount *mp = tp->t_mountp;
-	int error;
-
-	imap->im_blkno = 0;
-	error = xfs_imap(mp, tp, XFS_AGINO_TO_INO(mp, agno, agino), imap, 0);
-	if (error) {
-		xfs_warn(mp, "%s: xfs_imap returned error %d.",
-				__func__, error);
-		return error;
-	}
-
-	error = xfs_imap_to_bp(mp, tp, imap, bpp);
-	if (error) {
-		xfs_warn(mp, "%s: xfs_imap_to_bp returned error %d.",
-				__func__, error);
-		return error;
-	}
-
-	*dipp = xfs_buf_offset(*bpp, imap->im_boffset);
-	return 0;
-}
-
-/*
- * Walk the unlinked chain from @head_agino until we find the inode that
- * points to @target_agino. Return the inode number, map, dinode pointer,
- * and inode cluster buffer of that inode as @agino, @imap, @dipp, and @bpp.
- *
- * @tp, @pag, @head_agino, and @target_agino are input parameters.
- * @agino, @imap, @dipp, and @bpp are all output parameters.
- *
- * Do not call this function if @target_agino is the head of the list.
- */
-STATIC int
-xfs_iunlink_map_prev(
-	struct xfs_trans *tp,
-	struct xfs_perag *pag,
-	xfs_agino_t head_agino,
-	xfs_agino_t target_agino,
-	xfs_agino_t *agino,
-	struct xfs_imap *imap,
-	struct xfs_dinode **dipp,
-	struct xfs_buf **bpp)
-{
-	struct xfs_mount *mp = tp->t_mountp;
-	xfs_agino_t next_agino;
-	int error;
-
-	ASSERT(head_agino != target_agino);
-	*bpp = NULL;
-
-	/* See if our backref cache can find it faster. */
-	*agino = xfs_iunlink_lookup_backref(pag, target_agino);
-	if (*agino != NULLAGINO) {
-		error = xfs_iunlink_map_ino(tp, pag->pag_agno, *agino, imap,
-				dipp, bpp);
-		if (error)
-			return error;
-
-		if (be32_to_cpu((*dipp)->di_next_unlinked) == target_agino)
-			return 0;
-
-		/*
-		 * If we get here the cache contents were corrupt, so drop the
-		 * buffer and fall back to walking the bucket list.
-		 */
-		xfs_trans_brelse(tp, *bpp);
-		*bpp = NULL;
-		WARN_ON_ONCE(1);
-	}
-
-	trace_xfs_iunlink_map_prev_fallback(mp, pag->pag_agno);
-
-	/* Otherwise, walk the entire bucket until we find it. */
-	next_agino = head_agino;
-	while (next_agino != target_agino) {
-		xfs_agino_t unlinked_agino;
-
-		if (*bpp)
-			xfs_trans_brelse(tp, *bpp);
-
-		*agino = next_agino;
-		error = xfs_iunlink_map_ino(tp, pag->pag_agno, next_agino, imap,
-				dipp, bpp);
-		if (error)
-			return error;
-
-		unlinked_agino = be32_to_cpu((*dipp)->di_next_unlinked);
-		/*
-		 * Make sure this pointer is valid and isn't an obvious
-		 * infinite loop.
-		 */
-		if (!xfs_verify_agino(pag, unlinked_agino) ||
-		    next_agino == unlinked_agino) {
-			XFS_CORRUPTION_ERROR(__func__,
-					XFS_ERRLEVEL_LOW, mp,
-					*dipp, sizeof(**dipp));
-			error = -EFSCORRUPTED;
-			return error;
-		}
-		next_agino = unlinked_agino;
-	}
-
-	return 0;
-}
-
-/*
- * Pull the on-disk inode from the AGI unlinked list.
- */
-STATIC int
-xfs_iunlink_remove(
+static int
+xfs_iunlink_remove_inode(
 	struct xfs_trans *tp,
 	struct xfs_perag *pag,
+	struct xfs_buf *agibp,
 	struct xfs_inode *ip)
 {
 	struct xfs_mount *mp = tp->t_mountp;
-	struct xfs_agi *agi;
-	struct xfs_buf *agibp;
-	struct xfs_buf *last_ibp;
-	struct xfs_dinode *last_dip = NULL;
+	struct xfs_agi *agi = agibp->b_addr;
 	xfs_agino_t agino = XFS_INO_TO_AGINO(mp, ip->i_ino);
-	xfs_agino_t next_agino;
 	xfs_agino_t head_agino;
 	short bucket_index = agino % XFS_AGI_UNLINKED_BUCKETS;
 	int error;
 
 	trace_xfs_iunlink_remove(ip);
 
-	/* Get the agi buffer first. It ensures lock ordering on the list. */
-	error = xfs_read_agi(pag, tp, &agibp);
-	if (error)
-		return error;
-	agi = agibp->b_addr;
-
 	/*
 	 * Get the index into the agi hash table for the list this inode will
 	 * go on. Make sure the head pointer isn't garbage.
@@ -2349,52 +2021,60 @@ xfs_iunlink_remove(
 	 * the old pointer value so that we can update whatever was previous
 	 * to us in the list to point to whatever was next in the list.
 	 */
-	error = xfs_iunlink_update_inode(tp, ip, pag, NULLAGINO, &next_agino);
+	error = xfs_iunlink_log_inode(tp, ip, pag, NULLAGINO);
 	if (error)
 		return error;
 
 	/*
-	 * If there was a backref pointing from the next inode back to this
-	 * one, remove it because we've removed this inode from the list.
-	 *
-	 * Later, if this inode was in the middle of the list we'll update
-	 * this inode's backref to point from the next inode.
+	 * Update the prev pointer in the next inode to point back to previous
+	 * inode in the chain.
 	 */
-	if (next_agino != NULLAGINO) {
-		error = xfs_iunlink_change_backref(pag, next_agino, NULLAGINO);
-		if (error)
-			return error;
-	}
+	error = xfs_iunlink_update_backref(pag, ip->i_prev_unlinked,
+			ip->i_next_unlinked);
+	if (error)
+		return error;
 
 	if (head_agino != agino) {
-		struct xfs_imap imap;
-		xfs_agino_t prev_agino;
+		struct xfs_inode *prev_ip;
 
-		/* We need to search the list for the inode being freed. */
-		error = xfs_iunlink_map_prev(tp, pag, head_agino, agino,
-				&prev_agino, &imap, &last_dip, &last_ibp);
-		if (error)
-			return error;
+		prev_ip = xfs_iunlink_lookup(pag, ip->i_prev_unlinked);
+		if (!prev_ip)
+			return -EFSCORRUPTED;
 
-		/* Point the previous inode on the list to the next inode. */
-		xfs_iunlink_update_dinode(tp, pag, prev_agino, last_ibp,
-				last_dip, &imap, next_agino);
-
-		/*
-		 * Now we deal with the backref for this inode. If this inode
-		 * pointed at a real inode, change the backref that pointed to
-		 * us to point to our old next. If this inode was the end of
-		 * the list, delete the backref that pointed to us. Note that
-		 * change_backref takes care of deleting the backref if
-		 * next_agino is NULLAGINO.
-		 */
-		return xfs_iunlink_change_backref(agibp->b_pag, agino,
-				next_agino);
+		error = xfs_iunlink_log_inode(tp, prev_ip, pag,
+				ip->i_next_unlinked);
+		prev_ip->i_next_unlinked = ip->i_next_unlinked;
+	} else {
+		/* Point the head of the list to the next unlinked inode. */
+		error = xfs_iunlink_update_bucket(tp, pag, agibp, bucket_index,
+				ip->i_next_unlinked);
 	}
 
-	/* Point the head of the list to the next unlinked inode. */
-	return xfs_iunlink_update_bucket(tp, pag, agibp, bucket_index,
-			next_agino);
+	ip->i_next_unlinked = NULLAGINO;
+	ip->i_prev_unlinked = NULLAGINO;
+	return error;
 }
+
+/*
+ * Pull the on-disk inode from the AGI unlinked list.
+ */
+STATIC int
+xfs_iunlink_remove(
+	struct xfs_trans *tp,
+	struct xfs_perag *pag,
+	struct xfs_inode *ip)
+{
+	struct xfs_buf *agibp;
+	int error;
+
+	trace_xfs_iunlink_remove(ip);
+
+	/* Get the agi buffer first. It ensures lock ordering on the list. */
+	error = xfs_read_agi(pag, tp, &agibp);
+	if (error)
+		return error;
+
+	return xfs_iunlink_remove_inode(tp, pag, agibp, ip);
+}
 
 /*
fs/xfs/xfs_inode.h

@@ -68,6 +68,10 @@ typedef struct xfs_inode {
 	uint64_t i_diflags2;		/* XFS_DIFLAG2_... */
 	struct timespec64 i_crtime;	/* time created */
 
+	/* unlinked list pointers */
+	xfs_agino_t i_next_unlinked;
+	xfs_agino_t i_prev_unlinked;
+
 	/* VFS inode */
 	struct inode i_vnode;		/* embedded VFS inode */
 
@@ -505,9 +509,6 @@ extern struct kmem_cache *xfs_inode_cache;
 
 bool xfs_inode_needs_inactive(struct xfs_inode *ip);
 
-int xfs_iunlink_init(struct xfs_perag *pag);
-void xfs_iunlink_destroy(struct xfs_perag *pag);
-
 void xfs_end_io(struct work_struct *work);
 
 int xfs_ilock2_io_mmap(struct xfs_inode *ip1, struct xfs_inode *ip2);
fs/xfs/xfs_iunlink_item.c (new file, 180 lines)

@@ -0,0 +1,180 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2020-2022, Red Hat, Inc.
+ * All Rights Reserved.
+ */
+#include "xfs.h"
+#include "xfs_fs.h"
+#include "xfs_shared.h"
+#include "xfs_format.h"
+#include "xfs_log_format.h"
+#include "xfs_trans_resv.h"
+#include "xfs_mount.h"
+#include "xfs_inode.h"
+#include "xfs_trans.h"
+#include "xfs_trans_priv.h"
+#include "xfs_ag.h"
+#include "xfs_iunlink_item.h"
+#include "xfs_trace.h"
+#include "xfs_error.h"
+
+struct kmem_cache *xfs_iunlink_cache;
+
+static inline struct xfs_iunlink_item *IUL_ITEM(struct xfs_log_item *lip)
+{
+	return container_of(lip, struct xfs_iunlink_item, item);
+}
+
+static void
+xfs_iunlink_item_release(
+	struct xfs_log_item *lip)
+{
+	struct xfs_iunlink_item *iup = IUL_ITEM(lip);
+
+	xfs_perag_put(iup->pag);
+	kmem_cache_free(xfs_iunlink_cache, IUL_ITEM(lip));
+}
+
+
+static uint64_t
+xfs_iunlink_item_sort(
+	struct xfs_log_item *lip)
+{
+	return IUL_ITEM(lip)->ip->i_ino;
+}
+
+/*
+ * Look up the inode cluster buffer and log the on-disk unlinked inode change
+ * we need to make.
+ */
+static int
+xfs_iunlink_log_dinode(
+	struct xfs_trans *tp,
+	struct xfs_iunlink_item *iup)
+{
+	struct xfs_mount *mp = tp->t_mountp;
+	struct xfs_inode *ip = iup->ip;
+	struct xfs_dinode *dip;
+	struct xfs_buf *ibp;
+	int offset;
+	int error;
+
+	error = xfs_imap_to_bp(mp, tp, &ip->i_imap, &ibp);
+	if (error)
+		return error;
+	/*
+	 * Don't log the unlinked field on stale buffers as this may be the
+	 * transaction that frees the inode cluster and relogging the buffer
+	 * here will incorrectly remove the stale state.
+	 */
+	if (ibp->b_flags & XBF_STALE)
+		goto out;
+
+	dip = xfs_buf_offset(ibp, ip->i_imap.im_boffset);
+
+	/* Make sure the old pointer isn't garbage. */
+	if (be32_to_cpu(dip->di_next_unlinked) != iup->old_agino) {
+		xfs_inode_verifier_error(ip, -EFSCORRUPTED, __func__, dip,
+				sizeof(*dip), __this_address);
+		error = -EFSCORRUPTED;
+		goto out;
+	}
+
+	trace_xfs_iunlink_update_dinode(mp, iup->pag->pag_agno,
+			XFS_INO_TO_AGINO(mp, ip->i_ino),
+			be32_to_cpu(dip->di_next_unlinked), iup->next_agino);
+
+	dip->di_next_unlinked = cpu_to_be32(iup->next_agino);
+	offset = ip->i_imap.im_boffset +
+			offsetof(struct xfs_dinode, di_next_unlinked);
+
+	xfs_dinode_calc_crc(mp, dip);
+	xfs_trans_inode_buf(tp, ibp);
+	xfs_trans_log_buf(tp, ibp, offset, offset + sizeof(xfs_agino_t) - 1);
+	return 0;
+out:
+	xfs_trans_brelse(tp, ibp);
+	return error;
+}
+
+/*
+ * On precommit, we grab the inode cluster buffer for the inode number we were
+ * passed, then update the next unlinked field for that inode in the buffer and
+ * log the buffer. This ensures that the inode cluster buffer was logged in the
+ * correct order w.r.t. other inode cluster buffers. We can then remove the
+ * iunlink item from the transaction and release it as it has now served its
+ * purpose.
+ */
+static int
+xfs_iunlink_item_precommit(
+	struct xfs_trans *tp,
+	struct xfs_log_item *lip)
+{
+	struct xfs_iunlink_item *iup = IUL_ITEM(lip);
+	int error;
+
+	error = xfs_iunlink_log_dinode(tp, iup);
+	list_del(&lip->li_trans);
+	xfs_iunlink_item_release(lip);
+	return error;
+}
+
+static const struct xfs_item_ops xfs_iunlink_item_ops = {
+	.iop_release	= xfs_iunlink_item_release,
+	.iop_sort	= xfs_iunlink_item_sort,
+	.iop_precommit	= xfs_iunlink_item_precommit,
+};
+
+
+/*
+ * Initialize the inode log item for a newly allocated (in-core) inode.
+ *
+ * Inode extents can only reside within an AG. Hence specify the starting
+ * block for the inode chunk by offset within an AG as well as the
+ * length of the allocated extent.
+ *
+ * This joins the item to the transaction and marks it dirty so
+ * that we don't need a separate call to do this, nor does the
+ * caller need to know anything about the iunlink item.
+ */
+int
+xfs_iunlink_log_inode(
+	struct xfs_trans *tp,
+	struct xfs_inode *ip,
+	struct xfs_perag *pag,
+	xfs_agino_t next_agino)
+{
+	struct xfs_mount *mp = tp->t_mountp;
+	struct xfs_iunlink_item *iup;
+
+	ASSERT(xfs_verify_agino_or_null(pag, next_agino));
+	ASSERT(xfs_verify_agino_or_null(pag, ip->i_next_unlinked));
+
+	/*
+	 * Since we're updating a linked list, we should never find that the
+	 * current pointer is the same as the new value, unless we're
+	 * terminating the list.
+	 */
+	if (ip->i_next_unlinked == next_agino) {
+		if (next_agino != NULLAGINO)
+			return -EFSCORRUPTED;
+		return 0;
+	}
+
+	iup = kmem_cache_zalloc(xfs_iunlink_cache, GFP_KERNEL | __GFP_NOFAIL);
+	xfs_log_item_init(mp, &iup->item, XFS_LI_IUNLINK,
+			  &xfs_iunlink_item_ops);
+
+	iup->ip = ip;
+	iup->next_agino = next_agino;
+	iup->old_agino = ip->i_next_unlinked;
+
+	atomic_inc(&pag->pag_ref);
+	iup->pag = pag;
+
+	xfs_trans_add_item(tp, &iup->item);
+	tp->t_flags |= XFS_TRANS_DIRTY;
+	set_bit(XFS_LI_DIRTY, &iup->item.li_flags);
+	return 0;
+}
fs/xfs/xfs_iunlink_item.h (new file, 27 lines)

@@ -0,0 +1,27 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2020-2022, Red Hat, Inc.
+ * All Rights Reserved.
+ */
+#ifndef XFS_IUNLINK_ITEM_H
+#define XFS_IUNLINK_ITEM_H	1
+
+struct xfs_trans;
+struct xfs_inode;
+struct xfs_perag;
+
+/* in memory log item structure */
+struct xfs_iunlink_item {
+	struct xfs_log_item item;
+	struct xfs_inode *ip;
+	struct xfs_perag *pag;
+	xfs_agino_t next_agino;
+	xfs_agino_t old_agino;
+};
+
+extern struct kmem_cache *xfs_iunlink_cache;
+
+int xfs_iunlink_log_inode(struct xfs_trans *tp, struct xfs_inode *ip,
+			struct xfs_perag *pag, xfs_agino_t next_agino);
+
+#endif	/* XFS_IUNLINK_ITEM_H */
fs/xfs/xfs_log_recover.c

@@ -2667,55 +2667,57 @@ out_error:
 	return;
 }
 
-STATIC xfs_agino_t
-xlog_recover_process_one_iunlink(
-	struct xfs_perag *pag,
-	xfs_agino_t agino,
-	int bucket)
-{
-	struct xfs_buf *ibp;
-	struct xfs_dinode *dip;
-	struct xfs_inode *ip;
-	xfs_ino_t ino;
-	int error;
-
-	ino = XFS_AGINO_TO_INO(pag->pag_mount, pag->pag_agno, agino);
-	error = xfs_iget(pag->pag_mount, NULL, ino, 0, 0, &ip);
-	if (error)
-		goto fail;
-
-	/*
-	 * Get the on disk inode to find the next inode in the bucket.
-	 */
-	error = xfs_imap_to_bp(pag->pag_mount, NULL, &ip->i_imap, &ibp);
-	if (error)
-		goto fail_iput;
-	dip = xfs_buf_offset(ibp, ip->i_imap.im_boffset);
-
-	ASSERT(VFS_I(ip)->i_nlink == 0);
-	ASSERT(VFS_I(ip)->i_mode != 0);
-	xfs_iflags_clear(ip, XFS_IRECOVERY);
-
-	/* setup for the next pass */
-	agino = be32_to_cpu(dip->di_next_unlinked);
-	xfs_buf_relse(ibp);
-
-	xfs_irele(ip);
-	return agino;
-
-fail_iput:
-	xfs_irele(ip);
-fail:
-	/*
-	 * We can't read in the inode this bucket points to, or this inode
-	 * is messed up. Just ditch this bucket of inodes. We will lose
-	 * some inodes and space, but at least we won't hang.
-	 *
-	 * Call xlog_recover_clear_agi_bucket() to perform a transaction to
-	 * clear the inode pointer in the bucket.
-	 */
-	xlog_recover_clear_agi_bucket(pag, bucket);
-	return NULLAGINO;
+static int
+xlog_recover_iunlink_bucket(
+	struct xfs_perag *pag,
+	struct xfs_agi *agi,
+	int bucket)
+{
+	struct xfs_mount *mp = pag->pag_mount;
+	struct xfs_inode *prev_ip = NULL;
+	struct xfs_inode *ip;
+	xfs_agino_t prev_agino, agino;
+	int error = 0;
+
+	agino = be32_to_cpu(agi->agi_unlinked[bucket]);
+	while (agino != NULLAGINO) {
+		error = xfs_iget(mp, NULL,
+				XFS_AGINO_TO_INO(mp, pag->pag_agno, agino),
+				0, 0, &ip);
+		if (error)
+			break;
+
+		ASSERT(VFS_I(ip)->i_nlink == 0);
+		ASSERT(VFS_I(ip)->i_mode != 0);
+		xfs_iflags_clear(ip, XFS_IRECOVERY);
+		agino = ip->i_next_unlinked;
+
+		if (prev_ip) {
+			ip->i_prev_unlinked = prev_agino;
+			xfs_irele(prev_ip);
+
+			/*
+			 * Ensure the inode is removed from the unlinked list
+			 * before we continue so that it won't race with
+			 * building the in-memory list here. This could be
+			 * serialised with the agibp lock, but that just
+			 * serialises via lockstepping and it's much simpler
+			 * just to flush the inodegc queue and wait for it to
+			 * complete.
+			 */
+			xfs_inodegc_flush(mp);
+		}
+
+		prev_agino = agino;
+		prev_ip = ip;
+	}
+
+	if (prev_ip) {
+		ip->i_prev_unlinked = prev_agino;
+		xfs_irele(prev_ip);
+	}
+	xfs_inodegc_flush(mp);
+	return error;
 }
 
 /*
@@ -2741,59 +2743,70 @@ xlog_recover_process_one_iunlink(
  * scheduled on this CPU to ensure other scheduled work can run without undue
  * latency.
  */
-STATIC void
-xlog_recover_process_iunlinks(
-	struct xlog *log)
+static void
+xlog_recover_iunlink_ag(
+	struct xfs_perag *pag)
 {
-	struct xfs_mount *mp = log->l_mp;
-	struct xfs_perag *pag;
-	xfs_agnumber_t agno;
 	struct xfs_agi *agi;
 	struct xfs_buf *agibp;
-	xfs_agino_t agino;
 	int bucket;
 	int error;
 
-	for_each_perag(mp, agno, pag) {
-		error = xfs_read_agi(pag, NULL, &agibp);
-		if (error) {
-			/*
-			 * AGI is b0rked. Don't process it.
-			 *
-			 * We should probably mark the filesystem as corrupt
-			 * after we've recovered all the ag's we can....
-			 */
-			continue;
-		}
-		/*
-		 * Unlock the buffer so that it can be acquired in the normal
-		 * course of the transaction to truncate and free each inode.
-		 * Because we are not racing with anyone else here for the AGI
-		 * buffer, we don't even need to hold it locked to read the
-		 * initial unlinked bucket entries out of the buffer. We keep
-		 * buffer reference though, so that it stays pinned in memory
-		 * while we need the buffer.
-		 */
-		agi = agibp->b_addr;
-		xfs_buf_unlock(agibp);
-
-		for (bucket = 0; bucket < XFS_AGI_UNLINKED_BUCKETS; bucket++) {
-			agino = be32_to_cpu(agi->agi_unlinked[bucket]);
-			while (agino != NULLAGINO) {
-				agino = xlog_recover_process_one_iunlink(pag,
-						agino, bucket);
-				cond_resched();
-			}
-		}
-		xfs_buf_rele(agibp);
-	}
-
-	/*
-	 * Flush the pending unlinked inodes to ensure that the inactivations
-	 * are fully completed on disk and the incore inodes can be reclaimed
-	 * before we signal that recovery is complete.
-	 */
-	xfs_inodegc_flush(mp);
+	error = xfs_read_agi(pag, NULL, &agibp);
+	if (error) {
+		/*
+		 * AGI is b0rked. Don't process it.
+		 *
+		 * We should probably mark the filesystem as corrupt after we've
+		 * recovered all the ag's we can....
+		 */
+		return;
+	}
+
+	/*
+	 * Unlock the buffer so that it can be acquired in the normal course of
+	 * the transaction to truncate and free each inode. Because we are not
+	 * racing with anyone else here for the AGI buffer, we don't even need
+	 * to hold it locked to read the initial unlinked bucket entries out of
+	 * the buffer. We keep buffer reference though, so that it stays pinned
+	 * in memory while we need the buffer.
+	 */
+	agi = agibp->b_addr;
+	xfs_buf_unlock(agibp);
+
+	for (bucket = 0; bucket < XFS_AGI_UNLINKED_BUCKETS; bucket++) {
+		error = xlog_recover_iunlink_bucket(pag, agi, bucket);
+		if (error) {
+			/*
+			 * Bucket is unrecoverable, so only a repair scan can
+			 * free the remaining unlinked inodes. Just empty the
+			 * bucket and remaining inodes on it unreferenced and
+			 * unfreeable.
+			 */
+			xfs_inodegc_flush(pag->pag_mount);
+			xlog_recover_clear_agi_bucket(pag, bucket);
+		}
+	}
+
+	xfs_buf_rele(agibp);
+}
+
+static void
+xlog_recover_process_iunlinks(
+	struct xlog *log)
+{
+	struct xfs_perag *pag;
+	xfs_agnumber_t agno;
+
+	for_each_perag(log->l_mp, agno, pag)
+		xlog_recover_iunlink_ag(pag);
+
+	/*
+	 * Flush the pending unlinked inodes to ensure that the inactivations
+	 * are fully completed on disk and the incore inodes can be reclaimed
+	 * before we signal that recovery is complete.
+	 */
+	xfs_inodegc_flush(log->l_mp);
 }
 
 STATIC void
fs/xfs/xfs_super.c

@@ -40,6 +40,7 @@
 #include "xfs_defer.h"
 #include "xfs_attr_item.h"
 #include "xfs_xattr.h"
+#include "xfs_iunlink_item.h"
 
 #include <linux/magic.h>
 #include <linux/fs_context.h>
@@ -2096,8 +2097,16 @@ xfs_init_caches(void)
 	if (!xfs_attri_cache)
 		goto out_destroy_attrd_cache;
 
+	xfs_iunlink_cache = kmem_cache_create("xfs_iul_item",
+					     sizeof(struct xfs_iunlink_item),
+					     0, 0, NULL);
+	if (!xfs_iunlink_cache)
+		goto out_destroy_attri_cache;
+
 	return 0;
 
+ out_destroy_attri_cache:
+	kmem_cache_destroy(xfs_attri_cache);
 out_destroy_attrd_cache:
 	kmem_cache_destroy(xfs_attrd_cache);
 out_destroy_bui_cache:
@@ -2148,6 +2157,7 @@ xfs_destroy_caches(void)
 	 * destroy caches.
 	 */
 	rcu_barrier();
+	kmem_cache_destroy(xfs_iunlink_cache);
 	kmem_cache_destroy(xfs_attri_cache);
 	kmem_cache_destroy(xfs_attrd_cache);
 	kmem_cache_destroy(xfs_bui_cache);
fs/xfs/xfs_trace.h

@@ -3672,7 +3672,6 @@ DEFINE_EVENT(xfs_ag_inode_class, name, \
 	TP_ARGS(ip))
 DEFINE_AGINODE_EVENT(xfs_iunlink);
 DEFINE_AGINODE_EVENT(xfs_iunlink_remove);
-DEFINE_AG_EVENT(xfs_iunlink_map_prev_fallback);
 
 DECLARE_EVENT_CLASS(xfs_fs_corrupt_class,
 	TP_PROTO(struct xfs_mount *mp, unsigned int flags),
fs/xfs/xfs_trans.c

@@ -844,6 +844,90 @@ xfs_trans_committed_bulk(
 	spin_unlock(&ailp->ail_lock);
 }
 
+/*
+ * Sort transaction items prior to running precommit operations. This will
+ * attempt to order the items such that they will always be locked in the same
+ * order. Items that have no sort function are moved to the end of the list
+ * and so are locked last.
+ *
+ * This may need refinement as different types of objects add sort functions.
+ *
+ * Function is more complex than it needs to be because we are comparing 64 bit
+ * values and the function only returns 32 bit values.
+ */
+static int
+xfs_trans_precommit_sort(
+	void *unused_arg,
+	const struct list_head *a,
+	const struct list_head *b)
+{
+	struct xfs_log_item *lia = container_of(a,
+					struct xfs_log_item, li_trans);
+	struct xfs_log_item *lib = container_of(b,
+					struct xfs_log_item, li_trans);
+	int64_t diff;
+
+	/*
+	 * If both items are non-sortable, leave them alone. If only one is
+	 * sortable, move the non-sortable item towards the end of the list.
+	 */
+	if (!lia->li_ops->iop_sort && !lib->li_ops->iop_sort)
+		return 0;
+	if (!lia->li_ops->iop_sort)
+		return 1;
+	if (!lib->li_ops->iop_sort)
+		return -1;
+
+	diff = lia->li_ops->iop_sort(lia) - lib->li_ops->iop_sort(lib);
+	if (diff < 0)
+		return -1;
+	if (diff > 0)
+		return 1;
+	return 0;
+}
+
+/*
+ * Run transaction precommit functions.
+ *
+ * If there is an error in any of the callouts, then stop immediately and
+ * trigger a shutdown to abort the transaction. There is no recovery possible
+ * from errors at this point as the transaction is dirty....
+ */
+static int
+xfs_trans_run_precommits(
+	struct xfs_trans *tp)
+{
+	struct xfs_mount *mp = tp->t_mountp;
+	struct xfs_log_item *lip, *n;
+	int error = 0;
+
+	/*
+	 * Sort the item list to avoid ABBA deadlocks with other transactions
+	 * running precommit operations that lock multiple shared items such as
+	 * inode cluster buffers.
+	 */
+	list_sort(NULL, &tp->t_items, xfs_trans_precommit_sort);
+
+	/*
+	 * Precommit operations can remove the log item from the transaction
+	 * if the log item exists purely to delay modifications until they
+	 * can be ordered against other operations. Hence we have to use
+	 * list_for_each_entry_safe() here.
+	 */
+	list_for_each_entry_safe(lip, n, &tp->t_items, li_trans) {
+		if (!test_bit(XFS_LI_DIRTY, &lip->li_flags))
+			continue;
+		if (lip->li_ops->iop_precommit) {
+			error = lip->li_ops->iop_precommit(tp, lip);
+			if (error)
+				break;
+		}
+	}
+	if (error)
+		xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
+	return error;
+}
+
 /*
  * Commit the given transaction to the log.
  *
@@ -869,6 +953,13 @@ __xfs_trans_commit(
 
 	trace_xfs_trans_commit(tp, _RET_IP_);
 
+	error = xfs_trans_run_precommits(tp);
+	if (error) {
+		if (tp->t_flags & XFS_TRANS_PERM_LOG_RES)
+			xfs_defer_cancel(tp);
+		goto out_unreserve;
+	}
+
 	/*
 	 * Finish deferred items on final commit. Only permanent transactions
 	 * should ever have deferred ops.
fs/xfs/xfs_trans.h

@@ -72,10 +72,12 @@ struct xfs_item_ops {
 	void (*iop_format)(struct xfs_log_item *, struct xfs_log_vec *);
 	void (*iop_pin)(struct xfs_log_item *);
 	void (*iop_unpin)(struct xfs_log_item *, int remove);
-	uint (*iop_push)(struct xfs_log_item *, struct list_head *);
+	uint64_t (*iop_sort)(struct xfs_log_item *lip);
+	int (*iop_precommit)(struct xfs_trans *tp, struct xfs_log_item *lip);
 	void (*iop_committing)(struct xfs_log_item *lip, xfs_csn_t seq);
-	void (*iop_release)(struct xfs_log_item *);
 	xfs_lsn_t (*iop_committed)(struct xfs_log_item *, xfs_lsn_t);
+	uint (*iop_push)(struct xfs_log_item *, struct list_head *);
+	void (*iop_release)(struct xfs_log_item *);
 	int (*iop_recover)(struct xfs_log_item *lip,
 			   struct list_head *capture_list);
 	bool (*iop_match)(struct xfs_log_item *item, uint64_t id);