// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (C) Sistina Software, Inc.  1997-2003 All rights reserved.
 * Copyright (C) 2004-2008 Red Hat, Inc.  All rights reserved.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/buffer_head.h>
#include <linux/delay.h>
#include <linux/sort.h>
#include <linux/hash.h>
#include <linux/jhash.h>
#include <linux/kallsyms.h>
#include <linux/gfs2_ondisk.h>
#include <linux/list.h>
#include <linux/wait.h>
#include <linux/module.h>
#include <linux/uaccess.h>
#include <linux/seq_file.h>
#include <linux/debugfs.h>
#include <linux/kthread.h>
#include <linux/freezer.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>
#include <linux/rcupdate.h>
#include <linux/rculist_bl.h>
#include <linux/bit_spinlock.h>
#include <linux/percpu.h>
#include <linux/list_sort.h>
#include <linux/lockref.h>
#include <linux/rhashtable.h>
#include <linux/pid_namespace.h>
#include <linux/fdtable.h>
#include <linux/file.h>

#include "gfs2.h"
#include "incore.h"
#include "glock.h"
#include "glops.h"
#include "inode.h"
#include "lops.h"
#include "meta_io.h"
#include "quota.h"
#include "super.h"
#include "util.h"
#include "bmap.h"
#define CREATE_TRACE_POINTS
#include "trace_gfs2.h"

struct gfs2_glock_iter {
	struct gfs2_sbd *sdp;		/* incore superblock */
	struct rhashtable_iter hti;	/* rhashtable iterator */
	struct gfs2_glock *gl;		/* current glock struct */
	loff_t last_pos;		/* last position */
};

typedef void (*glock_examiner) (struct gfs2_glock * gl);

static void do_xmote(struct gfs2_glock *gl, struct gfs2_holder *gh, unsigned int target);
static void __gfs2_glock_dq(struct gfs2_holder *gh);
static void handle_callback(struct gfs2_glock *gl, unsigned int state,
			    unsigned long delay, bool remote);

static struct dentry *gfs2_root;
static struct workqueue_struct *glock_workqueue;
struct workqueue_struct *gfs2_delete_workqueue;
static LIST_HEAD(lru_list);
static atomic_t lru_count = ATOMIC_INIT(0);
static DEFINE_SPINLOCK(lru_lock);

#define GFS2_GL_HASH_SHIFT      15
#define GFS2_GL_HASH_SIZE       BIT(GFS2_GL_HASH_SHIFT)

static const struct rhashtable_params ht_parms = {
	.nelem_hint = GFS2_GL_HASH_SIZE * 3 / 4,
	.key_len = offsetofend(struct lm_lockname, ln_type),
	.key_offset = offsetof(struct gfs2_glock, gl_name),
	.head_offset = offsetof(struct gfs2_glock, gl_node),
};

static struct rhashtable gl_hash_table;

#define GLOCK_WAIT_TABLE_BITS 12
#define GLOCK_WAIT_TABLE_SIZE (1 << GLOCK_WAIT_TABLE_BITS)
static wait_queue_head_t glock_wait_table[GLOCK_WAIT_TABLE_SIZE] __cacheline_aligned;

struct wait_glock_queue {
	struct lm_lockname *name;
	wait_queue_entry_t wait;
};

static int glock_wake_function(wait_queue_entry_t *wait, unsigned int mode,
			       int sync, void *key)
{
	struct wait_glock_queue *wait_glock =
		container_of(wait, struct wait_glock_queue, wait);
	struct lm_lockname *wait_name = wait_glock->name;
	struct lm_lockname *wake_name = key;

	if (wake_name->ln_sbd != wait_name->ln_sbd ||
	    wake_name->ln_number != wait_name->ln_number ||
	    wake_name->ln_type != wait_name->ln_type)
		return 0;
	return autoremove_wake_function(wait, mode, sync, key);
}

static wait_queue_head_t *glock_waitqueue(struct lm_lockname *name)
{
	u32 hash = jhash2((u32 *)name, ht_parms.key_len / 4, 0);

	return glock_wait_table + hash_32(hash, GLOCK_WAIT_TABLE_BITS);
}
/**
 * wake_up_glock  -  Wake up waiters on a glock
 * @gl: the glock
 */
static void wake_up_glock(struct gfs2_glock *gl)
{
	wait_queue_head_t *wq = glock_waitqueue(&gl->gl_name);

	if (waitqueue_active(wq))
		__wake_up(wq, TASK_NORMAL, 1, &gl->gl_name);
}

static void gfs2_glock_dealloc(struct rcu_head *rcu)
{
	struct gfs2_glock *gl = container_of(rcu, struct gfs2_glock, gl_rcu);

	kfree(gl->gl_lksb.sb_lvbptr);
	if (gl->gl_ops->go_flags & GLOF_ASPACE) {
		struct gfs2_glock_aspace *gla =
			container_of(gl, struct gfs2_glock_aspace, glock);
		kmem_cache_free(gfs2_glock_aspace_cachep, gla);
	} else
		kmem_cache_free(gfs2_glock_cachep, gl);
}

/**
 * glock_blocked_by_withdraw - determine if we can still use a glock
 * @gl: the glock
 *
 * We need to allow some glocks to be enqueued, dequeued, promoted, and demoted
 * when we're withdrawn. For example, to maintain metadata integrity, we should
 * disallow the use of inode and rgrp glocks when withdrawn. Other glocks, like
 * iopen or the transaction glocks may be safely used because none of their
 * metadata goes through the journal. So in general, we should disallow all
 * glocks that are journaled, and allow all the others. One exception is:
 * we need to allow our active journal to be promoted and demoted so others
 * may recover it and we can reacquire it when they're done.
 */
static bool glock_blocked_by_withdraw(struct gfs2_glock *gl)
{
	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;

	if (likely(!gfs2_withdrawn(sdp)))
		return false;
	if (gl->gl_ops->go_flags & GLOF_NONDISK)
		return false;
	if (!sdp->sd_jdesc ||
	    gl->gl_name.ln_number == sdp->sd_jdesc->jd_no_addr)
		return false;
	return true;
}

void gfs2_glock_free(struct gfs2_glock *gl)
{
	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;

	gfs2_glock_assert_withdraw(gl, atomic_read(&gl->gl_revokes) == 0);
	rhashtable_remove_fast(&gl_hash_table, &gl->gl_node, ht_parms);
	smp_mb();
	wake_up_glock(gl);
	call_rcu(&gl->gl_rcu, gfs2_glock_dealloc);
	if (atomic_dec_and_test(&sdp->sd_glock_disposal))
		wake_up(&sdp->sd_glock_wait);
}
/**
 * gfs2_glock_hold() - increment reference count on glock
 * @gl: The glock to hold
 *
 */

void gfs2_glock_hold(struct gfs2_glock *gl)
{
	GLOCK_BUG_ON(gl, __lockref_is_dead(&gl->gl_lockref));
	lockref_get(&gl->gl_lockref);
}

/**
 * demote_ok - Check to see if it's ok to unlock a glock
 * @gl: the glock
 *
 * Returns: 1 if it's ok
 */

static int demote_ok(const struct gfs2_glock *gl)
{
	const struct gfs2_glock_operations *glops = gl->gl_ops;

	if (gl->gl_state == LM_ST_UNLOCKED)
		return 0;
	/*
	 * Note that demote_ok is used for the lru process of disposing of
	 * glocks. For this purpose, we don't care if the glock's holders
	 * have the HIF_MAY_DEMOTE flag set or not. If someone is using
	 * them, don't demote.
	 */
	if (!list_empty(&gl->gl_holders))
		return 0;
	if (glops->go_demote_ok)
		return glops->go_demote_ok(gl);
	return 1;
}

void gfs2_glock_add_to_lru(struct gfs2_glock *gl)
{
	if (!(gl->gl_ops->go_flags & GLOF_LRU))
		return;

	spin_lock(&lru_lock);

	list_move_tail(&gl->gl_lru, &lru_list);

	if (!test_bit(GLF_LRU, &gl->gl_flags)) {
		set_bit(GLF_LRU, &gl->gl_flags);
		atomic_inc(&lru_count);
	}

	spin_unlock(&lru_lock);
}

static void gfs2_glock_remove_from_lru(struct gfs2_glock *gl)
{
	if (!(gl->gl_ops->go_flags & GLOF_LRU))
		return;

	spin_lock(&lru_lock);
	if (test_bit(GLF_LRU, &gl->gl_flags)) {
		list_del_init(&gl->gl_lru);
		atomic_dec(&lru_count);
		clear_bit(GLF_LRU, &gl->gl_flags);
	}
	spin_unlock(&lru_lock);
}
/*
 * Enqueue the glock on the work queue.  Passes one glock reference on to the
 * work queue.
 */
static void __gfs2_glock_queue_work(struct gfs2_glock *gl, unsigned long delay) {
	if (!queue_delayed_work(glock_workqueue, &gl->gl_work, delay)) {
		/*
		 * We are holding the lockref spinlock, and the work was still
		 * queued above.  The queued work (glock_work_func) takes that
		 * spinlock before dropping its glock reference(s), so it
		 * cannot have dropped them in the meantime.
		 */
		GLOCK_BUG_ON(gl, gl->gl_lockref.count < 2);
		gl->gl_lockref.count--;
	}
}

static void gfs2_glock_queue_work(struct gfs2_glock *gl, unsigned long delay) {
	spin_lock(&gl->gl_lockref.lock);
	__gfs2_glock_queue_work(gl, delay);
	spin_unlock(&gl->gl_lockref.lock);
}

static void __gfs2_glock_put(struct gfs2_glock *gl)
{
	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
	struct address_space *mapping = gfs2_glock2aspace(gl);

	lockref_mark_dead(&gl->gl_lockref);

	gfs2_glock_remove_from_lru(gl);
	spin_unlock(&gl->gl_lockref.lock);
	GLOCK_BUG_ON(gl, !list_empty(&gl->gl_holders));
	if (mapping) {
		truncate_inode_pages_final(mapping);
		if (!gfs2_withdrawn(sdp))
			GLOCK_BUG_ON(gl, !mapping_empty(mapping));
	}
	trace_gfs2_glock_put(gl);
	sdp->sd_lockstruct.ls_ops->lm_put_lock(gl);
}

/*
 * Cause the glock to be put in work queue context.
 */
void gfs2_glock_queue_put(struct gfs2_glock *gl)
{
	gfs2_glock_queue_work(gl, 0);
}

/**
 * gfs2_glock_put() - Decrement reference count on glock
 * @gl: The glock to put
 *
 */

void gfs2_glock_put(struct gfs2_glock *gl)
{
	if (lockref_put_or_lock(&gl->gl_lockref))
		return;

	__gfs2_glock_put(gl);
}
/**
 * may_grant - check if it's ok to grant a new lock
 * @gl: The glock
 * @current_gh: One of the current holders of @gl
 * @gh: The lock request which we wish to grant
 *
 * With our current compatibility rules, if a glock has one or more active
 * holders (HIF_HOLDER flag set), any of those holders can be passed in as
 * @current_gh; they are all the same as far as compatibility with the new @gh
 * goes.
 *
 * Returns true if it's ok to grant the lock.
 */

static inline bool may_grant(struct gfs2_glock *gl,
			     struct gfs2_holder *current_gh,
			     struct gfs2_holder *gh)
{
	if (current_gh) {
		GLOCK_BUG_ON(gl, !test_bit(HIF_HOLDER, &current_gh->gh_iflags));

		switch(current_gh->gh_state) {
		case LM_ST_EXCLUSIVE:
			/*
			 * Here we make a special exception to grant holders
			 * who agree to share the EX lock with other holders
			 * who also have the bit set. If the original holder
			 * has the LM_FLAG_NODE_SCOPE bit set, we grant more
			 * holders with the bit set.
			 */
			return gh->gh_state == LM_ST_EXCLUSIVE &&
			       (current_gh->gh_flags & LM_FLAG_NODE_SCOPE) &&
			       (gh->gh_flags & LM_FLAG_NODE_SCOPE);

		case LM_ST_SHARED:
		case LM_ST_DEFERRED:
			return gh->gh_state == current_gh->gh_state;

		default:
			return false;
		}
	}

	if (gl->gl_state == gh->gh_state)
		return true;
	if (gh->gh_flags & GL_EXACT)
		return false;
	if (gl->gl_state == LM_ST_EXCLUSIVE) {
		return gh->gh_state == LM_ST_SHARED ||
		       gh->gh_state == LM_ST_DEFERRED;
	}
	if (gh->gh_flags & LM_FLAG_ANY)
		return gl->gl_state != LM_ST_UNLOCKED;
	return false;
}
static void gfs2_holder_wake(struct gfs2_holder *gh)
{
	clear_bit(HIF_WAIT, &gh->gh_iflags);
	smp_mb__after_atomic();
	wake_up_bit(&gh->gh_iflags, HIF_WAIT);
	if (gh->gh_flags & GL_ASYNC) {
		struct gfs2_sbd *sdp = gh->gh_gl->gl_name.ln_sbd;

		wake_up(&sdp->sd_async_glock_wait);
	}
}
/**
 * do_error - Something unexpected has happened during a lock request
 * @gl: The glock
 * @ret: The status from the DLM
 */

static void do_error(struct gfs2_glock *gl, const int ret)
{
	struct gfs2_holder *gh, *tmp;

	list_for_each_entry_safe(gh, tmp, &gl->gl_holders, gh_list) {
		if (!test_bit(HIF_WAIT, &gh->gh_iflags))
			continue;
		if (ret & LM_OUT_ERROR)
			gh->gh_error = -EIO;
		else if (gh->gh_flags & (LM_FLAG_TRY | LM_FLAG_TRY_1CB))
			gh->gh_error = GLR_TRYFAILED;
		else
			continue;
		list_del_init(&gh->gh_list);
		trace_gfs2_glock_queue(gh, 0);
		gfs2_holder_wake(gh);
	}
}
/**
 * demote_incompat_holders - demote incompatible demoteable holders
 * @gl: the glock we want to promote
 * @current_gh: the newly promoted holder
 *
 * We're passing the newly promoted holder in @current_gh, but actually, any of
 * the strong holders would do.
 */
static void demote_incompat_holders(struct gfs2_glock *gl,
				    struct gfs2_holder *current_gh)
{
	struct gfs2_holder *gh, *tmp;

	/*
	 * Demote incompatible holders before we make ourselves eligible.
	 * (This holder may or may not allow auto-demoting, but we don't want
	 * to demote the new holder before it's even granted.)
	 */
	list_for_each_entry_safe(gh, tmp, &gl->gl_holders, gh_list) {
		/*
		 * Since holders are at the front of the list, we stop when we
		 * find the first non-holder.
		 */
		if (!test_bit(HIF_HOLDER, &gh->gh_iflags))
			return;
		if (gh == current_gh)
			continue;
		if (test_bit(HIF_MAY_DEMOTE, &gh->gh_iflags) &&
		    !may_grant(gl, current_gh, gh)) {
			/*
			 * We should not recurse into do_promote because
			 * __gfs2_glock_dq only calls handle_callback,
			 * gfs2_glock_add_to_lru and __gfs2_glock_queue_work.
			 */
			__gfs2_glock_dq(gh);
		}
	}
}
/**
 * find_first_holder - find the first "holder" gh
 * @gl: the glock
 */

static inline struct gfs2_holder *find_first_holder(const struct gfs2_glock *gl)
{
	struct gfs2_holder *gh;

	if (!list_empty(&gl->gl_holders)) {
		gh = list_first_entry(&gl->gl_holders, struct gfs2_holder,
				      gh_list);
		if (test_bit(HIF_HOLDER, &gh->gh_iflags))
			return gh;
	}
	return NULL;
}

/**
 * find_first_strong_holder - find the first non-demoteable holder
 * @gl: the glock
 *
 * Find the first holder that doesn't have the HIF_MAY_DEMOTE flag set.
 */
static inline struct gfs2_holder *
find_first_strong_holder(struct gfs2_glock *gl)
{
	struct gfs2_holder *gh;

	list_for_each_entry(gh, &gl->gl_holders, gh_list) {
		if (!test_bit(HIF_HOLDER, &gh->gh_iflags))
			return NULL;
		if (!test_bit(HIF_MAY_DEMOTE, &gh->gh_iflags))
			return gh;
	}
	return NULL;
}
/*
 * gfs2_instantiate - Call the glops instantiate function
 * @gh: The glock holder
 *
 * Returns: 0 if instantiate was successful, or error.
 */
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-10-06 14:29:18 +00:00
|
|
|
int gfs2_instantiate(struct gfs2_holder *gh)
|
2021-10-06 13:59:52 +00:00
|
|
|
{
|
|
|
|
struct gfs2_glock *gl = gh->gh_gl;
|
|
|
|
const struct gfs2_glock_operations *glops = gl->gl_ops;
|
gfs2: fix GL_SKIP node_scope problems
Before this patch, when a glock was locked, the very first holder on the
queue would unlock the lockref and call the go_instantiate glops function
(if one existed), unless GL_SKIP was specified. When we introduced the new
node-scope concept, we allowed multiple holders to lock glocks in EX mode
and share the lock.
But node-scope introduced a new problem: if the first holder has GL_SKIP
and the next one does NOT, since it is not the first holder on the queue,
the go_instantiate op was not called. Eventually the GL_SKIP holder may
call the instantiate sub-function (e.g. gfs2_rgrp_bh_get) but there was
still a window of time in which another non-GL_SKIP holder assumes the
instantiate function had been called by the first holder. In the case of
rgrp glocks, this led to a NULL pointer dereference on the buffer_heads.
This patch tries to fix the problem by introducing two new glock flags:
GLF_INSTANTIATE_NEEDED, which keeps track of when the instantiate function
needs to be called to "fill in" or "read in" the object before it is
referenced.
GLF_INSTANTIATE_IN_PROG which is used to determine when a process is
in the process of reading in the object. Whenever a function needs to
reference the object, it checks the GLF_INSTANTIATE_NEEDED flag, and if
set, it sets GLF_INSTANTIATE_IN_PROG and calls the glops "go_instantiate"
function.
As before, the gl_lockref spin_lock is unlocked during the IO operation,
which may take a relatively long amount of time to complete. While
unlocked, if another process determines go_instantiate is still needed,
it sees GLF_INSTANTIATE_IN_PROG is set, and waits for the go_instantiate
glop operation to be completed. Once GLF_INSTANTIATE_IN_PROG is cleared,
it needs to check GLF_INSTANTIATE_NEEDED again because the other process's
go_instantiate operation may not have been successful.
Functions that previously called the instantiate sub-functions now call
directly into gfs2_instantiate so the new bits are managed properly.
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-10-06 14:29:18 +00:00
|
|
|
int ret;
|
|
|
|
|
|
|
|
again:
|
|
|
|
if (!test_bit(GLF_INSTANTIATE_NEEDED, &gl->gl_flags))
|
2022-06-10 09:42:33 +00:00
|
|
|
goto done;
|
2021-10-06 13:59:52 +00:00
|
|
|
|
gfs2: fix GL_SKIP node_scope problems
Before this patch, when a glock was locked, the very first holder on the
queue would unlock the lockref and call the go_instantiate glops function
(if one existed), unless GL_SKIP was specified. When we introduced the new
node-scope concept, we allowed multiple holders to lock glocks in EX mode
and share the lock.
But node-scope introduced a new problem: if the first holder has GL_SKIP
and the next one does NOT, since it is not the first holder on the queue,
the go_instantiate op was not called. Eventually the GL_SKIP holder may
call the instantiate sub-function (e.g. gfs2_rgrp_bh_get) but there was
still a window of time in which another non-GL_SKIP holder assumes the
instantiate function had been called by the first holder. In the case of
rgrp glocks, this led to a NULL pointer dereference on the buffer_heads.
This patch tries to fix the problem by introducing two new glock flags:
GLF_INSTANTIATE_NEEDED, which keeps track of when the instantiate function
needs to be called to "fill in" or "read in" the object before it is
referenced.
GLF_INSTANTIATE_IN_PROG which is used to determine when a process is
in the process of reading in the object. Whenever a function needs to
reference the object, it checks the GLF_INSTANTIATE_NEEDED flag, and if
set, it sets GLF_INSTANTIATE_IN_PROG and calls the glops "go_instantiate"
function.
As before, the gl_lockref spin_lock is unlocked during the IO operation,
which may take a relatively long amount of time to complete. While
unlocked, if another process determines go_instantiate is still needed,
it sees GLF_INSTANTIATE_IN_PROG is set, and waits for the go_instantiate
glop operation to be completed. Once GLF_INSTANTIATE_IN_PROG is cleared,
it needs to check GLF_INSTANTIATE_NEEDED again because the other process's
go_instantiate operation may not have been successful.
Functions that previously called the instantiate sub-functions now call
directly into gfs2_instantiate so the new bits are managed properly.
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-10-06 14:29:18 +00:00
|
|
|
/*
|
|
|
|
* Since we unlock the lockref lock, we set a flag to indicate
|
|
|
|
* instantiate is in progress.
|
|
|
|
*/
|
2021-11-03 15:15:51 +00:00
|
|
|
if (test_and_set_bit(GLF_INSTANTIATE_IN_PROG, &gl->gl_flags)) {
|
gfs2: fix GL_SKIP node_scope problems
Before this patch, when a glock was locked, the very first holder on the
queue would unlock the lockref and call the go_instantiate glops function
(if one existed), unless GL_SKIP was specified. When we introduced the new
node-scope concept, we allowed multiple holders to lock glocks in EX mode
and share the lock.
But node-scope introduced a new problem: if the first holder has GL_SKIP
and the next one does NOT, since it is not the first holder on the queue,
the go_instantiate op was not called. Eventually the GL_SKIP holder may
call the instantiate sub-function (e.g. gfs2_rgrp_bh_get) but there was
still a window of time in which another non-GL_SKIP holder assumes the
instantiate function had been called by the first holder. In the case of
rgrp glocks, this led to a NULL pointer dereference on the buffer_heads.
This patch tries to fix the problem by introducing two new glock flags:
GLF_INSTANTIATE_NEEDED, which keeps track of when the instantiate function
needs to be called to "fill in" or "read in" the object before it is
referenced.
GLF_INSTANTIATE_IN_PROG which is used to determine when a process is
in the process of reading in the object. Whenever a function needs to
reference the object, it checks the GLF_INSTANTIATE_NEEDED flag, and if
set, it sets GLF_INSTANTIATE_IN_PROG and calls the glops "go_instantiate"
function.
As before, the gl_lockref spin_lock is unlocked during the IO operation,
which may take a relatively long amount of time to complete. While
unlocked, if another process determines go_instantiate is still needed,
it sees GLF_INSTANTIATE_IN_PROG is set, and waits for the go_instantiate
glop operation to be completed. Once GLF_INSTANTIATE_IN_PROG is cleared,
it needs to check GLF_INSTANTIATE_NEEDED again because the other process's
go_instantiate operation may not have been successful.
Functions that previously called the instantiate sub-functions now call
directly into gfs2_instantiate so the new bits are managed properly.
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-10-06 14:29:18 +00:00
|
|
|
wait_on_bit(&gl->gl_flags, GLF_INSTANTIATE_IN_PROG,
|
|
|
|
TASK_UNINTERRUPTIBLE);
|
|
|
|
/*
|
|
|
|
* Here we just waited for a different instantiate to finish.
|
|
|
|
* But that may not have been successful, as when a process
|
|
|
|
* locks an inode glock _before_ it has an actual inode to
|
|
|
|
* instantiate into. So we check again. This process might
|
|
|
|
* have an inode to instantiate, so might be successful.
|
|
|
|
*/
|
|
|
|
goto again;
|
|
|
|
}
|
|
|
|
|
2022-06-10 10:06:06 +00:00
|
|
|
ret = glops->go_instantiate(gl);
|
gfs2: fix GL_SKIP node_scope problems
Before this patch, when a glock was locked, the very first holder on the
queue would unlock the lockref and call the go_instantiate glops function
(if one existed), unless GL_SKIP was specified. When we introduced the new
node-scope concept, we allowed multiple holders to lock glocks in EX mode
and share the lock.
But node-scope introduced a new problem: if the first holder has GL_SKIP
and the next one does NOT, since it is not the first holder on the queue,
the go_instantiate op was not called. Eventually the GL_SKIP holder may
call the instantiate sub-function (e.g. gfs2_rgrp_bh_get) but there was
still a window of time in which another non-GL_SKIP holder assumes the
instantiate function had been called by the first holder. In the case of
rgrp glocks, this led to a NULL pointer dereference on the buffer_heads.
This patch tries to fix the problem by introducing two new glock flags:
GLF_INSTANTIATE_NEEDED, which keeps track of when the instantiate function
needs to be called to "fill in" or "read in" the object before it is
referenced.
GLF_INSTANTIATE_IN_PROG which is used to determine when a process is
in the process of reading in the object. Whenever a function needs to
reference the object, it checks the GLF_INSTANTIATE_NEEDED flag, and if
set, it sets GLF_INSTANTIATE_IN_PROG and calls the glops "go_instantiate"
function.
As before, the gl_lockref spin_lock is unlocked during the IO operation,
which may take a relatively long amount of time to complete. While
unlocked, if another process determines go_instantiate is still needed,
it sees GLF_INSTANTIATE_IN_PROG is set, and waits for the go_instantiate
glop operation to be completed. Once GLF_INSTANTIATE_IN_PROG is cleared,
it needs to check GLF_INSTANTIATE_NEEDED again because the other process's
go_instantiate operation may not have been successful.
Functions that previously called the instantiate sub-functions now call
directly into gfs2_instantiate so the new bits are managed properly.
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-10-06 14:29:18 +00:00
|
|
|
if (!ret)
|
|
|
|
clear_bit(GLF_INSTANTIATE_NEEDED, &gl->gl_flags);
|
2021-11-03 15:15:51 +00:00
|
|
|
clear_and_wake_up_bit(GLF_INSTANTIATE_IN_PROG, &gl->gl_flags);
|
2022-06-10 09:42:33 +00:00
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
|
|
|
|
done:
|
|
|
|
if (glops->go_held)
|
|
|
|
return glops->go_held(gh);
|
|
|
|
return 0;
|
2021-10-06 13:59:52 +00:00
|
|
|
}
|
|
|
|
|
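The two-flag protocol in gfs2_instantiate (NEEDED says the object still has to be read in, IN_PROG serializes the readers, and waiters must re-check NEEDED because the winner may have failed) can be sketched in userspace. This is a minimal, single-threaded sketch using C11 atomics in place of the kernel's bitops and wait_on_bit; the flag names, `go_instantiate` stub, and `instantiate` wrapper are all hypothetical stand-ins:

```c
#include <assert.h>
#include <stdatomic.h>

enum { FLAG_NEEDED = 1 << 0, FLAG_IN_PROG = 1 << 1 };

static atomic_uint flags = FLAG_NEEDED;
static int instantiate_calls;

/* stand-in for glops->go_instantiate(); always succeeds here */
static int go_instantiate(void)
{
	instantiate_calls++;
	return 0;
}

static int instantiate(void)
{
	for (;;) {
		if (!(atomic_load(&flags) & FLAG_NEEDED))
			return 0;	/* already instantiated */
		/* try to claim the right to instantiate */
		unsigned old = atomic_fetch_or(&flags, FLAG_IN_PROG);
		if (old & FLAG_IN_PROG) {
			/* someone else is instantiating; they may fail,
			 * so loop and re-check FLAG_NEEDED afterwards */
			continue;
		}
		int ret = go_instantiate();
		if (!ret)
			atomic_fetch_and(&flags, ~(unsigned)FLAG_NEEDED);
		/* clearing IN_PROG is where the kernel wakes waiters */
		atomic_fetch_and(&flags, ~(unsigned)FLAG_IN_PROG);
		if (ret)
			return ret;
	}
}
```

A second call short-circuits on the NEEDED check, mirroring how only the first holder pays the instantiate cost.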
/**
 * do_promote - promote as many requests as possible on the current queue
 * @gl: The glock
 *
 * Returns: 1 if there is a blocked holder at the head of the list
 */

static int do_promote(struct gfs2_glock *gl)
{
	struct gfs2_holder *gh, *current_gh;
	bool incompat_holders_demoted = false;

	current_gh = find_first_strong_holder(gl);
	list_for_each_entry(gh, &gl->gl_holders, gh_list) {
		if (test_bit(HIF_HOLDER, &gh->gh_iflags))
			continue;
		if (!may_grant(gl, current_gh, gh)) {
			/*
			 * If we get here, it means we may not grant this
			 * holder for some reason. If this holder is at the
			 * head of the list, it means we have a blocked holder
			 * at the head, so return 1.
			 */
			if (list_is_first(&gh->gh_list, &gl->gl_holders))
				return 1;
			do_error(gl, 0);
			break;
		}
		set_bit(HIF_HOLDER, &gh->gh_iflags);
		trace_gfs2_promote(gh);
		gfs2_holder_wake(gh);
		if (!incompat_holders_demoted) {
			current_gh = gh;
			demote_incompat_holders(gl, current_gh);
			incompat_holders_demoted = true;
		}
	}
	return 0;
}

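do_promote grants each waiter only if its requested mode is compatible with what the glock currently holds. A rough userspace sketch of that compatibility idea follows; it is deliberately simplified and is not the kernel's actual may_grant(), which also weighs queue order, holder flags such as LM_FLAG_TRY, and node-scope EX sharing. The enum values and `modes_compatible` name are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical lock states loosely mirroring LM_ST_* */
enum lm_state { ST_UNLOCKED, ST_SHARED, ST_DEFERRED, ST_EXCLUSIVE };

/* A request can be granted when nothing is held, or when it asks for
 * the mode already held and that mode is shareable (SH or DF). */
static bool modes_compatible(enum lm_state held, enum lm_state wanted)
{
	if (held == ST_UNLOCKED)
		return true;
	return held == wanted && held != ST_EXCLUSIVE;
}
```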
/**
 * find_first_waiter - find the first gh that's waiting for the glock
 * @gl: the glock
 */

static inline struct gfs2_holder *find_first_waiter(const struct gfs2_glock *gl)
{
	struct gfs2_holder *gh;

	list_for_each_entry(gh, &gl->gl_holders, gh_list) {
		if (!test_bit(HIF_HOLDER, &gh->gh_iflags))
			return gh;
	}
	return NULL;
}

/**
 * state_change - record that the glock is now in a different state
 * @gl: the glock
 * @new_state: the new state
 */

static void state_change(struct gfs2_glock *gl, unsigned int new_state)
{
	int held1, held2;

	held1 = (gl->gl_state != LM_ST_UNLOCKED);
	held2 = (new_state != LM_ST_UNLOCKED);

	if (held1 != held2) {
		GLOCK_BUG_ON(gl, __lockref_is_dead(&gl->gl_lockref));
		if (held2)
			gl->gl_lockref.count++;
		else
			gl->gl_lockref.count--;
	}
	if (new_state != gl->gl_target)
		/* shorten our minimum hold time */
		gl->gl_hold_time = max(gl->gl_hold_time - GL_GLOCK_HOLD_DECR,
				       GL_GLOCK_MIN_HOLD);
	gl->gl_state = new_state;
	gl->gl_tchange = jiffies;
}
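When the resulting state differs from the target, state_change shortens the glock's minimum hold time, clamped at a floor. A minimal sketch of that decay, with illustrative constants (the real GL_GLOCK_HOLD_DECR and GL_GLOCK_MIN_HOLD are jiffies-based values defined in the gfs2 headers) and a hypothetical `decay_hold_time` helper:

```c
#include <assert.h>

/* illustrative values only; the real constants are in jiffies */
#define GL_GLOCK_MIN_HOLD	10
#define GL_GLOCK_HOLD_DECR	15

static long decay_hold_time(long hold_time)
{
	long t = hold_time - GL_GLOCK_HOLD_DECR;
	/* never drop below the minimum hold time */
	return t > GL_GLOCK_MIN_HOLD ? t : GL_GLOCK_MIN_HOLD;
}
```

The floor exists so that, even under heavy contention, a node keeps the glock long enough to do useful work before giving it up.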

static void gfs2_set_demote(struct gfs2_glock *gl)
{
	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;

	set_bit(GLF_DEMOTE, &gl->gl_flags);
	smp_mb();
	wake_up(&sdp->sd_async_glock_wait);
}

static void gfs2_demote_wake(struct gfs2_glock *gl)
{
	gl->gl_demote_state = LM_ST_EXCLUSIVE;
	clear_bit(GLF_DEMOTE, &gl->gl_flags);
	smp_mb__after_atomic();
	wake_up_bit(&gl->gl_flags, GLF_DEMOTE);
}

/**
 * finish_xmote - The DLM has replied to one of our lock requests
 * @gl: The glock
 * @ret: The status from the DLM
 *
 */

static void finish_xmote(struct gfs2_glock *gl, unsigned int ret)
{
	const struct gfs2_glock_operations *glops = gl->gl_ops;
	struct gfs2_holder *gh;
	unsigned state = ret & LM_OUT_ST_MASK;

	spin_lock(&gl->gl_lockref.lock);
	trace_gfs2_glock_state_change(gl, state);
	state_change(gl, state);
	gh = find_first_waiter(gl);

	/* Demote to UN request arrived during demote to SH or DF */
	if (test_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags) &&
	    state != LM_ST_UNLOCKED && gl->gl_demote_state == LM_ST_UNLOCKED)
		gl->gl_target = LM_ST_UNLOCKED;

	/* Check for state != intended state */
	if (unlikely(state != gl->gl_target)) {
		if (gh && (ret & LM_OUT_CANCELED))
			gfs2_holder_wake(gh);
		if (gh && !test_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags)) {
			/* move to back of queue and try next entry */
			if (ret & LM_OUT_CANCELED) {
				if ((gh->gh_flags & LM_FLAG_PRIORITY) == 0)
					list_move_tail(&gh->gh_list, &gl->gl_holders);
				gh = find_first_waiter(gl);
				gl->gl_target = gh->gh_state;
				goto retry;
			}
			/* Some error or failed "try lock" - report it */
			if ((ret & LM_OUT_ERROR) ||
			    (gh->gh_flags & (LM_FLAG_TRY | LM_FLAG_TRY_1CB))) {
				gl->gl_target = gl->gl_state;
				do_error(gl, ret);
				goto out;
			}
		}
		switch(state) {
		/* Unlocked due to conversion deadlock, try again */
		case LM_ST_UNLOCKED:
retry:
			do_xmote(gl, gh, gl->gl_target);
			break;
		/* Conversion fails, unlock and try again */
		case LM_ST_SHARED:
		case LM_ST_DEFERRED:
			do_xmote(gl, gh, LM_ST_UNLOCKED);
			break;
		default: /* Everything else */
			fs_err(gl->gl_name.ln_sbd, "wanted %u got %u\n",
			       gl->gl_target, state);
			GLOCK_BUG_ON(gl, 1);
		}
		spin_unlock(&gl->gl_lockref.lock);
		return;
	}

	/* Fast path - we got what we asked for */
	if (test_and_clear_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags))
		gfs2_demote_wake(gl);
	if (state != LM_ST_UNLOCKED) {
		if (glops->go_xmote_bh) {
			int rv;

			spin_unlock(&gl->gl_lockref.lock);
			rv = glops->go_xmote_bh(gl);
			spin_lock(&gl->gl_lockref.lock);
			if (rv) {
				do_error(gl, rv);
				goto out;
			}
		}
		do_promote(gl);
	}
out:
	clear_bit(GLF_LOCK, &gl->gl_flags);
	spin_unlock(&gl->gl_lockref.lock);
}

static bool is_system_glock(struct gfs2_glock *gl)
{
	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
	struct gfs2_inode *m_ip = GFS2_I(sdp->sd_statfs_inode);

	if (gl == m_ip->i_gl)
		return true;
	return false;
}

/**
 * do_xmote - Calls the DLM to change the state of a lock
 * @gl: The lock state
 * @gh: The holder (only for promotes)
 * @target: The target lock state
 *
 */

static void do_xmote(struct gfs2_glock *gl, struct gfs2_holder *gh,
		     unsigned int target)
__releases(&gl->gl_lockref.lock)
__acquires(&gl->gl_lockref.lock)
{
	const struct gfs2_glock_operations *glops = gl->gl_ops;
	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
	unsigned int lck_flags = (unsigned int)(gh ? gh->gh_flags : 0);
	int ret;

	if (target != LM_ST_UNLOCKED && glock_blocked_by_withdraw(gl) &&
	    gh && !(gh->gh_flags & LM_FLAG_NOEXP))
		goto skip_inval;

	lck_flags &= (LM_FLAG_TRY | LM_FLAG_TRY_1CB | LM_FLAG_NOEXP |
		      LM_FLAG_PRIORITY);
	GLOCK_BUG_ON(gl, gl->gl_state == target);
	GLOCK_BUG_ON(gl, gl->gl_state == gl->gl_target);
	if ((target == LM_ST_UNLOCKED || target == LM_ST_DEFERRED) &&
	    glops->go_inval) {
		/*
		 * If another process is already doing the invalidate, let that
		 * finish first. The glock state machine will get back to this
		 * holder again later.
		 */
		if (test_and_set_bit(GLF_INVALIDATE_IN_PROGRESS,
				     &gl->gl_flags))
			return;
		do_error(gl, 0); /* Fail queued try locks */
	}
	gl->gl_req = target;
	set_bit(GLF_BLOCKING, &gl->gl_flags);
	if ((gl->gl_req == LM_ST_UNLOCKED) ||
	    (gl->gl_state == LM_ST_EXCLUSIVE) ||
	    (lck_flags & (LM_FLAG_TRY|LM_FLAG_TRY_1CB)))
		clear_bit(GLF_BLOCKING, &gl->gl_flags);
	spin_unlock(&gl->gl_lockref.lock);
	if (glops->go_sync) {
		ret = glops->go_sync(gl);
		/* If we had a problem syncing (due to io errors or whatever),
		 * we should not invalidate the metadata or tell dlm to
		 * release the glock to other nodes.
		 */
		if (ret) {
			if (cmpxchg(&sdp->sd_log_error, 0, ret)) {
				fs_err(sdp, "Error %d syncing glock\n", ret);
				gfs2_dump_glock(NULL, gl, true);
			}
			goto skip_inval;
		}
	}
	if (test_bit(GLF_INVALIDATE_IN_PROGRESS, &gl->gl_flags)) {
		/*
		 * The call to go_sync should have cleared out the ail list.
		 * If there are still items, we have a problem. We ought to
		 * withdraw, but we can't because the withdraw code also uses
		 * glocks. Warn about the error, dump the glock, then fall
		 * through and wait for logd to do the withdraw for us.
		 */
		if ((atomic_read(&gl->gl_ail_count) != 0) &&
		    (!cmpxchg(&sdp->sd_log_error, 0, -EIO))) {
			gfs2_glock_assert_warn(gl,
					       !atomic_read(&gl->gl_ail_count));
			gfs2_dump_glock(NULL, gl, true);
		}
		glops->go_inval(gl, target == LM_ST_DEFERRED ? 0 : DIO_METADATA);
		clear_bit(GLF_INVALIDATE_IN_PROGRESS, &gl->gl_flags);
	}

skip_inval:
	gfs2_glock_hold(gl);
	/*
	 * Check for an error encountered since we called go_sync and go_inval.
	 * If so, we can't withdraw from the glock code because the withdraw
	 * code itself uses glocks (see function signal_our_withdraw) to
	 * change the mount to read-only. Most importantly, we must not call
	 * dlm to unlock the glock until the journal is in a known good state
	 * (after journal replay) otherwise other nodes may use the object
	 * (rgrp or dinode) and then later, journal replay will corrupt the
	 * file system. The best we can do here is wait for the logd daemon
	 * to see sd_log_error and withdraw, and in the meantime, requeue the
	 * work for later.
	 *
	 * We make a special exception for some system glocks, such as the
	 * system statfs inode glock, which needs to be granted before the
	 * gfs2_quotad daemon can exit, and that exit needs to finish before
	 * we can unmount the withdrawn file system.
	 *
	 * However, if we're just unlocking the lock (say, for unmount, when
	 * gfs2_gl_hash_clear calls clear_glock) and recovery is complete
	 * then it's okay to tell dlm to unlock it.
	 */
	if (unlikely(sdp->sd_log_error && !gfs2_withdrawn(sdp)))
		gfs2_withdraw_delayed(sdp);
	if (glock_blocked_by_withdraw(gl) &&
	    (target != LM_ST_UNLOCKED ||
	     test_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags))) {
		if (!is_system_glock(gl)) {
			handle_callback(gl, LM_ST_UNLOCKED, 0, false); /* sets demote */
			/*
			 * Ordinarily, we would call dlm and its callback would call
			 * finish_xmote, which would call state_change() to the new state.
			 * Since we withdrew, we won't call dlm, so call state_change
			 * manually, but to the UNLOCKED state we desire.
			 */
			state_change(gl, LM_ST_UNLOCKED);
			/*
			 * We skip telling dlm to do the locking, so we won't get a
			 * reply that would otherwise clear GLF_LOCK. So we clear it here.
			 */
			clear_bit(GLF_LOCK, &gl->gl_flags);
			clear_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags);
			gfs2_glock_queue_work(gl, GL_GLOCK_DFT_HOLD);
			goto out;
		} else {
			clear_bit(GLF_INVALIDATE_IN_PROGRESS, &gl->gl_flags);
		}
	}

	if (sdp->sd_lockstruct.ls_ops->lm_lock) {
		/* lock_dlm */
		ret = sdp->sd_lockstruct.ls_ops->lm_lock(gl, target, lck_flags);
		if (ret == -EINVAL && gl->gl_target == LM_ST_UNLOCKED &&
		    target == LM_ST_UNLOCKED &&
		    test_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags)) {
			finish_xmote(gl, target);
			gfs2_glock_queue_work(gl, 0);
		} else if (ret) {
			fs_err(sdp, "lm_lock ret %d\n", ret);
			GLOCK_BUG_ON(gl, !gfs2_withdrawn(sdp));
		}
	} else { /* lock_nolock */
		finish_xmote(gl, target);
		gfs2_glock_queue_work(gl, 0);
	}
out:
	spin_lock(&gl->gl_lockref.lock);
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* run_queue - do all outstanding tasks related to a glock
|
|
|
|
* @gl: The glock in question
|
|
|
|
* @nonblock: True if we must not block in run_queue
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
|
|
|
|
static void run_queue(struct gfs2_glock *gl, const int nonblock)
|
2015-10-29 15:58:09 +00:00
|
|
|
__releases(&gl->gl_lockref.lock)
|
|
|
|
__acquires(&gl->gl_lockref.lock)
|
2008-05-21 16:03:22 +00:00
|
|
|
{
|
|
|
|
struct gfs2_holder *gh = NULL;
|
|
|
|
|
|
|
|
if (test_and_set_bit(GLF_LOCK, &gl->gl_flags))
|
|
|
|
return;
|
|
|
|
|
|
|
|
GLOCK_BUG_ON(gl, test_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags));
|
|
|
|
|
|
|
|
if (test_bit(GLF_DEMOTE, &gl->gl_flags) &&
|
|
|
|
gl->gl_demote_state != gl->gl_state) {
|
|
|
|
if (find_first_holder(gl))
|
2009-02-05 10:12:38 +00:00
|
|
|
goto out_unlock;
|
2008-05-21 16:03:22 +00:00
|
|
|
if (nonblock)
|
|
|
|
goto out_sched;
|
|
|
|
set_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags);
|
2008-07-07 09:02:36 +00:00
|
|
|
GLOCK_BUG_ON(gl, gl->gl_demote_state == LM_ST_EXCLUSIVE);
|
2008-05-21 16:03:22 +00:00
|
|
|
gl->gl_target = gl->gl_demote_state;
|
|
|
|
} else {
|
|
|
|
if (test_bit(GLF_DEMOTE, &gl->gl_flags))
|
|
|
|
gfs2_demote_wake(gl);
|
2022-06-02 20:15:02 +00:00
|
|
|
if (do_promote(gl) == 0)
|
2009-02-05 10:12:38 +00:00
|
|
|
goto out_unlock;
|
2008-05-21 16:03:22 +00:00
|
|
|
gh = find_first_waiter(gl);
|
|
|
|
gl->gl_target = gh->gh_state;
|
|
|
|
if (!(gh->gh_flags & (LM_FLAG_TRY | LM_FLAG_TRY_1CB)))
|
|
|
|
do_error(gl, 0); /* Fail queued try locks */
|
|
|
|
}
|
|
|
|
do_xmote(gl, gh, gl->gl_target);
|
|
|
|
return;
|
|
|
|
|
|
|
|
out_sched:
|
2009-09-22 09:56:16 +00:00
|
|
|
clear_bit(GLF_LOCK, &gl->gl_flags);
|
2014-03-17 17:06:10 +00:00
|
|
|
smp_mb__after_atomic();
|
2013-10-15 14:18:08 +00:00
|
|
|
gl->gl_lockref.count++;
|
2017-06-30 13:10:01 +00:00
|
|
|
__gfs2_glock_queue_work(gl, 0);
|
2009-09-22 09:56:16 +00:00
|
|
|
return;
|
|
|
|
|
2009-02-05 10:12:38 +00:00
|
|
|
out_unlock:
|
2008-05-21 16:03:22 +00:00
|
|
|
clear_bit(GLF_LOCK, &gl->gl_flags);
|
2014-03-17 17:06:10 +00:00
|
|
|
smp_mb__after_atomic();
|
2009-09-22 09:56:16 +00:00
|
|
|
return;
|
2008-05-21 16:03:22 +00:00
|
|
|
}
|
|
|
|
|
2020-01-13 20:21:49 +00:00
|
|
|
void gfs2_inode_remember_delete(struct gfs2_glock *gl, u64 generation)
|
|
|
|
{
|
|
|
|
struct gfs2_inode_lvb *ri = (void *)gl->gl_lksb.sb_lvbptr;
|
|
|
|
|
|
|
|
if (ri->ri_magic == 0)
|
|
|
|
ri->ri_magic = cpu_to_be32(GFS2_MAGIC);
|
|
|
|
if (ri->ri_magic == cpu_to_be32(GFS2_MAGIC))
|
|
|
|
ri->ri_generation_deleted = cpu_to_be64(generation);
|
|
|
|
}
|
|
|
|
|
|
|
|
bool gfs2_inode_already_deleted(struct gfs2_glock *gl, u64 generation)
|
|
|
|
{
|
|
|
|
struct gfs2_inode_lvb *ri = (void *)gl->gl_lksb.sb_lvbptr;
|
|
|
|
|
|
|
|
if (ri->ri_magic != cpu_to_be32(GFS2_MAGIC))
|
|
|
|
return false;
|
|
|
|
return generation <= be64_to_cpu(ri->ri_generation_deleted);
|
|
|
|
}
|
|
|
|
|
2020-01-17 09:53:23 +00:00
|
|
|
static void gfs2_glock_poke(struct gfs2_glock *gl)
|
|
|
|
{
|
|
|
|
int flags = LM_FLAG_TRY_1CB | LM_FLAG_ANY | GL_SKIP;
|
|
|
|
struct gfs2_holder gh;
|
|
|
|
int error;
|
|
|
|
|
2021-09-30 18:49:36 +00:00
|
|
|
__gfs2_holder_init(gl, LM_ST_SHARED, flags, &gh, _RET_IP_);
|
2020-07-27 17:18:57 +00:00
|
|
|
error = gfs2_glock_nq(&gh);
|
2020-01-17 09:53:23 +00:00
|
|
|
if (!error)
|
|
|
|
gfs2_glock_dq(&gh);
|
2020-07-27 17:18:57 +00:00
|
|
|
gfs2_holder_uninit(&gh);
|
2020-01-17 09:53:23 +00:00
|
|
|
}
|
|
|
|
|
2020-01-13 21:16:17 +00:00
|
|
|
static bool gfs2_try_evict(struct gfs2_glock *gl)
|
|
|
|
{
|
|
|
|
struct gfs2_inode *ip;
|
|
|
|
bool evicted = false;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If there is contention on the iopen glock and we have an inode, try
|
|
|
|
* to grab and release the inode so that it can be evicted. This will
|
|
|
|
* allow the remote node to go ahead and delete the inode without us
|
|
|
|
* having to do it, which will avoid rgrp glock thrashing.
|
|
|
|
*
|
|
|
|
* The remote node is likely still holding the corresponding inode
|
|
|
|
* glock, so it will run before we get to verify that the delete has
|
|
|
|
* happened below.
|
|
|
|
*/
|
|
|
|
spin_lock(&gl->gl_lockref.lock);
|
|
|
|
ip = gl->gl_object;
|
|
|
|
if (ip && !igrab(&ip->i_inode))
|
|
|
|
ip = NULL;
|
|
|
|
spin_unlock(&gl->gl_lockref.lock);
|
|
|
|
if (ip) {
|
2020-01-17 09:53:23 +00:00
|
|
|
struct gfs2_glock *inode_gl = NULL;
|
|
|
|
|
2020-01-15 08:54:14 +00:00
|
|
|
gl->gl_no_formal_ino = ip->i_no_formal_ino;
|
2020-01-13 21:16:17 +00:00
|
|
|
set_bit(GIF_DEFERRED_DELETE, &ip->i_flags);
|
|
|
|
d_prune_aliases(&ip->i_inode);
|
|
|
|
iput(&ip->i_inode);
|
|
|
|
|
|
|
|
/* If the inode was evicted, gl->gl_object will now be NULL. */
|
|
|
|
spin_lock(&gl->gl_lockref.lock);
|
|
|
|
ip = gl->gl_object;
|
2020-01-17 09:53:23 +00:00
|
|
|
if (ip) {
|
|
|
|
inode_gl = ip->i_gl;
|
|
|
|
lockref_get(&inode_gl->gl_lockref);
|
2020-01-13 21:16:17 +00:00
|
|
|
clear_bit(GIF_DEFERRED_DELETE, &ip->i_flags);
|
2020-01-17 09:53:23 +00:00
|
|
|
}
|
2020-01-13 21:16:17 +00:00
|
|
|
spin_unlock(&gl->gl_lockref.lock);
|
2020-01-17 09:53:23 +00:00
|
|
|
if (inode_gl) {
|
|
|
|
gfs2_glock_poke(inode_gl);
|
|
|
|
gfs2_glock_put(inode_gl);
|
|
|
|
}
|
2020-01-13 21:16:17 +00:00
|
|
|
evicted = !ip;
|
|
|
|
}
|
|
|
|
return evicted;
|
|
|
|
}
|
|
|
|
|
2009-07-23 23:52:34 +00:00
|
|
|
static void delete_work_func(struct work_struct *work)
|
|
|
|
{
|
2020-01-16 19:12:26 +00:00
|
|
|
struct delayed_work *dwork = to_delayed_work(work);
|
|
|
|
struct gfs2_glock *gl = container_of(dwork, struct gfs2_glock, gl_delete);
|
2015-03-16 16:52:05 +00:00
|
|
|
struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
|
2016-06-14 17:23:59 +00:00
|
|
|
struct inode *inode;
|
2010-11-03 20:01:07 +00:00
|
|
|
u64 no_addr = gl->gl_name.ln_number;
|
|
|
|
|
2020-01-16 19:12:26 +00:00
|
|
|
spin_lock(&gl->gl_lockref.lock);
|
|
|
|
clear_bit(GLF_PENDING_DELETE, &gl->gl_flags);
|
|
|
|
spin_unlock(&gl->gl_lockref.lock);
|
|
|
|
|
2020-01-13 21:16:17 +00:00
|
|
|
if (test_bit(GLF_DEMOTE, &gl->gl_flags)) {
|
|
|
|
/*
|
|
|
|
* If we can evict the inode, give the remote node trying to
|
|
|
|
* delete the inode some time before verifying that the delete
|
|
|
|
* has happened. Otherwise, if we cause contention on the inode glock
|
|
|
|
* immediately, the remote node will think that we still have
|
|
|
|
* the inode in use, and so it will give up waiting.
|
2020-01-17 09:53:23 +00:00
|
|
|
*
|
|
|
|
* If we can't evict the inode, signal to the remote node that
|
|
|
|
* the inode is still in use. We'll later try to delete the
|
|
|
|
* inode locally in gfs2_evict_inode.
|
|
|
|
*
|
|
|
|
* FIXME: We only need to verify that the remote node has
|
|
|
|
* deleted the inode because nodes before this remote delete
|
|
|
|
* rework won't cooperate. At a later time, when we no longer
|
|
|
|
* care about compatibility with such nodes, we can skip this
|
|
|
|
* step entirely.
|
2020-01-13 21:16:17 +00:00
|
|
|
*/
|
|
|
|
if (gfs2_try_evict(gl)) {
|
|
|
|
if (gfs2_queue_delete_work(gl, 5 * HZ))
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-01-15 08:54:14 +00:00
|
|
|
inode = gfs2_lookup_by_inum(sdp, no_addr, gl->gl_no_formal_ino,
|
|
|
|
GFS2_BLKST_UNLINKED);
|
2022-08-22 16:30:12 +00:00
|
|
|
if (IS_ERR(inode)) {
|
|
|
|
if (PTR_ERR(inode) == -EAGAIN &&
|
|
|
|
(gfs2_queue_delete_work(gl, 5 * HZ)))
|
|
|
|
return;
|
|
|
|
} else {
|
2010-11-03 20:01:07 +00:00
|
|
|
d_prune_aliases(inode);
|
|
|
|
iput(inode);
|
2009-07-23 23:52:34 +00:00
|
|
|
}
|
|
|
|
gfs2_glock_put(gl);
|
|
|
|
}
|
|
|
|
|
[GFS2] delay glock demote for a minimum hold time
When a lot of I/O, with some distributed mmap I/O, is run on a GFS2 filesystem in
a cluster, it will deadlock. The reason is that do_no_page() will repeatedly
call gfs2_sharewrite_nopage(), because each node keeps giving up the glock
too early, and is forced to call unmap_mapping_range(). This bumps the
mapping->truncate_count sequence count, forcing do_no_page() to retry. This
patch institutes a minimum glock hold time of a tenth of a second. This ensures
that even in heavy contention cases, the node has enough time to get some
useful work done before it gives up the glock.
A second issue is that when gfs2_glock_dq() is called from within a page fault
to demote a lock, and the associated page needs to be written out, it will
try to acquire a lock on it, but it has already been locked at a higher level.
This patch makes gfs2_glock_dq() use the work queue as well, to avoid this
issue. This is the same patch as Steve Whitehouse originally proposed to fix
this issue, except that gfs2_glock_dq() now grabs a reference to the glock
before it queues up the work on it.
Signed-off-by: Benjamin E. Marzinski <bmarzins@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
2007-08-23 18:19:05 +00:00
|
|
|
static void glock_work_func(struct work_struct *work)
|
|
|
|
{
|
2008-05-21 16:03:22 +00:00
|
|
|
unsigned long delay = 0;
|
2007-08-23 18:19:05 +00:00
|
|
|
struct gfs2_glock *gl = container_of(work, struct gfs2_glock, gl_work.work);
|
2017-06-30 13:10:01 +00:00
|
|
|
unsigned int drop_refs = 1;
|
2007-08-23 18:19:05 +00:00
|
|
|
|
2009-11-27 10:31:11 +00:00
|
|
|
if (test_and_clear_bit(GLF_REPLY_PENDING, &gl->gl_flags)) {
|
2008-05-21 16:03:22 +00:00
|
|
|
finish_xmote(gl, gl->gl_reply);
|
2017-06-30 13:10:01 +00:00
|
|
|
drop_refs++;
|
2009-11-27 10:31:11 +00:00
|
|
|
}
|
2015-10-29 15:58:09 +00:00
|
|
|
spin_lock(&gl->gl_lockref.lock);
|
GFS2: Processes waiting on inode glock that no processes are holding
This patch fixes a race in the GFS2 glock state machine that may
result in lockups. The symptom is that all nodes but one will
hang, waiting for a particular glock. All the holder records
will have the "W" (Waiting) bit set. The other node will
typically have the glock stuck in Exclusive mode (EX) with no
holder records, but the dinode will be cached. In other words,
an entry with "I:" will appear in the glock dump for that glock,
but nothing else.
The race has to do with the glock "Pending Demote" bit, which
can be set, then immediately reset, thus losing the fact that
another node needs the glock. The sequence of events is:
1. Something schedules the glock workqueue (e.g. glock request from fs)
2. The glock workqueue gets to the point between the test of the reply pending
bit and the spin lock:
if (test_and_clear_bit(GLF_REPLY_PENDING, &gl->gl_flags)) {
finish_xmote(gl, gl->gl_reply);
drop_ref = 1;
}
down_read(&gfs2_umount_flush_sem); <---- i.e. here
spin_lock(&gl->gl_spin);
3. In comes (a) the reply to our EX lock request setting GLF_REPLY_PENDING and
(b) the demote request which sets GLF_PENDING_DEMOTE
4. The following test is executed:
if (test_and_clear_bit(GLF_PENDING_DEMOTE, &gl->gl_flags) &&
gl->gl_state != LM_ST_UNLOCKED &&
gl->gl_demote_state != LM_ST_EXCLUSIVE) {
This resets the pending demote flag, and gl->gl_demote_state is not equal to
exclusive, however because the reply from the dlm arrived after we checked for
the GLF_REPLY_PENDING flag, gl->gl_state is still equal to unlocked, so
although we reset the GLF_PENDING_DEMOTE flag, we didn't then set the
GLF_DEMOTE flag or reinstate the GLF_PENDING_DEMOTE_FLAG.
The patch closes the timing window by only transitioning the
"Pending demote" bit to the "demote" flag once we know the
other conditions (not unlocked and not exclusive) are met.
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
2011-05-24 14:44:42 +00:00
|
|
|
if (test_bit(GLF_PENDING_DEMOTE, &gl->gl_flags) &&
|
2008-07-07 09:02:36 +00:00
|
|
|
gl->gl_state != LM_ST_UNLOCKED &&
|
|
|
|
gl->gl_demote_state != LM_ST_EXCLUSIVE) {
|
2008-05-21 16:03:22 +00:00
|
|
|
unsigned long holdtime, now = jiffies;
|
2011-05-24 14:44:42 +00:00
|
|
|
|
2011-06-15 15:41:48 +00:00
|
|
|
holdtime = gl->gl_tchange + gl->gl_hold_time;
|
2008-05-21 16:03:22 +00:00
|
|
|
if (time_before(now, holdtime))
|
|
|
|
delay = holdtime - now;
|
2011-05-24 14:44:42 +00:00
|
|
|
|
|
|
|
if (!delay) {
|
|
|
|
clear_bit(GLF_PENDING_DEMOTE, &gl->gl_flags);
|
2020-01-17 12:48:49 +00:00
|
|
|
gfs2_set_demote(gl);
|
2011-05-24 14:44:42 +00:00
|
|
|
}
|
2008-05-21 16:03:22 +00:00
|
|
|
}
|
|
|
|
run_queue(gl, 0);
|
2017-06-30 13:10:01 +00:00
|
|
|
if (delay) {
|
|
|
|
/* Keep one glock reference for the work we requeue. */
|
|
|
|
drop_refs--;
|
2011-06-15 15:41:48 +00:00
|
|
|
if (gl->gl_name.ln_type != LM_TYPE_INODE)
|
|
|
|
delay = 0;
|
2017-06-30 13:10:01 +00:00
|
|
|
__gfs2_glock_queue_work(gl, delay);
|
2011-06-15 15:41:48 +00:00
|
|
|
}
|
2017-06-30 13:10:01 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Drop the remaining glock references manually here. (Mind that
|
|
|
|
	 * __gfs2_glock_queue_work depends on the lockref spinlock being held
|
|
|
|
* here as well.)
|
|
|
|
*/
|
|
|
|
gl->gl_lockref.count -= drop_refs;
|
|
|
|
if (!gl->gl_lockref.count) {
|
|
|
|
__gfs2_glock_put(gl);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
spin_unlock(&gl->gl_lockref.lock);
|
2007-08-23 18:19:05 +00:00
|
|
|
}
|
|
|
|
|
2017-08-01 16:18:26 +00:00
|
|
|
static struct gfs2_glock *find_insert_glock(struct lm_lockname *name,
|
|
|
|
struct gfs2_glock *new)
|
|
|
|
{
|
|
|
|
struct wait_glock_queue wait;
|
2017-08-04 12:40:45 +00:00
|
|
|
wait_queue_head_t *wq = glock_waitqueue(name);
|
2017-08-01 16:18:26 +00:00
|
|
|
struct gfs2_glock *gl;
|
|
|
|
|
2017-08-04 12:40:45 +00:00
|
|
|
wait.name = name;
|
|
|
|
init_wait(&wait.wait);
|
|
|
|
wait.wait.func = glock_wake_function;
|
|
|
|
|
2017-08-01 16:18:26 +00:00
|
|
|
again:
|
2017-08-04 12:40:45 +00:00
|
|
|
prepare_to_wait(wq, &wait.wait, TASK_UNINTERRUPTIBLE);
|
2017-08-01 16:18:26 +00:00
|
|
|
rcu_read_lock();
|
|
|
|
if (new) {
|
|
|
|
gl = rhashtable_lookup_get_insert_fast(&gl_hash_table,
|
|
|
|
&new->gl_node, ht_parms);
|
|
|
|
if (IS_ERR(gl))
|
|
|
|
goto out;
|
|
|
|
} else {
|
|
|
|
gl = rhashtable_lookup_fast(&gl_hash_table,
|
|
|
|
name, ht_parms);
|
|
|
|
}
|
|
|
|
if (gl && !lockref_get_not_dead(&gl->gl_lockref)) {
|
|
|
|
rcu_read_unlock();
|
|
|
|
schedule();
|
|
|
|
goto again;
|
|
|
|
}
|
|
|
|
out:
|
|
|
|
rcu_read_unlock();
|
2017-08-04 12:40:45 +00:00
|
|
|
finish_wait(wq, &wait.wait);
|
2017-08-01 16:18:26 +00:00
|
|
|
return gl;
|
|
|
|
}
|
|
|
|
|
2006-01-16 16:50:04 +00:00
|
|
|
/**
|
|
|
|
* gfs2_glock_get() - Get a glock, or create one if one doesn't exist
|
|
|
|
* @sdp: The GFS2 superblock
|
|
|
|
* @number: the lock number
|
|
|
|
* @glops: The glock_operations to use
|
|
|
|
* @create: If 0, don't create the glock if it doesn't exist
|
|
|
|
* @glp: the glock is returned here
|
|
|
|
*
|
|
|
|
* This does not lock a glock, just finds/creates structures for one.
|
|
|
|
*
|
|
|
|
* Returns: errno
|
|
|
|
*/
|
|
|
|
|
2006-09-04 16:49:07 +00:00
|
|
|
int gfs2_glock_get(struct gfs2_sbd *sdp, u64 number,
|
2006-08-30 13:30:00 +00:00
|
|
|
const struct gfs2_glock_operations *glops, int create,
|
2006-01-16 16:50:04 +00:00
|
|
|
struct gfs2_glock **glp)
|
|
|
|
{
|
2009-12-08 12:12:13 +00:00
|
|
|
struct super_block *s = sdp->sd_vfs;
|
2015-03-16 16:52:05 +00:00
|
|
|
struct lm_lockname name = { .ln_number = number,
|
|
|
|
.ln_type = glops->go_type,
|
|
|
|
.ln_sbd = sdp };
|
2017-02-21 22:19:10 +00:00
|
|
|
struct gfs2_glock *gl, *tmp;
|
2009-12-08 12:12:13 +00:00
|
|
|
struct address_space *mapping;
|
2017-02-21 22:19:10 +00:00
|
|
|
int ret = 0;
|
2006-01-16 16:50:04 +00:00
|
|
|
|
2017-08-01 16:18:26 +00:00
|
|
|
gl = find_insert_glock(&name, NULL);
|
|
|
|
if (gl) {
|
|
|
|
*glp = gl;
|
2006-01-16 16:50:04 +00:00
|
|
|
return 0;
|
2017-08-01 16:18:26 +00:00
|
|
|
}
|
GFS2: Add a "demote a glock" interface to sysfs
This adds a sysfs file called demote_rq to GFS2's
per-filesystem directory. It's possible to use this
file to demote arbitrary glocks in exactly the same
way as if a request had come in from a remote node.
This is intended for testing issues relating to caching
of data under glocks. Despite that, the interface is
generic enough to send requests to any type of glock,
but be careful as it's not always safe to send an
arbitrary message to an arbitrary glock. For that reason
and to prevent DoS, this interface is restricted to root
only.
The messages look like this:
<type>:<glocknumber> <mode>
Example:
echo -n "2:13324 EX" >/sys/fs/gfs2/unity:myfs/demote_rq
Which means "please demote inode glock (type 2) number 13324 so that
I can get an EX (exclusive) lock". The lock modes are those which
would normally be sent by a remote node in its callback so if you
want to unlock a glock, you use EX, to demote to shared, use SH or PR
(depending on whether you like GFS2 or DLM lock modes better!).
If the glock doesn't exist, you'll get -ENOENT returned. If the
arguments don't make sense, you'll get -EINVAL returned.
The plan is that this interface will be used in combination with
the blktrace patch which I recently posted for comments although
it is, of course, still useful in its own right.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
2009-02-12 13:31:58 +00:00
|
|
|
if (!create)
|
|
|
|
return -ENOENT;
|
2006-01-16 16:50:04 +00:00
|
|
|
|
2022-05-08 10:06:30 +00:00
|
|
|
if (glops->go_flags & GLOF_ASPACE) {
|
|
|
|
struct gfs2_glock_aspace *gla =
|
|
|
|
kmem_cache_alloc(gfs2_glock_aspace_cachep, GFP_NOFS);
|
|
|
|
if (!gla)
|
|
|
|
return -ENOMEM;
|
|
|
|
gl = &gla->glock;
|
|
|
|
} else {
|
|
|
|
gl = kmem_cache_alloc(gfs2_glock_cachep, GFP_NOFS);
|
|
|
|
if (!gl)
|
|
|
|
return -ENOMEM;
|
|
|
|
}
|
2012-11-14 18:46:53 +00:00
|
|
|
memset(&gl->gl_lksb, 0, sizeof(struct dlm_lksb));
|
2022-05-08 10:06:30 +00:00
|
|
|
gl->gl_ops = glops;
|
2012-11-14 18:46:53 +00:00
|
|
|
|
|
|
|
if (glops->go_flags & GLOF_LVB) {
|
2020-01-15 23:25:32 +00:00
|
|
|
gl->gl_lksb.sb_lvbptr = kzalloc(GDLM_LVB_SIZE, GFP_NOFS);
|
2012-11-14 18:47:37 +00:00
|
|
|
if (!gl->gl_lksb.sb_lvbptr) {
|
2022-05-08 10:06:30 +00:00
|
|
|
gfs2_glock_dealloc(&gl->gl_rcu);
|
2012-11-14 18:46:53 +00:00
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2010-01-29 15:21:27 +00:00
|
|
|
atomic_inc(&sdp->sd_glock_disposal);
|
2015-03-16 16:02:46 +00:00
|
|
|
gl->gl_node.next = NULL;
|
gfs2: fix GL_SKIP node_scope problems
Before this patch, when a glock was locked, the very first holder on the
queue would unlock the lockref and call the go_instantiate glops function
(if one existed), unless GL_SKIP was specified. When we introduced the new
node-scope concept, we allowed multiple holders to lock glocks in EX mode
and share the lock.
But node-scope introduced a new problem: if the first holder has GL_SKIP
and the next one does NOT, since it is not the first holder on the queue,
the go_instantiate op was not called. Eventually the GL_SKIP holder may
call the instantiate sub-function (e.g. gfs2_rgrp_bh_get) but there was
still a window of time in which another non-GL_SKIP holder assumes the
instantiate function had been called by the first holder. In the case of
rgrp glocks, this led to a NULL pointer dereference on the buffer_heads.
This patch tries to fix the problem by introducing two new glock flags:
GLF_INSTANTIATE_NEEDED, which keeps track of when the instantiate function
needs to be called to "fill in" or "read in" the object before it is
referenced.
GLF_INSTANTIATE_IN_PROG which is used to determine when a process is
in the process of reading in the object. Whenever a function needs to
reference the object, it checks the GLF_INSTANTIATE_NEEDED flag, and if
set, it sets GLF_INSTANTIATE_IN_PROG and calls the glops "go_instantiate"
function.
As before, the gl_lockref spin_lock is unlocked during the IO operation,
which may take a relatively long amount of time to complete. While
unlocked, if another process determines go_instantiate is still needed,
it sees GLF_INSTANTIATE_IN_PROG is set, and waits for the go_instantiate
glop operation to be completed. Once GLF_INSTANTIATE_IN_PROG is cleared,
it needs to check GLF_INSTANTIATE_NEEDED again because the other process's
go_instantiate operation may not have been successful.
Functions that previously called the instantiate sub-functions now call
directly into gfs2_instantiate so the new bits are managed properly.
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-10-06 14:29:18 +00:00
|
|
|
gl->gl_flags = glops->go_instantiate ? BIT(GLF_INSTANTIATE_NEEDED) : 0;
|
2006-01-16 16:50:04 +00:00
|
|
|
gl->gl_name = name;
|
2020-11-23 15:53:35 +00:00
|
|
|
lockdep_set_subclass(&gl->gl_lockref.lock, glops->go_subclass);
|
2013-10-15 14:18:08 +00:00
|
|
|
gl->gl_lockref.count = 1;
|
2006-01-16 16:50:04 +00:00
|
|
|
gl->gl_state = LM_ST_UNLOCKED;
|
2008-05-21 16:03:22 +00:00
|
|
|
gl->gl_target = LM_ST_UNLOCKED;
|
[GFS2] delay glock demote for a minimum hold time
When a lot of IO, with some distributed mmap IO, is run on a GFS2 filesystem in
a cluster, it will deadlock. The reason is that do_no_page() will repeatedly
call gfs2_sharewrite_nopage(), because each node keeps giving up the glock
too early, and is forced to call unmap_mapping_range(). This bumps the
mapping->truncate_count sequence count, forcing do_no_page() to retry. This
patch institutes a minimum glock hold time of a tenth of a second. This ensures
that even in heavy contention cases, the node has enough time to get some
useful work done before it gives up the glock.
A second issue is that when gfs2_glock_dq() is called from within a page fault
to demote a lock, and the associated page needs to be written out, it will
try to acquire a lock on it, but it has already been locked at a higher level.
This patch makes gfs2_glock_dq() use the work queue as well, to avoid this
issue. This is the same patch as Steve Whitehouse originally proposed to fix
this issue, except that gfs2_glock_dq() now grabs a reference to the glock
before it queues up the work on it.
Signed-off-by: Benjamin E. Marzinski <bmarzins@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
2007-08-23 18:19:05 +00:00
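The minimum-hold-time rule described above boils down to a single time comparison: a demote request that arrives before the hold time has elapsed is deferred rather than honoured immediately. A minimal sketch, with illustrative names and a userspace stand-in for jiffies (the real check uses time_before() in the demote path):

```c
#include <assert.h>

#define TOY_HZ 100
#define GL_HOLD_MIN (TOY_HZ / 10)	/* a tenth of a second, per the patch */

/* Return nonzero if a demote arriving at 'now' should be deferred because
 * the glock changed state at 'tchange' less than GL_HOLD_MIN ticks ago. */
static int should_defer_demote(unsigned long now, unsigned long tchange)
{
	return now < tchange + GL_HOLD_MIN;
}
```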
|
|
|
gl->gl_demote_state = LM_ST_EXCLUSIVE;
|
2016-12-25 11:30:41 +00:00
|
|
|
gl->gl_dstamp = 0;
|
GFS2: glock statistics gathering
The stats are divided into two sets: those relating to the
super block and those relating to an individual glock. The
super block stats are done on a per cpu basis in order to
try and reduce the overhead of gathering them. They are also
further divided by glock type.
In the case of both the super block and glock statistics,
the same information is gathered in each case. The super
block statistics are used to provide default values for
most of the glock statistics, so that newly created glocks
should have, as far as possible, a sensible starting point.
The statistics are divided into three pairs of mean and
variance, plus two counters. The mean/variance pairs are
smoothed exponential estimates and the algorithm used is
one which will be very familiar to those used to the calculation
of round trip times in network code.
The three pairs of mean/variance measure the following
things:
1. DLM lock time (non-blocking requests)
2. DLM lock time (blocking requests)
3. Inter-request time (again to the DLM)
A non-blocking request is one which will complete right
away, whatever the state of the DLM lock in question. That
currently means any requests when (a) the current state of
the lock is exclusive (b) the requested state is either null
or unlocked or (c) the "try lock" flag is set. A blocking
request covers all the other lock requests.
There are two counters. The first is there primarily to show
how many lock requests have been made, and thus how much data
has gone into the mean/variance calculations. The other counter
is counting queueing of holders at the top layer of the glock
code. Hopefully that number will be a lot larger than the number
of dlm lock requests issued.
So why gather these statistics? There are several reasons
we'd like to get a better idea of these timings:
1. To be able to better set the glock "min hold time"
2. To spot performance issues more easily
3. To improve the algorithm for selecting resource groups for
allocation (to base it on lock wait time, rather than blindly
using a "try lock")
Due to the smoothing action of the updates, a step change in
some input quantity being sampled will only fully be taken
into account after 8 samples (or 4 for the variance) and this
needs to be carefully considered when interpreting the
results.
Knowing both the time it takes a lock request to complete and
the average time between lock requests for a glock means we
can compute the total percentage of the time for which the
node is able to use a glock vs. time that the rest of the
cluster has its share. That will be very useful when setting
the lock min hold time.
The other point to remember is that all times are in
nanoseconds. Great care has been taken to ensure that we
measure exactly the quantities that we want, as accurately
as possible. There are always inaccuracies in any
measuring system, but I hope this is as accurate as we
can reasonably make it.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
2012-01-20 10:38:36 +00:00
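The smoothed mean/variance update described in the message can be sketched as follows. This is an illustrative model, not the kernel code: `lkstat` and `stat_update` are hypothetical names, and the shift widths match the "8 samples for the mean, 4 for the variance" smoothing the message describes, the same scheme used for TCP round-trip-time estimation.

```c
#include <assert.h>
#include <stdint.h>

struct lkstat {
	int64_t mean;	/* smoothed exponential mean, in ns */
	int64_t var;	/* smoothed mean absolute deviation, in ns */
};

static void stat_update(struct lkstat *s, int64_t sample_ns)
{
	int64_t delta = sample_ns - s->mean;
	int64_t adelta = delta < 0 ? -delta : delta;

	s->mean += delta >> 3;			/* gain 1/8: ~8 samples */
	s->var += (adelta - s->var) >> 2;	/* gain 1/4: ~4 samples */
}
```

A step change in the sampled quantity therefore takes several calls to be fully reflected, which is the smoothing caveat the message raises about interpreting the results.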
|
|
|
preempt_disable();
|
|
|
|
/* We use the global stats to estimate the initial per-glock stats */
|
|
|
|
gl->gl_stats = this_cpu_ptr(sdp->sd_lkstats)->lkstats[glops->go_type];
|
|
|
|
preempt_enable();
|
|
|
|
gl->gl_stats.stats[GFS2_LKS_DCOUNT] = 0;
|
|
|
|
gl->gl_stats.stats[GFS2_LKS_QCOUNT] = 0;
|
2007-08-23 18:19:05 +00:00
|
|
|
gl->gl_tchange = jiffies;
|
2006-08-30 14:36:52 +00:00
|
|
|
gl->gl_object = NULL;
|
2011-06-15 15:41:48 +00:00
|
|
|
gl->gl_hold_time = GL_GLOCK_DFT_HOLD;
|
2007-08-23 18:19:05 +00:00
|
|
|
INIT_DELAYED_WORK(&gl->gl_work, glock_work_func);
|
2020-10-15 16:16:48 +00:00
|
|
|
if (gl->gl_name.ln_type == LM_TYPE_IOPEN)
|
|
|
|
INIT_DELAYED_WORK(&gl->gl_delete, delete_work_func);
|
2006-01-16 16:50:04 +00:00
|
|
|
|
2009-12-08 12:12:13 +00:00
|
|
|
mapping = gfs2_glock2aspace(gl);
|
|
|
|
if (mapping) {
|
|
|
|
mapping->a_ops = &gfs2_meta_aops;
|
|
|
|
mapping->host = s->s_bdev->bd_inode;
|
|
|
|
mapping->flags = 0;
|
|
|
|
mapping_set_gfp_mask(mapping, GFP_NOFS);
|
2012-12-12 00:02:35 +00:00
|
|
|
mapping->private_data = NULL;
|
2009-12-08 12:12:13 +00:00
|
|
|
mapping->writeback_index = 0;
|
2006-01-16 16:50:04 +00:00
|
|
|
}
|
|
|
|
|
2017-08-01 16:18:26 +00:00
|
|
|
tmp = find_insert_glock(&name, gl);
|
2017-02-21 22:19:10 +00:00
|
|
|
if (!tmp) {
|
2015-03-16 16:02:46 +00:00
|
|
|
*glp = gl;
|
2017-02-21 22:19:10 +00:00
|
|
|
goto out;
|
2006-01-16 16:50:04 +00:00
|
|
|
}
|
2017-02-21 22:19:10 +00:00
|
|
|
if (IS_ERR(tmp)) {
|
|
|
|
ret = PTR_ERR(tmp);
|
|
|
|
goto out_free;
|
|
|
|
}
|
2017-08-01 16:18:26 +00:00
|
|
|
*glp = tmp;
|
2017-02-21 22:19:10 +00:00
|
|
|
|
|
|
|
out_free:
|
2022-05-08 10:06:30 +00:00
|
|
|
gfs2_glock_dealloc(&gl->gl_rcu);
|
2020-10-26 14:52:29 +00:00
|
|
|
if (atomic_dec_and_test(&sdp->sd_glock_disposal))
|
|
|
|
wake_up(&sdp->sd_glock_wait);
|
2006-01-16 16:50:04 +00:00
|
|
|
|
2017-02-21 22:19:10 +00:00
|
|
|
out:
|
2015-03-16 16:02:46 +00:00
|
|
|
return ret;
|
2006-01-16 16:50:04 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
2021-11-18 19:33:00 +00:00
|
|
|
* __gfs2_holder_init - initialize a struct gfs2_holder in the default way
|
2006-01-16 16:50:04 +00:00
|
|
|
* @gl: the glock
|
|
|
|
* @state: the state we're requesting
|
|
|
|
* @flags: the modifier flags
|
|
|
|
* @gh: the holder structure
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
|
2021-09-30 18:49:36 +00:00
|
|
|
void __gfs2_holder_init(struct gfs2_glock *gl, unsigned int state, u16 flags,
|
|
|
|
struct gfs2_holder *gh, unsigned long ip)
|
2006-01-16 16:50:04 +00:00
|
|
|
{
|
|
|
|
INIT_LIST_HEAD(&gh->gh_list);
|
|
|
|
gh->gh_gl = gl;
|
2021-09-30 18:49:36 +00:00
|
|
|
gh->gh_ip = ip;
|
2008-02-07 08:13:19 +00:00
|
|
|
gh->gh_owner_pid = get_pid(task_pid(current));
|
2006-01-16 16:50:04 +00:00
|
|
|
gh->gh_state = state;
|
|
|
|
gh->gh_flags = flags;
|
|
|
|
gh->gh_iflags = 0;
|
|
|
|
gfs2_glock_hold(gl);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* gfs2_holder_reinit - reinitialize a struct gfs2_holder so we can requeue it
|
|
|
|
* @state: the state we're requesting
|
|
|
|
* @flags: the modifier flags
|
|
|
|
* @gh: the holder structure
|
|
|
|
*
|
|
|
|
* Don't mess with the glock.
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
|
2015-07-24 14:45:43 +00:00
|
|
|
void gfs2_holder_reinit(unsigned int state, u16 flags, struct gfs2_holder *gh)
|
2006-01-16 16:50:04 +00:00
|
|
|
{
|
|
|
|
gh->gh_state = state;
|
2006-04-26 18:58:26 +00:00
|
|
|
gh->gh_flags = flags;
|
2007-03-16 09:40:31 +00:00
|
|
|
gh->gh_iflags = 0;
|
2014-10-03 18:15:36 +00:00
|
|
|
gh->gh_ip = _RET_IP_;
|
2014-11-18 10:31:23 +00:00
|
|
|
put_pid(gh->gh_owner_pid);
|
2010-04-14 15:58:16 +00:00
|
|
|
gh->gh_owner_pid = get_pid(task_pid(current));
|
2006-01-16 16:50:04 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* gfs2_holder_uninit - uninitialize a holder structure (drop glock reference)
|
|
|
|
* @gh: the holder structure
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
|
|
|
|
void gfs2_holder_uninit(struct gfs2_holder *gh)
|
|
|
|
{
|
2008-02-07 08:13:19 +00:00
|
|
|
put_pid(gh->gh_owner_pid);
|
2006-01-16 16:50:04 +00:00
|
|
|
gfs2_glock_put(gh->gh_gl);
|
2016-06-17 12:31:27 +00:00
|
|
|
gfs2_holder_mark_uninitialized(gh);
|
2006-03-29 19:36:49 +00:00
|
|
|
gh->gh_ip = 0;
|
2006-01-16 16:50:04 +00:00
|
|
|
}
|
|
|
|
|
2019-08-30 17:31:01 +00:00
|
|
|
static void gfs2_glock_update_hold_time(struct gfs2_glock *gl,
|
|
|
|
unsigned long start_time)
|
|
|
|
{
|
|
|
|
/* Have we waited longer than a second? */
|
|
|
|
if (time_after(jiffies, start_time + HZ)) {
|
|
|
|
/* Lengthen the minimum hold time. */
|
|
|
|
gl->gl_hold_time = min(gl->gl_hold_time + GL_GLOCK_HOLD_INCR,
|
|
|
|
GL_GLOCK_MAX_HOLD);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-06-11 03:04:11 +00:00
|
|
|
/**
|
|
|
|
* gfs2_glock_holder_ready - holder is ready and its error code can be collected
|
|
|
|
* @gh: the glock holder
|
|
|
|
*
|
|
|
|
* Called when a glock holder no longer needs to be waited for because it is
|
|
|
|
* now either held (HIF_HOLDER set; gh_error == 0), or acquiring the lock has
|
|
|
|
* failed (gh_error != 0).
|
|
|
|
*/
|
|
|
|
|
|
|
|
int gfs2_glock_holder_ready(struct gfs2_holder *gh)
|
|
|
|
{
|
|
|
|
if (gh->gh_error || (gh->gh_flags & GL_SKIP))
|
|
|
|
return gh->gh_error;
|
|
|
|
gh->gh_error = gfs2_instantiate(gh);
|
|
|
|
if (gh->gh_error)
|
|
|
|
gfs2_glock_dq(gh);
|
|
|
|
return gh->gh_error;
|
|
|
|
}
|
|
|
|
|
2012-08-09 17:48:44 +00:00
|
|
|
/**
|
|
|
|
* gfs2_glock_wait - wait on a glock acquisition
|
|
|
|
* @gh: the glock holder
|
|
|
|
*
|
|
|
|
* Returns: 0 on success
|
|
|
|
*/
|
|
|
|
|
|
|
|
int gfs2_glock_wait(struct gfs2_holder *gh)
|
2008-01-30 15:34:04 +00:00
|
|
|
{
|
2019-08-30 17:31:01 +00:00
|
|
|
unsigned long start_time = jiffies;
|
2011-06-15 15:41:48 +00:00
|
|
|
|
2008-05-21 16:03:22 +00:00
|
|
|
might_sleep();
|
sched: Remove proliferation of wait_on_bit() action functions
The current "wait_on_bit" interface requires an 'action'
function to be provided which does the actual waiting.
There are over 20 such functions, many of them identical.
Most cases can be satisfied by one of just two functions, one
which uses io_schedule() and one which just uses schedule().
So:
Rename wait_on_bit and wait_on_bit_lock to
wait_on_bit_action and wait_on_bit_lock_action
to make it explicit that they need an action function.
Introduce new wait_on_bit{,_lock} and wait_on_bit{,_lock}_io
which are *not* given an action function but implicitly use
a standard one.
The decision to error-out if a signal is pending is now made
based on the 'mode' argument rather than being encoded in the action
function.
All instances of the old wait_on_bit and wait_on_bit_lock which
can use the new version have been changed accordingly and their
action functions have been discarded.
wait_on_bit{_lock} does not return any specific error code in the
event of a signal so the caller must check for non-zero and
interpolate their own error code as appropriate.
The wait_on_bit() call in __fscache_wait_on_invalidate() was
ambiguous as it specified TASK_UNINTERRUPTIBLE but used
fscache_wait_bit_interruptible as an action function.
David Howells confirms this should be uniformly
"uninterruptible"
The main remaining user of wait_on_bit{,_lock}_action is NFS
which needs to use a freezer-aware schedule() call.
A comment in fs/gfs2/glock.c notes that having multiple 'action'
functions is useful as they display differently in the 'wchan'
field of 'ps'. (and /proc/$PID/wchan).
As the new bit_wait{,_io} functions are tagged "__sched", they
will not show up at all, but something higher in the stack. So
the distinction will still be visible, only with different
function names (gfs2_glock_wait versus gfs2_glock_dq_wait in the
gfs2/glock.c case).
Since first version of this patch (against 3.15) two new action
functions appeared, one in NFS and one in CIFS. CIFS also now
uses an action function that makes the same freezer aware
schedule call as NFS.
Signed-off-by: NeilBrown <neilb@suse.de>
Acked-by: David Howells <dhowells@redhat.com> (fscache, keys)
Acked-by: Steven Whitehouse <swhiteho@redhat.com> (gfs2)
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Steve French <sfrench@samba.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20140707051603.28027.72349.stgit@notabene.brown
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-07 05:16:04 +00:00
|
|
|
wait_on_bit(&gh->gh_iflags, HIF_WAIT, TASK_UNINTERRUPTIBLE);
|
2019-08-30 17:31:01 +00:00
|
|
|
gfs2_glock_update_hold_time(gh->gh_gl, start_time);
|
2022-06-11 03:04:11 +00:00
|
|
|
return gfs2_glock_holder_ready(gh);
|
2008-01-30 15:34:04 +00:00
|
|
|
}
|
|
|
|
|
gfs2: Use async glocks for rename
Because s_vfs_rename_mutex is not cluster-wide, multiple nodes can
reverse the roles of which directories are "old" and which are "new" for
the purposes of rename. This can cause deadlocks where two nodes end up
waiting for each other.
There can be several layers of directory dependencies across many nodes.
This patch fixes the problem by acquiring all gfs2_rename's inode glocks
asynchronously and waiting for all glocks to be acquired. That way all
inodes are locked regardless of the order.
The timeout value for multiple asynchronous glocks is calculated to be
the total of the individual wait times for each glock times two.
Since gfs2_exchange is very similar to gfs2_rename, both functions are
patched in the same way.
A new async glock wait queue, sd_async_glock_wait, keeps a list of
waiters for these events. If gfs2's holder_wake function detects an
async holder, it wakes up any waiters for the event. The waiter only
tests whether any of its requests are still pending.
Since the glocks are sent to dlm asynchronously, the wait function needs
to check to see which glocks, if any, were granted.
If a glock is granted by dlm (and therefore held), its minimum hold time
is checked and adjusted as necessary, as other glock grants do.
If the event times out, all glocks held thus far must be dequeued to
resolve any existing deadlocks. Then, if there are any outstanding
locking requests, we need to loop around and wait for dlm to respond to
those requests too. After we release all requests, we return -ESTALE to
the caller (vfs rename) which loops around and retries the request.
Node1 Node2
--------- ---------
1. Enqueue A Enqueue B
2. Enqueue B Enqueue A
3. A granted
6. B granted
7. Wait for B
8. Wait for A
9. A times out (since Node 1 holds A)
10. Dequeue B (since it was granted)
11. Wait for all requests from DLM
12. B Granted (since Node2 released it in step 10)
13. Rename
14. Dequeue A
15. DLM Grants A
16. Dequeue A (due to the timeout and since we
no longer have B held for our task).
17. Dequeue B
18. Return -ESTALE to vfs
19. VFS retries the operation, goto step 1.
This release-all-locks / acquire-all-locks may slow rename / exchange
down as both nodes struggle in the same way and do the same thing.
However, this will only happen when there is contention for the same
inodes, which ought to be rare.
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2019-08-30 17:31:02 +00:00
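The timeout rule described above (the total of the individual minimum hold times, doubled) can be sketched directly; it also matches the `gl_hold_time << 1` accumulation in the code below. `toy_holder` and `async_timeout` are illustrative stand-ins for the holder array and the loop inside gfs2_glock_async_wait():

```c
#include <assert.h>

struct toy_holder {
	unsigned long hold_time;	/* per-glock minimum hold time, in jiffies */
};

/* Bound the wait for a batch of asynchronous glock requests by the sum of
 * each glock's minimum hold time, times two. */
static unsigned long async_timeout(unsigned int num_gh,
				   const struct toy_holder *ghs)
{
	unsigned long timeout = 0;
	unsigned int i;

	for (i = 0; i < num_gh; i++)
		timeout += ghs[i].hold_time << 1;
	return timeout;
}
```

If wait_event_timeout() expires before all holders are granted, the caller sees -ESTALE, releases everything, and retries, as the Node1/Node2 walkthrough above shows.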
|
|
|
static int glocks_pending(unsigned int num_gh, struct gfs2_holder *ghs)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
|
|
|
for (i = 0; i < num_gh; i++)
|
|
|
|
if (test_bit(HIF_WAIT, &ghs[i].gh_iflags))
|
|
|
|
return 1;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* gfs2_glock_async_wait - wait on multiple asynchronous glock acquisitions
|
|
|
|
* @num_gh: the number of holders in the array
|
|
|
|
* @ghs: the glock holder array
|
|
|
|
*
|
|
|
|
* Returns: 0 on success, meaning all glocks have been granted and are held.
|
|
|
|
* -ESTALE if the request timed out, meaning all glocks were released,
|
|
|
|
* and the caller should retry the operation.
|
|
|
|
*/
|
|
|
|
|
|
|
|
int gfs2_glock_async_wait(unsigned int num_gh, struct gfs2_holder *ghs)
|
|
|
|
{
|
|
|
|
struct gfs2_sbd *sdp = ghs[0].gh_gl->gl_name.ln_sbd;
|
|
|
|
int i, ret = 0, timeout = 0;
|
|
|
|
unsigned long start_time = jiffies;
|
|
|
|
|
|
|
|
might_sleep();
|
|
|
|
/*
|
|
|
|
* Total up the (minimum hold time * 2) of all glocks and use that to
|
|
|
|
* determine the max amount of time we should wait.
|
|
|
|
*/
|
|
|
|
for (i = 0; i < num_gh; i++)
|
|
|
|
timeout += ghs[i].gh_gl->gl_hold_time << 1;
|
|
|
|
|
|
|
|
if (!wait_event_timeout(sdp->sd_async_glock_wait,
|
2022-06-09 11:39:10 +00:00
|
|
|
!glocks_pending(num_gh, ghs), timeout)) {
|
2019-08-30 17:31:02 +00:00
|
|
|
ret = -ESTALE; /* request timed out. */
|
2022-06-09 11:39:10 +00:00
|
|
|
goto out;
|
|
|
|
}
|
2019-08-30 17:31:02 +00:00
|
|
|
|
|
|
|
for (i = 0; i < num_gh; i++) {
|
2022-06-09 11:39:10 +00:00
|
|
|
struct gfs2_holder *gh = &ghs[i];
|
2022-06-11 03:04:11 +00:00
|
|
|
int ret2;
|
2019-08-30 17:31:02 +00:00
|
|
|
|
2022-06-09 11:39:10 +00:00
|
|
|
if (test_bit(HIF_HOLDER, &gh->gh_iflags)) {
|
|
|
|
gfs2_glock_update_hold_time(gh->gh_gl,
|
|
|
|
start_time);
|
2019-08-30 17:31:02 +00:00
|
|
|
}
|
2022-06-11 03:04:11 +00:00
|
|
|
ret2 = gfs2_glock_holder_ready(gh);
|
2019-08-30 17:31:02 +00:00
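The timeout rule the commit message describes (the total of the individual per-glock wait times, times two) can be sketched as below. This is an illustrative sketch only, not the kernel code: the `holder` struct and the `async_glock_timeout()` helper are invented names for the example, and the time units are assumed to be jiffies-like.

```c
#include <stddef.h>

/* Hypothetical per-holder state: the expected wait time for one glock
 * (an assumption made for this sketch). */
struct holder {
	unsigned long wait_time;
};

/* Combined timeout for a batch of asynchronous glock requests:
 * the total of the individual wait times for each glock, times two. */
unsigned long async_glock_timeout(const struct holder *ghs, size_t num_gh)
{
	unsigned long total = 0;
	size_t i;

	for (i = 0; i < num_gh; i++)
		total += ghs[i].wait_time;
	return 2 * total;
}
```

If the wait for all requests exceeds this combined timeout, the caller dequeues whatever was granted and returns -ESTALE so the VFS retries.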
|
|
|
if (!ret)
|
2022-06-11 03:04:11 +00:00
|
|
|
ret = ret2;
|
gfs2: Use async glocks for rename
2019-08-30 17:31:02 +00:00
|
|
|
}
|
|
|
|
|
2022-06-09 11:39:10 +00:00
|
|
|
out:
|
|
|
|
if (ret) {
|
|
|
|
for (i = 0; i < num_gh; i++) {
|
|
|
|
struct gfs2_holder *gh = &ghs[i];
|
gfs2: Use async glocks for rename
2019-08-30 17:31:02 +00:00
|
|
|
|
2022-06-09 11:39:10 +00:00
|
|
|
gfs2_glock_dq(gh);
|
|
|
|
}
|
|
|
|
}
|
gfs2: Use async glocks for rename
2019-08-30 17:31:02 +00:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2006-01-16 16:50:04 +00:00
|
|
|
/**
|
2008-05-21 16:03:22 +00:00
|
|
|
* handle_callback - process a demote request
|
|
|
|
* @gl: the glock
|
|
|
|
* @state: the state the caller wants us to change to
|
2021-03-30 16:44:29 +00:00
|
|
|
* @delay: zero to demote immediately; otherwise pending demote
|
|
|
|
* @remote: true if this came from a different cluster node
|
2006-01-16 16:50:04 +00:00
|
|
|
*
|
2008-05-21 16:03:22 +00:00
|
|
|
* There are only two requests that we are going to see in actual
|
|
|
|
* practice: LM_ST_SHARED and LM_ST_UNLOCKED
|
2006-01-16 16:50:04 +00:00
|
|
|
*/
|
|
|
|
|
2008-05-21 16:03:22 +00:00
|
|
|
static void handle_callback(struct gfs2_glock *gl, unsigned int state,
|
2013-04-10 09:26:55 +00:00
|
|
|
unsigned long delay, bool remote)
|
2006-01-16 16:50:04 +00:00
|
|
|
{
|
2020-01-17 12:48:49 +00:00
|
|
|
if (delay)
|
|
|
|
set_bit(GLF_PENDING_DEMOTE, &gl->gl_flags);
|
|
|
|
else
|
|
|
|
gfs2_set_demote(gl);
|
2008-05-21 16:03:22 +00:00
|
|
|
if (gl->gl_demote_state == LM_ST_EXCLUSIVE) {
|
|
|
|
gl->gl_demote_state = state;
|
|
|
|
gl->gl_demote_time = jiffies;
|
|
|
|
} else if (gl->gl_demote_state != LM_ST_UNLOCKED &&
|
|
|
|
gl->gl_demote_state != state) {
|
|
|
|
gl->gl_demote_state = LM_ST_UNLOCKED;
|
2006-01-16 16:50:04 +00:00
|
|
|
}
|
2009-07-23 23:52:34 +00:00
|
|
|
if (gl->gl_ops->go_callback)
|
2013-04-10 09:26:55 +00:00
|
|
|
gl->gl_ops->go_callback(gl, remote);
|
2013-04-10 09:32:05 +00:00
|
|
|
trace_gfs2_demote_rq(gl, remote);
|
2006-01-16 16:50:04 +00:00
|
|
|
}
|
|
|
|
|
2008-05-21 16:03:22 +00:00
|
|
|
void gfs2_print_dbg(struct seq_file *seq, const char *fmt, ...)
|
2007-03-16 10:26:37 +00:00
|
|
|
{
|
2010-11-10 00:35:20 +00:00
|
|
|
struct va_format vaf;
|
2007-03-16 10:26:37 +00:00
|
|
|
va_list args;
|
|
|
|
|
|
|
|
va_start(args, fmt);
|
2010-11-10 00:35:20 +00:00
|
|
|
|
2008-05-21 16:03:22 +00:00
|
|
|
if (seq) {
|
2012-06-11 12:26:50 +00:00
|
|
|
seq_vprintf(seq, fmt, args);
|
2008-05-21 16:03:22 +00:00
|
|
|
} else {
|
2010-11-10 00:35:20 +00:00
|
|
|
vaf.fmt = fmt;
|
|
|
|
vaf.va = &args;
|
|
|
|
|
2014-03-06 20:10:45 +00:00
|
|
|
pr_err("%pV", &vaf);
|
2008-05-21 16:03:22 +00:00
|
|
|
}
|
2010-11-10 00:35:20 +00:00
|
|
|
|
2007-03-16 10:26:37 +00:00
|
|
|
va_end(args);
|
|
|
|
}
|
|
|
|
|
2022-04-05 20:07:30 +00:00
|
|
|
static inline bool pid_is_meaningful(const struct gfs2_holder *gh)
|
|
|
|
{
|
|
|
|
if (!(gh->gh_flags & GL_NOPID))
|
|
|
|
return true;
|
|
|
|
if (gh->gh_state == LM_ST_UNLOCKED)
|
|
|
|
return true;
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2006-01-16 16:50:04 +00:00
|
|
|
/**
|
|
|
|
* add_to_queue - Add a holder to the wait queue (but look for recursion)
|
|
|
|
* @gh: the holder structure to add
|
|
|
|
*
|
2008-05-21 16:03:22 +00:00
|
|
|
* Eventually we should move the recursive locking trap to a
|
|
|
|
* debugging option or something like that. This is the fast
|
|
|
|
* path and needs to have the minimum number of distractions.
|
|
|
|
*
|
2006-01-16 16:50:04 +00:00
|
|
|
*/
|
|
|
|
|
2008-05-21 16:03:22 +00:00
|
|
|
static inline void add_to_queue(struct gfs2_holder *gh)
|
2015-10-29 15:58:09 +00:00
|
|
|
__releases(&gl->gl_lockref.lock)
|
|
|
|
__acquires(&gl->gl_lockref.lock)
|
2006-01-16 16:50:04 +00:00
|
|
|
{
|
|
|
|
struct gfs2_glock *gl = gh->gh_gl;
|
2015-03-16 16:52:05 +00:00
|
|
|
struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
|
2008-05-21 16:03:22 +00:00
|
|
|
struct list_head *insert_pt = NULL;
|
|
|
|
struct gfs2_holder *gh2;
|
2012-08-09 17:48:46 +00:00
|
|
|
int try_futile = 0;
|
2006-01-16 16:50:04 +00:00
|
|
|
|
gfs2: Use async glocks for rename
2019-08-30 17:31:02 +00:00
|
|
|
GLOCK_BUG_ON(gl, gh->gh_owner_pid == NULL);
|
2007-01-17 15:33:23 +00:00
|
|
|
if (test_and_set_bit(HIF_WAIT, &gh->gh_iflags))
|
gfs2: Use async glocks for rename
2019-08-30 17:31:02 +00:00
|
|
|
GLOCK_BUG_ON(gl, true);
|
2006-04-20 20:57:23 +00:00
|
|
|
|
2008-05-21 16:03:22 +00:00
|
|
|
if (gh->gh_flags & (LM_FLAG_TRY | LM_FLAG_TRY_1CB)) {
|
2021-08-09 22:43:55 +00:00
|
|
|
if (test_bit(GLF_LOCK, &gl->gl_flags)) {
|
2022-06-10 22:43:00 +00:00
|
|
|
struct gfs2_holder *current_gh;
|
2021-08-09 22:43:55 +00:00
|
|
|
|
2022-06-10 22:43:00 +00:00
|
|
|
current_gh = find_first_strong_holder(gl);
|
|
|
|
try_futile = !may_grant(gl, current_gh, gh);
|
2021-08-09 22:43:55 +00:00
|
|
|
}
|
2008-05-21 16:03:22 +00:00
|
|
|
if (test_bit(GLF_INVALIDATE_IN_PROGRESS, &gl->gl_flags))
|
|
|
|
goto fail;
|
|
|
|
}
|
|
|
|
|
|
|
|
list_for_each_entry(gh2, &gl->gl_holders, gh_list) {
|
2022-04-05 20:07:30 +00:00
|
|
|
if (likely(gh2->gh_owner_pid != gh->gh_owner_pid))
|
|
|
|
continue;
|
|
|
|
if (gh->gh_gl->gl_ops->go_type == LM_TYPE_FLOCK)
|
|
|
|
continue;
|
|
|
|
if (test_bit(HIF_MAY_DEMOTE, &gh2->gh_iflags))
|
|
|
|
continue;
|
|
|
|
if (!pid_is_meaningful(gh2))
|
|
|
|
continue;
|
|
|
|
goto trap_recursive;
|
|
|
|
}
|
|
|
|
list_for_each_entry(gh2, &gl->gl_holders, gh_list) {
|
2012-08-09 17:48:46 +00:00
|
|
|
if (try_futile &&
|
|
|
|
!(gh2->gh_flags & (LM_FLAG_TRY | LM_FLAG_TRY_1CB))) {
|
2008-05-21 16:03:22 +00:00
|
|
|
fail:
|
|
|
|
gh->gh_error = GLR_TRYFAILED;
|
|
|
|
gfs2_holder_wake(gh);
|
|
|
|
return;
|
2007-09-14 04:35:27 +00:00
|
|
|
}
|
2008-05-21 16:03:22 +00:00
|
|
|
if (test_bit(HIF_HOLDER, &gh2->gh_iflags))
|
|
|
|
continue;
|
|
|
|
if (unlikely((gh->gh_flags & LM_FLAG_PRIORITY) && !insert_pt))
|
|
|
|
insert_pt = &gh2->gh_list;
|
|
|
|
}
|
2011-01-31 09:38:12 +00:00
|
|
|
trace_gfs2_glock_queue(gh, 1);
|
GFS2: glock statistics gathering
The stats are divided into two sets: those relating to the
super block and those relating to an individual glock. The
super block stats are done on a per cpu basis in order to
try to reduce the overhead of gathering them. They are also
further divided by glock type.
In the case of both the super block and glock statistics,
the same information is gathered in each case. The super
block statistics are used to provide default values for
most of the glock statistics, so that newly created glocks
should have, as far as possible, a sensible starting point.
The statistics are divided into three pairs of mean and
variance, plus two counters. The mean/variance pairs are
smoothed exponential estimates and the algorithm used is
one which will be very familiar to those used to calculation
of round trip times in network code.
The three pairs of mean/variance measure the following
things:
1. DLM lock time (non-blocking requests)
2. DLM lock time (blocking requests)
3. Inter-request time (again to the DLM)
A non-blocking request is one which will complete right
away, whatever the state of the DLM lock in question. That
currently means any requests when (a) the current state of
the lock is exclusive (b) the requested state is either null
or unlocked or (c) the "try lock" flag is set. A blocking
request covers all the other lock requests.
There are two counters. The first is there primarily to show
how many lock requests have been made, and thus how much data
has gone into the mean/variance calculations. The other counter
is counting queueing of holders at the top layer of the glock
code. Hopefully that number will be a lot larger than the number
of dlm lock requests issued.
So why gather these statistics? There are several reasons
we'd like to get a better idea of these timings:
1. To be able to better set the glock "min hold time"
2. To spot performance issues more easily
3. To improve the algorithm for selecting resource groups for
allocation (to base it on lock wait time, rather than blindly
using a "try lock")
Due to the smoothing action of the updates, a step change in
some input quantity being sampled will only fully be taken
into account after 8 samples (or 4 for the variance) and this
needs to be carefully considered when interpreting the
results.
Knowing both the time it takes a lock request to complete and
the average time between lock requests for a glock means we
can compute the total percentage of the time for which the
node is able to use a glock vs. time that the rest of the
cluster has its share. That will be very useful when setting
the lock min hold time.
The other point to remember is that all times are in
nanoseconds. Great care has been taken to ensure that we
measure exactly the quantities that we want, as accurately
as possible. There are always inaccuracies in any
measuring system, but I hope this is as accurate as we
can reasonably make it.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
2012-01-20 10:38:36 +00:00
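The smoothed exponential mean/variance update described above (which settles a step change after roughly 8 samples for the mean and 4 for the variance, in the style of TCP round-trip-time estimation) can be sketched as follows. The function name and plain `long` fields are assumptions for illustration, not the kernel's actual statistics structures.

```c
/* One sample update of a smoothed exponential mean / deviation pair.
 * Gain 1/8 for the mean and 1/4 for the mean absolute deviation, which
 * is why a step change in the input takes ~8 (resp. ~4) samples to be
 * fully taken into account. */
void stats_update(long *mean, long *absdev, long sample)
{
	long delta = sample - *mean;

	*mean += delta >> 3;               /* mean += delta / 8 */
	if (delta < 0)
		delta = -delta;
	*absdev += (delta - *absdev) >> 2; /* dev += (|delta| - dev) / 4 */
}
```

Starting from zero, a constant stream of identical samples converges geometrically toward the sample value rather than jumping there immediately, which is the smoothing behavior the commit message describes.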
|
|
|
gfs2_glstats_inc(gl, GFS2_LKS_QCOUNT);
|
|
|
|
gfs2_sbstats_inc(gl, GFS2_LKS_QCOUNT);
|
2008-05-21 16:03:22 +00:00
|
|
|
if (likely(insert_pt == NULL)) {
|
|
|
|
list_add_tail(&gh->gh_list, &gl->gl_holders);
|
|
|
|
if (unlikely(gh->gh_flags & LM_FLAG_PRIORITY))
|
|
|
|
goto do_cancel;
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
list_add_tail(&gh->gh_list, insert_pt);
|
|
|
|
do_cancel:
|
2020-02-03 18:22:45 +00:00
|
|
|
gh = list_first_entry(&gl->gl_holders, struct gfs2_holder, gh_list);
|
2008-05-21 16:03:22 +00:00
|
|
|
if (!(gh->gh_flags & LM_FLAG_PRIORITY)) {
|
2015-10-29 15:58:09 +00:00
|
|
|
spin_unlock(&gl->gl_lockref.lock);
|
2008-05-23 13:46:04 +00:00
|
|
|
if (sdp->sd_lockstruct.ls_ops->lm_cancel)
|
2009-01-12 10:43:39 +00:00
|
|
|
sdp->sd_lockstruct.ls_ops->lm_cancel(gl);
|
2015-10-29 15:58:09 +00:00
|
|
|
spin_lock(&gl->gl_lockref.lock);
|
2006-01-16 16:50:04 +00:00
|
|
|
}
|
2008-05-21 16:03:22 +00:00
|
|
|
return;
|
2006-01-16 16:50:04 +00:00
|
|
|
|
2008-05-21 16:03:22 +00:00
|
|
|
trap_recursive:
|
2018-10-03 13:47:36 +00:00
|
|
|
fs_err(sdp, "original: %pSR\n", (void *)gh2->gh_ip);
|
|
|
|
fs_err(sdp, "pid: %d\n", pid_nr(gh2->gh_owner_pid));
|
|
|
|
fs_err(sdp, "lock type: %d req lock state : %d\n",
|
2008-05-21 16:03:22 +00:00
|
|
|
gh2->gh_gl->gl_name.ln_type, gh2->gh_state);
|
2018-10-03 13:47:36 +00:00
|
|
|
fs_err(sdp, "new: %pSR\n", (void *)gh->gh_ip);
|
|
|
|
fs_err(sdp, "pid: %d\n", pid_nr(gh->gh_owner_pid));
|
|
|
|
fs_err(sdp, "lock type: %d req lock state : %d\n",
|
2008-05-21 16:03:22 +00:00
|
|
|
gh->gh_gl->gl_name.ln_type, gh->gh_state);
|
2019-05-09 14:21:48 +00:00
|
|
|
gfs2_dump_glock(NULL, gl, true);
|
2008-05-21 16:03:22 +00:00
|
|
|
BUG();
|
2006-01-16 16:50:04 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* gfs2_glock_nq - enqueue a struct gfs2_holder onto a glock (acquire a glock)
|
|
|
|
* @gh: the holder structure
|
|
|
|
*
|
|
|
|
* if (gh->gh_flags & GL_ASYNC), this never returns an error
|
|
|
|
*
|
|
|
|
* Returns: 0, GLR_TRYFAILED, or errno on failure
|
|
|
|
*/
|
|
|
|
|
|
|
|
int gfs2_glock_nq(struct gfs2_holder *gh)
|
|
|
|
{
|
|
|
|
struct gfs2_glock *gl = gh->gh_gl;
|
|
|
|
int error = 0;
|
|
|
|
|
gfs2: Force withdraw to replay journals and wait for it to finish
When a node withdraws from a file system, it often leaves its journal
in an incomplete state. This is especially true when the withdraw is
caused by io errors writing to the journal. Before this patch, a
withdraw would try to write a "shutdown" record to the journal, tell
dlm it's done with the file system, and none of the other nodes
know about the problem. Later, when the problem is fixed and the
withdrawn node is rebooted, it would then discover that its own
journal was incomplete, and replay it. However, replaying it at this
point is almost guaranteed to introduce corruption because the other
nodes are likely to have used affected resource groups that appeared
in the journal since the time of the withdraw. Replaying the journal
later will overwrite any changes made, and not through any fault of
dlm, which was instructed during the withdraw to release those
resources.
This patch makes file system withdraws seen by the entire cluster.
Withdrawing nodes dequeue their journal glock to allow recovery.
The remaining nodes check all the journals to see if they are
clean or in need of replay. They try to replay dirty journals, but
only the journals of withdrawn nodes will be "not busy" and
therefore available for replay.
Until the journal replay is complete, no i/o related glocks may be
given out, to ensure that the replay does not cause the
aforementioned corruption: We cannot allow any journal replay to
overwrite blocks associated with a glock once it is held.
The "live" glock which is now used to signal when a withdraw
occurs. When a withdraw occurs, the node signals its withdraw by
dequeueing the "live" glock and trying to enqueue it in EX mode,
thus forcing the other nodes to all see a demote request, by way
of a "1CB" (one callback) try lock. The "live" glock is not
granted in EX; the callback is only used to indicate that a
withdraw has occurred.
Note that all nodes in the cluster must wait for the recovering
node to finish replaying the withdrawing node's journal before
continuing. To this end, it checks that the journals are clean
multiple times in a retry loop.
Also note that the withdraw function may be called from a wide
variety of situations, and therefore, we need to take extra
precautions to make sure pointers are valid before using them in
many circumstances.
We also need to take care when glocks decide to withdraw, since
the withdraw code now uses glocks.
Also, before this patch, if a process encountered an error and
decided to withdraw, if another process was already withdrawing,
the second withdraw would be silently ignored, which set it free
to unlock its glocks. That's correct behavior if the original
withdrawer encounters further errors down the road. But if
secondary waiters don't wait for the journal replay, unlocking
glocks will allow other nodes to use them, despite the fact that
the journal containing those blocks is being replayed. The
replay needs to finish before our glocks are released to other
nodes. IOW, secondary withdraws need to wait for the first
withdraw to finish.
For example, if an rgrp glock is unlocked by a process that didn't
wait for the first withdraw, a journal replay could introduce file
system corruption by replaying a rgrp block that has already been
granted to a different cluster node.
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
2020-01-28 19:23:45 +00:00
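The "checks that the journals are clean multiple times in a retry loop" step can be sketched as below. This is purely illustrative: the `journal_clean` callback, `njournals`, and `max_retries` names are invented for the example, and the real recovery path also attempts to replay dirty journals between checks rather than merely polling.

```c
/* Keep checking all journals until every one reports clean, giving up
 * after max_retries full passes.  Returns 0 on success, -1 on timeout. */
int wait_journals_clean(int (*journal_clean)(int jid), int njournals,
			int max_retries)
{
	int r, jid;

	for (r = 0; r < max_retries; r++) {
		int all_clean = 1;

		for (jid = 0; jid < njournals; jid++) {
			if (!journal_clean(jid)) {
				all_clean = 0;
				break;	/* retry the whole set */
			}
		}
		if (all_clean)
			return 0;
	}
	return -1;
}

/* demo stubs for exercising the loop */
static int demo_always_clean(int jid) { (void)jid; return 1; }
static int demo_never_clean(int jid)  { (void)jid; return 0; }
```

The loop restarts the scan from journal 0 whenever any journal is still dirty, matching the "all nodes must wait for recovery to finish before continuing" requirement.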
|
|
|
if (glock_blocked_by_withdraw(gl) && !(gh->gh_flags & LM_FLAG_NOEXP))
|
2006-01-16 16:50:04 +00:00
|
|
|
return -EIO;
|
|
|
|
|
2011-04-14 15:50:31 +00:00
|
|
|
if (test_bit(GLF_LRU, &gl->gl_flags))
|
|
|
|
gfs2_glock_remove_from_lru(gl);
|
|
|
|
|
2022-01-14 08:05:13 +00:00
|
|
|
gh->gh_error = 0;
|
2015-10-29 15:58:09 +00:00
|
|
|
spin_lock(&gl->gl_lockref.lock);
|
2006-01-16 16:50:04 +00:00
|
|
|
add_to_queue(gh);
|
2014-03-12 14:32:20 +00:00
|
|
|
if (unlikely((LM_FLAG_NOEXP & gh->gh_flags) &&
|
|
|
|
test_and_clear_bit(GLF_FROZEN, &gl->gl_flags))) {
|
2010-08-02 09:15:17 +00:00
|
|
|
set_bit(GLF_REPLY_PENDING, &gl->gl_flags);
|
2014-03-12 14:32:20 +00:00
|
|
|
gl->gl_lockref.count++;
|
2017-06-30 13:10:01 +00:00
|
|
|
__gfs2_glock_queue_work(gl, 0);
|
2014-03-12 14:32:20 +00:00
|
|
|
}
|
2008-05-21 16:03:22 +00:00
|
|
|
run_queue(gl, 1);
|
2015-10-29 15:58:09 +00:00
|
|
|
spin_unlock(&gl->gl_lockref.lock);
|
2006-01-16 16:50:04 +00:00
|
|
|
|
2008-05-21 16:03:22 +00:00
|
|
|
if (!(gh->gh_flags & GL_ASYNC))
|
|
|
|
error = gfs2_glock_wait(gh);
|
2006-01-16 16:50:04 +00:00
|
|
|
|
|
|
|
return error;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* gfs2_glock_poll - poll to see if an async request has been completed
|
|
|
|
* @gh: the holder
|
|
|
|
*
|
|
|
|
* Returns: 1 if the request is ready to be gfs2_glock_wait()ed on
|
|
|
|
*/
|
|
|
|
|
|
|
|
int gfs2_glock_poll(struct gfs2_holder *gh)
|
|
|
|
{
|
2008-05-21 16:03:22 +00:00
|
|
|
return test_bit(HIF_WAIT, &gh->gh_iflags) ? 0 : 1;
|
2006-01-16 16:50:04 +00:00
|
|
|
}
|
|
|
|
|
2021-08-19 18:51:23 +00:00
|
|
|
static inline bool needs_demote(struct gfs2_glock *gl)
|
|
|
|
{
|
|
|
|
return (test_bit(GLF_DEMOTE, &gl->gl_flags) ||
|
|
|
|
test_bit(GLF_PENDING_DEMOTE, &gl->gl_flags));
|
|
|
|
}
|
2006-01-16 16:50:04 +00:00
|
|
|
|
2021-08-19 18:51:23 +00:00
|
|
|
static void __gfs2_glock_dq(struct gfs2_holder *gh)
|
2006-01-16 16:50:04 +00:00
|
|
|
{
|
|
|
|
struct gfs2_glock *gl = gh->gh_gl;
|
gfs2: Force withdraw to replay journals and wait for it to finish
2020-01-28 19:23:45 +00:00
|
|
|
struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
|
[GFS2] delay glock demote for a minimum hold time
When a lot of IO, with some distributed mmap IO, is run on a GFS2 filesystem in
a cluster, it will deadlock. The reason is that do_no_page() will repeatedly
call gfs2_sharewrite_nopage(), because each node keeps giving up the glock
too early, and is forced to call unmap_mapping_range(). This bumps the
mapping->truncate_count sequence count, forcing do_no_page() to retry. This
patch institutes a minimum glock hold time of a tenth of a second. This ensures
that even in heavy contention cases, the node has enough time to get some
useful work done before it gives up the glock.
A second issue is that when gfs2_glock_dq() is called from within a page fault
to demote a lock, and the associated page needs to be written out, it will
try to acquire a lock on it, but it has already been locked at a higher level.
This patch makes gfs2_glock_dq() use the work queue as well, to avoid this
issue. This is the same patch as Steve Whitehouse originally proposed to fix
this issue, except that gfs2_glock_dq() now grabs a reference to the glock
before it queues up the work on it.
Signed-off-by: Benjamin E. Marzinski <bmarzins@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
2007-08-23 18:19:05 +00:00
|
|
|
unsigned delay = 0;
|
2008-05-21 16:03:22 +00:00
|
|
|
int fast_path = 0;
|
2006-01-16 16:50:04 +00:00
|
|
|
|
gfs2: Force withdraw to replay journals and wait for it to finish
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
2020-01-28 19:23:45 +00:00
|
|
|
/*
|
2021-08-19 18:51:23 +00:00
|
|
|
* This while loop is similar to function demote_incompat_holders:
|
|
|
|
* If the glock is due to be demoted (which may be from another node
|
|
|
|
* or even if this holder is GL_NOCACHE), the weak holders are
|
|
|
|
* demoted as well, allowing the glock to be demoted.
|
gfs2: Force withdraw to replay journals and wait for it to finish
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
2020-01-28 19:23:45 +00:00
|
|
|
*/
|
2021-08-19 18:51:23 +00:00
|
|
|
while (gh) {
|
|
|
|
/*
|
|
|
|
* If we're in the process of file system withdraw, we cannot
|
|
|
|
* just dequeue any glocks until our journal is recovered, lest
|
|
|
|
* we introduce file system corruption. We need two exceptions
|
|
|
|
* to this rule: We need to allow unlocking of nondisk glocks
|
|
|
|
* and the glock for our own journal that needs recovery.
|
|
|
|
*/
|
|
|
|
if (test_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags) &&
|
|
|
|
glock_blocked_by_withdraw(gl) &&
|
|
|
|
gh->gh_gl != sdp->sd_jinode_gl) {
|
|
|
|
sdp->sd_glock_dqs_held++;
|
|
|
|
spin_unlock(&gl->gl_lockref.lock);
|
|
|
|
might_sleep();
|
|
|
|
wait_on_bit(&sdp->sd_flags, SDF_WITHDRAW_RECOVERY,
|
|
|
|
TASK_UNINTERRUPTIBLE);
|
|
|
|
spin_lock(&gl->gl_lockref.lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This holder should not be cached, so mark it for demote.
|
|
|
|
* Note: this should be done before the check for needs_demote
|
|
|
|
* below.
|
|
|
|
*/
|
|
|
|
if (gh->gh_flags & GL_NOCACHE)
|
|
|
|
handle_callback(gl, LM_ST_UNLOCKED, 0, false);
|
|
|
|
|
|
|
|
list_del_init(&gh->gh_list);
|
|
|
|
clear_bit(HIF_HOLDER, &gh->gh_iflags);
|
|
|
|
trace_gfs2_glock_queue(gh, 0);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If there hasn't been a demote request we are done.
|
|
|
|
* (Let the remaining holders, if any, keep holding it.)
|
|
|
|
*/
|
|
|
|
if (!needs_demote(gl)) {
|
|
|
|
if (list_empty(&gl->gl_holders))
|
|
|
|
fast_path = 1;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
/*
|
|
|
|
* If we have another strong holder (we cannot auto-demote)
|
|
|
|
* we are done. It keeps holding it until it is done.
|
|
|
|
*/
|
|
|
|
if (find_first_strong_holder(gl))
|
|
|
|
break;
|
2006-01-16 16:50:04 +00:00
|
|
|
|
2021-08-19 18:51:23 +00:00
|
|
|
/*
|
|
|
|
* If we have a weak holder at the head of the list, it
|
|
|
|
* (and all others like it) must be auto-demoted. If there
|
|
|
|
* are no more weak holders, we exit the while loop.
|
|
|
|
*/
|
|
|
|
gh = find_first_holder(gl);
|
|
|
|
}
|
2021-08-02 14:08:51 +00:00
|
|
|
|
2019-03-27 17:09:17 +00:00
|
|
|
if (!test_bit(GLF_LFLUSH, &gl->gl_flags) && demote_ok(gl))
|
2012-08-09 17:48:43 +00:00
|
|
|
gfs2_glock_add_to_lru(gl);
|
|
|
|
|
2017-06-30 13:10:01 +00:00
|
|
|
if (unlikely(!fast_path)) {
|
|
|
|
gl->gl_lockref.count++;
|
|
|
|
if (test_bit(GLF_PENDING_DEMOTE, &gl->gl_flags) &&
|
|
|
|
!test_bit(GLF_DEMOTE, &gl->gl_flags) &&
|
|
|
|
gl->gl_name.ln_type == LM_TYPE_INODE)
|
|
|
|
delay = gl->gl_hold_time;
|
|
|
|
__gfs2_glock_queue_work(gl, delay);
|
|
|
|
}
|
2021-08-19 18:51:23 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* gfs2_glock_dq - dequeue a struct gfs2_holder from a glock (release a glock)
|
|
|
|
* @gh: the glock holder
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
void gfs2_glock_dq(struct gfs2_holder *gh)
|
|
|
|
{
|
|
|
|
struct gfs2_glock *gl = gh->gh_gl;
|
|
|
|
|
|
|
|
spin_lock(&gl->gl_lockref.lock);
|
2022-01-24 17:23:55 +00:00
|
|
|
if (list_is_first(&gh->gh_list, &gl->gl_holders) &&
|
|
|
|
!test_bit(HIF_HOLDER, &gh->gh_iflags)) {
|
|
|
|
spin_unlock(&gl->gl_lockref.lock);
|
|
|
|
gl->gl_name.ln_sbd->sd_lockstruct.ls_ops->lm_cancel(gl);
|
|
|
|
wait_on_bit(&gh->gh_iflags, HIF_WAIT, TASK_UNINTERRUPTIBLE);
|
|
|
|
spin_lock(&gl->gl_lockref.lock);
|
|
|
|
}
|
|
|
|
|
2021-08-19 18:51:23 +00:00
|
|
|
__gfs2_glock_dq(gh);
|
2015-10-29 15:58:09 +00:00
|
|
|
spin_unlock(&gl->gl_lockref.lock);
|
2006-01-16 16:50:04 +00:00
|
|
|
}
|
|
|
|
|
2007-06-11 07:22:32 +00:00
|
|
|
void gfs2_glock_dq_wait(struct gfs2_holder *gh)
|
|
|
|
{
|
|
|
|
struct gfs2_glock *gl = gh->gh_gl;
|
|
|
|
gfs2_glock_dq(gh);
|
2012-08-09 17:48:45 +00:00
|
|
|
might_sleep();
|
sched: Remove proliferation of wait_on_bit() action functions
The current "wait_on_bit" interface requires an 'action'
function to be provided which does the actual waiting.
There are over 20 such functions, many of them identical.
Most cases can be satisfied by one of just two functions, one
which uses io_schedule() and one which just uses schedule().
So:
Rename wait_on_bit and wait_on_bit_lock to
wait_on_bit_action and wait_on_bit_lock_action
to make it explicit that they need an action function.
Introduce new wait_on_bit{,_lock} and wait_on_bit{,_lock}_io
which are *not* given an action function but implicitly use
a standard one.
The decision to error-out if a signal is pending is now made
based on the 'mode' argument rather than being encoded in the action
function.
All instances of the old wait_on_bit and wait_on_bit_lock which
can use the new version have been changed accordingly and their
action functions have been discarded.
wait_on_bit{_lock} does not return any specific error code in the
event of a signal so the caller must check for non-zero and
interpolate their own error code as appropriate.
The wait_on_bit() call in __fscache_wait_on_invalidate() was
ambiguous as it specified TASK_UNINTERRUPTIBLE but used
fscache_wait_bit_interruptible as an action function.
David Howells confirms this should be uniformly
"uninterruptible"
The main remaining user of wait_on_bit{,_lock}_action is NFS
which needs to use a freezer-aware schedule() call.
A comment in fs/gfs2/glock.c notes that having multiple 'action'
functions is useful as they display differently in the 'wchan'
field of 'ps'. (and /proc/$PID/wchan).
As the new bit_wait{,_io} functions are tagged "__sched", they
will not show up at all, but something higher in the stack. So
the distinction will still be visible, only with different
function names (gfs2_glock_wait versus gfs2_glock_dq_wait in the
gfs2/glock.c case).
Since the first version of this patch (against 3.15), two new action
functions appeared, one in NFS and one in CIFS. CIFS also now
uses an action function that makes the same freezer-aware
schedule call as NFS.
Signed-off-by: NeilBrown <neilb@suse.de>
Acked-by: David Howells <dhowells@redhat.com> (fscache, keys)
Acked-by: Steven Whitehouse <swhiteho@redhat.com> (gfs2)
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Steve French <sfrench@samba.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20140707051603.28027.72349.stgit@notabene.brown
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-07 05:16:04 +00:00
|
|
|
wait_on_bit(&gl->gl_flags, GLF_DEMOTE, TASK_UNINTERRUPTIBLE);
|
2007-06-11 07:22:32 +00:00
|
|
|
}
|
|
|
|
|
2006-01-16 16:50:04 +00:00
|
|
|
/**
|
|
|
|
* gfs2_glock_dq_uninit - dequeue a holder from a glock and initialize it
|
|
|
|
* @gh: the holder structure
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
|
|
|
|
void gfs2_glock_dq_uninit(struct gfs2_holder *gh)
|
|
|
|
{
|
|
|
|
gfs2_glock_dq(gh);
|
|
|
|
gfs2_holder_uninit(gh);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* gfs2_glock_nq_num - acquire a glock based on lock number
|
|
|
|
* @sdp: the filesystem
|
|
|
|
* @number: the lock number
|
|
|
|
* @glops: the glock operations for the type of glock
|
|
|
|
* @state: the state to acquire the glock in
|
2011-03-31 01:57:33 +00:00
|
|
|
* @flags: modifier flags for the acquisition
|
2006-01-16 16:50:04 +00:00
|
|
|
* @gh: the struct gfs2_holder
|
|
|
|
*
|
|
|
|
* Returns: errno
|
|
|
|
*/
|
|
|
|
|
2006-09-04 16:49:07 +00:00
|
|
|
int gfs2_glock_nq_num(struct gfs2_sbd *sdp, u64 number,
|
2006-08-30 13:30:00 +00:00
|
|
|
const struct gfs2_glock_operations *glops,
|
2015-07-24 14:45:43 +00:00
|
|
|
unsigned int state, u16 flags, struct gfs2_holder *gh)
|
2006-01-16 16:50:04 +00:00
|
|
|
{
|
|
|
|
struct gfs2_glock *gl;
|
|
|
|
int error;
|
|
|
|
|
|
|
|
error = gfs2_glock_get(sdp, number, glops, CREATE, &gl);
|
|
|
|
if (!error) {
|
|
|
|
error = gfs2_glock_nq_init(gl, state, flags, gh);
|
|
|
|
gfs2_glock_put(gl);
|
|
|
|
}
|
|
|
|
|
|
|
|
return error;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* glock_compare - Compare two struct gfs2_glock structures for sorting
|
|
|
|
* @arg_a: the first structure
|
|
|
|
* @arg_b: the second structure
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
|
|
|
|
static int glock_compare(const void *arg_a, const void *arg_b)
|
|
|
|
{
|
2006-09-09 21:07:05 +00:00
|
|
|
const struct gfs2_holder *gh_a = *(const struct gfs2_holder **)arg_a;
|
|
|
|
const struct gfs2_holder *gh_b = *(const struct gfs2_holder **)arg_b;
|
|
|
|
const struct lm_lockname *a = &gh_a->gh_gl->gl_name;
|
|
|
|
const struct lm_lockname *b = &gh_b->gh_gl->gl_name;
|
2006-01-16 16:50:04 +00:00
|
|
|
|
|
|
|
if (a->ln_number > b->ln_number)
|
2006-09-09 21:07:05 +00:00
|
|
|
return 1;
|
|
|
|
if (a->ln_number < b->ln_number)
|
|
|
|
return -1;
|
2007-01-22 17:10:39 +00:00
|
|
|
BUG_ON(gh_a->gh_gl->gl_ops->go_type == gh_b->gh_gl->gl_ops->go_type);
|
2006-09-09 21:07:05 +00:00
|
|
|
return 0;
|
2006-01-16 16:50:04 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
2022-06-14 19:55:45 +00:00
|
|
|
 * nq_m_sync - synchronously acquire more than one glock in deadlock-free order
|
2006-01-16 16:50:04 +00:00
|
|
|
* @num_gh: the number of structures
|
|
|
|
* @ghs: an array of struct gfs2_holder structures
|
2021-03-30 16:44:29 +00:00
|
|
|
* @p: placeholder for the holder structure to pass back
|
2006-01-16 16:50:04 +00:00
|
|
|
*
|
|
|
|
* Returns: 0 on success (all glocks acquired),
|
|
|
|
* errno on failure (no glocks acquired)
|
|
|
|
*/
|
|
|
|
|
|
|
|
static int nq_m_sync(unsigned int num_gh, struct gfs2_holder *ghs,
|
|
|
|
struct gfs2_holder **p)
|
|
|
|
{
|
|
|
|
unsigned int x;
|
|
|
|
int error = 0;
|
|
|
|
|
|
|
|
for (x = 0; x < num_gh; x++)
|
|
|
|
p[x] = &ghs[x];
|
|
|
|
|
|
|
|
sort(p, num_gh, sizeof(struct gfs2_holder *), glock_compare, NULL);
|
|
|
|
|
|
|
|
for (x = 0; x < num_gh; x++) {
|
|
|
|
error = gfs2_glock_nq(p[x]);
|
|
|
|
if (error) {
|
|
|
|
while (x--)
|
|
|
|
gfs2_glock_dq(p[x]);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return error;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* gfs2_glock_nq_m - acquire multiple glocks
|
|
|
|
* @num_gh: the number of structures
|
|
|
|
* @ghs: an array of struct gfs2_holder structures
|
|
|
|
*
|
|
|
|
* Returns: 0 on success (all glocks acquired),
|
|
|
|
* errno on failure (no glocks acquired)
|
|
|
|
*/
|
|
|
|
|
|
|
|
int gfs2_glock_nq_m(unsigned int num_gh, struct gfs2_holder *ghs)
|
|
|
|
{
|
2007-06-19 14:38:17 +00:00
|
|
|
struct gfs2_holder *tmp[4];
|
|
|
|
struct gfs2_holder **pph = tmp;
|
2006-01-16 16:50:04 +00:00
|
|
|
int error = 0;
|
|
|
|
|
2007-06-19 14:38:17 +00:00
|
|
|
switch(num_gh) {
|
|
|
|
case 0:
|
2006-01-16 16:50:04 +00:00
|
|
|
return 0;
|
2007-06-19 14:38:17 +00:00
|
|
|
case 1:
|
2006-01-16 16:50:04 +00:00
|
|
|
return gfs2_glock_nq(ghs);
|
2007-06-19 14:38:17 +00:00
|
|
|
default:
|
|
|
|
if (num_gh <= 4)
|
2006-01-16 16:50:04 +00:00
|
|
|
break;
|
treewide: kmalloc() -> kmalloc_array()
The kmalloc() function has a 2-factor argument form, kmalloc_array(). This
patch replaces cases of:
kmalloc(a * b, gfp)
with:
kmalloc_array(a * b, gfp)
as well as handling cases of:
kmalloc(a * b * c, gfp)
with:
kmalloc(array3_size(a, b, c), gfp)
as it's slightly less ugly than:
kmalloc_array(array_size(a, b), c, gfp)
This does, however, attempt to ignore constant size factors like:
kmalloc(4 * 1024, gfp)
though any constants defined via macros get caught up in the conversion.
Any factors with a sizeof() of "unsigned char", "char", and "u8" were
dropped, since they're redundant.
The tools/ directory was manually excluded, since it has its own
implementation of kmalloc().
The Coccinelle script used for this was:
// Fix redundant parens around sizeof().
@@
type TYPE;
expression THING, E;
@@
(
kmalloc(
- (sizeof(TYPE)) * E
+ sizeof(TYPE) * E
, ...)
|
kmalloc(
- (sizeof(THING)) * E
+ sizeof(THING) * E
, ...)
)
// Drop single-byte sizes and redundant parens.
@@
expression COUNT;
typedef u8;
typedef __u8;
@@
(
kmalloc(
- sizeof(u8) * (COUNT)
+ COUNT
, ...)
|
kmalloc(
- sizeof(__u8) * (COUNT)
+ COUNT
, ...)
|
kmalloc(
- sizeof(char) * (COUNT)
+ COUNT
, ...)
|
kmalloc(
- sizeof(unsigned char) * (COUNT)
+ COUNT
, ...)
|
kmalloc(
- sizeof(u8) * COUNT
+ COUNT
, ...)
|
kmalloc(
- sizeof(__u8) * COUNT
+ COUNT
, ...)
|
kmalloc(
- sizeof(char) * COUNT
+ COUNT
, ...)
|
kmalloc(
- sizeof(unsigned char) * COUNT
+ COUNT
, ...)
)
// 2-factor product with sizeof(type/expression) and identifier or constant.
@@
type TYPE;
expression THING;
identifier COUNT_ID;
constant COUNT_CONST;
@@
(
- kmalloc
+ kmalloc_array
(
- sizeof(TYPE) * (COUNT_ID)
+ COUNT_ID, sizeof(TYPE)
, ...)
|
- kmalloc
+ kmalloc_array
(
- sizeof(TYPE) * COUNT_ID
+ COUNT_ID, sizeof(TYPE)
, ...)
|
- kmalloc
+ kmalloc_array
(
- sizeof(TYPE) * (COUNT_CONST)
+ COUNT_CONST, sizeof(TYPE)
, ...)
|
- kmalloc
+ kmalloc_array
(
- sizeof(TYPE) * COUNT_CONST
+ COUNT_CONST, sizeof(TYPE)
, ...)
|
- kmalloc
+ kmalloc_array
(
- sizeof(THING) * (COUNT_ID)
+ COUNT_ID, sizeof(THING)
, ...)
|
- kmalloc
+ kmalloc_array
(
- sizeof(THING) * COUNT_ID
+ COUNT_ID, sizeof(THING)
, ...)
|
- kmalloc
+ kmalloc_array
(
- sizeof(THING) * (COUNT_CONST)
+ COUNT_CONST, sizeof(THING)
, ...)
|
- kmalloc
+ kmalloc_array
(
- sizeof(THING) * COUNT_CONST
+ COUNT_CONST, sizeof(THING)
, ...)
)
// 2-factor product, only identifiers.
@@
identifier SIZE, COUNT;
@@
- kmalloc
+ kmalloc_array
(
- SIZE * COUNT
+ COUNT, SIZE
, ...)
// 3-factor product with 1 sizeof(type) or sizeof(expression), with
// redundant parens removed.
@@
expression THING;
identifier STRIDE, COUNT;
type TYPE;
@@
(
kmalloc(
- sizeof(TYPE) * (COUNT) * (STRIDE)
+ array3_size(COUNT, STRIDE, sizeof(TYPE))
, ...)
|
kmalloc(
- sizeof(TYPE) * (COUNT) * STRIDE
+ array3_size(COUNT, STRIDE, sizeof(TYPE))
, ...)
|
kmalloc(
- sizeof(TYPE) * COUNT * (STRIDE)
+ array3_size(COUNT, STRIDE, sizeof(TYPE))
, ...)
|
kmalloc(
- sizeof(TYPE) * COUNT * STRIDE
+ array3_size(COUNT, STRIDE, sizeof(TYPE))
, ...)
|
kmalloc(
- sizeof(THING) * (COUNT) * (STRIDE)
+ array3_size(COUNT, STRIDE, sizeof(THING))
, ...)
|
kmalloc(
- sizeof(THING) * (COUNT) * STRIDE
+ array3_size(COUNT, STRIDE, sizeof(THING))
, ...)
|
kmalloc(
- sizeof(THING) * COUNT * (STRIDE)
+ array3_size(COUNT, STRIDE, sizeof(THING))
, ...)
|
kmalloc(
- sizeof(THING) * COUNT * STRIDE
+ array3_size(COUNT, STRIDE, sizeof(THING))
, ...)
)
// 3-factor product with 2 sizeof(variable), with redundant parens removed.
@@
expression THING1, THING2;
identifier COUNT;
type TYPE1, TYPE2;
@@
(
kmalloc(
- sizeof(TYPE1) * sizeof(TYPE2) * COUNT
+ array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
, ...)
|
kmalloc(
- sizeof(TYPE1) * sizeof(THING2) * (COUNT)
+ array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
, ...)
|
kmalloc(
- sizeof(THING1) * sizeof(THING2) * COUNT
+ array3_size(COUNT, sizeof(THING1), sizeof(THING2))
, ...)
|
kmalloc(
- sizeof(THING1) * sizeof(THING2) * (COUNT)
+ array3_size(COUNT, sizeof(THING1), sizeof(THING2))
, ...)
|
kmalloc(
- sizeof(TYPE1) * sizeof(THING2) * COUNT
+ array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
, ...)
|
kmalloc(
- sizeof(TYPE1) * sizeof(THING2) * (COUNT)
+ array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
, ...)
)
// 3-factor product, only identifiers, with redundant parens removed.
@@
identifier STRIDE, SIZE, COUNT;
@@
(
kmalloc(
- (COUNT) * STRIDE * SIZE
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kmalloc(
- COUNT * (STRIDE) * SIZE
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kmalloc(
- COUNT * STRIDE * (SIZE)
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kmalloc(
- (COUNT) * (STRIDE) * SIZE
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kmalloc(
- COUNT * (STRIDE) * (SIZE)
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kmalloc(
- (COUNT) * STRIDE * (SIZE)
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kmalloc(
- (COUNT) * (STRIDE) * (SIZE)
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kmalloc(
- COUNT * STRIDE * SIZE
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
)
// Any remaining multi-factor products, first at least 3-factor products,
// when they're not all constants...
@@
expression E1, E2, E3;
constant C1, C2, C3;
@@
(
kmalloc(C1 * C2 * C3, ...)
|
kmalloc(
- (E1) * E2 * E3
+ array3_size(E1, E2, E3)
, ...)
|
kmalloc(
- (E1) * (E2) * E3
+ array3_size(E1, E2, E3)
, ...)
|
kmalloc(
- (E1) * (E2) * (E3)
+ array3_size(E1, E2, E3)
, ...)
|
kmalloc(
- E1 * E2 * E3
+ array3_size(E1, E2, E3)
, ...)
)
// And then all remaining 2 factors products when they're not all constants,
// keeping sizeof() as the second factor argument.
@@
expression THING, E1, E2;
type TYPE;
constant C1, C2, C3;
@@
(
kmalloc(sizeof(THING) * C2, ...)
|
kmalloc(sizeof(TYPE) * C2, ...)
|
kmalloc(C1 * C2 * C3, ...)
|
kmalloc(C1 * C2, ...)
|
- kmalloc
+ kmalloc_array
(
- sizeof(TYPE) * (E2)
+ E2, sizeof(TYPE)
, ...)
|
- kmalloc
+ kmalloc_array
(
- sizeof(TYPE) * E2
+ E2, sizeof(TYPE)
, ...)
|
- kmalloc
+ kmalloc_array
(
- sizeof(THING) * (E2)
+ E2, sizeof(THING)
, ...)
|
- kmalloc
+ kmalloc_array
(
- sizeof(THING) * E2
+ E2, sizeof(THING)
, ...)
|
- kmalloc
+ kmalloc_array
(
- (E1) * E2
+ E1, E2
, ...)
|
- kmalloc
+ kmalloc_array
(
- (E1) * (E2)
+ E1, E2
, ...)
|
- kmalloc
+ kmalloc_array
(
- E1 * E2
+ E1, E2
, ...)
)
Signed-off-by: Kees Cook <keescook@chromium.org>
2018-06-12 20:55:00 +00:00
|
|
|
pph = kmalloc_array(num_gh, sizeof(struct gfs2_holder *),
|
|
|
|
GFP_NOFS);
|
2007-06-19 14:38:17 +00:00
|
|
|
if (!pph)
|
|
|
|
return -ENOMEM;
|
2006-01-16 16:50:04 +00:00
|
|
|
}
|
|
|
|
|
2007-06-19 14:38:17 +00:00
|
|
|
error = nq_m_sync(num_gh, ghs, pph);
|
2006-01-16 16:50:04 +00:00
|
|
|
|
2007-06-19 14:38:17 +00:00
|
|
|
if (pph != tmp)
|
|
|
|
kfree(pph);
|
2006-01-16 16:50:04 +00:00
|
|
|
|
|
|
|
return error;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* gfs2_glock_dq_m - release multiple glocks
|
|
|
|
* @num_gh: the number of structures
|
|
|
|
* @ghs: an array of struct gfs2_holder structures
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
|
|
|
|
void gfs2_glock_dq_m(unsigned int num_gh, struct gfs2_holder *ghs)
|
|
|
|
{
|
2011-03-10 16:41:57 +00:00
|
|
|
while (num_gh--)
|
|
|
|
gfs2_glock_dq(&ghs[num_gh]);
|
2006-01-16 16:50:04 +00:00
|
|
|
}
|
|
|
|
|
2009-01-12 10:43:39 +00:00
|
|
|
void gfs2_glock_cb(struct gfs2_glock *gl, unsigned int state)
|
2008-01-30 15:34:04 +00:00
|
|
|
{
|
[GFS2] delay glock demote for a minimum hold time
Signed-off-by: Benjamin E. Marzinski <bmarzins@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
2007-08-23 18:19:05 +00:00
|
|
|
unsigned long delay = 0;
|
|
|
|
unsigned long holdtime;
|
|
|
|
unsigned long now = jiffies;
|
2006-01-16 16:50:04 +00:00
|
|
|
|
2009-01-12 10:43:39 +00:00
|
|
|
gfs2_glock_hold(gl);
|
gfs2: eliminate GLF_QUEUED flag in favor of list_empty(gl_holders)
Before this patch, glock.c maintained a flag, GLF_QUEUED, which indicated
when a glock had a holder queued. It was only checked for inode glocks,
although set and cleared by all glocks, and it was only used to determine
whether the glock should be held for the minimum hold time before releasing.
The problem is that the flag is not accurate at all. If a process holds
the glock, the flag is set. When the process dequeues the glock, the flag
is only cleared if the state actually changed. So if the state doesn't
change, the flag may still be set, even when nothing is queued.
This happens to iopen glocks often: they get held in SH, then the file is
closed, but the glock remains in SH mode.
We don't need a special flag to indicate this: we can simply tell whether
the glock has any items queued to the holders queue. It's a waste of cpu
time to maintain it.
This patch eliminates the flag in favor of simply checking list_empty
on the glock holders.
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-10-12 18:57:37 +00:00
|
|
|
spin_lock(&gl->gl_lockref.lock);
|
2011-06-15 15:41:48 +00:00
|
|
|
holdtime = gl->gl_tchange + gl->gl_hold_time;
|
gfs2: eliminate GLF_QUEUED flag in favor of list_empty(gl_holders)
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-10-12 18:57:37 +00:00
|
|
|
if (!list_empty(&gl->gl_holders) &&
|
2011-06-15 15:41:48 +00:00
|
|
|
gl->gl_name.ln_type == LM_TYPE_INODE) {
|
2010-09-03 08:39:20 +00:00
|
|
|
if (time_before(now, holdtime))
|
|
|
|
delay = holdtime - now;
|
|
|
|
if (test_bit(GLF_REPLY_PENDING, &gl->gl_flags))
|
2011-06-15 15:41:48 +00:00
|
|
|
delay = gl->gl_hold_time;
|
2010-09-03 08:39:20 +00:00
|
|
|
}
|
2021-08-19 18:51:23 +00:00
|
|
|
/*
|
|
|
|
* Note 1: We cannot call demote_incompat_holders from handle_callback
|
|
|
|
* or gfs2_set_demote due to recursion problems like: gfs2_glock_dq ->
|
|
|
|
* handle_callback -> demote_incompat_holders -> gfs2_glock_dq
|
|
|
|
* Plus, we only want to demote the holders if the request comes from
|
|
|
|
* a remote cluster node because local holder conflicts are resolved
|
|
|
|
* elsewhere.
|
|
|
|
*
|
|
|
|
* Note 2: if a remote node wants this glock in EX mode, lock_dlm will
|
|
|
|
* request that we set our state to UNLOCKED. Here we mock up a holder
|
|
|
|
* to make it look like someone wants the lock EX locally. Any SH
|
|
|
|
* and DF requests should be able to share the lock without demoting.
|
|
|
|
*
|
|
|
|
* Note 3: We only want to demote the demoteable holders when there
|
|
|
|
* are no more strong holders. The demoteable holders might as well
|
|
|
|
* keep the glock until the last strong holder is done with it.
|
|
|
|
*/
|
|
|
|
if (!find_first_strong_holder(gl)) {
|
2021-11-22 16:32:35 +00:00
|
|
|
struct gfs2_holder mock_gh = {
|
|
|
|
.gh_gl = gl,
|
|
|
|
.gh_state = (state == LM_ST_UNLOCKED) ?
|
|
|
|
LM_ST_EXCLUSIVE : state,
|
|
|
|
.gh_iflags = BIT(HIF_HOLDER)
|
|
|
|
};
|
|
|
|
|
2021-08-19 18:51:23 +00:00
|
|
|
demote_incompat_holders(gl, &mock_gh);
|
|
|
|
}
|
2013-04-10 09:26:55 +00:00
|
|
|
handle_callback(gl, state, delay, true);
|
2017-06-30 13:10:01 +00:00
|
|
|
__gfs2_glock_queue_work(gl, delay);
|
2015-10-29 15:58:09 +00:00
|
|
|
spin_unlock(&gl->gl_lockref.lock);
|
2006-01-16 16:50:04 +00:00
|
|
|
}
|
|
|
|
|
/**
 * gfs2_should_freeze - Figure out if glock should be frozen
 * @gl: The glock in question
 *
 * Glocks are not frozen if (a) the result of the dlm operation is
 * an error, (b) the locking operation was an unlock operation or
 * (c) if there is a "noexp" flagged request anywhere in the queue
 *
 * Returns: 1 if freezing should occur, 0 otherwise
 */

static int gfs2_should_freeze(const struct gfs2_glock *gl)
{
	const struct gfs2_holder *gh;

	if (gl->gl_reply & ~LM_OUT_ST_MASK)
		return 0;
	if (gl->gl_target == LM_ST_UNLOCKED)
		return 0;

	list_for_each_entry(gh, &gl->gl_holders, gh_list) {
		if (test_bit(HIF_HOLDER, &gh->gh_iflags))
			continue;
		if (LM_FLAG_NOEXP & gh->gh_flags)
			return 0;
	}

	return 1;
}

/**
 * gfs2_glock_complete - Callback used by locking
 * @gl: Pointer to the glock
 * @ret: The return value from the dlm
 *
 * The gl_reply field is under the gl_lockref.lock lock so that it is ok
 * to use a bitfield shared with other glock state fields.
 */

void gfs2_glock_complete(struct gfs2_glock *gl, int ret)
{
	struct lm_lockstruct *ls = &gl->gl_name.ln_sbd->sd_lockstruct;

	spin_lock(&gl->gl_lockref.lock);
	gl->gl_reply = ret;

	if (unlikely(test_bit(DFL_BLOCK_LOCKS, &ls->ls_recover_flags))) {
		if (gfs2_should_freeze(gl)) {
			set_bit(GLF_FROZEN, &gl->gl_flags);
			spin_unlock(&gl->gl_lockref.lock);
			return;
		}
	}

	gl->gl_lockref.count++;
	set_bit(GLF_REPLY_PENDING, &gl->gl_flags);
	__gfs2_glock_queue_work(gl, 0);
	spin_unlock(&gl->gl_lockref.lock);
}

static int glock_cmp(void *priv, const struct list_head *a,
		     const struct list_head *b)
{
	struct gfs2_glock *gla, *glb;

	gla = list_entry(a, struct gfs2_glock, gl_lru);
	glb = list_entry(b, struct gfs2_glock, gl_lru);

	if (gla->gl_name.ln_number > glb->gl_name.ln_number)
		return 1;
	if (gla->gl_name.ln_number < glb->gl_name.ln_number)
		return -1;

	return 0;
}

/**
 * gfs2_dispose_glock_lru - Demote a list of glocks
 * @list: The list to dispose of
 *
 * Disposing of glocks may involve disk accesses, so that here we sort
 * the glocks by number (i.e. disk location of the inodes) so that if
 * there are any such accesses, they'll be sent in order (mostly).
 *
 * Must be called under the lru_lock, but may drop and retake this
 * lock. While the lru_lock is dropped, entries may vanish from the
 * list, but no new entries will appear on the list (since it is
 * private)
 */

static void gfs2_dispose_glock_lru(struct list_head *list)
__releases(&lru_lock)
__acquires(&lru_lock)
{
	struct gfs2_glock *gl;

	list_sort(NULL, list, glock_cmp);

	while (!list_empty(list)) {
		gl = list_first_entry(list, struct gfs2_glock, gl_lru);
		list_del_init(&gl->gl_lru);
		clear_bit(GLF_LRU, &gl->gl_flags);
		if (!spin_trylock(&gl->gl_lockref.lock)) {
add_back_to_lru:
			list_add(&gl->gl_lru, &lru_list);
			set_bit(GLF_LRU, &gl->gl_flags);
			atomic_inc(&lru_count);
			continue;
		}
		if (test_and_set_bit(GLF_LOCK, &gl->gl_flags)) {
			spin_unlock(&gl->gl_lockref.lock);
			goto add_back_to_lru;
		}
		gl->gl_lockref.count++;
		if (demote_ok(gl))
			handle_callback(gl, LM_ST_UNLOCKED, 0, false);
		WARN_ON(!test_and_clear_bit(GLF_LOCK, &gl->gl_flags));
		__gfs2_glock_queue_work(gl, 0);
		spin_unlock(&gl->gl_lockref.lock);
		cond_resched_lock(&lru_lock);
	}
}

/**
 * gfs2_scan_glock_lru - Scan the LRU looking for locks to demote
 * @nr: The number of entries to scan
 *
 * This function selects the entries on the LRU which are able to
 * be demoted, and then kicks off the process by calling
 * gfs2_dispose_glock_lru() above.
 */

static long gfs2_scan_glock_lru(int nr)
{
	struct gfs2_glock *gl;
	LIST_HEAD(skipped);
	LIST_HEAD(dispose);
	long freed = 0;

	spin_lock(&lru_lock);
	while ((nr-- >= 0) && !list_empty(&lru_list)) {
		gl = list_first_entry(&lru_list, struct gfs2_glock, gl_lru);

		/* Test for being demotable */
		if (!test_bit(GLF_LOCK, &gl->gl_flags)) {
			list_move(&gl->gl_lru, &dispose);
			atomic_dec(&lru_count);
			freed++;
			continue;
		}

		list_move(&gl->gl_lru, &skipped);
	}
	list_splice(&skipped, &lru_list);
	if (!list_empty(&dispose))
		gfs2_dispose_glock_lru(&dispose);
	spin_unlock(&lru_lock);

	return freed;
}

static unsigned long gfs2_glock_shrink_scan(struct shrinker *shrink,
					    struct shrink_control *sc)
{
	if (!(sc->gfp_mask & __GFP_FS))
		return SHRINK_STOP;
	return gfs2_scan_glock_lru(sc->nr_to_scan);
}

static unsigned long gfs2_glock_shrink_count(struct shrinker *shrink,
					     struct shrink_control *sc)
{
	return vfs_pressure_ratio(atomic_read(&lru_count));
}

static struct shrinker glock_shrinker = {
	.seeks = DEFAULT_SEEKS,
	.count_objects = gfs2_glock_shrink_count,
	.scan_objects = gfs2_glock_shrink_scan,
};

/**
 * glock_hash_walk - Call a function for glock in a hash bucket
 * @examiner: the function
 * @sdp: the filesystem
 *
 * Note that the function can be called multiple times on the same
 * object. So the user must ensure that the function can cope with
 * that.
 */

static void glock_hash_walk(glock_examiner examiner, const struct gfs2_sbd *sdp)
{
	struct gfs2_glock *gl;
	struct rhashtable_iter iter;

	rhashtable_walk_enter(&gl_hash_table, &iter);

	do {
		rhashtable_walk_start(&iter);

		while ((gl = rhashtable_walk_next(&iter)) && !IS_ERR(gl)) {
			if (gl->gl_name.ln_sbd == sdp)
				examiner(gl);
		}

		rhashtable_walk_stop(&iter);
	} while (cond_resched(), gl == ERR_PTR(-EAGAIN));

	rhashtable_walk_exit(&iter);
}

bool gfs2_queue_delete_work(struct gfs2_glock *gl, unsigned long delay)
{
	bool queued;

	spin_lock(&gl->gl_lockref.lock);
	queued = queue_delayed_work(gfs2_delete_workqueue,
				    &gl->gl_delete, delay);
	if (queued)
		set_bit(GLF_PENDING_DELETE, &gl->gl_flags);
	spin_unlock(&gl->gl_lockref.lock);
	return queued;
}

void gfs2_cancel_delete_work(struct gfs2_glock *gl)
{
	if (cancel_delayed_work(&gl->gl_delete)) {
		clear_bit(GLF_PENDING_DELETE, &gl->gl_flags);
		gfs2_glock_put(gl);
	}
}

bool gfs2_delete_work_queued(const struct gfs2_glock *gl)
{
	return test_bit(GLF_PENDING_DELETE, &gl->gl_flags);
}

static void flush_delete_work(struct gfs2_glock *gl)
{
	if (gl->gl_name.ln_type == LM_TYPE_IOPEN) {
		if (cancel_delayed_work(&gl->gl_delete)) {
			queue_delayed_work(gfs2_delete_workqueue,
					   &gl->gl_delete, 0);
		}
	}
}

void gfs2_flush_delete_work(struct gfs2_sbd *sdp)
{
	glock_hash_walk(flush_delete_work, sdp);
	flush_workqueue(gfs2_delete_workqueue);
}

/**
 * thaw_glock - thaw out a glock which has an unprocessed reply waiting
 * @gl: The glock to thaw
 *
 */

static void thaw_glock(struct gfs2_glock *gl)
{
	if (!test_and_clear_bit(GLF_FROZEN, &gl->gl_flags))
		return;
	if (!lockref_get_not_dead(&gl->gl_lockref))
		return;
	set_bit(GLF_REPLY_PENDING, &gl->gl_flags);
	gfs2_glock_queue_work(gl, 0);
}

/**
 * clear_glock - look at a glock and see if we can free it from glock cache
 * @gl: the glock to look at
 *
 */

static void clear_glock(struct gfs2_glock *gl)
{
	gfs2_glock_remove_from_lru(gl);

	spin_lock(&gl->gl_lockref.lock);
	if (!__lockref_is_dead(&gl->gl_lockref)) {
		gl->gl_lockref.count++;
		if (gl->gl_state != LM_ST_UNLOCKED)
			handle_callback(gl, LM_ST_UNLOCKED, 0, false);
		__gfs2_glock_queue_work(gl, 0);
	}
	spin_unlock(&gl->gl_lockref.lock);
}

/**
 * gfs2_glock_thaw - Thaw any frozen glocks
 * @sdp: The super block
 *
 */

void gfs2_glock_thaw(struct gfs2_sbd *sdp)
{
	glock_hash_walk(thaw_glock, sdp);
}

static void dump_glock(struct seq_file *seq, struct gfs2_glock *gl, bool fsid)
{
	spin_lock(&gl->gl_lockref.lock);
	gfs2_dump_glock(seq, gl, fsid);
	spin_unlock(&gl->gl_lockref.lock);
}

static void dump_glock_func(struct gfs2_glock *gl)
{
	dump_glock(NULL, gl, true);
}

static void withdraw_dq(struct gfs2_glock *gl)
{
	spin_lock(&gl->gl_lockref.lock);
	if (!__lockref_is_dead(&gl->gl_lockref) &&
	    glock_blocked_by_withdraw(gl))
		do_error(gl, LM_OUT_ERROR); /* remove pending waiters */
	spin_unlock(&gl->gl_lockref.lock);
}

void gfs2_gl_dq_holders(struct gfs2_sbd *sdp)
{
	glock_hash_walk(withdraw_dq, sdp);
}

/**
 * gfs2_gl_hash_clear - Empty out the glock hash table
 * @sdp: the filesystem
 *
 * Called when unmounting the filesystem.
 */

void gfs2_gl_hash_clear(struct gfs2_sbd *sdp)
{
	set_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags);
	flush_workqueue(glock_workqueue);
	glock_hash_walk(clear_glock, sdp);
	flush_workqueue(glock_workqueue);
	wait_event_timeout(sdp->sd_glock_wait,
			   atomic_read(&sdp->sd_glock_disposal) == 0,
			   HZ * 600);
	glock_hash_walk(dump_glock_func, sdp);
}

static const char *state2str(unsigned state)
{
	switch(state) {
	case LM_ST_UNLOCKED:
		return "UN";
	case LM_ST_SHARED:
		return "SH";
	case LM_ST_DEFERRED:
		return "DF";
	case LM_ST_EXCLUSIVE:
		return "EX";
	}
	return "??";
}

static const char *hflags2str(char *buf, u16 flags, unsigned long iflags)
{
	char *p = buf;
	if (flags & LM_FLAG_TRY)
		*p++ = 't';
	if (flags & LM_FLAG_TRY_1CB)
		*p++ = 'T';
	if (flags & LM_FLAG_NOEXP)
		*p++ = 'e';
	if (flags & LM_FLAG_ANY)
		*p++ = 'A';
	if (flags & LM_FLAG_PRIORITY)
		*p++ = 'p';
	if (flags & LM_FLAG_NODE_SCOPE)
		*p++ = 'n';
	if (flags & GL_ASYNC)
		*p++ = 'a';
	if (flags & GL_EXACT)
		*p++ = 'E';
	if (flags & GL_NOCACHE)
		*p++ = 'c';
	if (test_bit(HIF_HOLDER, &iflags))
		*p++ = 'H';
	if (test_bit(HIF_WAIT, &iflags))
		*p++ = 'W';
	if (test_bit(HIF_MAY_DEMOTE, &iflags))
		*p++ = 'D';
	if (flags & GL_SKIP)
		*p++ = 's';
	*p = 0;
	return buf;
}

/**
 * dump_holder - print information about a glock holder
 * @seq: the seq_file struct
 * @gh: the glock holder
 * @fs_id_buf: pointer to file system id (if requested)
 *
 */

static void dump_holder(struct seq_file *seq, const struct gfs2_holder *gh,
			const char *fs_id_buf)
{
	const char *comm = "(none)";
	pid_t owner_pid = 0;
	char flags_buf[32];

	rcu_read_lock();
	if (pid_is_meaningful(gh)) {
		struct task_struct *gh_owner;

		comm = "(ended)";
		owner_pid = pid_nr(gh->gh_owner_pid);
		gh_owner = pid_task(gh->gh_owner_pid, PIDTYPE_PID);
		if (gh_owner)
			comm = gh_owner->comm;
	}
	gfs2_print_dbg(seq, "%s H: s:%s f:%s e:%d p:%ld [%s] %pS\n",
		       fs_id_buf, state2str(gh->gh_state),
		       hflags2str(flags_buf, gh->gh_flags, gh->gh_iflags),
		       gh->gh_error, (long)owner_pid, comm, (void *)gh->gh_ip);
	rcu_read_unlock();
}

static const char *gflags2str(char *buf, const struct gfs2_glock *gl)
{
	const unsigned long *gflags = &gl->gl_flags;
	char *p = buf;

	if (test_bit(GLF_LOCK, gflags))
		*p++ = 'l';
	if (test_bit(GLF_DEMOTE, gflags))
		*p++ = 'D';
	if (test_bit(GLF_PENDING_DEMOTE, gflags))
		*p++ = 'd';
	if (test_bit(GLF_DEMOTE_IN_PROGRESS, gflags))
		*p++ = 'p';
	if (test_bit(GLF_DIRTY, gflags))
		*p++ = 'y';
	if (test_bit(GLF_LFLUSH, gflags))
		*p++ = 'f';
	if (test_bit(GLF_INVALIDATE_IN_PROGRESS, gflags))
		*p++ = 'i';
	if (test_bit(GLF_REPLY_PENDING, gflags))
		*p++ = 'r';
	if (test_bit(GLF_INITIAL, gflags))
		*p++ = 'I';
	if (test_bit(GLF_FROZEN, gflags))
		*p++ = 'F';
	if (!list_empty(&gl->gl_holders))
		*p++ = 'q';
	if (test_bit(GLF_LRU, gflags))
		*p++ = 'L';
	if (gl->gl_object)
		*p++ = 'o';
	if (test_bit(GLF_BLOCKING, gflags))
		*p++ = 'b';
	if (test_bit(GLF_PENDING_DELETE, gflags))
		*p++ = 'P';
	if (test_bit(GLF_FREEING, gflags))
		*p++ = 'x';
	if (test_bit(GLF_INSTANTIATE_NEEDED, gflags))
		*p++ = 'n';
	if (test_bit(GLF_INSTANTIATE_IN_PROG, gflags))
		*p++ = 'N';
	*p = 0;
	return buf;
}

/**
 * gfs2_dump_glock - print information about a glock
 * @seq: The seq_file struct
 * @gl: the glock
 * @fsid: If true, also dump the file system id
 *
 * The file format is as follows:
 * One line per object, capital letters are used to indicate objects
 * G = glock, I = Inode, R = rgrp, H = holder. Glocks are not indented,
 * other objects are indented by a single space and follow the glock to
 * which they are related. Fields are indicated by lower case letters
 * followed by a colon and the field value, except for strings which are in
 * [] so that its possible to see if they are composed of spaces for
 * example. The field's are n = number (id of the object), f = flags,
 * t = type, s = state, r = refcount, e = error, p = pid.
 *
 */

void gfs2_dump_glock(struct seq_file *seq, struct gfs2_glock *gl, bool fsid)
{
	const struct gfs2_glock_operations *glops = gl->gl_ops;
	unsigned long long dtime;
	const struct gfs2_holder *gh;
	char gflags_buf[32];
	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
	char fs_id_buf[sizeof(sdp->sd_fsname) + 7];
	unsigned long nrpages = 0;

	if (gl->gl_ops->go_flags & GLOF_ASPACE) {
		struct address_space *mapping = gfs2_glock2aspace(gl);

		nrpages = mapping->nrpages;
	}
	memset(fs_id_buf, 0, sizeof(fs_id_buf));
	if (fsid && sdp) /* safety precaution */
		sprintf(fs_id_buf, "fsid=%s: ", sdp->sd_fsname);
	dtime = jiffies - gl->gl_demote_time;
	dtime *= 1000000/HZ; /* demote time in uSec */
	if (!test_bit(GLF_DEMOTE, &gl->gl_flags))
		dtime = 0;
	gfs2_print_dbg(seq, "%sG: s:%s n:%u/%llx f:%s t:%s d:%s/%llu a:%d "
		       "v:%d r:%d m:%ld p:%lu\n",
		       fs_id_buf, state2str(gl->gl_state),
		       gl->gl_name.ln_type,
		       (unsigned long long)gl->gl_name.ln_number,
		       gflags2str(gflags_buf, gl),
		       state2str(gl->gl_target),
		       state2str(gl->gl_demote_state), dtime,
		       atomic_read(&gl->gl_ail_count),
		       atomic_read(&gl->gl_revokes),
		       (int)gl->gl_lockref.count, gl->gl_hold_time, nrpages);

	list_for_each_entry(gh, &gl->gl_holders, gh_list)
		dump_holder(seq, gh, fs_id_buf);

	if (gl->gl_state != LM_ST_UNLOCKED && glops->go_dump)
		glops->go_dump(seq, gl, fs_id_buf);
}


/*
 * GFS2: glock statistics gathering
 *
 * The stats are divided into two sets: those relating to the
 * super block and those relating to an individual glock. The
 * super block stats are done on a per cpu basis in order to
 * try and reduce the overhead of gathering them. They are also
 * further divided by glock type.
 *
 * In the case of both the super block and glock statistics,
 * the same information is gathered in each case. The super
 * block statistics are used to provide default values for
 * most of the glock statistics, so that newly created glocks
 * should have, as far as possible, a sensible starting point.
 *
 * The statistics are divided into three pairs of mean and
 * variance, plus two counters. The mean/variance pairs are
 * smoothed exponential estimates and the algorithm used is
 * one which will be very familiar to those used to calculation
 * of round trip times in network code.
 *
 * The three pairs of mean/variance measure the following
 * things:
 *
 * 1. DLM lock time (non-blocking requests)
 * 2. DLM lock time (blocking requests)
 * 3. Inter-request time (again to the DLM)
 *
 * A non-blocking request is one which will complete right
 * away, whatever the state of the DLM lock in question. That
 * currently means any requests when (a) the current state of
 * the lock is exclusive (b) the requested state is either null
 * or unlocked or (c) the "try lock" flag is set. A blocking
 * request covers all the other lock requests.
 *
 * There are two counters. The first is there primarily to show
 * how many lock requests have been made, and thus how much data
 * has gone into the mean/variance calculations. The other counter
 * is counting queueing of holders at the top layer of the glock
 * code. Hopefully that number will be a lot larger than the number
 * of dlm lock requests issued.
 *
 * So why gather these statistics? There are several reasons
 * we'd like to get a better idea of these timings:
 *
 * 1. To be able to better set the glock "min hold time"
 * 2. To spot performance issues more easily
 * 3. To improve the algorithm for selecting resource groups for
 *    allocation (to base it on lock wait time, rather than blindly
 *    using a "try lock")
 *
 * Due to the smoothing action of the updates, a step change in
 * some input quantity being sampled will only fully be taken
 * into account after 8 samples (or 4 for the variance) and this
 * needs to be carefully considered when interpreting the
 * results.
 *
 * Knowing both the time it takes a lock request to complete and
 * the average time between lock requests for a glock means we
 * can compute the total percentage of the time for which the
 * node is able to use a glock vs. time that the rest of the
 * cluster has its share. That will be very useful when setting
 * the lock min hold time.
 *
 * The other point to remember is that all times are in
 * nanoseconds. Great care has been taken to ensure that we
 * measure exactly the quantities that we want, as accurately
 * as possible. There are always inaccuracies in any
 * measuring system, but I hope this is as accurate as we
 * can reasonably make it.
 *
 * Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
 */

static int gfs2_glstats_seq_show(struct seq_file *seq, void *iter_ptr)
{
	struct gfs2_glock *gl = iter_ptr;

	seq_printf(seq, "G: n:%u/%llx rtt:%llu/%llu rttb:%llu/%llu irt:%llu/%llu dcnt: %llu qcnt: %llu\n",
		   gl->gl_name.ln_type,
		   (unsigned long long)gl->gl_name.ln_number,
		   (unsigned long long)gl->gl_stats.stats[GFS2_LKS_SRTT],
		   (unsigned long long)gl->gl_stats.stats[GFS2_LKS_SRTTVAR],
		   (unsigned long long)gl->gl_stats.stats[GFS2_LKS_SRTTB],
		   (unsigned long long)gl->gl_stats.stats[GFS2_LKS_SRTTVARB],
		   (unsigned long long)gl->gl_stats.stats[GFS2_LKS_SIRT],
		   (unsigned long long)gl->gl_stats.stats[GFS2_LKS_SIRTVAR],
		   (unsigned long long)gl->gl_stats.stats[GFS2_LKS_DCOUNT],
		   (unsigned long long)gl->gl_stats.stats[GFS2_LKS_QCOUNT]);
	return 0;
}

static const char *gfs2_gltype[] = {
	"type",
	"reserved",
	"nondisk",
	"inode",
	"rgrp",
	"meta",
	"iopen",
	"flock",
	"plock",
	"quota",
	"journal",
};

static const char *gfs2_stype[] = {
	[GFS2_LKS_SRTT]		= "srtt",
	[GFS2_LKS_SRTTVAR]	= "srttvar",
	[GFS2_LKS_SRTTB]	= "srttb",
	[GFS2_LKS_SRTTVARB]	= "srttvarb",
	[GFS2_LKS_SIRT]		= "sirt",
	[GFS2_LKS_SIRTVAR]	= "sirtvar",
	[GFS2_LKS_DCOUNT]	= "dlm",
	[GFS2_LKS_QCOUNT]	= "queue",
};

#define GFS2_NR_SBSTATS (ARRAY_SIZE(gfs2_gltype) * ARRAY_SIZE(gfs2_stype))

static int gfs2_sbstats_seq_show(struct seq_file *seq, void *iter_ptr)
{
	struct gfs2_sbd *sdp = seq->private;
	loff_t pos = *(loff_t *)iter_ptr;
	unsigned index = pos >> 3;
	unsigned subindex = pos & 0x07;
	int i;

	if (index == 0 && subindex != 0)
		return 0;

	seq_printf(seq, "%-10s %8s:", gfs2_gltype[index],
		   (index == 0) ? "cpu": gfs2_stype[subindex]);

	for_each_possible_cpu(i) {
		const struct gfs2_pcpu_lkstats *lkstats = per_cpu_ptr(sdp->sd_lkstats, i);

		if (index == 0)
			seq_printf(seq, " %15u", i);
		else
			seq_printf(seq, " %15llu", (unsigned long long)lkstats->
				   lkstats[index - 1].stats[subindex]);
	}
	seq_putc(seq, '\n');
	return 0;
}

int __init gfs2_glock_init(void)
{
	int i, ret;

	ret = rhashtable_init(&gl_hash_table, &ht_parms);
	if (ret < 0)
		return ret;

	glock_workqueue = alloc_workqueue("glock_workqueue", WQ_MEM_RECLAIM |
					  WQ_HIGHPRI | WQ_FREEZABLE, 0);
	if (!glock_workqueue) {
		rhashtable_destroy(&gl_hash_table);
		return -ENOMEM;
	}
	gfs2_delete_workqueue = alloc_workqueue("delete_workqueue",
						WQ_MEM_RECLAIM | WQ_FREEZABLE,
						0);
	if (!gfs2_delete_workqueue) {
		destroy_workqueue(glock_workqueue);
		rhashtable_destroy(&gl_hash_table);
		return -ENOMEM;
	}

	ret = register_shrinker(&glock_shrinker, "gfs2-glock");
	if (ret) {
		destroy_workqueue(gfs2_delete_workqueue);
		destroy_workqueue(glock_workqueue);
		rhashtable_destroy(&gl_hash_table);
		return ret;
	}

	for (i = 0; i < GLOCK_WAIT_TABLE_SIZE; i++)
		init_waitqueue_head(glock_wait_table + i);

	return 0;
}

void gfs2_glock_exit(void)
{
	unregister_shrinker(&glock_shrinker);
	rhashtable_destroy(&gl_hash_table);
	destroy_workqueue(glock_workqueue);
	destroy_workqueue(gfs2_delete_workqueue);
}

static void gfs2_glock_iter_next(struct gfs2_glock_iter *gi, loff_t n)
{
	struct gfs2_glock *gl = gi->gl;

	if (gl) {
		if (n == 0)
			return;
		if (!lockref_put_not_zero(&gl->gl_lockref))
			gfs2_glock_queue_put(gl);
	}
	for (;;) {
		gl = rhashtable_walk_next(&gi->hti);
		if (IS_ERR_OR_NULL(gl)) {
			if (gl == ERR_PTR(-EAGAIN)) {
				n = 1;
				continue;
			}
			gl = NULL;
			break;
		}
		if (gl->gl_name.ln_sbd != gi->sdp)
			continue;
		if (n <= 1) {
			if (!lockref_get_not_dead(&gl->gl_lockref))
				continue;
			break;
		} else {
			if (__lockref_is_dead(&gl->gl_lockref))
				continue;
			n--;
		}
	}
	gi->gl = gl;
}

static void *gfs2_glock_seq_start(struct seq_file *seq, loff_t *pos)
	__acquires(RCU)
{
	struct gfs2_glock_iter *gi = seq->private;
	loff_t n;

	/*
	 * We can either stay where we are, skip to the next hash table
	 * entry, or start from the beginning.
	 */
	if (*pos < gi->last_pos) {
		rhashtable_walk_exit(&gi->hti);
		rhashtable_walk_enter(&gl_hash_table, &gi->hti);
		n = *pos + 1;
	} else {
		n = *pos - gi->last_pos;
	}

	rhashtable_walk_start(&gi->hti);

	gfs2_glock_iter_next(gi, n);
	gi->last_pos = *pos;
	return gi->gl;
}

static void *gfs2_glock_seq_next(struct seq_file *seq, void *iter_ptr,
				 loff_t *pos)
{
	struct gfs2_glock_iter *gi = seq->private;

	(*pos)++;
	gi->last_pos = *pos;
	gfs2_glock_iter_next(gi, 1);
	return gi->gl;
}

static void gfs2_glock_seq_stop(struct seq_file *seq, void *iter_ptr)
	__releases(RCU)
{
	struct gfs2_glock_iter *gi = seq->private;

	rhashtable_walk_stop(&gi->hti);
}

static int gfs2_glock_seq_show(struct seq_file *seq, void *iter_ptr)
{
	dump_glock(seq, iter_ptr, false);
	return 0;
}
|
|
|
|
|
/*
 * GFS2: glock statistics gathering
 *
 * The stats are divided into two sets: those relating to the
 * super block and those relating to an individual glock. The
 * super block stats are done on a per cpu basis in order to
 * try and reduce the overhead of gathering them. They are also
 * further divided by glock type.
 *
 * In the case of both the super block and glock statistics,
 * the same information is gathered in each case. The super
 * block statistics are used to provide default values for
 * most of the glock statistics, so that newly created glocks
 * should have, as far as possible, a sensible starting point.
 *
 * The statistics are divided into three pairs of mean and
 * variance, plus two counters. The mean/variance pairs are
 * smoothed exponential estimates and the algorithm used is
 * one which will be very familiar to those used to calculation
 * of round trip times in network code.
 *
 * The three pairs of mean/variance measure the following
 * things:
 *
 * 1. DLM lock time (non-blocking requests)
 * 2. DLM lock time (blocking requests)
 * 3. Inter-request time (again to the DLM)
 *
 * A non-blocking request is one which will complete right
 * away, whatever the state of the DLM lock in question. That
 * currently means any requests when (a) the current state of
 * the lock is exclusive, (b) the requested state is either null
 * or unlocked, or (c) the "try lock" flag is set. A blocking
 * request covers all the other lock requests.
 *
 * There are two counters. The first is there primarily to show
 * how many lock requests have been made, and thus how much data
 * has gone into the mean/variance calculations. The other counter
 * is counting queueing of holders at the top layer of the glock
 * code. Hopefully that number will be a lot larger than the number
 * of dlm lock requests issued.
 *
 * So why gather these statistics? There are several reasons
 * we'd like to get a better idea of these timings:
 *
 * 1. To be able to better set the glock "min hold time"
 * 2. To spot performance issues more easily
 * 3. To improve the algorithm for selecting resource groups for
 *    allocation (to base it on lock wait time, rather than blindly
 *    using a "try lock")
 *
 * Due to the smoothing action of the updates, a step change in
 * some input quantity being sampled will only fully be taken
 * into account after 8 samples (or 4 for the variance) and this
 * needs to be carefully considered when interpreting the
 * results.
 *
 * Knowing both the time it takes a lock request to complete and
 * the average time between lock requests for a glock means we
 * can compute the total percentage of the time for which the
 * node is able to use a glock vs. time that the rest of the
 * cluster has its share. That will be very useful when setting
 * the lock min hold time.
 *
 * The other point to remember is that all times are in
 * nanoseconds. Great care has been taken to ensure that we
 * measure exactly the quantities that we want, as accurately
 * as possible. There are always inaccuracies in any
 * measuring system, but I hope this is as accurate as we
 * can reasonably make it.
 *
 * Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
 */
static void *gfs2_sbstats_seq_start(struct seq_file *seq, loff_t *pos)
{
	preempt_disable();
	if (*pos >= GFS2_NR_SBSTATS)
		return NULL;
	return pos;
}

static void *gfs2_sbstats_seq_next(struct seq_file *seq, void *iter_ptr,
				   loff_t *pos)
{
	(*pos)++;
	if (*pos >= GFS2_NR_SBSTATS)
		return NULL;
	return pos;
}

static void gfs2_sbstats_seq_stop(struct seq_file *seq, void *iter_ptr)
{
	preempt_enable();
}

static const struct seq_operations gfs2_glock_seq_ops = {
	.start = gfs2_glock_seq_start,
	.next  = gfs2_glock_seq_next,
	.stop  = gfs2_glock_seq_stop,
	.show  = gfs2_glock_seq_show,
};

static const struct seq_operations gfs2_glstats_seq_ops = {
	.start = gfs2_glock_seq_start,
	.next  = gfs2_glock_seq_next,
	.stop  = gfs2_glock_seq_stop,
	.show  = gfs2_glstats_seq_show,
};

static const struct seq_operations gfs2_sbstats_sops = {
	.start = gfs2_sbstats_seq_start,
	.next  = gfs2_sbstats_seq_next,
	.stop  = gfs2_sbstats_seq_stop,
	.show  = gfs2_sbstats_seq_show,
};

#define GFS2_SEQ_GOODSIZE min(PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER, 65536UL)

static int __gfs2_glocks_open(struct inode *inode, struct file *file,
			      const struct seq_operations *ops)
{
	int ret = seq_open_private(file, ops, sizeof(struct gfs2_glock_iter));
	if (ret == 0) {
		struct seq_file *seq = file->private_data;
		struct gfs2_glock_iter *gi = seq->private;

		gi->sdp = inode->i_private;
		seq->buf = kmalloc(GFS2_SEQ_GOODSIZE, GFP_KERNEL | __GFP_NOWARN);
		if (seq->buf)
			seq->size = GFS2_SEQ_GOODSIZE;
		/*
		 * Initially, we are "before" the first hash table entry; the
		 * first call to rhashtable_walk_next gets us the first entry.
		 */
		gi->last_pos = -1;
		gi->gl = NULL;
		rhashtable_walk_enter(&gl_hash_table, &gi->hti);
	}
	return ret;
}

static int gfs2_glocks_open(struct inode *inode, struct file *file)
{
	return __gfs2_glocks_open(inode, file, &gfs2_glock_seq_ops);
}

static int gfs2_glocks_release(struct inode *inode, struct file *file)
{
	struct seq_file *seq = file->private_data;
	struct gfs2_glock_iter *gi = seq->private;

	if (gi->gl)
		gfs2_glock_put(gi->gl);
	rhashtable_walk_exit(&gi->hti);
	return seq_release_private(inode, file);
}

static int gfs2_glstats_open(struct inode *inode, struct file *file)
{
	return __gfs2_glocks_open(inode, file, &gfs2_glstats_seq_ops);
}

static const struct file_operations gfs2_glocks_fops = {
	.owner		= THIS_MODULE,
	.open		= gfs2_glocks_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= gfs2_glocks_release,
};

|
|
|
|
|
|
|
|
static const struct file_operations gfs2_glstats_fops = {
	.owner = THIS_MODULE,
	.open = gfs2_glstats_open,
	.read = seq_read,
	.llseek = seq_lseek,
	.release = gfs2_glocks_release,
};
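The smoothed mean/variance scheme described for the glock statistics (an exponentially weighted estimate where a step change is absorbed over about 8 samples for the mean and 4 for the variance, as in network round-trip-time estimation) can be sketched in plain C as follows. This is an illustrative sketch only; the struct and function names are invented here and are not the actual gfs2 implementation.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative sketch (not gfs2 code) of a smoothed mean/variance
 * update in the style described above: new samples are folded in
 * with a gain of 1/8 for the mean and 1/4 for the variance, so a
 * step change in the input is only fully reflected after roughly
 * 8 (resp. 4) samples.
 */
struct lat_sample_stats {
	int64_t mean;	/* smoothed mean, in nanoseconds */
	int64_t var;	/* smoothed mean absolute deviation, in ns */
};

static void lat_sample_update(struct lat_sample_stats *s, int64_t sample)
{
	int64_t delta = sample - s->mean;

	s->mean += delta >> 3;			/* mean += delta / 8 */
	if (delta < 0)
		delta = -delta;
	s->var += (delta - s->var) >> 2;	/* var += (|delta| - var) / 4 */
}
```

The shifts keep the update cheap enough to run on every lock request, which matters when the statistics are gathered per cpu and per glock type.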

/*
 * gfs2: Add glockfd debugfs file
 *
 * When a process has a gfs2 file open, the file keeps a reference on the
 * underlying gfs2 inode, and the inode keeps the inode's iopen glock held
 * in shared mode.  In other words, the process depends on the iopen glock
 * of each open gfs2 file.  Expose those dependencies in a new "glockfd"
 * debugfs file, containing one line per gfs2 file descriptor, giving the
 * tgid, file descriptor number, and glock name, e.g.:
 *
 *   1601 6 5/816d
 *
 * The list is compiled by iterating all tasks on the system using
 * find_ge_pid(), and all file descriptors of each task using
 * task_lookup_next_fd_rcu().  To make that work from gfs2, those two
 * functions are exported.
 *
 * Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
 */

struct gfs2_glockfd_iter {
	struct super_block *sb;
	unsigned int tgid;
	struct task_struct *task;
	unsigned int fd;
	struct file *file;
};

static struct task_struct *gfs2_glockfd_next_task(struct gfs2_glockfd_iter *i)
{
	struct pid_namespace *ns = task_active_pid_ns(current);
	struct pid *pid;

	if (i->task)
		put_task_struct(i->task);

	rcu_read_lock();
retry:
	i->task = NULL;
	pid = find_ge_pid(i->tgid, ns);
	if (pid) {
		i->tgid = pid_nr_ns(pid, ns);
		i->task = pid_task(pid, PIDTYPE_TGID);
		if (!i->task) {
			i->tgid++;
			goto retry;
		}
		get_task_struct(i->task);
	}
	rcu_read_unlock();
	return i->task;
}

static struct file *gfs2_glockfd_next_file(struct gfs2_glockfd_iter *i)
{
	if (i->file) {
		fput(i->file);
		i->file = NULL;
	}

	rcu_read_lock();
	for (;; i->fd++) {
		struct inode *inode;

		i->file = task_lookup_next_fd_rcu(i->task, &i->fd);
		if (!i->file) {
			i->fd = 0;
			break;
		}
		inode = file_inode(i->file);
		if (inode->i_sb != i->sb)
			continue;
		if (get_file_rcu(i->file))
			break;
	}
	rcu_read_unlock();
	return i->file;
}

static void *gfs2_glockfd_seq_start(struct seq_file *seq, loff_t *pos)
{
	struct gfs2_glockfd_iter *i = seq->private;

	if (*pos)
		return NULL;
	while (gfs2_glockfd_next_task(i)) {
		if (gfs2_glockfd_next_file(i))
			return i;
		i->tgid++;
	}
	return NULL;
}

static void *gfs2_glockfd_seq_next(struct seq_file *seq, void *iter_ptr,
				   loff_t *pos)
{
	struct gfs2_glockfd_iter *i = seq->private;

	(*pos)++;
	i->fd++;
	do {
		if (gfs2_glockfd_next_file(i))
			return i;
		i->tgid++;
	} while (gfs2_glockfd_next_task(i));
	return NULL;
}

static void gfs2_glockfd_seq_stop(struct seq_file *seq, void *iter_ptr)
{
	struct gfs2_glockfd_iter *i = seq->private;

	if (i->file)
		fput(i->file);
	if (i->task)
		put_task_struct(i->task);
}

static void gfs2_glockfd_seq_show_flock(struct seq_file *seq,
					struct gfs2_glockfd_iter *i)
{
	struct gfs2_file *fp = i->file->private_data;
	struct gfs2_holder *fl_gh = &fp->f_fl_gh;
	struct lm_lockname gl_name = { .ln_type = LM_TYPE_RESERVED };

	if (!READ_ONCE(fl_gh->gh_gl))
		return;

	spin_lock(&i->file->f_lock);
	if (gfs2_holder_initialized(fl_gh))
		gl_name = fl_gh->gh_gl->gl_name;
	spin_unlock(&i->file->f_lock);

	if (gl_name.ln_type != LM_TYPE_RESERVED) {
		seq_printf(seq, "%d %u %u/%llx\n",
			   i->tgid, i->fd, gl_name.ln_type,
			   (unsigned long long)gl_name.ln_number);
	}
}

static int gfs2_glockfd_seq_show(struct seq_file *seq, void *iter_ptr)
{
	struct gfs2_glockfd_iter *i = seq->private;
	struct inode *inode = file_inode(i->file);
	struct gfs2_glock *gl;

	inode_lock_shared(inode);
	gl = GFS2_I(inode)->i_iopen_gh.gh_gl;
	if (gl) {
		seq_printf(seq, "%d %u %u/%llx\n",
			   i->tgid, i->fd, gl->gl_name.ln_type,
			   (unsigned long long)gl->gl_name.ln_number);
	}
	gfs2_glockfd_seq_show_flock(seq, i);
	inode_unlock_shared(inode);
	return 0;
}

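The "glockfd" lines emitted above have the form "<tgid> <fd> <type>/<number>", with the glock number in hex (e.g. "1601 6 5/816d"). A hypothetical user-space consumer could parse them like this; the struct and function names are illustrative and not part of gfs2:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical user-space parser for one line of the "glockfd"
 * debugfs file: "<tgid> <fd> <type>/<number>", number in hex.
 * Returns 0 on success, -1 on a malformed line.
 */
struct glockfd_entry {
	int tgid;
	unsigned int fd;
	unsigned int gl_type;
	uint64_t gl_number;
};

static int glockfd_parse_line(const char *line, struct glockfd_entry *e)
{
	unsigned long long number;

	if (sscanf(line, "%d %u %u/%llx",
		   &e->tgid, &e->fd, &e->gl_type, &number) != 4)
		return -1;
	e->gl_number = number;
	return 0;
}
```

This mirrors the "%d %u %u/%llx" format string used by the seq_printf() calls in this file.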
static const struct seq_operations gfs2_glockfd_seq_ops = {
	.start = gfs2_glockfd_seq_start,
	.next  = gfs2_glockfd_seq_next,
	.stop  = gfs2_glockfd_seq_stop,
	.show  = gfs2_glockfd_seq_show,
};

static int gfs2_glockfd_open(struct inode *inode, struct file *file)
{
	struct gfs2_glockfd_iter *i;
	struct gfs2_sbd *sdp = inode->i_private;

	i = __seq_open_private(file, &gfs2_glockfd_seq_ops,
			       sizeof(struct gfs2_glockfd_iter));
	if (!i)
		return -ENOMEM;
	i->sb = sdp->sd_vfs;
	return 0;
}

static const struct file_operations gfs2_glockfd_fops = {
	.owner = THIS_MODULE,
	.open = gfs2_glockfd_open,
	.read = seq_read,
	.llseek = seq_lseek,
	.release = seq_release_private,
};

DEFINE_SEQ_ATTRIBUTE(gfs2_sbstats);

void gfs2_create_debugfs_file(struct gfs2_sbd *sdp)
{
	sdp->debugfs_dir = debugfs_create_dir(sdp->sd_table_name, gfs2_root);

	debugfs_create_file("glocks", S_IFREG | S_IRUGO, sdp->debugfs_dir, sdp,
			    &gfs2_glocks_fops);

	debugfs_create_file("glockfd", S_IFREG | S_IRUGO, sdp->debugfs_dir, sdp,
			    &gfs2_glockfd_fops);

	debugfs_create_file("glstats", S_IFREG | S_IRUGO, sdp->debugfs_dir, sdp,
			    &gfs2_glstats_fops);

	debugfs_create_file("sbstats", S_IFREG | S_IRUGO, sdp->debugfs_dir, sdp,
			    &gfs2_sbstats_fops);
}

void gfs2_delete_debugfs_file(struct gfs2_sbd *sdp)
{
	debugfs_remove_recursive(sdp->debugfs_dir);
	sdp->debugfs_dir = NULL;
}

void gfs2_register_debugfs(void)
{
	gfs2_root = debugfs_create_dir("gfs2", NULL);
}

void gfs2_unregister_debugfs(void)
{
	debugfs_remove(gfs2_root);
	gfs2_root = NULL;
}