License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
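For example, a single comment line at the top of a file (the form used for
C headers; see the note on comment styles below) stands in for the
multi-paragraph GPL notice:

	/* SPDX-License-Identifier: GPL-2.0 */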
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where references to a
license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If such a file was under a */uapi/* path, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". Results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later.
In total, over 70 hours of logged manual review of the spreadsheet was
done by Kate, Philippe, and Thomas to determine the SPDX license
identifiers to apply to the source files, in some cases with confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
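For illustration, the two comment forms the script distinguishes between
(per the kernel's SPDX convention; shown as examples, not the script's
literal output):

	// SPDX-License-Identifier: GPL-2.0		(first line of a .c source file)
	/* SPDX-License-Identifier: GPL-2.0 */		(first line of a .h header file)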
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

/* SPDX-License-Identifier: GPL-2.0 */
/*
 * linux/fs/nfs/nfs4_fs.h
 *
 * Copyright (C) 2005 Trond Myklebust
 *
 * NFSv4-specific filesystem definitions and declarations
 */

#ifndef __LINUX_FS_NFS_NFS4_FS_H
#define __LINUX_FS_NFS_NFS4_FS_H

#if defined(CONFIG_NFS_V4_2)
#define NFS4_MAX_MINOR_VERSION 2
#elif defined(CONFIG_NFS_V4_1)
#define NFS4_MAX_MINOR_VERSION 1
#else
#define NFS4_MAX_MINOR_VERSION 0
#endif

#if IS_ENABLED(CONFIG_NFS_V4)

#define NFS4_MAX_LOOP_ON_RECOVER (10)

#include <linux/seqlock.h>

struct idmap;

enum nfs4_client_state {
	NFS4CLNT_MANAGER_RUNNING = 0,
	NFS4CLNT_CHECK_LEASE,
	NFS4CLNT_LEASE_EXPIRED,
	NFS4CLNT_RECLAIM_REBOOT,
	NFS4CLNT_RECLAIM_NOGRACE,
	NFS4CLNT_DELEGRETURN,
	NFS4CLNT_SESSION_RESET,
	NFS4CLNT_LEASE_CONFIRM,
	NFS4CLNT_SERVER_SCOPE_MISMATCH,
	NFS4CLNT_PURGE_STATE,
	NFS4CLNT_BIND_CONN_TO_SESSION,
	NFS4CLNT_MOVED,
	NFS4CLNT_LEASE_MOVED,
	NFS4CLNT_DELEGATION_EXPIRED,
	NFS4CLNT_RUN_MANAGER,
	NFS4CLNT_MANAGER_AVAILABLE,
	NFS4CLNT_RECALL_RUNNING,
	NFS4CLNT_RECALL_ANY_LAYOUT_READ,
	NFS4CLNT_RECALL_ANY_LAYOUT_RW,
	NFS4CLNT_DELEGRETURN_DELAYED,
};

#define NFS4_RENEW_TIMEOUT		0x01
#define NFS4_RENEW_DELEGATION_CB	0x02

struct nfs_seqid_counter;
struct nfs4_minor_version_ops {
	u32 minor_version;
	unsigned init_caps;

	int (*init_client)(struct nfs_client *);
	void (*shutdown_client)(struct nfs_client *);
	bool (*match_stateid)(const nfs4_stateid *,
			const nfs4_stateid *);
	int (*find_root_sec)(struct nfs_server *, struct nfs_fh *,
			struct nfs_fsinfo *);
	void (*free_lock_state)(struct nfs_server *,
			struct nfs4_lock_state *);
	int (*test_and_free_expired)(struct nfs_server *,
			nfs4_stateid *, const struct cred *);
	struct nfs_seqid *
		(*alloc_seqid)(struct nfs_seqid_counter *, gfp_t);
	void (*session_trunk)(struct rpc_clnt *clnt,
			struct rpc_xprt *xprt, void *data);
	const struct rpc_call_ops *call_sync_ops;
	const struct nfs4_state_recovery_ops *reboot_recovery_ops;
	const struct nfs4_state_recovery_ops *nograce_recovery_ops;
	const struct nfs4_state_maintenance_ops *state_renewal_ops;
	const struct nfs4_mig_recovery_ops *mig_recovery_ops;
};

NFSv4: Add functions to order RPC calls

NFSv4 file state-changing functions such as OPEN, CLOSE, LOCK, ... are all
labelled with "sequence identifiers" in order to prevent the server from
reordering RPC requests, as this could cause its file state to
become out of sync with the client.

Currently the NFS client code enforces this ordering locally using
semaphores to restrict access to structures until the RPC call is done.
This, of course, only works with synchronous RPC calls, since the
user process must first grab the semaphore.

By dropping semaphores, and instead teaching the RPC engine to hold
the RPC calls until they are ready to be sent, we can extend this
process to work nicely with asynchronous RPC calls too.

This patch adds a new list called "rpc_sequence" that defines the order
of the RPC calls to be sent. We add one such list for each state_owner.
When an RPC call is ready to be sent, it checks if it is at the top of the
rpc_sequence list. If so, it proceeds. If not, it goes back to sleep,
and loops until it reaches the top of the list.

Once the RPC call has completed, it can bump the sequence id counter,
remove itself from the rpc_sequence list, and then wake up the next
sleeper.

Note that the state_owner sequence ids and lock_owner sequence ids are
all indexed to the same rpc_sequence list, so OPEN, LOCK, ... requests
are all ordered w.r.t. each other.

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
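A minimal sketch of that ordering discipline, using the nfs_seqid and
nfs_seqid_counter types declared below (a hypothetical simplification of
the real wait path; the function name, locking details, and return
convention here are illustrative):

	/* Queue a seqid-bearing RPC; only the list head may proceed. */
	static int nfs_seqid_wait_sketch(struct nfs_seqid *seqid, struct rpc_task *task)
	{
		struct nfs_seqid_counter *sequence = seqid->sequence;
		int status = 0;

		spin_lock(&sequence->lock);
		if (list_empty(&seqid->list))
			list_add_tail(&seqid->list, &sequence->list);
		if (list_first_entry(&sequence->list, struct nfs_seqid, list) != seqid) {
			/* Not our turn yet: park the task on the counter's wait queue. */
			rpc_sleep_on(&sequence->wait, task, NULL);
			status = -EAGAIN;
		}
		spin_unlock(&sequence->lock);
		return status;
	}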

#define NFS_SEQID_CONFIRMED 1
struct nfs_seqid_counter {
	ktime_t create_time;
	int owner_id;
	int flags;
	u32 counter;
	spinlock_t lock;		/* Protects the list */
	struct list_head list;		/* Defines sequence of RPC calls */
	struct rpc_wait_queue wait;	/* RPC call delay queue */
};

struct nfs_seqid {
	struct nfs_seqid_counter *sequence;
	struct list_head list;
	struct rpc_task *task;
};
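
/*
 * A seqid-mutating result means the server processed the call, so the
 * sequence counter can be marked as confirmed by the server.
 */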
static inline void nfs_confirm_seqid(struct nfs_seqid_counter *seqid, int status)
{
	if (seqid_mutating_err(-status))
		seqid->flags |= NFS_SEQID_CONFIRMED;
}

NFS: Cache state owners after files are closed

Servers have a finite amount of memory to store NFSv4 open and lock
owners. Moreover, servers may have a difficult time determining when
they can reap their state owner table, thanks to gray areas in the
NFSv4 protocol specification. Thus clients should be careful to reuse
state owners when possible.

Currently Linux is not too careful. When a user has closed all her
files on one mount point, the state owner's reference count goes to
zero, and it is released. The next OPEN allocates a new one. A
workload that serially opens and closes files can run through a large
number of open owners this way.

When a state owner's reference count goes to zero, slap it onto a free
list for that nfs_server, with an expiry time. Garbage collect before
looking for a state owner. This makes state owners for active users
available for re-use.

Now that there can be unused state owners remaining at umount time,
purge the state owner free list when a server is destroyed. Also be
sure not to reclaim unused state owners during state recovery.

This change has benefits for the client as well. For some workloads,
this approach drops the number of OPEN_CONFIRM calls from the same as
the number of OPEN calls, down to just one. This reduces wire traffic
and thus open(2) latency. Before this patch, untarring a kernel
source tarball shows the OPEN_CONFIRM call counter steadily increasing
through the test. With the patch, the OPEN_CONFIRM count remains at 1
throughout the entire untar.

As long as the expiry time is kept short, I don't think garbage
collection should be terribly expensive, although it does bounce the
clp->cl_lock around a bit.

[ At some point we should rationalize the use of the nfs_server
->destroy method. ]

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
[Trond: Fixed a garbage collection race and a few efficiency issues]
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
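A sketch of the free-list discipline described above (hypothetical helper;
the so_lru/so_expires fields are the ones declared below, but the list and
lock names on nfs_server/nfs_client are assumptions, and the real code
frees owners through a dedicated destructor rather than kfree()):

	/* Expire cached state owners before allocating a new one. */
	static void nfs4_gc_state_owners_sketch(struct nfs_server *server)
	{
		struct nfs4_state_owner *sp, *tmp;
		unsigned long cutoff = jiffies;	/* entries older than now expire */
		LIST_HEAD(doomed);

		spin_lock(&server->nfs_client->cl_lock);
		list_for_each_entry_safe(sp, tmp, &server->state_owners_lru, so_lru) {
			if (time_after(sp->so_expires, cutoff))
				break;	/* list is ordered oldest-first */
			list_move(&sp->so_lru, &doomed);
		}
		spin_unlock(&server->nfs_client->cl_lock);

		list_for_each_entry_safe(sp, tmp, &doomed, so_lru)
			kfree(sp);	/* real code also drops the rbtree entry */
	}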

/*
 * NFS4 state_owners and lock_owners are simply labels for ordered
 * sequences of RPC calls. Their sole purpose is to provide once-only
 * semantics by allowing the server to identify replayed requests.
 */
struct nfs4_state_owner {
	struct nfs_server *so_server;
	struct list_head so_lru;
	unsigned long so_expires;
	struct rb_node so_server_node;

	const struct cred *so_cred;	/* Associated cred */

	spinlock_t so_lock;
	atomic_t so_count;
	unsigned long so_flags;
	struct list_head so_states;
	struct nfs_seqid_counter so_seqid;
	seqcount_spinlock_t so_reclaim_seqcount;
	struct mutex so_delegreturn_mutex;
};

enum {
	NFS_OWNER_RECLAIM_REBOOT,
	NFS_OWNER_RECLAIM_NOGRACE
};

#define NFS_LOCK_NEW		0
#define NFS_LOCK_RECLAIM	1
#define NFS_LOCK_EXPIRED	2

/*
 * struct nfs4_state maintains the client-side state for a given
 * (state_owner,inode) tuple (OPEN) or state_owner (LOCK).
 *
 * OPEN:
 * In order to know when to OPEN_DOWNGRADE or CLOSE the state on the server,
 * we need to know how many files are open for reading or writing on a
 * given inode. This information too is stored here.
 *
 * LOCK: one nfs4_state (LOCK) to hold the lock stateid nfs4_state(OPEN)
 */

struct nfs4_lock_state {
	struct list_head ls_locks;	/* Other lock stateids */
	struct nfs4_state *ls_state;	/* Pointer to open state */
#define NFS_LOCK_INITIALIZED 0
#define NFS_LOCK_LOST 1
	unsigned long ls_flags;
	struct nfs_seqid_counter ls_seqid;
	nfs4_stateid ls_stateid;
	refcount_t ls_count;
	fl_owner_t ls_owner;
};

/* bits for nfs4_state->flags */
enum {
	LK_STATE_IN_USE,
	NFS_DELEGATED_STATE,		/* Current stateid is delegation */
	NFS_OPEN_STATE,			/* OPEN stateid is set */
	NFS_O_RDONLY_STATE,		/* OPEN stateid has read-only state */
	NFS_O_WRONLY_STATE,		/* OPEN stateid has write-only state */
	NFS_O_RDWR_STATE,		/* OPEN stateid has read/write state */
	NFS_STATE_RECLAIM_REBOOT,	/* OPEN stateid server rebooted */
	NFS_STATE_RECLAIM_NOGRACE,	/* OPEN stateid needs to recover state */
	NFS_STATE_POSIX_LOCKS,		/* Posix locks are supported */
	NFS_STATE_RECOVERY_FAILED,	/* OPEN stateid state recovery failed */
	NFS_STATE_MAY_NOTIFY_LOCK,	/* server may CB_NOTIFY_LOCK */
	NFS_STATE_CHANGE_WAIT,		/* A state changing operation is outstanding */
	NFS_CLNT_DST_SSC_COPY_STATE,	/* dst server open state on client */
	NFS_CLNT_SRC_SSC_COPY_STATE,	/* src server open state on client */
	NFS_SRV_SSC_COPY_STATE,		/* ssc state on the dst server */
};

NFSv4: Fix OPEN / CLOSE race

Ben Coddington has noted the following race between OPEN and CLOSE
on a single client.

Process 1			Process 2			Server
=========			=========			======
1) OPEN file
				2) OPEN file
								3) Process OPEN (1) seqid=1
								4) Process OPEN (2) seqid=2
								5) Reply OPEN (2)
				6) Receive reply (2)
				7) new stateid, seqid=2
				8) CLOSE file, using
				   stateid w/ seqid=2
								9) Reply OPEN (1)
								10) Process CLOSE (8)
								11) Reply CLOSE (8)
								12) Forget stateid
								    file closed
				13) Receive reply (8)
				14) Forget stateid
				    file closed.
15) Receive reply (1).
16) New stateid seqid=1
    is really the same
    stateid that was
    closed.

IOW: the reply to the first OPEN is delayed. Since "Process 2" does
not wait before closing the file, and it does not cache the closed
stateid, then when the delayed reply is finally received, it is treated
as setting up a new stateid by the client.

The fix is to ensure that the client processes the OPEN and CLOSE calls
in the same order in which the server processed them.

This commit ensures that we examine the seqid of the stateid
returned by OPEN. If it is a new stateid, we assume the seqid
must be equal to the value 1, and that each state transition
increments the seqid value by 1 (See RFC7530, Section 9.1.4.2,
and RFC5661, Section 8.2.2).

If the tracker sees that an OPEN returns with a seqid that is greater
than the cached seqid + 1, then it bumps a flag to ensure that the
caller waits for the RPCs carrying the missing seqids to complete.

Note that there can still be pathologies where the server crashes before
it can even send us the missing seqids. Since the OPEN call is still
holding a slot when it waits here, that could cause the recovery to
stall forever. To avoid that, we time out after a 5 second wait.

Reported-by: Benjamin Coddington <bcodding@redhat.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
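The succession rule the tracker relies on is the one nfs4_stateid_is_next()
encodes further down in this header; as a sketch (hypothetical wrapper,
illustrative name):

	/* An incoming open stateid is "in order" iff its seqid is the
	 * immediate successor of the cached one (seqid skips 0 on wrap). */
	static bool open_stateid_in_order_sketch(const nfs4_stateid *cached,
						 const nfs4_stateid *recvd)
	{
		u32 cur = be32_to_cpu(cached->seqid);
		u32 next = be32_to_cpu(recvd->seqid);

		return next == cur + 1U || (next == 1U && cur == 0xffffffffU);
	}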

struct nfs4_state {
	struct list_head open_states;	/* List of states for the same state_owner */
	struct list_head inode_states;	/* List of states for the same inode */
	struct list_head lock_states;	/* List of subservient lock stateids */

	struct nfs4_state_owner *owner;	/* Pointer to the open owner */
	struct inode *inode;		/* Pointer to the inode */

	unsigned long flags;		/* Do we hold any locks? */
	spinlock_t state_lock;		/* Protects the lock_states list */

	seqlock_t seqlock;		/* Protects the stateid/open_stateid */
	nfs4_stateid stateid;		/* Current stateid: may be delegation */
	nfs4_stateid open_stateid;	/* OPEN stateid */

	/* The following 3 fields are protected by owner->so_lock */
	unsigned int n_rdonly;		/* Number of read-only references */
	unsigned int n_wronly;		/* Number of write-only references */
	unsigned int n_rdwr;		/* Number of read/write references */
	fmode_t state;			/* State on the server (R,W, or RW) */
	refcount_t count;
	wait_queue_head_t waitq;
	struct rcu_head rcu_head;
};
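
The open counters above drive the OPEN_DOWNGRADE-vs-CLOSE decision the
block comment describes; sketched here as a hypothetical helper (the real
logic lives in the CLOSE path of nfs4proc.c):

	/* Recompute the share mode still needed once a reference is dropped;
	 * 0 means a full CLOSE, a reduced mode means OPEN_DOWNGRADE. */
	static fmode_t nfs4_state_needed_mode_sketch(const struct nfs4_state *state)
	{
		fmode_t mode = 0;

		if (state->n_rdonly || state->n_rdwr)
			mode |= FMODE_READ;
		if (state->n_wronly || state->n_rdwr)
			mode |= FMODE_WRITE;
		return mode;
	}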

struct nfs4_exception {
	struct nfs4_state *state;
	struct inode *inode;
	nfs4_stateid *stateid;
	long timeout;
	unsigned char task_is_privileged : 1;
	unsigned char delay : 1,
		      recovering : 1,
		      retry : 1;
	bool interruptible;
};

struct nfs4_state_recovery_ops {
	int owner_flag_bit;
	int state_flag_bit;
	int (*recover_open)(struct nfs4_state_owner *, struct nfs4_state *);
	int (*recover_lock)(struct nfs4_state *, struct file_lock *);
	int (*establish_clid)(struct nfs_client *, const struct cred *);
	int (*reclaim_complete)(struct nfs_client *, const struct cred *);
	int (*detect_trunking)(struct nfs_client *, struct nfs_client **,
			const struct cred *);
};

struct nfs4_opendata {
	struct kref kref;
	struct nfs_openargs o_arg;
	struct nfs_openres o_res;
	struct nfs_open_confirmargs c_arg;
	struct nfs_open_confirmres c_res;
	struct nfs4_string owner_name;
	struct nfs4_string group_name;
	struct nfs4_label *a_label;
	struct nfs_fattr f_attr;
	struct dentry *dir;
	struct dentry *dentry;
	struct nfs4_state_owner *owner;
	struct nfs4_state *state;
	struct iattr attrs;
	struct nfs4_layoutget *lgp;
	unsigned long timestamp;
	bool rpc_done;
	bool file_created;
	bool is_recover;
	bool cancelled;
	int rpc_status;
};

struct nfs4_add_xprt_data {
	struct nfs_client *clp;
	const struct cred *cred;
};

struct nfs4_state_maintenance_ops {
	int (*sched_state_renewal)(struct nfs_client *, const struct cred *, unsigned);
	const struct cred * (*get_state_renewal_cred)(struct nfs_client *);
	int (*renew_lease)(struct nfs_client *, const struct cred *);
};

struct nfs4_mig_recovery_ops {
	int (*get_locations)(struct nfs_server *, struct nfs_fh *,
			struct nfs4_fs_locations *, struct page *, const struct cred *);
	int (*fsid_present)(struct inode *, const struct cred *);
};

extern const struct dentry_operations nfs4_dentry_operations;

/* dir.c */
int nfs_atomic_open(struct inode *, struct dentry *, struct file *,
		    unsigned, umode_t);

/* fs_context.c */
extern struct file_system_type nfs4_fs_type;

/* nfs4namespace.c */
struct rpc_clnt *nfs4_negotiate_security(struct rpc_clnt *, struct inode *,
					 const struct qstr *);
int nfs4_submount(struct fs_context *, struct nfs_server *);
int nfs4_replace_transport(struct nfs_server *server,
			   const struct nfs4_fs_locations *locations);
size_t nfs_parse_server_name(char *string, size_t len, struct sockaddr *sa,
			     size_t salen, struct net *net, int port);

/* nfs4proc.c */
extern int nfs4_handle_exception(struct nfs_server *, int, struct nfs4_exception *);
extern int nfs4_async_handle_error(struct rpc_task *task,
				   struct nfs_server *server,
				   struct nfs4_state *state, long *timeout);
extern int nfs4_call_sync(struct rpc_clnt *, struct nfs_server *,
			  struct rpc_message *, struct nfs4_sequence_args *,
			  struct nfs4_sequence_res *, int);
extern void nfs4_init_sequence(struct nfs4_sequence_args *, struct nfs4_sequence_res *, int, int);
extern int nfs4_proc_setclientid(struct nfs_client *, u32, unsigned short, const struct cred *, struct nfs4_setclientid_res *);
extern int nfs4_proc_setclientid_confirm(struct nfs_client *, struct nfs4_setclientid_res *arg, const struct cred *);
extern int nfs4_proc_get_rootfh(struct nfs_server *, struct nfs_fh *, struct nfs_fsinfo *, bool);
extern int nfs4_proc_bind_conn_to_session(struct nfs_client *, const struct cred *cred);
extern int nfs4_proc_exchange_id(struct nfs_client *clp, const struct cred *cred);
extern int nfs4_destroy_clientid(struct nfs_client *clp);
extern int nfs4_init_clientid(struct nfs_client *, const struct cred *);
extern int nfs41_init_clientid(struct nfs_client *, const struct cred *);
extern int nfs4_do_close(struct nfs4_state *state, gfp_t gfp_mask, int wait);
extern int nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *fhandle);
extern int nfs4_proc_fs_locations(struct rpc_clnt *, struct inode *, const struct qstr *,
				  struct nfs4_fs_locations *, struct page *);
extern int nfs4_proc_get_locations(struct nfs_server *, struct nfs_fh *,
				   struct nfs4_fs_locations *,
				   struct page *page, const struct cred *);
extern int nfs4_proc_fsid_present(struct inode *, const struct cred *);
extern struct rpc_clnt *nfs4_proc_lookup_mountpoint(struct inode *,
						    struct dentry *,
						    struct nfs_fh *,
						    struct nfs_fattr *);
extern int nfs4_proc_secinfo(struct inode *, const struct qstr *, struct nfs4_secinfo_flavors *);
extern const struct xattr_handler *nfs4_xattr_handlers[];
extern int nfs4_set_rw_stateid(nfs4_stateid *stateid,
			       const struct nfs_open_context *ctx,
			       const struct nfs_lock_context *l_ctx,
			       fmode_t fmode);
extern void nfs4_bitmask_set(__u32 bitmask[], const __u32 src[],
			     struct inode *inode, unsigned long cache_validity);
extern int nfs4_proc_getattr(struct nfs_server *server, struct nfs_fh *fhandle,
			     struct nfs_fattr *fattr, struct inode *inode);
extern int update_open_stateid(struct nfs4_state *state,
			       const nfs4_stateid *open_stateid,
			       const nfs4_stateid *deleg_stateid,
			       fmode_t fmode);
extern int nfs4_proc_setlease(struct file *file, long arg,
			      struct file_lock **lease, void **priv);
extern int nfs4_proc_get_lease_time(struct nfs_client *clp,
				    struct nfs_fsinfo *fsinfo);
extern void nfs4_update_changeattr(struct inode *dir,
				   struct nfs4_change_info *cinfo,
				   unsigned long timestamp,
				   unsigned long cache_validity);
extern int nfs4_buf_to_pages_noslab(const void *buf, size_t buflen,
				    struct page **pages);

#if defined(CONFIG_NFS_V4_1)
extern int nfs41_sequence_done(struct rpc_task *, struct nfs4_sequence_res *);
extern int nfs4_proc_create_session(struct nfs_client *, const struct cred *);
extern int nfs4_proc_destroy_session(struct nfs4_session *, const struct cred *);
extern int nfs4_proc_layoutcommit(struct nfs4_layoutcommit_data *data,
				  bool sync);
extern int nfs4_detect_session_trunking(struct nfs_client *clp,
					struct nfs41_exchange_id_res *res, struct rpc_xprt *xprt);
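
/*
 * EXCHGID4_FLAG_MASK_PNFS masks the pNFS role bits negotiated by
 * EXCHANGE_ID; a client is "DS only" when USE_PNFS_DS is the only
 * role bit the server granted.
 */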
static inline bool
is_ds_only_client(struct nfs_client *clp)
{
	return (clp->cl_exchange_flags & EXCHGID4_FLAG_MASK_PNFS) ==
		EXCHGID4_FLAG_USE_PNFS_DS;
}

static inline bool
is_ds_client(struct nfs_client *clp)
{
	return clp->cl_exchange_flags & EXCHGID4_FLAG_USE_PNFS_DS;
}

static inline bool
_nfs4_state_protect(struct nfs_client *clp, unsigned long sp4_mode,
		    struct rpc_clnt **clntp, struct rpc_message *msg)
{
	rpc_authflavor_t flavor;

	if (sp4_mode == NFS_SP4_MACH_CRED_CLEANUP ||
	    sp4_mode == NFS_SP4_MACH_CRED_PNFS_CLEANUP) {
		/* Using machine creds for cleanup operations
		 * is only relevant if the client credentials
		 * might expire. So don't bother for
		 * RPC_AUTH_UNIX. If file was only exported to
		 * sec=sys, the PUTFH would fail anyway.
		 */
		if ((*clntp)->cl_auth->au_flavor == RPC_AUTH_UNIX)
			return false;
	}
	if (test_bit(sp4_mode, &clp->cl_sp4_flags)) {
		msg->rpc_cred = rpc_machine_cred();

		flavor = clp->cl_rpcclient->cl_auth->au_flavor;
		WARN_ON_ONCE(flavor != RPC_AUTH_GSS_KRB5I &&
			     flavor != RPC_AUTH_GSS_KRB5P);
		*clntp = clp->cl_rpcclient;

		return true;
	}
	return false;
}

/*
 * Function responsible for determining if an rpc_message should use the
 * machine cred under SP4_MACH_CRED and if so switching the credential and
 * authflavor (using the nfs_client's rpc_clnt which will be krb5i/p).
 * Should be called before rpc_call_sync/rpc_call_async.
 */
static inline void
nfs4_state_protect(struct nfs_client *clp, unsigned long sp4_mode,
		   struct rpc_clnt **clntp, struct rpc_message *msg)
{
	_nfs4_state_protect(clp, sp4_mode, clntp, msg);
}

/*
 * Special wrapper to nfs4_state_protect for write.
 * If WRITE can use machine cred but COMMIT cannot, make sure all writes
 * that use machine cred use NFS_FILE_SYNC.
 */
static inline void
nfs4_state_protect_write(struct nfs_client *clp, struct rpc_clnt **clntp,
			 struct rpc_message *msg, struct nfs_pgio_header *hdr)
{
	if (_nfs4_state_protect(clp, NFS_SP4_MACH_CRED_WRITE, clntp, msg) &&
	    !test_bit(NFS_SP4_MACH_CRED_COMMIT, &clp->cl_sp4_flags))
		hdr->args.stable = NFS_FILE_SYNC;
}

#else /* CONFIG_NFS_V4_1 */
static inline bool
is_ds_only_client(struct nfs_client *clp)
{
	return false;
}

static inline bool
is_ds_client(struct nfs_client *clp)
{
	return false;
}

static inline void
nfs4_state_protect(struct nfs_client *clp, unsigned long sp4_flags,
		   struct rpc_clnt **clntp, struct rpc_message *msg)
{
}

static inline void
nfs4_state_protect_write(struct nfs_client *clp, struct rpc_clnt **clntp,
			 struct rpc_message *msg, struct nfs_pgio_header *hdr)
{
}
#endif /* CONFIG_NFS_V4_1 */

extern const struct nfs4_minor_version_ops *nfs_v4_minor_ops[];

extern const u32 nfs4_fattr_bitmap[3];
extern const u32 nfs4_statfs_bitmap[3];
extern const u32 nfs4_pathconf_bitmap[3];
extern const u32 nfs4_fsinfo_bitmap[3];
extern const u32 nfs4_fs_locations_bitmap[3];

void nfs40_shutdown_client(struct nfs_client *);
void nfs41_shutdown_client(struct nfs_client *);
int nfs40_init_client(struct nfs_client *);
int nfs41_init_client(struct nfs_client *);
void nfs4_free_client(struct nfs_client *);

struct nfs_client *nfs4_alloc_client(const struct nfs_client_initdata *);

/* nfs4renewd.c */
extern void nfs4_schedule_state_renewal(struct nfs_client *);
extern void nfs4_renewd_prepare_shutdown(struct nfs_server *);
extern void nfs4_kill_renewd(struct nfs_client *);
extern void nfs4_renew_state(struct work_struct *);
extern void nfs4_set_lease_period(struct nfs_client *clp, unsigned long lease);

/* nfs4state.c */
extern const nfs4_stateid current_stateid;

const struct cred *nfs4_get_clid_cred(struct nfs_client *clp);
const struct cred *nfs4_get_machine_cred(struct nfs_client *clp);
const struct cred *nfs4_get_renew_cred(struct nfs_client *clp);
int nfs4_discover_server_trunking(struct nfs_client *clp,
				  struct nfs_client **);
int nfs40_discover_server_trunking(struct nfs_client *clp,
				   struct nfs_client **, const struct cred *);
#if defined(CONFIG_NFS_V4_1)
int nfs41_discover_server_trunking(struct nfs_client *clp,
				   struct nfs_client **, const struct cred *);
extern void nfs4_schedule_session_recovery(struct nfs4_session *, int);
extern void nfs41_notify_server(struct nfs_client *);
bool nfs4_check_serverowner_major_id(struct nfs41_server_owner *o1,
				     struct nfs41_server_owner *o2);
#else
static inline void nfs4_schedule_session_recovery(struct nfs4_session *session, int err)
{
}
#endif /* CONFIG_NFS_V4_1 */

extern struct nfs4_state_owner *nfs4_get_state_owner(struct nfs_server *, const struct cred *, gfp_t);
extern void nfs4_put_state_owner(struct nfs4_state_owner *);
extern void nfs4_purge_state_owners(struct nfs_server *, struct list_head *);
extern void nfs4_free_state_owners(struct list_head *head);
extern struct nfs4_state *nfs4_get_open_state(struct inode *, struct nfs4_state_owner *);
extern void nfs4_put_open_state(struct nfs4_state *);
extern void nfs4_close_state(struct nfs4_state *, fmode_t);
extern void nfs4_close_sync(struct nfs4_state *, fmode_t);
extern void nfs4_state_set_mode_locked(struct nfs4_state *, fmode_t);
extern void nfs_inode_find_state_and_recover(struct inode *inode,
					     const nfs4_stateid *stateid);
extern int nfs4_state_mark_reclaim_nograce(struct nfs_client *, struct nfs4_state *);
extern void nfs4_schedule_lease_recovery(struct nfs_client *);
extern int nfs4_wait_clnt_recover(struct nfs_client *clp);
extern int nfs4_client_recover_expired_lease(struct nfs_client *clp);
extern void nfs4_schedule_state_manager(struct nfs_client *);
extern void nfs4_schedule_path_down_recovery(struct nfs_client *clp);
extern int nfs4_schedule_stateid_recovery(const struct nfs_server *, struct nfs4_state *);
extern int nfs4_schedule_migration_recovery(const struct nfs_server *);
extern void nfs4_schedule_lease_moved_recovery(struct nfs_client *);
extern void nfs41_handle_sequence_flag_errors(struct nfs_client *clp, u32 flags, bool);
extern void nfs41_handle_server_scope(struct nfs_client *,
				      struct nfs41_server_scope **);
extern void nfs4_put_lock_state(struct nfs4_lock_state *lsp);
extern int nfs4_set_lock_state(struct nfs4_state *state, struct file_lock *fl);
extern int nfs4_select_rw_stateid(struct nfs4_state *, fmode_t,
				  const struct nfs_lock_context *, nfs4_stateid *,
				  const struct cred **);
extern bool nfs4_copy_open_stateid(nfs4_stateid *dst,
				   struct nfs4_state *state);

extern struct nfs_seqid *nfs_alloc_seqid(struct nfs_seqid_counter *counter, gfp_t gfp_mask);
extern int nfs_wait_on_sequence(struct nfs_seqid *seqid, struct rpc_task *task);
extern void nfs_increment_open_seqid(int status, struct nfs_seqid *seqid);
extern void nfs_increment_lock_seqid(int status, struct nfs_seqid *seqid);
extern void nfs_release_seqid(struct nfs_seqid *seqid);
extern void nfs_free_seqid(struct nfs_seqid *seqid);
extern int nfs4_setup_sequence(struct nfs_client *client,
			       struct nfs4_sequence_args *args,
			       struct nfs4_sequence_res *res,
			       struct rpc_task *task);
extern int nfs4_sequence_done(struct rpc_task *task,
			      struct nfs4_sequence_res *res);

extern void nfs4_free_lock_state(struct nfs_server *server, struct nfs4_lock_state *lsp);
extern int nfs4_proc_commit(struct file *dst, __u64 offset, __u32 count, struct nfs_commitres *res);
extern const nfs4_stateid zero_stateid;
extern const nfs4_stateid invalid_stateid;

/* nfs4super.c */
struct nfs_mount_info;
extern struct nfs_subversion nfs_v4;
extern bool nfs4_disable_idmapping;
extern unsigned short max_session_slots;
extern unsigned short max_session_cb_slots;
extern unsigned short send_implementation_id;
extern bool recover_lost_locks;

#define NFS4_CLIENT_ID_UNIQ_LEN		(64)
extern char nfs4_client_id_uniquifier[NFS4_CLIENT_ID_UNIQ_LEN];

extern int nfs4_try_get_tree(struct fs_context *);
extern int nfs4_get_referral_tree(struct fs_context *);

/* nfs4sysctl.c */
#ifdef CONFIG_SYSCTL
int nfs4_register_sysctl(void);
void nfs4_unregister_sysctl(void);
#else
static inline int nfs4_register_sysctl(void)
{
	return 0;
}

static inline void nfs4_unregister_sysctl(void)
{
}
#endif

/* nfs4xdr.c */
extern const struct rpc_procinfo nfs4_procedures[];

NFSv4.2: define limits and sizes for user xattr handling

Set limits for extended attributes (attribute value size and listxattr
buffer size), based on the fs-independent limits (XATTR_*_MAX).

Define the maximum XDR sizes for the RFC 8276 XATTR operations.
In the case of operations that carry a larger payload (SETXATTR,
GETXATTR, LISTXATTR), these exclude that payload, which is added
as separate pages, like other operations do.

Define, much like for read and write operations, the maximum overhead
sizes for get/set/listxattr, and use them to limit the maximum payload
size for those operations, in combination with the channel attributes.

Signed-off-by: Frank van der Linden <fllinden@amazon.com>
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
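A sketch of how such an overhead constant would combine with the session's
channel attributes (hypothetical helper; the field name and the units of
the overhead constant are assumptions):

	/* Assume the overhead constant is in bytes and ca_maxrequestsize is
	 * the negotiated fore-channel request limit. */
	static inline u32 nfs42_setxattr_payload_max_sketch(u32 ca_maxrequestsize)
	{
		return ca_maxrequestsize - nfs42_maxsetxattr_overhead;
	}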

#ifdef CONFIG_NFS_V4_2
extern const u32 nfs42_maxsetxattr_overhead;
extern const u32 nfs42_maxgetxattr_overhead;
extern const u32 nfs42_maxlistxattrs_overhead;
#endif

struct nfs4_mount_data;

/* callback_xdr.c */
extern const struct svc_version nfs4_callback_version1;
extern const struct svc_version nfs4_callback_version4;

static inline void nfs4_stateid_copy(nfs4_stateid *dst, const nfs4_stateid *src)
{
	memcpy(dst->data, src->data, sizeof(dst->data));
	dst->type = src->type;
}

static inline bool nfs4_stateid_match(const nfs4_stateid *dst, const nfs4_stateid *src)
{
	if (dst->type != src->type)
		return false;
	return memcmp(dst->data, src->data, sizeof(dst->data)) == 0;
}
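
/*
 * The "other" field is the 12-byte unique part of a stateid; comparing
 * only it (and ignoring the seqid) lets callers recognise the same
 * state across server-side seqid bumps.
 */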
static inline bool nfs4_stateid_match_other(const nfs4_stateid *dst, const nfs4_stateid *src)
{
	return memcmp(dst->other, src->other, NFS4_STATEID_OTHER_SIZE) == 0;
}
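
/*
 * The signed 32-bit difference implements a wraparound-safe ("serial
 * number arithmetic") comparison of the big-endian seqids.
 */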
static inline bool nfs4_stateid_is_newer(const nfs4_stateid *s1, const nfs4_stateid *s2)
{
	return (s32)(be32_to_cpu(s1->seqid) - be32_to_cpu(s2->seqid)) > 0;
}

static inline bool nfs4_stateid_is_next(const nfs4_stateid *s1, const nfs4_stateid *s2)
{
	u32 seq1 = be32_to_cpu(s1->seqid);
	u32 seq2 = be32_to_cpu(s2->seqid);

	return seq2 == seq1 + 1U || (seq2 == 1U && seq1 == 0xffffffffU);
}

static inline bool nfs4_stateid_match_or_older(const nfs4_stateid *dst, const nfs4_stateid *src)
{
	return nfs4_stateid_match_other(dst, src) &&
		!(src->seqid && nfs4_stateid_is_newer(dst, src));
}
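
/*
 * Stateid seqids wrap from 0xffffffff to 1: the value 0 is reserved
 * (it has special meaning on the wire), so the increment skips it.
 */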
static inline void nfs4_stateid_seqid_inc(nfs4_stateid *s1)
{
	u32 seqid = be32_to_cpu(s1->seqid);

	if (++seqid == 0)
		++seqid;
	s1->seqid = cpu_to_be32(seqid);
}

static inline bool nfs4_valid_open_stateid(const struct nfs4_state *state)
{
	return test_bit(NFS_STATE_RECOVERY_FAILED, &state->flags) == 0;
}

static inline bool nfs4_state_match_open_stateid_other(const struct nfs4_state *state,
						       const nfs4_stateid *stateid)
{
	return test_bit(NFS_OPEN_STATE, &state->flags) &&
		nfs4_stateid_match_other(&state->open_stateid, stateid);
}

/* nfs42xattr.c */
#ifdef CONFIG_NFS_V4_2
extern int __init nfs4_xattr_cache_init(void);
extern void nfs4_xattr_cache_exit(void);
extern void nfs4_xattr_cache_add(struct inode *inode, const char *name,
				 const char *buf, struct page **pages,
				 ssize_t buflen);
extern void nfs4_xattr_cache_remove(struct inode *inode, const char *name);
extern ssize_t nfs4_xattr_cache_get(struct inode *inode, const char *name,
				    char *buf, ssize_t buflen);
extern void nfs4_xattr_cache_set_list(struct inode *inode, const char *buf,
				      ssize_t buflen);
extern ssize_t nfs4_xattr_cache_list(struct inode *inode, char *buf,
				     ssize_t buflen);
extern void nfs4_xattr_cache_zap(struct inode *inode);
#else
static inline void nfs4_xattr_cache_zap(struct inode *inode)
{
}
#endif /* CONFIG_NFS_V4_2 */
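
A sketch of the intended call pattern for this cache (hypothetical caller;
the wire fallback is elided):

	/* Serve GETXATTR from the local cache when possible; on a miss the
	 * caller would issue the NFSv4.2 GETXATTR and repopulate the cache
	 * with nfs4_xattr_cache_add(). */
	static ssize_t nfs4_cached_getxattr_sketch(struct inode *inode,
						   const char *name,
						   char *buf, ssize_t buflen)
	{
		ssize_t ret = nfs4_xattr_cache_get(inode, name, buf, buflen);

		if (ret >= 0)
			return ret;	/* cache hit */
		return -ENOENT;		/* miss: go to the server (elided) */
	}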

#else /* CONFIG_NFS_V4 */

#define nfs4_close_state(a, b) do { } while (0)
#define nfs4_close_sync(a, b) do { } while (0)
#define nfs4_state_protect(a, b, c, d) do { } while (0)
#define nfs4_state_protect_write(a, b, c, d) do { } while (0)

#endif /* CONFIG_NFS_V4 */
#endif /* __LINUX_FS_NFS_NFS4_FS_H */
|