License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where references to the
license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners could not find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If the file was under a */uapi/* path, it was tagged "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, Kate, Philippe and Thomas logged over 70 hours of manual review
on the spreadsheet to determine the SPDX license identifiers to apply to
the source files, with confirmation in some cases by lawyers working with
the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is in part based on an older version of FOSSology,
so the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and they have been fixed to reflect
the correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version early this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.
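The comment-style distinction mentioned above can be sketched as follows. This is an illustrative user-space sketch, not the actual tagging script: the function name `spdx_line_for` is hypothetical, but the two comment styles match the kernel convention that .c files take a `//` SPDX line while headers keep a C-style block comment.

```c
#include <assert.h>
#include <string.h>

/* Pick the SPDX tag line appropriate for a given file path.
 * Headers get a block comment; .c source files get a // comment.
 * (Hypothetical helper for illustration only.) */
static const char *spdx_line_for(const char *path)
{
	size_t len = strlen(path);

	if (len >= 2 && strcmp(path + len - 2, ".h") == 0)
		return "/* SPDX-License-Identifier: GPL-2.0 */";
	return "// SPDX-License-Identifier: GPL-2.0";
}
```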
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * linux/fs/nfs/nfs4_fs.h
 *
 * Copyright (C) 2005 Trond Myklebust
 *
 * NFSv4-specific filesystem definitions and declarations
 */

#ifndef __LINUX_FS_NFS_NFS4_FS_H
#define __LINUX_FS_NFS_NFS4_FS_H

#if defined(CONFIG_NFS_V4_2)
#define NFS4_MAX_MINOR_VERSION 2
#elif defined(CONFIG_NFS_V4_1)
#define NFS4_MAX_MINOR_VERSION 1
#else
#define NFS4_MAX_MINOR_VERSION 0
#endif

#if IS_ENABLED(CONFIG_NFS_V4)

#define NFS4_MAX_LOOP_ON_RECOVER (10)

#include <linux/seqlock.h>

struct idmap;

enum nfs4_client_state {
	NFS4CLNT_MANAGER_RUNNING = 0,
	NFS4CLNT_CHECK_LEASE,
	NFS4CLNT_LEASE_EXPIRED,
	NFS4CLNT_RECLAIM_REBOOT,
	NFS4CLNT_RECLAIM_NOGRACE,
	NFS4CLNT_DELEGRETURN,
	NFS4CLNT_SESSION_RESET,
	NFS4CLNT_LEASE_CONFIRM,
	NFS4CLNT_SERVER_SCOPE_MISMATCH,
	NFS4CLNT_PURGE_STATE,
	NFS4CLNT_BIND_CONN_TO_SESSION,
	NFS4CLNT_MOVED,
	NFS4CLNT_LEASE_MOVED,
	NFS4CLNT_DELEGATION_EXPIRED,
};

#define NFS4_RENEW_TIMEOUT		0x01
#define NFS4_RENEW_DELEGATION_CB	0x02

struct nfs_seqid_counter;
struct nfs4_minor_version_ops {
	u32	minor_version;
	unsigned init_caps;

	int	(*init_client)(struct nfs_client *);
	void	(*shutdown_client)(struct nfs_client *);
	bool	(*match_stateid)(const nfs4_stateid *,
			const nfs4_stateid *);
	int	(*find_root_sec)(struct nfs_server *, struct nfs_fh *,
			struct nfs_fsinfo *);
	void	(*free_lock_state)(struct nfs_server *,
			struct nfs4_lock_state *);
	int	(*test_and_free_expired)(struct nfs_server *,
			nfs4_stateid *, struct rpc_cred *);
	struct nfs_seqid *
		(*alloc_seqid)(struct nfs_seqid_counter *, gfp_t);
	int	(*session_trunk)(struct rpc_clnt *, struct rpc_xprt *, void *);
	const struct rpc_call_ops *call_sync_ops;
	const struct nfs4_state_recovery_ops *reboot_recovery_ops;
	const struct nfs4_state_recovery_ops *nograce_recovery_ops;
	const struct nfs4_state_maintenance_ops *state_renewal_ops;
	const struct nfs4_mig_recovery_ops *mig_recovery_ops;
};
NFSv4: Add functions to order RPC calls
NFSv4 file state-changing functions such as OPEN, CLOSE, LOCK,... are all
labelled with "sequence identifiers" in order to prevent the server from
reordering RPC requests, as this could cause its file state to
become out of sync with the client.
Currently the NFS client code enforces this ordering locally using
semaphores to restrict access to structures until the RPC call is done.
This, of course, only works with synchronous RPC calls, since the
user process must first grab the semaphore.
By dropping semaphores, and instead teaching the RPC engine to hold
the RPC calls until they are ready to be sent, we can extend this
process to work nicely with asynchronous RPC calls too.
This patch adds a new list called "rpc_sequence" that defines the order
of the RPC calls to be sent. We add one such list for each state_owner.
When an RPC call is ready to be sent, it checks if it is at the top of the
rpc_sequence list. If so, it proceeds. If not, it goes back to sleep,
and loops until it reaches the top of the list.
Once the RPC call has completed, it can then bump the sequence id counter,
and remove itself from the rpc_sequence list, and then wake up the next
sleeper.
Note that the state_owner sequence ids and lock_owner sequence ids are
all indexed to the same rpc_sequence list, so OPEN, LOCK,... requests
are all ordered w.r.t. each other.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
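The ordering mechanism described above can be illustrated with a minimal user-space sketch. This is not the kernel's implementation: `struct seq_counter` and the `seq_*` helper names are invented for illustration, and the real code uses the kernel list and RPC wait-queue primitives rather than a fixed array.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the rpc_sequence idea: calls join a FIFO, only the call at
 * the head may be sent, and completing a call bumps the sequence counter
 * so the next waiter can proceed. */
struct seq_counter {
	unsigned int counter;	/* sequence id seen by the server */
	int queue[16];		/* FIFO of waiting call ids */
	size_t head, tail;
};

static void seq_enqueue(struct seq_counter *s, int call_id)
{
	s->queue[s->tail++ % 16] = call_id;
}

/* A call may be sent only when it sits at the head of the list;
 * otherwise it must go back to sleep and retry later. */
static int seq_may_send(const struct seq_counter *s, int call_id)
{
	return s->head != s->tail && s->queue[s->head % 16] == call_id;
}

/* On completion: leave the list and bump the sequence id, which in the
 * real code also wakes the next sleeper on the wait queue. */
static unsigned int seq_complete(struct seq_counter *s)
{
	s->head++;
	return ++s->counter;
}
```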
#define NFS_SEQID_CONFIRMED 1
struct nfs_seqid_counter {
	ktime_t create_time;
	int owner_id;
	int flags;
	u32 counter;
	spinlock_t lock;		/* Protects the list */
	struct list_head list;		/* Defines sequence of RPC calls */
	struct rpc_wait_queue	wait;	/* RPC call delay queue */
};

struct nfs_seqid {
	struct nfs_seqid_counter *sequence;
	struct list_head list;
	struct rpc_task *task;
};

static inline void nfs_confirm_seqid(struct nfs_seqid_counter *seqid, int status)
{
	if (seqid_mutating_err(-status))
		seqid->flags |= NFS_SEQID_CONFIRMED;
}

/*
 * NFS4 state_owners and lock_owners are simply labels for ordered
 * sequences of RPC calls. Their sole purpose is to provide once-only
 * semantics by allowing the server to identify replayed requests.
 */
struct nfs4_state_owner {
	struct nfs_server    *so_server;
NFS: Cache state owners after files are closed
Servers have a finite amount of memory to store NFSv4 open and lock
owners. Moreover, servers may have a difficult time determining when
they can reap their state owner table, thanks to gray areas in the
NFSv4 protocol specification. Thus clients should be careful to reuse
state owners when possible.
Currently Linux is not too careful. When a user has closed all her
files on one mount point, the state owner's reference count goes to
zero, and it is released. The next OPEN allocates a new one. A
workload that serially opens and closes files can run through a large
number of open owners this way.
When a state owner's reference count goes to zero, slap it onto a free
list for that nfs_server, with an expiry time. Garbage collect before
looking for a state owner. This makes state owners for active users
available for re-use.
Now that there can be unused state owners remaining at umount time,
purge the state owner free list when a server is destroyed. Also be
sure not to reclaim unused state owners during state recovery.
This change has benefits for the client as well. For some workloads,
this approach drops the number of OPEN_CONFIRM calls from the same as
the number of OPEN calls, down to just one. This reduces wire traffic
and thus open(2) latency. Before this patch, untarring a kernel
source tarball shows the OPEN_CONFIRM call counter steadily increasing
through the test. With the patch, the OPEN_CONFIRM count remains at 1
throughout the entire untar.
As long as the expiry time is kept short, I don't think garbage
collection should be terribly expensive, although it does bounce the
clp->cl_lock around a bit.
[ At some point we should rationalize the use of the nfs_server
->destroy method. ]
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
[Trond: Fixed a garbage collection race and a few efficiency issues]
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
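The free-list-with-expiry scheme described above can be sketched in a few lines. This is an illustrative user-space model, not the kernel code: the `struct owner` fields and the `owner_reuse` helper are invented names, and the real implementation hangs the list off the nfs_server under `clp->cl_lock`.

```c
#include <assert.h>
#include <stddef.h>

/* A released state owner goes onto a free list with an expiry time;
 * lookups garbage-collect expired entries first, then reuse one if any
 * survive. (Names are illustrative.) */
struct owner {
	int id;
	unsigned long expires;	/* time after which the entry is reaped */
	struct owner *next;	/* free-list linkage */
};

/* Reap expired entries, then hand back a reusable owner if one is left;
 * NULL means the caller must allocate a fresh owner. */
static struct owner *owner_reuse(struct owner **free_list, unsigned long now)
{
	while (*free_list && (*free_list)->expires <= now)
		*free_list = (*free_list)->next;	/* garbage collect */
	if (*free_list) {
		struct owner *o = *free_list;

		*free_list = o->next;
		return o;
	}
	return NULL;
}
```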
	struct list_head     so_lru;
	unsigned long        so_expires;
	struct rb_node	     so_server_node;

	struct rpc_cred	     *so_cred;	 /* Associated cred */

	spinlock_t	     so_lock;
	atomic_t	     so_count;
	unsigned long	     so_flags;
	struct list_head     so_states;
	struct nfs_seqid_counter so_seqid;
	seqcount_t	     so_reclaim_seqcount;
	struct mutex	     so_delegreturn_mutex;
};

enum {
	NFS_OWNER_RECLAIM_REBOOT,
	NFS_OWNER_RECLAIM_NOGRACE
};

#define NFS_LOCK_NEW		0
#define NFS_LOCK_RECLAIM	1
#define NFS_LOCK_EXPIRED	2

/*
 * struct nfs4_state maintains the client-side state for a given
 * (state_owner,inode) tuple (OPEN) or state_owner (LOCK).
 *
 * OPEN:
 * In order to know when to OPEN_DOWNGRADE or CLOSE the state on the server,
 * we need to know how many files are open for reading or writing on a
 * given inode. This information too is stored here.
 *
 * LOCK: one nfs4_state (LOCK) to hold the lock stateid nfs4_state(OPEN)
 */

struct nfs4_lock_state {
	struct list_head	ls_locks;	/* Other lock stateids */
	struct nfs4_state *	ls_state;	/* Pointer to open state */
#define NFS_LOCK_INITIALIZED 0
#define NFS_LOCK_LOST        1
	unsigned long		ls_flags;
	struct nfs_seqid_counter	ls_seqid;
	nfs4_stateid		ls_stateid;
	refcount_t		ls_count;
	fl_owner_t		ls_owner;
};

/* bits for nfs4_state->flags */
enum {
	LK_STATE_IN_USE,
	NFS_DELEGATED_STATE,		/* Current stateid is delegation */
	NFS_OPEN_STATE,			/* OPEN stateid is set */
	NFS_O_RDONLY_STATE,		/* OPEN stateid has read-only state */
	NFS_O_WRONLY_STATE,		/* OPEN stateid has write-only state */
	NFS_O_RDWR_STATE,		/* OPEN stateid has read/write state */
	NFS_STATE_RECLAIM_REBOOT,	/* OPEN stateid server rebooted */
	NFS_STATE_RECLAIM_NOGRACE,	/* OPEN stateid needs to recover state */
	NFS_STATE_POSIX_LOCKS,		/* Posix locks are supported */
	NFS_STATE_RECOVERY_FAILED,	/* OPEN stateid state recovery failed */
	NFS_STATE_MAY_NOTIFY_LOCK,	/* server may CB_NOTIFY_LOCK */
NFSv4: Fix OPEN / CLOSE race
Ben Coddington has noted the following race between OPEN and CLOSE
on a single client.
Process 1 Process 2 Server
========= ========= ======
1) OPEN file
2) OPEN file
3) Process OPEN (1) seqid=1
4) Process OPEN (2) seqid=2
5) Reply OPEN (2)
6) Receive reply (2)
7) new stateid, seqid=2
8) CLOSE file, using
stateid w/ seqid=2
9) Reply OPEN (1)
10) Process CLOSE (8)
11) Reply CLOSE (8)
12) Forget stateid
file closed
13) Receive reply (7)
14) Forget stateid
file closed.
15) Receive reply (1).
16) New stateid seqid=1
is really the same
stateid that was
closed.
IOW: the reply to the first OPEN is delayed. Since "Process 2" does
not wait before closing the file, and it does not cache the closed
stateid, then when the delayed reply is finally received, it is treated
as setting up a new stateid by the client.
The fix is to ensure that the client processes the OPEN and CLOSE calls
in the same order in which the server processed them.
This commit ensures that we examine the seqid of the stateid
returned by OPEN. If it is a new stateid, we assume the seqid
must be equal to the value 1, and that each state transition
increments the seqid value by 1 (See RFC7530, Section 9.1.4.2,
and RFC5661, Section 8.2.2).
If the tracker sees that an OPEN returns with a seqid that is greater
than the cached seqid + 1, then it bumps a flag to ensure that the
caller waits for the RPCs carrying the missing seqids to complete.
Note that there can still be pathologies where the server crashes before
it can even send us the missing seqids. Since the OPEN call is still
holding a slot when it waits here, that could cause the recovery to
stall forever. To avoid that, we time out after a 5 second wait.
Reported-by: Benjamin Coddington <bcodding@redhat.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
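The seqid rule the fix relies on can be sketched as a small decision function. This is an illustrative model, not the kernel's state tracker: `open_check_seqid` and the enum are invented names, but the logic follows the rule stated above (a fresh stateid must carry seqid 1; otherwise a reply whose seqid exceeds the cached seqid + 1 means earlier replies are still in flight and the caller must wait).

```c
#include <assert.h>

enum open_action { OPEN_ACCEPT, OPEN_WAIT };

/* Decide whether an OPEN reply's stateid seqid may be applied directly
 * or whether the caller must wait for the missing seqids to arrive. */
static enum open_action open_check_seqid(unsigned int cached_seqid,
					 unsigned int reply_seqid,
					 int is_new_stateid)
{
	if (is_new_stateid)	/* a brand-new stateid starts at seqid 1 */
		return reply_seqid == 1 ? OPEN_ACCEPT : OPEN_WAIT;
	/* each state transition increments the seqid by exactly one */
	return reply_seqid <= cached_seqid + 1 ? OPEN_ACCEPT : OPEN_WAIT;
}
```

In the kernel the "wait" branch is additionally bounded by a 5 second timeout, as the commit message notes, so a crashed server cannot stall recovery forever.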
	NFS_STATE_CHANGE_WAIT,		/* A state changing operation is outstanding */
};

struct nfs4_state {
	struct list_head open_states;	/* List of states for the same state_owner */
	struct list_head inode_states;	/* List of states for the same inode */
	struct list_head lock_states;	/* List of subservient lock stateids */

	struct nfs4_state_owner *owner;	/* Pointer to the open owner */
	struct inode *inode;		/* Pointer to the inode */

	unsigned long flags;		/* Do we hold any locks? */
	spinlock_t state_lock;		/* Protects the lock_states list */

	seqlock_t seqlock;		/* Protects the stateid/open_stateid */
	nfs4_stateid stateid;		/* Current stateid: may be delegation */
	nfs4_stateid open_stateid;	/* OPEN stateid */

	/* The following 3 fields are protected by owner->so_lock */
	unsigned int n_rdonly;		/* Number of read-only references */
	unsigned int n_wronly;		/* Number of write-only references */
	unsigned int n_rdwr;		/* Number of read/write references */
	fmode_t state;			/* State on the server (R,W, or RW) */
	atomic_t count;

	wait_queue_head_t waitq;
};
|
|
|
|
|
|
|
|
|
|
|
|
struct nfs4_exception {
|
2008-12-24 04:21:46 +08:00
|
|
|
struct nfs4_state *state;
|
2012-03-08 05:39:06 +08:00
|
|
|
struct inode *inode;
|
2016-06-26 20:44:35 +08:00
|
|
|
nfs4_stateid *stateid;
|
2015-09-21 02:32:45 +08:00
|
|
|
long timeout;
|
|
|
|
unsigned char delay : 1,
|
|
|
|
recovering : 1,
|
|
|
|
retry : 1;
|
2005-06-23 01:16:21 +08:00
|
|
|
};

struct nfs4_state_recovery_ops {
	int owner_flag_bit;
	int state_flag_bit;
	int (*recover_open)(struct nfs4_state_owner *, struct nfs4_state *);
	int (*recover_lock)(struct nfs4_state *, struct file_lock *);
	int (*establish_clid)(struct nfs_client *, struct rpc_cred *);
	int (*reclaim_complete)(struct nfs_client *, struct rpc_cred *);
	int (*detect_trunking)(struct nfs_client *, struct nfs_client **,
			struct rpc_cred *);
};

struct nfs4_add_xprt_data {
	struct nfs_client *clp;
	struct rpc_cred *cred;
};

struct nfs4_state_maintenance_ops {
	int (*sched_state_renewal)(struct nfs_client *, struct rpc_cred *, unsigned);
	struct rpc_cred * (*get_state_renewal_cred_locked)(struct nfs_client *);
	int (*renew_lease)(struct nfs_client *, struct rpc_cred *);
};

struct nfs4_mig_recovery_ops {
	int (*get_locations)(struct inode *, struct nfs4_fs_locations *,
			struct page *, struct rpc_cred *);
	int (*fsid_present)(struct inode *, struct rpc_cred *);
};

extern const struct dentry_operations nfs4_dentry_operations;

/* dir.c */
int nfs_atomic_open(struct inode *, struct dentry *, struct file *,
		    unsigned, umode_t, int *);

/* super.c */
extern struct file_system_type nfs4_fs_type;

/* nfs4namespace.c */
struct rpc_clnt *nfs4_negotiate_security(struct rpc_clnt *, struct inode *,
					 const struct qstr *);
struct vfsmount *nfs4_submount(struct nfs_server *, struct dentry *,
			       struct nfs_fh *, struct nfs_fattr *);
int nfs4_replace_transport(struct nfs_server *server,
			   const struct nfs4_fs_locations *locations);

/* nfs4proc.c */
extern int nfs4_handle_exception(struct nfs_server *, int, struct nfs4_exception *);
extern int nfs4_call_sync(struct rpc_clnt *, struct nfs_server *,
			  struct rpc_message *, struct nfs4_sequence_args *,
			  struct nfs4_sequence_res *, int);
extern void nfs4_init_sequence(struct nfs4_sequence_args *, struct nfs4_sequence_res *, int);
extern int nfs4_proc_setclientid(struct nfs_client *, u32, unsigned short, struct rpc_cred *, struct nfs4_setclientid_res *);
extern int nfs4_proc_setclientid_confirm(struct nfs_client *, struct nfs4_setclientid_res *arg, struct rpc_cred *);
extern int nfs4_proc_get_rootfh(struct nfs_server *, struct nfs_fh *, struct nfs_fsinfo *, bool);
extern int nfs4_proc_bind_conn_to_session(struct nfs_client *, struct rpc_cred *cred);
extern int nfs4_proc_exchange_id(struct nfs_client *clp, struct rpc_cred *cred);
extern int nfs4_destroy_clientid(struct nfs_client *clp);
extern int nfs4_init_clientid(struct nfs_client *, struct rpc_cred *);
extern int nfs41_init_clientid(struct nfs_client *, struct rpc_cred *);
extern int nfs4_do_close(struct nfs4_state *state, gfp_t gfp_mask, int wait);
extern int nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *fhandle);
extern int nfs4_proc_fs_locations(struct rpc_clnt *, struct inode *, const struct qstr *,
				  struct nfs4_fs_locations *, struct page *);
extern int nfs4_proc_get_locations(struct inode *, struct nfs4_fs_locations *,
				   struct page *page, struct rpc_cred *);
extern int nfs4_proc_fsid_present(struct inode *, struct rpc_cred *);
extern struct rpc_clnt *nfs4_proc_lookup_mountpoint(struct inode *, const struct qstr *,
						    struct nfs_fh *, struct nfs_fattr *);
extern int nfs4_proc_secinfo(struct inode *, const struct qstr *, struct nfs4_secinfo_flavors *);
extern const struct xattr_handler *nfs4_xattr_handlers[];
extern int nfs4_set_rw_stateid(nfs4_stateid *stateid,
		const struct nfs_open_context *ctx,
		const struct nfs_lock_context *l_ctx,
		fmode_t fmode);

#if defined(CONFIG_NFS_V4_1)
extern int nfs41_sequence_done(struct rpc_task *, struct nfs4_sequence_res *);
extern int nfs4_proc_create_session(struct nfs_client *, struct rpc_cred *);
extern int nfs4_proc_destroy_session(struct nfs4_session *, struct rpc_cred *);
extern int nfs4_proc_get_lease_time(struct nfs_client *clp,
		struct nfs_fsinfo *fsinfo);
extern int nfs4_proc_layoutcommit(struct nfs4_layoutcommit_data *data,
				  bool sync);
extern int nfs4_detect_session_trunking(struct nfs_client *clp,
		struct nfs41_exchange_id_res *res, struct rpc_xprt *xprt);

static inline bool
is_ds_only_client(struct nfs_client *clp)
{
	return (clp->cl_exchange_flags & EXCHGID4_FLAG_MASK_PNFS) ==
		EXCHGID4_FLAG_USE_PNFS_DS;
}

static inline bool
is_ds_client(struct nfs_client *clp)
{
	return clp->cl_exchange_flags & EXCHGID4_FLAG_USE_PNFS_DS;
}

static inline bool
_nfs4_state_protect(struct nfs_client *clp, unsigned long sp4_mode,
		    struct rpc_clnt **clntp, struct rpc_message *msg)
{
	struct rpc_cred *newcred = NULL;
	rpc_authflavor_t flavor;

	if (sp4_mode == NFS_SP4_MACH_CRED_CLEANUP ||
	    sp4_mode == NFS_SP4_MACH_CRED_PNFS_CLEANUP) {
		/* Using machine creds for cleanup operations
		 * is only relevant if the client credentials
		 * might expire. So don't bother for
		 * RPC_AUTH_UNIX. If file was only exported to
		 * sec=sys, the PUTFH would fail anyway.
		 */
		if ((*clntp)->cl_auth->au_flavor == RPC_AUTH_UNIX)
			return false;
	}
	if (test_bit(sp4_mode, &clp->cl_sp4_flags)) {
		spin_lock(&clp->cl_lock);
		if (clp->cl_machine_cred != NULL)
			/* don't call get_rpccred on the machine cred -
			 * a reference will be held for life of clp */
			newcred = clp->cl_machine_cred;
		spin_unlock(&clp->cl_lock);
		msg->rpc_cred = newcred;

		flavor = clp->cl_rpcclient->cl_auth->au_flavor;
		WARN_ON_ONCE(flavor != RPC_AUTH_GSS_KRB5I &&
			     flavor != RPC_AUTH_GSS_KRB5P);
		*clntp = clp->cl_rpcclient;

		return true;
	}
	return false;
}

/*
 * Function responsible for determining if an rpc_message should use the
 * machine cred under SP4_MACH_CRED and if so switching the credential and
 * authflavor (using the nfs_client's rpc_clnt which will be krb5i/p).
 * Should be called before rpc_call_sync/rpc_call_async.
 */
static inline void
nfs4_state_protect(struct nfs_client *clp, unsigned long sp4_mode,
		   struct rpc_clnt **clntp, struct rpc_message *msg)
{
	_nfs4_state_protect(clp, sp4_mode, clntp, msg);
}

/*
 * Special wrapper to nfs4_state_protect for write.
 * If WRITE can use machine cred but COMMIT cannot, make sure all writes
 * that use machine cred use NFS_FILE_SYNC.
 */
static inline void
nfs4_state_protect_write(struct nfs_client *clp, struct rpc_clnt **clntp,
			 struct rpc_message *msg, struct nfs_pgio_header *hdr)
{
	if (_nfs4_state_protect(clp, NFS_SP4_MACH_CRED_WRITE, clntp, msg) &&
	    !test_bit(NFS_SP4_MACH_CRED_COMMIT, &clp->cl_sp4_flags))
		hdr->args.stable = NFS_FILE_SYNC;
}

#else /* CONFIG_NFS_V4_1 */

static inline bool
is_ds_only_client(struct nfs_client *clp)
{
	return false;
}

static inline bool
is_ds_client(struct nfs_client *clp)
{
	return false;
}

static inline void
nfs4_state_protect(struct nfs_client *clp, unsigned long sp4_flags,
		   struct rpc_clnt **clntp, struct rpc_message *msg)
{
}

static inline void
nfs4_state_protect_write(struct nfs_client *clp, struct rpc_clnt **clntp,
			 struct rpc_message *msg, struct nfs_pgio_header *hdr)
{
}

#endif /* CONFIG_NFS_V4_1 */

extern const struct nfs4_minor_version_ops *nfs_v4_minor_ops[];

extern const u32 nfs4_fattr_bitmap[3];
extern const u32 nfs4_statfs_bitmap[3];
extern const u32 nfs4_pathconf_bitmap[3];
extern const u32 nfs4_fsinfo_bitmap[3];
extern const u32 nfs4_fs_locations_bitmap[3];

void nfs40_shutdown_client(struct nfs_client *);
void nfs41_shutdown_client(struct nfs_client *);
int nfs40_init_client(struct nfs_client *);
int nfs41_init_client(struct nfs_client *);
void nfs4_free_client(struct nfs_client *);

struct nfs_client *nfs4_alloc_client(const struct nfs_client_initdata *);

/* nfs4renewd.c */
extern void nfs4_schedule_state_renewal(struct nfs_client *);
extern void nfs4_renewd_prepare_shutdown(struct nfs_server *);
extern void nfs4_kill_renewd(struct nfs_client *);
extern void nfs4_renew_state(struct work_struct *);
extern void nfs4_set_lease_period(struct nfs_client *clp,
		unsigned long lease,
		unsigned long lastrenewed);

/* nfs4state.c */
struct rpc_cred *nfs4_get_clid_cred(struct nfs_client *clp);
struct rpc_cred *nfs4_get_machine_cred_locked(struct nfs_client *clp);
struct rpc_cred *nfs4_get_renew_cred_locked(struct nfs_client *clp);
int nfs4_discover_server_trunking(struct nfs_client *clp,
			struct nfs_client **);
int nfs40_discover_server_trunking(struct nfs_client *clp,
			struct nfs_client **, struct rpc_cred *);
#if defined(CONFIG_NFS_V4_1)
int nfs41_discover_server_trunking(struct nfs_client *clp,
			struct nfs_client **, struct rpc_cred *);
extern void nfs4_schedule_session_recovery(struct nfs4_session *, int);
extern void nfs41_notify_server(struct nfs_client *);
#else
static inline void nfs4_schedule_session_recovery(struct nfs4_session *session, int err)
{
}
#endif /* CONFIG_NFS_V4_1 */

extern struct nfs4_state_owner *nfs4_get_state_owner(struct nfs_server *, struct rpc_cred *, gfp_t);
extern void nfs4_put_state_owner(struct nfs4_state_owner *);
NFS: Cache state owners after files are closed
Servers have a finite amount of memory to store NFSv4 open and lock
owners. Moreover, servers may have a difficult time determining when
they can reap their state owner table, thanks to gray areas in the
NFSv4 protocol specification. Thus clients should be careful to reuse
state owners when possible.
Currently Linux is not too careful. When a user has closed all her
files on one mount point, the state owner's reference count goes to
zero, and it is released. The next OPEN allocates a new one. A
workload that serially opens and closes files can run through a large
number of open owners this way.
When a state owner's reference count goes to zero, slap it onto a free
list for that nfs_server, with an expiry time. Garbage collect before
looking for a state owner. This makes state owners for active users
available for re-use.
Now that there can be unused state owners remaining at umount time,
purge the state owner free list when a server is destroyed. Also be
sure not to reclaim unused state owners during state recovery.
This change has benefits for the client as well. For some workloads,
this approach drops the number of OPEN_CONFIRM calls from the same as
the number of OPEN calls, down to just one. This reduces wire traffic
and thus open(2) latency. Before this patch, untarring a kernel
source tarball shows the OPEN_CONFIRM call counter steadily increasing
through the test. With the patch, the OPEN_CONFIRM count remains at 1
throughout the entire untar.
As long as the expiry time is kept short, I don't think garbage
collection should be terribly expensive, although it does bounce the
clp->cl_lock around a bit.
[ At some point we should rationalize the use of the nfs_server
->destroy method. ]
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
[Trond: Fixed a garbage collection race and a few efficiency issues]
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2011-12-07 05:13:48 +08:00

extern void nfs4_purge_state_owners(struct nfs_server *);
extern struct nfs4_state *nfs4_get_open_state(struct inode *, struct nfs4_state_owner *);
extern void nfs4_put_open_state(struct nfs4_state *);
extern void nfs4_close_state(struct nfs4_state *, fmode_t);
extern void nfs4_close_sync(struct nfs4_state *, fmode_t);
extern void nfs4_state_set_mode_locked(struct nfs4_state *, fmode_t);
extern void nfs_inode_find_state_and_recover(struct inode *inode,
		const nfs4_stateid *stateid);
extern int nfs4_state_mark_reclaim_nograce(struct nfs_client *, struct nfs4_state *);
extern void nfs4_schedule_lease_recovery(struct nfs_client *);
extern int nfs4_wait_clnt_recover(struct nfs_client *clp);
extern int nfs4_client_recover_expired_lease(struct nfs_client *clp);
extern void nfs4_schedule_state_manager(struct nfs_client *);
extern void nfs4_schedule_path_down_recovery(struct nfs_client *clp);
extern int nfs4_schedule_stateid_recovery(const struct nfs_server *, struct nfs4_state *);
extern int nfs4_schedule_migration_recovery(const struct nfs_server *);
extern void nfs4_schedule_lease_moved_recovery(struct nfs_client *);
extern void nfs41_handle_sequence_flag_errors(struct nfs_client *clp, u32 flags, bool);
extern void nfs41_handle_server_scope(struct nfs_client *,
				      struct nfs41_server_scope **);
extern void nfs4_put_lock_state(struct nfs4_lock_state *lsp);
extern int nfs4_set_lock_state(struct nfs4_state *state, struct file_lock *fl);
extern int nfs4_select_rw_stateid(struct nfs4_state *, fmode_t,
		const struct nfs_lock_context *, nfs4_stateid *,
		struct rpc_cred **);
extern bool nfs4_refresh_open_stateid(nfs4_stateid *dst,
		struct nfs4_state *state);
extern bool nfs4_copy_open_stateid(nfs4_stateid *dst,
		struct nfs4_state *state);

extern struct nfs_seqid *nfs_alloc_seqid(struct nfs_seqid_counter *counter, gfp_t gfp_mask);
NFSv4: Add functions to order RPC calls
NFSv4 file state-changing functions such as OPEN, CLOSE, LOCK,... are all
labelled with "sequence identifiers" in order to prevent the server from
reordering RPC requests, as this could cause its file state to
become out of sync with the client.
Currently the NFS client code enforces this ordering locally using
semaphores to restrict access to structures until the RPC call is done.
This, of course, only works with synchronous RPC calls, since the
user process must first grab the semaphore.
By dropping semaphores, and instead teaching the RPC engine to hold
the RPC calls until they are ready to be sent, we can extend this
process to work nicely with asynchronous RPC calls too.
This patch adds a new list called "rpc_sequence" that defines the order
of the RPC calls to be sent. We add one such list for each state_owner.
When an RPC call is ready to be sent, it checks if it is top of the
rpc_sequence list. If so, it proceeds. If not, it goes back to sleep,
and loops until it hits top of the list.
Once the RPC call has completed, it can then bump the sequence id counter,
and remove itself from the rpc_sequence list, and then wake up the next
sleeper.
Note that the state_owner sequence ids and lock_owner sequence ids are
all indexed to the same rpc_sequence list, so OPEN, LOCK,... requests
are all ordered w.r.t. each other.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2005-10-19 05:20:12 +08:00
extern int nfs_wait_on_sequence(struct nfs_seqid *seqid, struct rpc_task *task);
extern void nfs_increment_open_seqid(int status, struct nfs_seqid *seqid);
extern void nfs_increment_lock_seqid(int status, struct nfs_seqid *seqid);
extern void nfs_release_seqid(struct nfs_seqid *seqid);
extern void nfs_free_seqid(struct nfs_seqid *seqid);
extern int nfs4_setup_sequence(struct nfs_client *client,
			       struct nfs4_sequence_args *args,
			       struct nfs4_sequence_res *res,
			       struct rpc_task *task);
extern int nfs4_sequence_done(struct rpc_task *task,
			      struct nfs4_sequence_res *res);

extern void nfs4_free_lock_state(struct nfs_server *server, struct nfs4_lock_state *lsp);

extern const nfs4_stateid zero_stateid;
extern const nfs4_stateid invalid_stateid;

/* nfs4super.c */
struct nfs_mount_info;
extern struct nfs_subversion nfs_v4;
struct dentry *nfs4_try_mount(int, const char *, struct nfs_mount_info *, struct nfs_subversion *);
extern bool nfs4_disable_idmapping;
extern unsigned short max_session_slots;
extern unsigned short max_session_cb_slots;
extern unsigned short send_implementation_id;
extern bool recover_lost_locks;

#define NFS4_CLIENT_ID_UNIQ_LEN		(64)
extern char nfs4_client_id_uniquifier[NFS4_CLIENT_ID_UNIQ_LEN];

/* nfs4sysctl.c */
#ifdef CONFIG_SYSCTL
int nfs4_register_sysctl(void);
void nfs4_unregister_sysctl(void);
#else
static inline int nfs4_register_sysctl(void)
{
	return 0;
}

static inline void nfs4_unregister_sysctl(void)
{
}
#endif

/* nfs4xdr.c */
extern const struct rpc_procinfo nfs4_procedures[];

struct nfs4_mount_data;

/* callback_xdr.c */
extern const struct svc_version nfs4_callback_version1;
extern const struct svc_version nfs4_callback_version4;

static inline void nfs4_stateid_copy(nfs4_stateid *dst, const nfs4_stateid *src)
{
	memcpy(dst->data, src->data, sizeof(dst->data));
	dst->type = src->type;
}

static inline bool nfs4_stateid_match(const nfs4_stateid *dst, const nfs4_stateid *src)
{
	if (dst->type != src->type)
		return false;
	return memcmp(dst->data, src->data, sizeof(dst->data)) == 0;
}

static inline bool nfs4_stateid_match_other(const nfs4_stateid *dst, const nfs4_stateid *src)
{
	return memcmp(dst->other, src->other, NFS4_STATEID_OTHER_SIZE) == 0;
}

static inline bool nfs4_stateid_is_newer(const nfs4_stateid *s1, const nfs4_stateid *s2)
{
	return (s32)(be32_to_cpu(s1->seqid) - be32_to_cpu(s2->seqid)) > 0;
}

static inline bool nfs4_valid_open_stateid(const struct nfs4_state *state)
{
	return test_bit(NFS_STATE_RECOVERY_FAILED, &state->flags) == 0;
}

static inline bool nfs4_state_match_open_stateid_other(const struct nfs4_state *state,
		const nfs4_stateid *stateid)
{
	return test_bit(NFS_OPEN_STATE, &state->flags) &&
	       nfs4_stateid_match_other(&state->open_stateid, stateid);
}

#else

#define nfs4_close_state(a, b) do { } while (0)
#define nfs4_close_sync(a, b) do { } while (0)
#define nfs4_state_protect(a, b, c, d) do { } while (0)
#define nfs4_state_protect_write(a, b, c, d) do { } while (0)

#endif /* CONFIG_NFS_V4 */

#endif /* __LINUX_FS_NFS_NFS4_FS.H */