NFS client updates for Linux 3.13

Highlights include:
 
 - Changes to the RPC socket code to allow NFSv4 to turn off timeout+retry
   - Detect TCP connection breakage through the "keepalive" mechanism
 - Add client side support for NFSv4.x migration (Chuck Lever)
 - Add support for multiple security flavour arguments to the "sec=" mount
   option (Dros Adamson)
 - fs-cache bugfixes from David Howells:
   - Fix an issue whereby caching can be enabled on a file that is open for
     writing
 - More NFSv4 open code stable bugfixes
 - Various Labeled NFS (selinux) bugfixes, including one stable fix
 - Fix buffer overflow checking in the RPCSEC_GSS upcall encoding
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.15 (GNU/Linux)
 
 iQIcBAABAgAGBQJSe8TEAAoJEGcL54qWCgDydu0QAJVtVhfwlUKm/HZ4oAy0Q5T8
 rJOWupqGnwyqTNLIRTlNegFSwMY+bABbkihXzSoj641o5zRb200KePlNxknzzlu1
 Q715035LDeEC1jrrHHeztTa9uWxAZ9B6gstMzilJYbV72VRYuWA6Q5LstXwQy/jN
 ViSldrGJ4sRZUe6wpNLPBRDBfOMWOtZdyRqqqjm71ZHJJnaqQWLBvThTG4MsLlpg
 j/khi5189MxJWePTKI9zGZdnXZAZ0ar1tAi1QWDNv044EwsS3LZZIko+YdBh6LZx
 9IBwk6TqOXFY0jxPDsIZtTfWPf4pjewRrPINMkjlZl3TJEf97sIlavZ7gWqvVIz5
 eXzFGy7D2XBgub8TGcmZM/7keHY/sqghz7lXZ8FulXlVem52r/95NiQ9tu8l8hq3
 Ab0FUnjtXeuaDFPBCHlKb3zmCMGFF89VqtpCj2plCPvfcGgJvXJqddWBRisQw9St
 UgD1PQWRFGtkrHv5EcQkd5boVdRNjAVAC9PaCWNpOpSVDjJyuUE+v/k75+ZwDcG8
 afAFMJSbCwRxW+cFlLAsQTfQztzuWTTOOVQvJDxfyYulcWshyIruhiYItRDfJqRp
 RynuVzrBERzUs5wsefnBbC218C/WSlOrodPbsZvdhKolvRx1RNtWT29ilZ6+p2tH
 4378ZRLtQvm9RXBnAkRc
 =gflJ
 -----END PGP SIGNATURE-----

Merge tag 'nfs-for-3.13-1' of git://git.linux-nfs.org/projects/trondmy/linux-nfs

Pull NFS client updates from Trond Myklebust:
 "Highlights include:

   - Changes to the RPC socket code to allow NFSv4 to turn off
     timeout+retry:
      * Detect TCP connection breakage through the "keepalive" mechanism
   - Add client side support for NFSv4.x migration (Chuck Lever)
   - Add support for multiple security flavour arguments to the "sec="
     mount option (Dros Adamson)
   - fs-cache bugfixes from David Howells:
     * Fix an issue whereby caching can be enabled on a file that is
       open for writing
   - More NFSv4 open code stable bugfixes
   - Various Labeled NFS (selinux) bugfixes, including one stable fix
   - Fix buffer overflow checking in the RPCSEC_GSS upcall encoding"

* tag 'nfs-for-3.13-1' of git://git.linux-nfs.org/projects/trondmy/linux-nfs: (68 commits)
  NFSv4.2: Remove redundant checks in nfs_setsecurity+nfs4_label_init_security
  NFSv4: Sanity check the server reply in _nfs4_server_capabilities
  NFSv4.2: encode_readdir - only ask for labels when doing readdirplus
  nfs: set security label when revalidating inode
  NFSv4.2: Fix a mismatch between Linux labeled NFS and the NFSv4.2 spec
  NFS: Fix a missing initialisation when reading the SELinux label
  nfs: fix oops when trying to set SELinux label
  nfs: fix inverted test for delegation in nfs4_reclaim_open_state
  SUNRPC: Cleanup xs_destroy()
  SUNRPC: close a rare race in xs_tcp_setup_socket.
  SUNRPC: remove duplicated include from clnt.c
  nfs: use IS_ROOT not DCACHE_DISCONNECTED
  SUNRPC: Fix buffer overflow checking in gss_encode_v0_msg/gss_encode_v1_msg
  SUNRPC: gss_alloc_msg - choose _either_ a v0 message or a v1 message
  SUNRPC: remove an unnecessary if statement
  nfs: Use PTR_ERR_OR_ZERO in 'nfs/nfs4super.c'
  nfs: Use PTR_ERR_OR_ZERO in 'nfs41_callback_up' function
  nfs: Remove useless 'error' assignment
  sunrpc: comment typo fix
  SUNRPC: Add correct rcu_dereference annotation in rpc_clnt_set_transport
  ...
Linus Torvalds 2013-11-08 05:57:46 +09:00
commit c224b76b56
47 changed files with 1944 additions and 598 deletions


@ -29,15 +29,16 @@ This document contains the following sections:
(6) Index registration
(7) Data file registration
(8) Miscellaneous object registration
(9) Setting the data file size
(9) Setting the data file size
(10) Page alloc/read/write
(11) Page uncaching
(12) Index and data file consistency
(13) Miscellaneous cookie operations
(14) Cookie unregistration
(15) Index invalidation
(16) Data file invalidation
(17) FS-Cache specific page flags.
(13) Cookie enablement
(14) Miscellaneous cookie operations
(15) Cookie unregistration
(16) Index invalidation
(17) Data file invalidation
(18) FS-Cache specific page flags.
=============================
@ -334,7 +335,8 @@ the path to the file:
struct fscache_cookie *
fscache_acquire_cookie(struct fscache_cookie *parent,
const struct fscache_object_def *def,
void *netfs_data);
void *netfs_data,
bool enable);
This function creates an index entry in the index represented by parent,
filling in the index entry by calling the operations pointed to by def.
@ -350,6 +352,10 @@ object needs to be created somewhere down the hierarchy. Furthermore, an index
may be created in several different caches independently at different times.
This is all handled transparently, and the netfs doesn't see any of it.
A cookie will be created in the disabled state if enabled is false. A cookie
must be enabled to do anything with it. A disabled cookie can be enabled by
calling fscache_enable_cookie() (see below).
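A netfs that wants to defer its caching decision can therefore acquire the
cookie in the disabled state and enable it later. A minimal sketch, assuming a
hypothetical netfs (the structure fields and object definition named here are
illustrative only, not part of this API):

        /* Acquire the cookie disabled (enable == false) */
        vnode->cache = fscache_acquire_cookie(volume->cache,
                                              &myfs_vnode_cache_object_def,
                                              vnode, false);

        /* ... later, once caching is known to be worthwhile ... */
        fscache_enable_cookie(vnode->cache, NULL, NULL);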
For example, with AFS, a cell would be added to the primary index. This index
entry would have a dependent inode containing a volume location index for the
volume mappings within this cell:
@ -357,7 +363,7 @@ volume mappings within this cell:
cell->cache =
fscache_acquire_cookie(afs_cache_netfs.primary_index,
&afs_cell_cache_index_def,
cell);
cell, true);
Then when a volume location was accessed, it would be entered into the cell's
index and an inode would be allocated that acts as a volume type and hash chain
@ -366,7 +372,7 @@ combination:
vlocation->cache =
fscache_acquire_cookie(cell->cache,
&afs_vlocation_cache_index_def,
vlocation);
vlocation, true);
And then a particular flavour of volume (R/O for example) could be added to
that index, creating another index for vnodes (AFS inode equivalents):
@ -374,7 +380,7 @@ that index, creating another index for vnodes (AFS inode equivalents):
volume->cache =
fscache_acquire_cookie(vlocation->cache,
&afs_volume_cache_index_def,
volume);
volume, true);
======================
@ -388,7 +394,7 @@ the object definition should be something other than index type.
vnode->cache =
fscache_acquire_cookie(volume->cache,
&afs_vnode_cache_object_def,
vnode);
vnode, true);
=================================
@ -404,7 +410,7 @@ it would be some other type of object such as a data file.
xattr->cache =
fscache_acquire_cookie(vnode->cache,
&afs_xattr_cache_object_def,
xattr);
xattr, true);
Miscellaneous objects might be used to store extended attributes or directory
entries for example.
@ -733,6 +739,47 @@ Note that partial updates may happen automatically at other times, such as when
data blocks are added to a data file object.
=================
COOKIE ENABLEMENT
=================
Cookies exist in one of two states: enabled and disabled. If a cookie is
disabled, it ignores all attempts to acquire child cookies; to check, update or
invalidate its state; or to allocate, read or write backing pages - though it
is still possible to uncache pages and relinquish the cookie.
The initial enablement state is set by fscache_acquire_cookie(), but the cookie
can be enabled or disabled later. To disable a cookie, call:
void fscache_disable_cookie(struct fscache_cookie *cookie,
bool invalidate);
If the cookie is not already disabled, this locks the cookie against other
enable and disable ops, marks the cookie as being disabled, discards or
invalidates any backing objects and waits for cessation of activity on any
associated object before unlocking the cookie.
All possible failures are handled internally. The caller should consider
calling fscache_uncache_all_inode_pages() afterwards to make sure all page
markings are cleared up.
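For example, a netfs might shut down caching of a file when that file is
opened for writing (a sketch only; the cookie and inode variables stand in for
netfs-specific state):

        /* Stop caching this file and discard its on-disk cache data */
        fscache_disable_cookie(cookie, true);

        /* Clear the PG_fscache marks left on any cached pages */
        fscache_uncache_all_inode_pages(cookie, inode);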
Cookies can be enabled or reenabled with:
void fscache_enable_cookie(struct fscache_cookie *cookie,
bool (*can_enable)(void *data),
void *data)
If the cookie is not already enabled, this locks the cookie against other
enable and disable ops, invokes can_enable() and, if the cookie is not an index
cookie, will begin the procedure of acquiring backing objects.
The optional can_enable() function is passed the data argument and returns a
ruling as to whether or not enablement should actually be permitted to begin.
All possible failures are handled internally. The cookie will only be marked
as enabled if provisional backing objects are allocated.
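As an illustration of the can_enable() hook, a netfs could permit enablement
only while the file is not open for writing - roughly the policy adopted by
the NFS changes later in this series. A sketch, with myfs_can_enable() and the
cookie variable standing in for netfs-specific code:

        static bool myfs_can_enable(void *data)
        {
                struct inode *inode = data;

                /* Only cache files that are not currently open for writing */
                return !inode_is_open_for_write(inode);
        }

        /* On a read-only open of the file: */
        fscache_enable_cookie(cookie, myfs_can_enable, inode);
        if (fscache_cookie_enabled(cookie))
                pr_debug("myfs: caching enabled for inode %lu\n", inode->i_ino);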
===============================
MISCELLANEOUS COOKIE OPERATIONS
===============================
@ -778,7 +825,7 @@ COOKIE UNREGISTRATION
To get rid of a cookie, this function should be called.
void fscache_relinquish_cookie(struct fscache_cookie *cookie,
int retire);
bool retire);
If retire is non-zero, then the object will be marked for recycling, and all
copies of it will be removed from all active caches in which it is present.
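A netfs typically relinquishes the cookie when the corresponding inode is
evicted, for instance (sketch; the vnode field is illustrative):

        fscache_relinquish_cookie(vnode->cache, false);
        vnode->cache = NULL;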


@ -90,7 +90,7 @@ void v9fs_cache_session_get_cookie(struct v9fs_session_info *v9ses)
v9ses->fscache = fscache_acquire_cookie(v9fs_cache_netfs.primary_index,
&v9fs_cache_session_index_def,
v9ses);
v9ses, true);
p9_debug(P9_DEBUG_FSC, "session %p get cookie %p\n",
v9ses, v9ses->fscache);
}
@ -204,7 +204,7 @@ void v9fs_cache_inode_get_cookie(struct inode *inode)
v9ses = v9fs_inode2v9ses(inode);
v9inode->fscache = fscache_acquire_cookie(v9ses->fscache,
&v9fs_cache_inode_index_def,
v9inode);
v9inode, true);
p9_debug(P9_DEBUG_FSC, "inode %p get cookie %p\n",
inode, v9inode->fscache);
@ -271,7 +271,7 @@ void v9fs_cache_inode_reset_cookie(struct inode *inode)
v9ses = v9fs_inode2v9ses(inode);
v9inode->fscache = fscache_acquire_cookie(v9ses->fscache,
&v9fs_cache_inode_index_def,
v9inode);
v9inode, true);
p9_debug(P9_DEBUG_FSC, "inode %p revalidating cookie old %p new %p\n",
inode, old, v9inode->fscache);


@ -179,7 +179,7 @@ struct afs_cell *afs_cell_create(const char *name, unsigned namesz,
/* put it up for caching (this never returns an error) */
cell->cache = fscache_acquire_cookie(afs_cache_netfs.primary_index,
&afs_cell_cache_index_def,
cell);
cell, true);
#endif
/* add to the cell lists */


@ -259,7 +259,7 @@ struct inode *afs_iget(struct super_block *sb, struct key *key,
#ifdef CONFIG_AFS_FSCACHE
vnode->cache = fscache_acquire_cookie(vnode->volume->cache,
&afs_vnode_cache_index_def,
vnode);
vnode, true);
#endif
ret = afs_inode_map_status(vnode, key);


@ -308,7 +308,8 @@ static int afs_vlocation_fill_in_record(struct afs_vlocation *vl,
/* see if we have an in-cache copy (will set vl->valid if there is) */
#ifdef CONFIG_AFS_FSCACHE
vl->cache = fscache_acquire_cookie(vl->cell->cache,
&afs_vlocation_cache_index_def, vl);
&afs_vlocation_cache_index_def, vl,
true);
#endif
if (vl->valid) {


@ -131,7 +131,7 @@ struct afs_volume *afs_volume_lookup(struct afs_mount_params *params)
#ifdef CONFIG_AFS_FSCACHE
volume->cache = fscache_acquire_cookie(vlocation->cache,
&afs_volume_cache_index_def,
volume);
volume, true);
#endif
afs_get_vlocation(vlocation);
volume->vlocation = vlocation;


@ -270,7 +270,7 @@ static void cachefiles_drop_object(struct fscache_object *_object)
#endif
/* delete retired objects */
if (test_bit(FSCACHE_COOKIE_RETIRED, &object->fscache.cookie->flags) &&
if (test_bit(FSCACHE_OBJECT_RETIRED, &object->fscache.flags) &&
_object != cache->cache.fsdef
) {
_debug("- retire object OBJ%x", object->fscache.debug_id);


@ -68,7 +68,7 @@ int ceph_fscache_register_fs(struct ceph_fs_client* fsc)
{
fsc->fscache = fscache_acquire_cookie(ceph_cache_netfs.primary_index,
&ceph_fscache_fsid_object_def,
fsc);
fsc, true);
if (fsc->fscache == NULL) {
pr_err("Unable to resgister fsid: %p fscache cookie", fsc);
@ -204,7 +204,7 @@ void ceph_fscache_register_inode_cookie(struct ceph_fs_client* fsc,
ci->fscache = fscache_acquire_cookie(fsc->fscache,
&ceph_fscache_inode_object_def,
ci);
ci, true);
done:
mutex_unlock(&inode->i_mutex);


@ -27,7 +27,7 @@ void cifs_fscache_get_client_cookie(struct TCP_Server_Info *server)
{
server->fscache =
fscache_acquire_cookie(cifs_fscache_netfs.primary_index,
&cifs_fscache_server_index_def, server);
&cifs_fscache_server_index_def, server, true);
cifs_dbg(FYI, "%s: (0x%p/0x%p)\n",
__func__, server, server->fscache);
}
@ -46,7 +46,7 @@ void cifs_fscache_get_super_cookie(struct cifs_tcon *tcon)
tcon->fscache =
fscache_acquire_cookie(server->fscache,
&cifs_fscache_super_index_def, tcon);
&cifs_fscache_super_index_def, tcon, true);
cifs_dbg(FYI, "%s: (0x%p/0x%p)\n",
__func__, server->fscache, tcon->fscache);
}
@ -69,7 +69,7 @@ static void cifs_fscache_enable_inode_cookie(struct inode *inode)
if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_FSCACHE) {
cifsi->fscache = fscache_acquire_cookie(tcon->fscache,
&cifs_fscache_inode_object_def, cifsi);
&cifs_fscache_inode_object_def, cifsi, true);
cifs_dbg(FYI, "%s: got FH cookie (0x%p/0x%p)\n",
__func__, tcon->fscache, cifsi->fscache);
}
@ -119,7 +119,7 @@ void cifs_fscache_reset_inode_cookie(struct inode *inode)
cifsi->fscache = fscache_acquire_cookie(
cifs_sb_master_tcon(cifs_sb)->fscache,
&cifs_fscache_inode_object_def,
cifsi);
cifsi, true);
cifs_dbg(FYI, "%s: new cookie 0x%p oldcookie 0x%p\n",
__func__, cifsi->fscache, old);
}


@ -58,15 +58,16 @@ void fscache_cookie_init_once(void *_cookie)
struct fscache_cookie *__fscache_acquire_cookie(
struct fscache_cookie *parent,
const struct fscache_cookie_def *def,
void *netfs_data)
void *netfs_data,
bool enable)
{
struct fscache_cookie *cookie;
BUG_ON(!def);
_enter("{%s},{%s},%p",
_enter("{%s},{%s},%p,%u",
parent ? (char *) parent->def->name : "<no-parent>",
def->name, netfs_data);
def->name, netfs_data, enable);
fscache_stat(&fscache_n_acquires);
@ -106,7 +107,7 @@ struct fscache_cookie *__fscache_acquire_cookie(
cookie->def = def;
cookie->parent = parent;
cookie->netfs_data = netfs_data;
cookie->flags = 0;
cookie->flags = (1 << FSCACHE_COOKIE_NO_DATA_YET);
/* radix tree insertion won't use the preallocation pool unless it's
* told it may not wait */
@ -124,16 +125,22 @@ struct fscache_cookie *__fscache_acquire_cookie(
break;
}
/* if the object is an index then we need do nothing more here - we
* create indices on disk when we need them as an index may exist in
* multiple caches */
if (cookie->def->type != FSCACHE_COOKIE_TYPE_INDEX) {
if (fscache_acquire_non_index_cookie(cookie) < 0) {
atomic_dec(&parent->n_children);
__fscache_cookie_put(cookie);
fscache_stat(&fscache_n_acquires_nobufs);
_leave(" = NULL");
return NULL;
if (enable) {
/* if the object is an index then we need do nothing more here
* - we create indices on disk when we need them as an index
* may exist in multiple caches */
if (cookie->def->type != FSCACHE_COOKIE_TYPE_INDEX) {
if (fscache_acquire_non_index_cookie(cookie) == 0) {
set_bit(FSCACHE_COOKIE_ENABLED, &cookie->flags);
} else {
atomic_dec(&parent->n_children);
__fscache_cookie_put(cookie);
fscache_stat(&fscache_n_acquires_nobufs);
_leave(" = NULL");
return NULL;
}
} else {
set_bit(FSCACHE_COOKIE_ENABLED, &cookie->flags);
}
}
@ -143,6 +150,39 @@ struct fscache_cookie *__fscache_acquire_cookie(
}
EXPORT_SYMBOL(__fscache_acquire_cookie);
/*
* Enable a cookie to permit it to accept new operations.
*/
void __fscache_enable_cookie(struct fscache_cookie *cookie,
bool (*can_enable)(void *data),
void *data)
{
_enter("%p", cookie);
wait_on_bit_lock(&cookie->flags, FSCACHE_COOKIE_ENABLEMENT_LOCK,
fscache_wait_bit, TASK_UNINTERRUPTIBLE);
if (test_bit(FSCACHE_COOKIE_ENABLED, &cookie->flags))
goto out_unlock;
if (can_enable && !can_enable(data)) {
/* The netfs decided it didn't want to enable after all */
} else if (cookie->def->type != FSCACHE_COOKIE_TYPE_INDEX) {
/* Wait for outstanding disablement to complete */
__fscache_wait_on_invalidate(cookie);
if (fscache_acquire_non_index_cookie(cookie) == 0)
set_bit(FSCACHE_COOKIE_ENABLED, &cookie->flags);
} else {
set_bit(FSCACHE_COOKIE_ENABLED, &cookie->flags);
}
out_unlock:
clear_bit_unlock(FSCACHE_COOKIE_ENABLEMENT_LOCK, &cookie->flags);
wake_up_bit(&cookie->flags, FSCACHE_COOKIE_ENABLEMENT_LOCK);
}
EXPORT_SYMBOL(__fscache_enable_cookie);
/*
* acquire a non-index cookie
* - this must make sure the index chain is instantiated and instantiate the
@ -157,7 +197,7 @@ static int fscache_acquire_non_index_cookie(struct fscache_cookie *cookie)
_enter("");
cookie->flags = 1 << FSCACHE_COOKIE_UNAVAILABLE;
set_bit(FSCACHE_COOKIE_UNAVAILABLE, &cookie->flags);
/* now we need to see whether the backing objects for this cookie yet
* exist, if not there'll be nothing to search */
@ -180,9 +220,7 @@ static int fscache_acquire_non_index_cookie(struct fscache_cookie *cookie)
_debug("cache %s", cache->tag->name);
cookie->flags =
(1 << FSCACHE_COOKIE_LOOKING_UP) |
(1 << FSCACHE_COOKIE_NO_DATA_YET);
set_bit(FSCACHE_COOKIE_LOOKING_UP, &cookie->flags);
/* ask the cache to allocate objects for this cookie and its parent
* chain */
@ -398,7 +436,8 @@ void __fscache_invalidate(struct fscache_cookie *cookie)
if (!hlist_empty(&cookie->backing_objects)) {
spin_lock(&cookie->lock);
if (!hlist_empty(&cookie->backing_objects) &&
if (fscache_cookie_enabled(cookie) &&
!hlist_empty(&cookie->backing_objects) &&
!test_and_set_bit(FSCACHE_COOKIE_INVALIDATING,
&cookie->flags)) {
object = hlist_entry(cookie->backing_objects.first,
@ -452,10 +491,14 @@ void __fscache_update_cookie(struct fscache_cookie *cookie)
spin_lock(&cookie->lock);
/* update the index entry on disk in each cache backing this cookie */
hlist_for_each_entry(object,
&cookie->backing_objects, cookie_link) {
fscache_raise_event(object, FSCACHE_OBJECT_EV_UPDATE);
if (fscache_cookie_enabled(cookie)) {
/* update the index entry on disk in each cache backing this
* cookie.
*/
hlist_for_each_entry(object,
&cookie->backing_objects, cookie_link) {
fscache_raise_event(object, FSCACHE_OBJECT_EV_UPDATE);
}
}
spin_unlock(&cookie->lock);
@ -463,16 +506,81 @@ void __fscache_update_cookie(struct fscache_cookie *cookie)
}
EXPORT_SYMBOL(__fscache_update_cookie);
/*
* Disable a cookie to stop it from accepting new requests from the netfs.
*/
void __fscache_disable_cookie(struct fscache_cookie *cookie, bool invalidate)
{
struct fscache_object *object;
bool awaken = false;
_enter("%p,%u", cookie, invalidate);
ASSERTCMP(atomic_read(&cookie->n_active), >, 0);
if (atomic_read(&cookie->n_children) != 0) {
printk(KERN_ERR "FS-Cache: Cookie '%s' still has children\n",
cookie->def->name);
BUG();
}
wait_on_bit_lock(&cookie->flags, FSCACHE_COOKIE_ENABLEMENT_LOCK,
fscache_wait_bit, TASK_UNINTERRUPTIBLE);
if (!test_and_clear_bit(FSCACHE_COOKIE_ENABLED, &cookie->flags))
goto out_unlock_enable;
/* If the cookie is being invalidated, wait for that to complete first
* so that we can reuse the flag.
*/
__fscache_wait_on_invalidate(cookie);
/* Dispose of the backing objects */
set_bit(FSCACHE_COOKIE_INVALIDATING, &cookie->flags);
spin_lock(&cookie->lock);
if (!hlist_empty(&cookie->backing_objects)) {
hlist_for_each_entry(object, &cookie->backing_objects, cookie_link) {
if (invalidate)
set_bit(FSCACHE_OBJECT_RETIRED, &object->flags);
fscache_raise_event(object, FSCACHE_OBJECT_EV_KILL);
}
} else {
if (test_and_clear_bit(FSCACHE_COOKIE_INVALIDATING, &cookie->flags))
awaken = true;
}
spin_unlock(&cookie->lock);
if (awaken)
wake_up_bit(&cookie->flags, FSCACHE_COOKIE_INVALIDATING);
/* Wait for cessation of activity requiring access to the netfs (when
* n_active reaches 0). This makes sure outstanding reads and writes
* have completed.
*/
if (!atomic_dec_and_test(&cookie->n_active))
wait_on_atomic_t(&cookie->n_active, fscache_wait_atomic_t,
TASK_UNINTERRUPTIBLE);
/* Reset the cookie state if it wasn't relinquished */
if (!test_bit(FSCACHE_COOKIE_RELINQUISHED, &cookie->flags)) {
atomic_inc(&cookie->n_active);
set_bit(FSCACHE_COOKIE_NO_DATA_YET, &cookie->flags);
}
out_unlock_enable:
clear_bit_unlock(FSCACHE_COOKIE_ENABLEMENT_LOCK, &cookie->flags);
wake_up_bit(&cookie->flags, FSCACHE_COOKIE_ENABLEMENT_LOCK);
_leave("");
}
EXPORT_SYMBOL(__fscache_disable_cookie);
/*
* release a cookie back to the cache
* - the object will be marked as recyclable on disk if retire is true
* - all dependents of this cookie must have already been unregistered
* (indices/files/pages)
*/
void __fscache_relinquish_cookie(struct fscache_cookie *cookie, int retire)
void __fscache_relinquish_cookie(struct fscache_cookie *cookie, bool retire)
{
struct fscache_object *object;
fscache_stat(&fscache_n_relinquishes);
if (retire)
fscache_stat(&fscache_n_relinquishes_retire);
@ -487,31 +595,10 @@ void __fscache_relinquish_cookie(struct fscache_cookie *cookie, int retire)
cookie, cookie->def->name, cookie->netfs_data,
atomic_read(&cookie->n_active), retire);
ASSERTCMP(atomic_read(&cookie->n_active), >, 0);
if (atomic_read(&cookie->n_children) != 0) {
printk(KERN_ERR "FS-Cache: Cookie '%s' still has children\n",
cookie->def->name);
BUG();
}
/* No further netfs-accessing operations on this cookie permitted */
set_bit(FSCACHE_COOKIE_RELINQUISHED, &cookie->flags);
if (retire)
set_bit(FSCACHE_COOKIE_RETIRED, &cookie->flags);
spin_lock(&cookie->lock);
hlist_for_each_entry(object, &cookie->backing_objects, cookie_link) {
fscache_raise_event(object, FSCACHE_OBJECT_EV_KILL);
}
spin_unlock(&cookie->lock);
/* Wait for cessation of activity requiring access to the netfs (when
* n_active reaches 0).
*/
if (!atomic_dec_and_test(&cookie->n_active))
wait_on_atomic_t(&cookie->n_active, fscache_wait_atomic_t,
TASK_UNINTERRUPTIBLE);
__fscache_disable_cookie(cookie, retire);
/* Clear pointers back to the netfs */
cookie->netfs_data = NULL;
@ -568,6 +655,7 @@ int __fscache_check_consistency(struct fscache_cookie *cookie)
{
struct fscache_operation *op;
struct fscache_object *object;
bool wake_cookie = false;
int ret;
_enter("%p,", cookie);
@ -591,7 +679,8 @@ int __fscache_check_consistency(struct fscache_cookie *cookie)
spin_lock(&cookie->lock);
if (hlist_empty(&cookie->backing_objects))
if (!fscache_cookie_enabled(cookie) ||
hlist_empty(&cookie->backing_objects))
goto inconsistent;
object = hlist_entry(cookie->backing_objects.first,
struct fscache_object, cookie_link);
@ -600,7 +689,7 @@ int __fscache_check_consistency(struct fscache_cookie *cookie)
op->debug_id = atomic_inc_return(&fscache_op_debug_id);
atomic_inc(&cookie->n_active);
__fscache_use_cookie(cookie);
if (fscache_submit_op(object, op) < 0)
goto submit_failed;
@ -622,9 +711,11 @@ int __fscache_check_consistency(struct fscache_cookie *cookie)
return ret;
submit_failed:
atomic_dec(&cookie->n_active);
wake_cookie = __fscache_unuse_cookie(cookie);
inconsistent:
spin_unlock(&cookie->lock);
if (wake_cookie)
__fscache_wake_unused_cookie(cookie);
kfree(op);
_leave(" = -ESTALE");
return -ESTALE;


@ -59,6 +59,7 @@ struct fscache_cookie fscache_fsdef_index = {
.lock = __SPIN_LOCK_UNLOCKED(fscache_fsdef_index.lock),
.backing_objects = HLIST_HEAD_INIT,
.def = &fscache_fsdef_index_def,
.flags = 1 << FSCACHE_COOKIE_ENABLED,
};
EXPORT_SYMBOL(fscache_fsdef_index);


@ -45,6 +45,7 @@ int __fscache_register_netfs(struct fscache_netfs *netfs)
netfs->primary_index->def = &fscache_fsdef_netfs_def;
netfs->primary_index->parent = &fscache_fsdef_index;
netfs->primary_index->netfs_data = netfs;
netfs->primary_index->flags = 1 << FSCACHE_COOKIE_ENABLED;
atomic_inc(&netfs->primary_index->parent->usage);
atomic_inc(&netfs->primary_index->parent->n_children);


@ -495,6 +495,7 @@ void fscache_object_lookup_negative(struct fscache_object *object)
* returning ENODATA.
*/
set_bit(FSCACHE_COOKIE_NO_DATA_YET, &cookie->flags);
clear_bit(FSCACHE_COOKIE_UNAVAILABLE, &cookie->flags);
_debug("wake up lookup %p", &cookie->flags);
clear_bit_unlock(FSCACHE_COOKIE_LOOKING_UP, &cookie->flags);
@ -527,6 +528,7 @@ void fscache_obtained_object(struct fscache_object *object)
/* We do (presumably) have data */
clear_bit_unlock(FSCACHE_COOKIE_NO_DATA_YET, &cookie->flags);
clear_bit(FSCACHE_COOKIE_UNAVAILABLE, &cookie->flags);
/* Allow write requests to begin stacking up and read requests
* to begin shovelling data.
@ -679,7 +681,8 @@ static const struct fscache_state *fscache_drop_object(struct fscache_object *ob
*/
spin_lock(&cookie->lock);
hlist_del_init(&object->cookie_link);
if (test_and_clear_bit(FSCACHE_COOKIE_INVALIDATING, &cookie->flags))
if (hlist_empty(&cookie->backing_objects) &&
test_and_clear_bit(FSCACHE_COOKIE_INVALIDATING, &cookie->flags))
awaken = true;
spin_unlock(&cookie->lock);
@ -927,7 +930,7 @@ static const struct fscache_state *_fscache_invalidate_object(struct fscache_obj
*/
if (!fscache_use_cookie(object)) {
ASSERT(object->cookie->stores.rnode == NULL);
set_bit(FSCACHE_COOKIE_RETIRED, &cookie->flags);
set_bit(FSCACHE_OBJECT_RETIRED, &object->flags);
_leave(" [no cookie]");
return transit_to(KILL_OBJECT);
}


@ -163,12 +163,10 @@ static void fscache_attr_changed_op(struct fscache_operation *op)
fscache_stat(&fscache_n_attr_changed_calls);
if (fscache_object_is_active(object) &&
fscache_use_cookie(object)) {
if (fscache_object_is_active(object)) {
fscache_stat(&fscache_n_cop_attr_changed);
ret = object->cache->ops->attr_changed(object);
fscache_stat_d(&fscache_n_cop_attr_changed);
fscache_unuse_cookie(object);
if (ret < 0)
fscache_abort_object(object);
}
@ -184,6 +182,7 @@ int __fscache_attr_changed(struct fscache_cookie *cookie)
{
struct fscache_operation *op;
struct fscache_object *object;
bool wake_cookie;
_enter("%p", cookie);
@ -199,15 +198,19 @@ int __fscache_attr_changed(struct fscache_cookie *cookie)
}
fscache_operation_init(op, fscache_attr_changed_op, NULL);
op->flags = FSCACHE_OP_ASYNC | (1 << FSCACHE_OP_EXCLUSIVE);
op->flags = FSCACHE_OP_ASYNC |
(1 << FSCACHE_OP_EXCLUSIVE) |
(1 << FSCACHE_OP_UNUSE_COOKIE);
spin_lock(&cookie->lock);
if (hlist_empty(&cookie->backing_objects))
if (!fscache_cookie_enabled(cookie) ||
hlist_empty(&cookie->backing_objects))
goto nobufs;
object = hlist_entry(cookie->backing_objects.first,
struct fscache_object, cookie_link);
__fscache_use_cookie(cookie);
if (fscache_submit_exclusive_op(object, op) < 0)
goto nobufs;
spin_unlock(&cookie->lock);
@ -217,8 +220,11 @@ int __fscache_attr_changed(struct fscache_cookie *cookie)
return 0;
nobufs:
wake_cookie = __fscache_unuse_cookie(cookie);
spin_unlock(&cookie->lock);
kfree(op);
if (wake_cookie)
__fscache_wake_unused_cookie(cookie);
fscache_stat(&fscache_n_attr_changed_nobufs);
_leave(" = %d", -ENOBUFS);
return -ENOBUFS;
@ -263,7 +269,6 @@ static struct fscache_retrieval *fscache_alloc_retrieval(
}
fscache_operation_init(&op->op, NULL, fscache_release_retrieval_op);
atomic_inc(&cookie->n_active);
op->op.flags = FSCACHE_OP_MYTHREAD |
(1UL << FSCACHE_OP_WAITING) |
(1UL << FSCACHE_OP_UNUSE_COOKIE);
@ -384,6 +389,7 @@ int __fscache_read_or_alloc_page(struct fscache_cookie *cookie,
{
struct fscache_retrieval *op;
struct fscache_object *object;
bool wake_cookie = false;
int ret;
_enter("%p,%p,,,", cookie, page);
@ -405,7 +411,7 @@ int __fscache_read_or_alloc_page(struct fscache_cookie *cookie,
return -ERESTARTSYS;
op = fscache_alloc_retrieval(cookie, page->mapping,
end_io_func,context);
end_io_func, context);
if (!op) {
_leave(" = -ENOMEM");
return -ENOMEM;
@ -414,13 +420,15 @@ int __fscache_read_or_alloc_page(struct fscache_cookie *cookie,
spin_lock(&cookie->lock);
if (hlist_empty(&cookie->backing_objects))
if (!fscache_cookie_enabled(cookie) ||
hlist_empty(&cookie->backing_objects))
goto nobufs_unlock;
object = hlist_entry(cookie->backing_objects.first,
struct fscache_object, cookie_link);
ASSERT(test_bit(FSCACHE_OBJECT_IS_LOOKED_UP, &object->flags));
__fscache_use_cookie(cookie);
atomic_inc(&object->n_reads);
__set_bit(FSCACHE_OP_DEC_READ_CNT, &op->op.flags);
@ -475,9 +483,11 @@ int __fscache_read_or_alloc_page(struct fscache_cookie *cookie,
nobufs_unlock_dec:
atomic_dec(&object->n_reads);
wake_cookie = __fscache_unuse_cookie(cookie);
nobufs_unlock:
spin_unlock(&cookie->lock);
atomic_dec(&cookie->n_active);
if (wake_cookie)
__fscache_wake_unused_cookie(cookie);
kfree(op);
nobufs:
fscache_stat(&fscache_n_retrievals_nobufs);
@ -514,6 +524,7 @@ int __fscache_read_or_alloc_pages(struct fscache_cookie *cookie,
{
struct fscache_retrieval *op;
struct fscache_object *object;
bool wake_cookie = false;
int ret;
_enter("%p,,%d,,,", cookie, *nr_pages);
@ -542,11 +553,13 @@ int __fscache_read_or_alloc_pages(struct fscache_cookie *cookie,
spin_lock(&cookie->lock);
if (hlist_empty(&cookie->backing_objects))
if (!fscache_cookie_enabled(cookie) ||
hlist_empty(&cookie->backing_objects))
goto nobufs_unlock;
object = hlist_entry(cookie->backing_objects.first,
struct fscache_object, cookie_link);
__fscache_use_cookie(cookie);
atomic_inc(&object->n_reads);
__set_bit(FSCACHE_OP_DEC_READ_CNT, &op->op.flags);
@ -601,10 +614,12 @@ int __fscache_read_or_alloc_pages(struct fscache_cookie *cookie,
nobufs_unlock_dec:
atomic_dec(&object->n_reads);
wake_cookie = __fscache_unuse_cookie(cookie);
nobufs_unlock:
spin_unlock(&cookie->lock);
atomic_dec(&cookie->n_active);
kfree(op);
if (wake_cookie)
__fscache_wake_unused_cookie(cookie);
nobufs:
fscache_stat(&fscache_n_retrievals_nobufs);
_leave(" = -ENOBUFS");
@ -626,6 +641,7 @@ int __fscache_alloc_page(struct fscache_cookie *cookie,
{
struct fscache_retrieval *op;
struct fscache_object *object;
bool wake_cookie = false;
int ret;
_enter("%p,%p,,,", cookie, page);
@ -653,13 +669,15 @@ int __fscache_alloc_page(struct fscache_cookie *cookie,
spin_lock(&cookie->lock);
if (hlist_empty(&cookie->backing_objects))
if (!fscache_cookie_enabled(cookie) ||
hlist_empty(&cookie->backing_objects))
goto nobufs_unlock;
object = hlist_entry(cookie->backing_objects.first,
struct fscache_object, cookie_link);
__fscache_use_cookie(cookie);
if (fscache_submit_op(object, &op->op) < 0)
goto nobufs_unlock;
goto nobufs_unlock_dec;
spin_unlock(&cookie->lock);
fscache_stat(&fscache_n_alloc_ops);
@ -689,10 +707,13 @@ int __fscache_alloc_page(struct fscache_cookie *cookie,
_leave(" = %d", ret);
return ret;
nobufs_unlock_dec:
wake_cookie = __fscache_unuse_cookie(cookie);
nobufs_unlock:
spin_unlock(&cookie->lock);
atomic_dec(&cookie->n_active);
kfree(op);
if (wake_cookie)
__fscache_wake_unused_cookie(cookie);
nobufs:
fscache_stat(&fscache_n_allocs_nobufs);
_leave(" = -ENOBUFS");
@ -889,6 +910,7 @@ int __fscache_write_page(struct fscache_cookie *cookie,
{
struct fscache_storage *op;
struct fscache_object *object;
bool wake_cookie = false;
int ret;
_enter("%p,%x,", cookie, (u32) page->flags);
@ -920,7 +942,8 @@ int __fscache_write_page(struct fscache_cookie *cookie,
ret = -ENOBUFS;
spin_lock(&cookie->lock);
if (hlist_empty(&cookie->backing_objects))
if (!fscache_cookie_enabled(cookie) ||
hlist_empty(&cookie->backing_objects))
goto nobufs;
object = hlist_entry(cookie->backing_objects.first,
struct fscache_object, cookie_link);
@ -957,7 +980,7 @@ int __fscache_write_page(struct fscache_cookie *cookie,
op->op.debug_id = atomic_inc_return(&fscache_op_debug_id);
op->store_limit = object->store_limit;
atomic_inc(&cookie->n_active);
__fscache_use_cookie(cookie);
if (fscache_submit_op(object, &op->op) < 0)
goto submit_failed;
@ -984,10 +1007,10 @@ int __fscache_write_page(struct fscache_cookie *cookie,
return 0;
submit_failed:
atomic_dec(&cookie->n_active);
spin_lock(&cookie->stores_lock);
radix_tree_delete(&cookie->stores, page->index);
spin_unlock(&cookie->stores_lock);
wake_cookie = __fscache_unuse_cookie(cookie);
page_cache_release(page);
ret = -ENOBUFS;
goto nobufs;
@ -999,6 +1022,8 @@ int __fscache_write_page(struct fscache_cookie *cookie,
spin_unlock(&cookie->lock);
radix_tree_preload_end();
kfree(op);
if (wake_cookie)
__fscache_wake_unused_cookie(cookie);
fscache_stat(&fscache_n_stores_nobufs);
_leave(" = -ENOBUFS");
return -ENOBUFS;


@ -140,6 +140,17 @@ config NFS_V4_1_IMPLEMENTATION_ID_DOMAIN
If the NFS client is unchanged from the upstream kernel, this
option should be set to the default "kernel.org".
config NFS_V4_1_MIGRATION
bool "NFSv4.1 client support for migration"
depends on NFS_V4_1
default n
help
This option makes the NFS client advertise to NFSv4.1 servers that
it can support NFSv4 migration.
The NFSv4.1 pieces of the Linux NFSv4 migration implementation are
still experimental. If you are not an NFSv4 developer, say N here.
config NFS_V4_SECURITY_LABEL
bool
depends on NFS_V4_2 && SECURITY


@ -164,8 +164,7 @@ nfs41_callback_up(struct svc_serv *serv)
svc_xprt_put(serv->sv_bc_xprt);
serv->sv_bc_xprt = NULL;
}
dprintk("--> %s return %ld\n", __func__,
IS_ERR(rqstp) ? PTR_ERR(rqstp) : 0);
dprintk("--> %s return %d\n", __func__, PTR_ERR_OR_ZERO(rqstp));
return rqstp;
}


@ -590,6 +590,8 @@ int nfs_create_rpc_client(struct nfs_client *clp,
if (test_bit(NFS_CS_DISCRTRY, &clp->cl_flags))
args.flags |= RPC_CLNT_CREATE_DISCRTRY;
if (test_bit(NFS_CS_NO_RETRANS_TIMEOUT, &clp->cl_flags))
args.flags |= RPC_CLNT_CREATE_NO_RETRANS_TIMEOUT;
if (test_bit(NFS_CS_NORESVPORT, &clp->cl_flags))
args.flags |= RPC_CLNT_CREATE_NONPRIVPORT;
if (test_bit(NFS_CS_INFINITE_SLOTS, &clp->cl_flags))
@ -784,8 +786,10 @@ static int nfs_init_server(struct nfs_server *server,
goto error;
server->port = data->nfs_server.port;
server->auth_info = data->auth_info;
error = nfs_init_server_rpcclient(server, &timeparms, data->auth_flavors[0]);
error = nfs_init_server_rpcclient(server, &timeparms,
data->selected_flavor);
if (error < 0)
goto error;
@ -926,6 +930,7 @@ void nfs_server_copy_userdata(struct nfs_server *target, struct nfs_server *sour
target->acdirmax = source->acdirmax;
target->caps = source->caps;
target->options = source->options;
target->auth_info = source->auth_info;
}
EXPORT_SYMBOL_GPL(nfs_server_copy_userdata);
@ -943,7 +948,7 @@ void nfs_server_insert_lists(struct nfs_server *server)
}
EXPORT_SYMBOL_GPL(nfs_server_insert_lists);
static void nfs_server_remove_lists(struct nfs_server *server)
void nfs_server_remove_lists(struct nfs_server *server)
{
struct nfs_client *clp = server->nfs_client;
struct nfs_net *nn;
@ -960,6 +965,7 @@ static void nfs_server_remove_lists(struct nfs_server *server)
synchronize_rcu();
}
EXPORT_SYMBOL_GPL(nfs_server_remove_lists);
/*
* Allocate and initialise a server record


@ -1139,7 +1139,13 @@ static int nfs_lookup_revalidate(struct dentry *dentry, unsigned int flags)
if (inode && S_ISDIR(inode->i_mode)) {
/* Purge readdir caches. */
nfs_zap_caches(inode);
if (dentry->d_flags & DCACHE_DISCONNECTED)
/*
* We can't d_drop the root of a disconnected tree:
* its d_hash is on the s_anon list and d_drop() would hide
* it from shrink_dcache_for_unmount(), leading to busy
* inodes on unmount and further oopses.
*/
if (IS_ROOT(dentry))
goto out_valid;
}
/* If we have submounts, don't unhash ! */
@ -1381,7 +1387,7 @@ static struct nfs_open_context *create_nfs_open_context(struct dentry *dentry, i
static int do_open(struct inode *inode, struct file *filp)
{
nfs_fscache_set_inode_cookie(inode, filp);
nfs_fscache_open_file(inode, filp);
return 0;
}


@ -39,7 +39,7 @@ void nfs_fscache_get_client_cookie(struct nfs_client *clp)
/* create a cache index for looking up filehandles */
clp->fscache = fscache_acquire_cookie(nfs_fscache_netfs.primary_index,
&nfs_fscache_server_index_def,
clp);
clp, true);
dfprintk(FSCACHE, "NFS: get client cookie (0x%p/0x%p)\n",
clp, clp->fscache);
}
@ -139,7 +139,7 @@ void nfs_fscache_get_super_cookie(struct super_block *sb, const char *uniq, int
/* create a cache index for looking up filehandles */
nfss->fscache = fscache_acquire_cookie(nfss->nfs_client->fscache,
&nfs_fscache_super_index_def,
nfss);
nfss, true);
dfprintk(FSCACHE, "NFS: get superblock cookie (0x%p/0x%p)\n",
nfss, nfss->fscache);
return;
@ -178,163 +178,79 @@ void nfs_fscache_release_super_cookie(struct super_block *sb)
/*
* Initialise the per-inode cache cookie pointer for an NFS inode.
*/
void nfs_fscache_init_inode_cookie(struct inode *inode)
void nfs_fscache_init_inode(struct inode *inode)
{
NFS_I(inode)->fscache = NULL;
if (S_ISREG(inode->i_mode))
set_bit(NFS_INO_FSCACHE, &NFS_I(inode)->flags);
}
/*
* Get the per-inode cache cookie for an NFS inode.
*/
static void nfs_fscache_enable_inode_cookie(struct inode *inode)
{
struct super_block *sb = inode->i_sb;
struct nfs_inode *nfsi = NFS_I(inode);
if (nfsi->fscache || !NFS_FSCACHE(inode))
nfsi->fscache = NULL;
if (!S_ISREG(inode->i_mode))
return;
if ((NFS_SB(sb)->options & NFS_OPTION_FSCACHE)) {
nfsi->fscache = fscache_acquire_cookie(
NFS_SB(sb)->fscache,
&nfs_fscache_inode_object_def,
nfsi);
dfprintk(FSCACHE, "NFS: get FH cookie (0x%p/0x%p/0x%p)\n",
sb, nfsi, nfsi->fscache);
}
nfsi->fscache = fscache_acquire_cookie(NFS_SB(inode->i_sb)->fscache,
&nfs_fscache_inode_object_def,
nfsi, false);
}
/*
* Release a per-inode cookie.
*/
void nfs_fscache_release_inode_cookie(struct inode *inode)
void nfs_fscache_clear_inode(struct inode *inode)
{
struct nfs_inode *nfsi = NFS_I(inode);
struct fscache_cookie *cookie = nfs_i_fscache(inode);
dfprintk(FSCACHE, "NFS: clear cookie (0x%p/0x%p)\n",
nfsi, nfsi->fscache);
dfprintk(FSCACHE, "NFS: clear cookie (0x%p/0x%p)\n", nfsi, cookie);
fscache_relinquish_cookie(nfsi->fscache, 0);
fscache_relinquish_cookie(cookie, false);
nfsi->fscache = NULL;
}
/*
* Retire a per-inode cookie, destroying the data attached to it.
*/
void nfs_fscache_zap_inode_cookie(struct inode *inode)
static bool nfs_fscache_can_enable(void *data)
{
struct nfs_inode *nfsi = NFS_I(inode);
struct inode *inode = data;
dfprintk(FSCACHE, "NFS: zapping cookie (0x%p/0x%p)\n",
nfsi, nfsi->fscache);
fscache_relinquish_cookie(nfsi->fscache, 1);
nfsi->fscache = NULL;
return !inode_is_open_for_write(inode);
}
/*
* Turn off the cache with regard to a per-inode cookie if opened for writing,
* invalidating all the pages in the page cache relating to the associated
* inode to clear the per-page caching.
* Enable or disable caching for a file that is being opened as appropriate.
* The cookie is allocated when the inode is initialised, but is not enabled at
* that time. Enablement is deferred to file-open time to avoid stat() and
* access() thrashing the cache.
*
* For now, with NFS, only regular files that are open read-only will be able
* to use the cache.
*
* We enable the cache for an inode if we open it read-only and it isn't
* currently open for writing. We disable the cache if the inode is open
* write-only.
*
* The caller uses the file struct to pin i_writecount on the inode before
* calling us when a file is opened for writing, so we can make use of that.
*
* Note that this may be invoked multiple times in parallel by parallel
* nfs_open() functions.
*/
static void nfs_fscache_disable_inode_cookie(struct inode *inode)
void nfs_fscache_open_file(struct inode *inode, struct file *filp)
{
clear_bit(NFS_INO_FSCACHE, &NFS_I(inode)->flags);
struct nfs_inode *nfsi = NFS_I(inode);
struct fscache_cookie *cookie = nfs_i_fscache(inode);
if (NFS_I(inode)->fscache) {
dfprintk(FSCACHE,
"NFS: nfsi 0x%p turning cache off\n", NFS_I(inode));
if (!fscache_cookie_valid(cookie))
return;
/* Need to uncache any pages attached to this inode that
* fscache knows about before turning off the cache.
*/
fscache_uncache_all_inode_pages(NFS_I(inode)->fscache, inode);
nfs_fscache_zap_inode_cookie(inode);
if (inode_is_open_for_write(inode)) {
dfprintk(FSCACHE, "NFS: nfsi 0x%p disabling cache\n", nfsi);
clear_bit(NFS_INO_FSCACHE, &nfsi->flags);
fscache_disable_cookie(cookie, true);
fscache_uncache_all_inode_pages(cookie, inode);
} else {
dfprintk(FSCACHE, "NFS: nfsi 0x%p enabling cache\n", nfsi);
fscache_enable_cookie(cookie, nfs_fscache_can_enable, inode);
if (fscache_cookie_enabled(cookie))
set_bit(NFS_INO_FSCACHE, &NFS_I(inode)->flags);
}
}
/*
* wait_on_bit() sleep function for uninterruptible waiting
*/
static int nfs_fscache_wait_bit(void *flags)
{
schedule();
return 0;
}
/*
* Lock against someone else trying to also acquire or relinquish a cookie
*/
static inline void nfs_fscache_inode_lock(struct inode *inode)
{
struct nfs_inode *nfsi = NFS_I(inode);
while (test_and_set_bit(NFS_INO_FSCACHE_LOCK, &nfsi->flags))
wait_on_bit(&nfsi->flags, NFS_INO_FSCACHE_LOCK,
nfs_fscache_wait_bit, TASK_UNINTERRUPTIBLE);
}
/*
* Unlock cookie management lock
*/
static inline void nfs_fscache_inode_unlock(struct inode *inode)
{
struct nfs_inode *nfsi = NFS_I(inode);
smp_mb__before_clear_bit();
clear_bit(NFS_INO_FSCACHE_LOCK, &nfsi->flags);
smp_mb__after_clear_bit();
wake_up_bit(&nfsi->flags, NFS_INO_FSCACHE_LOCK);
}
/*
* Decide if we should enable or disable local caching for this inode.
* - For now, with NFS, only regular files that are open read-only will be able
* to use the cache.
* - May be invoked multiple times in parallel by parallel nfs_open() functions.
*/
void nfs_fscache_set_inode_cookie(struct inode *inode, struct file *filp)
{
if (NFS_FSCACHE(inode)) {
nfs_fscache_inode_lock(inode);
if ((filp->f_flags & O_ACCMODE) != O_RDONLY)
nfs_fscache_disable_inode_cookie(inode);
else
nfs_fscache_enable_inode_cookie(inode);
nfs_fscache_inode_unlock(inode);
}
}
EXPORT_SYMBOL_GPL(nfs_fscache_set_inode_cookie);
/*
* Replace a per-inode cookie due to revalidation detecting a file having
* changed on the server.
*/
void nfs_fscache_reset_inode_cookie(struct inode *inode)
{
struct nfs_inode *nfsi = NFS_I(inode);
struct nfs_server *nfss = NFS_SERVER(inode);
NFS_IFDEBUG(struct fscache_cookie *old = nfsi->fscache);
nfs_fscache_inode_lock(inode);
if (nfsi->fscache) {
/* retire the current fscache cache and get a new one */
fscache_relinquish_cookie(nfsi->fscache, 1);
nfsi->fscache = fscache_acquire_cookie(
nfss->nfs_client->fscache,
&nfs_fscache_inode_object_def,
nfsi);
dfprintk(FSCACHE,
"NFS: revalidation new cookie (0x%p/0x%p/0x%p/0x%p)\n",
nfss, nfsi, old, nfsi->fscache);
}
nfs_fscache_inode_unlock(inode);
}
EXPORT_SYMBOL_GPL(nfs_fscache_open_file);
/*
* Release the caching state associated with a page, if the page isn't busy
@ -344,12 +260,11 @@ void nfs_fscache_reset_inode_cookie(struct inode *inode)
int nfs_fscache_release_page(struct page *page, gfp_t gfp)
{
if (PageFsCache(page)) {
struct nfs_inode *nfsi = NFS_I(page->mapping->host);
struct fscache_cookie *cookie = nfsi->fscache;
struct fscache_cookie *cookie = nfs_i_fscache(page->mapping->host);
BUG_ON(!cookie);
dfprintk(FSCACHE, "NFS: fscache releasepage (0x%p/0x%p/0x%p)\n",
cookie, page, nfsi);
cookie, page, NFS_I(page->mapping->host));
if (!fscache_maybe_release_page(cookie, page, gfp))
return 0;
@ -367,13 +282,12 @@ int nfs_fscache_release_page(struct page *page, gfp_t gfp)
*/
void __nfs_fscache_invalidate_page(struct page *page, struct inode *inode)
{
struct nfs_inode *nfsi = NFS_I(inode);
struct fscache_cookie *cookie = nfsi->fscache;
struct fscache_cookie *cookie = nfs_i_fscache(inode);
BUG_ON(!cookie);
dfprintk(FSCACHE, "NFS: fscache invalidatepage (0x%p/0x%p/0x%p)\n",
cookie, page, nfsi);
cookie, page, NFS_I(inode));
fscache_wait_on_page_write(cookie, page);
@ -417,9 +331,9 @@ int __nfs_readpage_from_fscache(struct nfs_open_context *ctx,
dfprintk(FSCACHE,
"NFS: readpage_from_fscache(fsc:%p/p:%p(i:%lx f:%lx)/0x%p)\n",
NFS_I(inode)->fscache, page, page->index, page->flags, inode);
nfs_i_fscache(inode), page, page->index, page->flags, inode);
ret = fscache_read_or_alloc_page(NFS_I(inode)->fscache,
ret = fscache_read_or_alloc_page(nfs_i_fscache(inode),
page,
nfs_readpage_from_fscache_complete,
ctx,
@ -459,9 +373,9 @@ int __nfs_readpages_from_fscache(struct nfs_open_context *ctx,
int ret;
dfprintk(FSCACHE, "NFS: nfs_getpages_from_fscache (0x%p/%u/0x%p)\n",
NFS_I(inode)->fscache, npages, inode);
nfs_i_fscache(inode), npages, inode);
ret = fscache_read_or_alloc_pages(NFS_I(inode)->fscache,
ret = fscache_read_or_alloc_pages(nfs_i_fscache(inode),
mapping, pages, nr_pages,
nfs_readpage_from_fscache_complete,
ctx,
@ -506,15 +420,15 @@ void __nfs_readpage_to_fscache(struct inode *inode, struct page *page, int sync)
dfprintk(FSCACHE,
"NFS: readpage_to_fscache(fsc:%p/p:%p(i:%lx f:%lx)/%d)\n",
NFS_I(inode)->fscache, page, page->index, page->flags, sync);
nfs_i_fscache(inode), page, page->index, page->flags, sync);
ret = fscache_write_page(NFS_I(inode)->fscache, page, GFP_KERNEL);
ret = fscache_write_page(nfs_i_fscache(inode), page, GFP_KERNEL);
dfprintk(FSCACHE,
"NFS: readpage_to_fscache: p:%p(i:%lu f:%lx) ret %d\n",
page, page->index, page->flags, ret);
if (ret != 0) {
fscache_uncache_page(NFS_I(inode)->fscache, page);
fscache_uncache_page(nfs_i_fscache(inode), page);
nfs_add_fscache_stats(inode,
NFSIOS_FSCACHE_PAGES_WRITTEN_FAIL, 1);
nfs_add_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_UNCACHED, 1);


@ -76,11 +76,9 @@ extern void nfs_fscache_release_client_cookie(struct nfs_client *);
extern void nfs_fscache_get_super_cookie(struct super_block *, const char *, int);
extern void nfs_fscache_release_super_cookie(struct super_block *);
extern void nfs_fscache_init_inode_cookie(struct inode *);
extern void nfs_fscache_release_inode_cookie(struct inode *);
extern void nfs_fscache_zap_inode_cookie(struct inode *);
extern void nfs_fscache_set_inode_cookie(struct inode *, struct file *);
extern void nfs_fscache_reset_inode_cookie(struct inode *);
extern void nfs_fscache_init_inode(struct inode *);
extern void nfs_fscache_clear_inode(struct inode *);
extern void nfs_fscache_open_file(struct inode *, struct file *);
extern void __nfs_fscache_invalidate_page(struct page *, struct inode *);
extern int nfs_fscache_release_page(struct page *, gfp_t);
@ -187,12 +185,10 @@ static inline void nfs_fscache_release_client_cookie(struct nfs_client *clp) {}
static inline void nfs_fscache_release_super_cookie(struct super_block *sb) {}
static inline void nfs_fscache_init_inode_cookie(struct inode *inode) {}
static inline void nfs_fscache_release_inode_cookie(struct inode *inode) {}
static inline void nfs_fscache_zap_inode_cookie(struct inode *inode) {}
static inline void nfs_fscache_set_inode_cookie(struct inode *inode,
struct file *filp) {}
static inline void nfs_fscache_reset_inode_cookie(struct inode *inode) {}
static inline void nfs_fscache_init_inode(struct inode *inode) {}
static inline void nfs_fscache_clear_inode(struct inode *inode) {}
static inline void nfs_fscache_open_file(struct inode *inode,
struct file *filp) {}
static inline int nfs_fscache_release_page(struct page *page, gfp_t gfp)
{


@ -122,7 +122,7 @@ void nfs_clear_inode(struct inode *inode)
WARN_ON_ONCE(!list_empty(&NFS_I(inode)->open_files));
nfs_zap_acl_cache(inode);
nfs_access_zap_cache(inode);
nfs_fscache_release_inode_cookie(inode);
nfs_fscache_clear_inode(inode);
}
EXPORT_SYMBOL_GPL(nfs_clear_inode);
@ -274,12 +274,6 @@ void nfs_setsecurity(struct inode *inode, struct nfs_fattr *fattr,
if (label == NULL)
return;
if (nfs_server_capable(inode, NFS_CAP_SECURITY_LABEL) == 0)
return;
if (NFS_SERVER(inode)->nfs_client->cl_minorversion < 2)
return;
if ((fattr->valid & NFS_ATTR_FATTR_V4_SECURITY_LABEL) && inode->i_security) {
error = security_inode_notifysecctx(inode, label->label,
label->len);
@ -459,7 +453,7 @@ nfs_fhget(struct super_block *sb, struct nfs_fh *fh, struct nfs_fattr *fattr, st
nfsi->attrtimeo_timestamp = now;
nfsi->access_cache = RB_ROOT;
nfs_fscache_init_inode_cookie(inode);
nfs_fscache_init_inode(inode);
unlock_new_inode(inode);
} else
@ -854,7 +848,7 @@ int nfs_open(struct inode *inode, struct file *filp)
return PTR_ERR(ctx);
nfs_file_set_open_context(filp, ctx);
put_nfs_open_context(ctx);
nfs_fscache_set_inode_cookie(inode, filp);
nfs_fscache_open_file(inode, filp);
return 0;
}
@ -923,6 +917,8 @@ __nfs_revalidate_inode(struct nfs_server *server, struct inode *inode)
if (nfsi->cache_validity & NFS_INO_INVALID_ACL)
nfs_zap_acl_cache(inode);
nfs_setsecurity(inode, fattr, label);
dfprintk(PAGECACHE, "NFS: (%s/%Ld) revalidation complete\n",
inode->i_sb->s_id,
(long long)NFS_FILEID(inode));
@ -1209,6 +1205,7 @@ u32 _nfs_display_fhandle_hash(const struct nfs_fh *fh)
* not on the result */
return nfs_fhandle_hash(fh);
}
EXPORT_SYMBOL_GPL(_nfs_display_fhandle_hash);
/*
* _nfs_display_fhandle - display an NFS file handle on the console
@ -1253,6 +1250,7 @@ void _nfs_display_fhandle(const struct nfs_fh *fh, const char *caption)
}
}
}
EXPORT_SYMBOL_GPL(_nfs_display_fhandle);
#endif
/**


@ -88,8 +88,8 @@ struct nfs_parsed_mount_data {
unsigned int namlen;
unsigned int options;
unsigned int bsize;
unsigned int auth_flavor_len;
rpc_authflavor_t auth_flavors[1];
struct nfs_auth_info auth_info;
rpc_authflavor_t selected_flavor;
char *client_address;
unsigned int version;
unsigned int minorversion;
@ -154,6 +154,7 @@ struct nfs_client *nfs_get_client(const struct nfs_client_initdata *,
rpc_authflavor_t);
int nfs_probe_fsinfo(struct nfs_server *server, struct nfs_fh *, struct nfs_fattr *);
void nfs_server_insert_lists(struct nfs_server *);
void nfs_server_remove_lists(struct nfs_server *);
void nfs_init_timeout_values(struct rpc_timeout *, int, unsigned int, unsigned int);
int nfs_init_server_rpcclient(struct nfs_server *, const struct rpc_timeout *t,
rpc_authflavor_t);
@ -174,6 +175,8 @@ extern struct nfs_server *nfs4_create_server(
struct nfs_subversion *);
extern struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *,
struct nfs_fh *);
extern int nfs4_update_server(struct nfs_server *server, const char *hostname,
struct sockaddr *sap, size_t salen);
extern void nfs_free_server(struct nfs_server *server);
extern struct nfs_server *nfs_clone_server(struct nfs_server *,
struct nfs_fh *,
@ -323,6 +326,7 @@ extern struct file_system_type nfs_xdev_fs_type;
extern struct file_system_type nfs4_xdev_fs_type;
extern struct file_system_type nfs4_referral_fs_type;
#endif
bool nfs_auth_info_match(const struct nfs_auth_info *, rpc_authflavor_t);
struct dentry *nfs_try_mount(int, const char *, struct nfs_mount_info *,
struct nfs_subversion *);
void nfs_initialise_sb(struct super_block *);


@ -29,6 +29,8 @@ enum nfs4_client_state {
NFS4CLNT_SERVER_SCOPE_MISMATCH,
NFS4CLNT_PURGE_STATE,
NFS4CLNT_BIND_CONN_TO_SESSION,
NFS4CLNT_MOVED,
NFS4CLNT_LEASE_MOVED,
};
#define NFS4_RENEW_TIMEOUT 0x01
@ -50,6 +52,7 @@ struct nfs4_minor_version_ops {
const struct nfs4_state_recovery_ops *reboot_recovery_ops;
const struct nfs4_state_recovery_ops *nograce_recovery_ops;
const struct nfs4_state_maintenance_ops *state_renewal_ops;
const struct nfs4_mig_recovery_ops *mig_recovery_ops;
};
#define NFS_SEQID_CONFIRMED 1
@ -203,6 +206,12 @@ struct nfs4_state_maintenance_ops {
int (*renew_lease)(struct nfs_client *, struct rpc_cred *);
};
struct nfs4_mig_recovery_ops {
int (*get_locations)(struct inode *, struct nfs4_fs_locations *,
struct page *, struct rpc_cred *);
int (*fsid_present)(struct inode *, struct rpc_cred *);
};
extern const struct dentry_operations nfs4_dentry_operations;
/* dir.c */
@ -213,10 +222,11 @@ int nfs_atomic_open(struct inode *, struct dentry *, struct file *,
extern struct file_system_type nfs4_fs_type;
/* nfs4namespace.c */
rpc_authflavor_t nfs_find_best_sec(struct nfs4_secinfo_flavors *);
struct rpc_clnt *nfs4_create_sec_client(struct rpc_clnt *, struct inode *, struct qstr *);
struct vfsmount *nfs4_submount(struct nfs_server *, struct dentry *,
struct nfs_fh *, struct nfs_fattr *);
int nfs4_replace_transport(struct nfs_server *server,
const struct nfs4_fs_locations *locations);
/* nfs4proc.c */
extern int nfs4_proc_setclientid(struct nfs_client *, u32, unsigned short, struct rpc_cred *, struct nfs4_setclientid_res *);
@ -231,6 +241,9 @@ extern int nfs4_do_close(struct nfs4_state *state, gfp_t gfp_mask, int wait);
extern int nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *fhandle);
extern int nfs4_proc_fs_locations(struct rpc_clnt *, struct inode *, const struct qstr *,
struct nfs4_fs_locations *, struct page *);
extern int nfs4_proc_get_locations(struct inode *, struct nfs4_fs_locations *,
struct page *page, struct rpc_cred *);
extern int nfs4_proc_fsid_present(struct inode *, struct rpc_cred *);
extern struct rpc_clnt *nfs4_proc_lookup_mountpoint(struct inode *, struct qstr *,
struct nfs_fh *, struct nfs_fattr *);
extern int nfs4_proc_secinfo(struct inode *, const struct qstr *, struct nfs4_secinfo_flavors *);
@ -411,6 +424,8 @@ extern int nfs4_client_recover_expired_lease(struct nfs_client *clp);
extern void nfs4_schedule_state_manager(struct nfs_client *);
extern void nfs4_schedule_path_down_recovery(struct nfs_client *clp);
extern int nfs4_schedule_stateid_recovery(const struct nfs_server *, struct nfs4_state *);
extern int nfs4_schedule_migration_recovery(const struct nfs_server *);
extern void nfs4_schedule_lease_moved_recovery(struct nfs_client *);
extern void nfs41_handle_sequence_flag_errors(struct nfs_client *clp, u32 flags);
extern void nfs41_handle_server_scope(struct nfs_client *,
struct nfs41_server_scope **);


@ -197,6 +197,7 @@ struct nfs_client *nfs4_alloc_client(const struct nfs_client_initdata *cl_init)
clp->cl_state = 1 << NFS4CLNT_LEASE_EXPIRED;
clp->cl_minorversion = cl_init->minorversion;
clp->cl_mvops = nfs_v4_minor_ops[cl_init->minorversion];
clp->cl_mig_gen = 1;
return clp;
error:
@ -368,6 +369,7 @@ struct nfs_client *nfs4_init_client(struct nfs_client *clp,
if (clp->cl_minorversion != 0)
__set_bit(NFS_CS_INFINITE_SLOTS, &clp->cl_flags);
__set_bit(NFS_CS_DISCRTRY, &clp->cl_flags);
__set_bit(NFS_CS_NO_RETRANS_TIMEOUT, &clp->cl_flags);
error = nfs_create_rpc_client(clp, timeparms, RPC_AUTH_GSS_KRB5I);
if (error == -EINVAL)
error = nfs_create_rpc_client(clp, timeparms, RPC_AUTH_UNIX);
@ -924,7 +926,7 @@ static int nfs4_server_common_setup(struct nfs_server *server,
dprintk("Server FSID: %llx:%llx\n",
(unsigned long long) server->fsid.major,
(unsigned long long) server->fsid.minor);
dprintk("Mount FH: %d\n", mntfh->size);
nfs_display_fhandle(mntfh, "Pseudo-fs root FH");
nfs4_session_set_rwsize(server);
@ -947,9 +949,8 @@ static int nfs4_server_common_setup(struct nfs_server *server,
* Create a version 4 volume record
*/
static int nfs4_init_server(struct nfs_server *server,
const struct nfs_parsed_mount_data *data)
struct nfs_parsed_mount_data *data)
{
rpc_authflavor_t pseudoflavor = RPC_AUTH_UNIX;
struct rpc_timeout timeparms;
int error;
@ -961,9 +962,15 @@ static int nfs4_init_server(struct nfs_server *server,
/* Initialise the client representation from the mount data */
server->flags = data->flags;
server->options = data->options;
server->auth_info = data->auth_info;
if (data->auth_flavor_len >= 1)
pseudoflavor = data->auth_flavors[0];
/* Use the first specified auth flavor. If this flavor isn't
* allowed by the server, use the SECINFO path to try the
* other specified flavors */
if (data->auth_info.flavor_len >= 1)
data->selected_flavor = data->auth_info.flavors[0];
else
data->selected_flavor = RPC_AUTH_UNIX;
/* Get a client record */
error = nfs4_set_client(server,
@ -971,7 +978,7 @@ static int nfs4_init_server(struct nfs_server *server,
(const struct sockaddr *)&data->nfs_server.address,
data->nfs_server.addrlen,
data->client_address,
pseudoflavor,
data->selected_flavor,
data->nfs_server.protocol,
&timeparms,
data->minorversion,
@ -991,7 +998,8 @@ static int nfs4_init_server(struct nfs_server *server,
server->port = data->nfs_server.port;
error = nfs_init_server_rpcclient(server, &timeparms, pseudoflavor);
error = nfs_init_server_rpcclient(server, &timeparms,
data->selected_flavor);
error:
/* Done */
@ -1018,7 +1026,7 @@ struct nfs_server *nfs4_create_server(struct nfs_mount_info *mount_info,
if (!server)
return ERR_PTR(-ENOMEM);
auth_probe = mount_info->parsed->auth_flavor_len < 1;
auth_probe = mount_info->parsed->auth_info.flavor_len < 1;
/* set up the general RPC client */
error = nfs4_init_server(server, mount_info->parsed);
@ -1046,6 +1054,7 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
{
struct nfs_client *parent_client;
struct nfs_server *server, *parent_server;
bool auth_probe;
int error;
dprintk("--> nfs4_create_referral_server()\n");
@ -1078,8 +1087,9 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
if (error < 0)
goto error;
error = nfs4_server_common_setup(server, mntfh,
!(parent_server->flags & NFS_MOUNT_SECFLAVOUR));
auth_probe = parent_server->auth_info.flavor_len < 1;
error = nfs4_server_common_setup(server, mntfh, auth_probe);
if (error < 0)
goto error;
@ -1091,3 +1101,111 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
dprintk("<-- nfs4_create_referral_server() = error %d\n", error);
return ERR_PTR(error);
}
/*
* Grab the destination's particulars, including lease expiry time.
*
* Returns zero if probe succeeded and retrieved FSID matches the FSID
* we have cached.
*/
static int nfs_probe_destination(struct nfs_server *server)
{
struct inode *inode = server->super->s_root->d_inode;
struct nfs_fattr *fattr;
int error;
fattr = nfs_alloc_fattr();
if (fattr == NULL)
return -ENOMEM;
/* Sanity: the probe won't work if the destination server
* does not recognize the migrated FH. */
error = nfs_probe_fsinfo(server, NFS_FH(inode), fattr);
nfs_free_fattr(fattr);
return error;
}
/**
* nfs4_update_server - Move an nfs_server to a different nfs_client
*
* @server: represents FSID to be moved
* @hostname: new end-point's hostname
* @sap: new end-point's socket address
* @salen: size of "sap"
*
* The nfs_server must be quiescent before this function is invoked.
* Either its session is drained (NFSv4.1+), or its transport is
* plugged and drained (NFSv4.0).
*
* Returns zero on success, or a negative errno value.
*/
int nfs4_update_server(struct nfs_server *server, const char *hostname,
struct sockaddr *sap, size_t salen)
{
struct nfs_client *clp = server->nfs_client;
struct rpc_clnt *clnt = server->client;
struct xprt_create xargs = {
.ident = clp->cl_proto,
.net = &init_net,
.dstaddr = sap,
.addrlen = salen,
.servername = hostname,
};
char buf[INET6_ADDRSTRLEN + 1];
struct sockaddr_storage address;
struct sockaddr *localaddr = (struct sockaddr *)&address;
int error;
dprintk("--> %s: move FSID %llx:%llx to \"%s\")\n", __func__,
(unsigned long long)server->fsid.major,
(unsigned long long)server->fsid.minor,
hostname);
error = rpc_switch_client_transport(clnt, &xargs, clnt->cl_timeout);
if (error != 0) {
dprintk("<-- %s(): rpc_switch_client_transport returned %d\n",
__func__, error);
goto out;
}
error = rpc_localaddr(clnt, localaddr, sizeof(address));
if (error != 0) {
dprintk("<-- %s(): rpc_localaddr returned %d\n",
__func__, error);
goto out;
}
error = -EAFNOSUPPORT;
if (rpc_ntop(localaddr, buf, sizeof(buf)) == 0) {
dprintk("<-- %s(): rpc_ntop returned %d\n",
__func__, error);
goto out;
}
nfs_server_remove_lists(server);
error = nfs4_set_client(server, hostname, sap, salen, buf,
clp->cl_rpcclient->cl_auth->au_flavor,
clp->cl_proto, clnt->cl_timeout,
clp->cl_minorversion, clp->cl_net);
nfs_put_client(clp);
if (error != 0) {
nfs_server_insert_lists(server);
dprintk("<-- %s(): nfs4_set_client returned %d\n",
__func__, error);
goto out;
}
if (server->nfs_client->cl_hostname == NULL)
server->nfs_client->cl_hostname = kstrdup(hostname, GFP_KERNEL);
nfs_server_insert_lists(server);
error = nfs_probe_destination(server);
if (error < 0)
goto out;
dprintk("<-- %s() succeeded\n", __func__);
out:
return error;
}


@ -75,7 +75,7 @@ nfs4_file_open(struct inode *inode, struct file *filp)
nfs_set_verifier(dentry, nfs_save_change_attribute(dir));
nfs_file_set_open_context(filp, ctx);
nfs_fscache_set_inode_cookie(inode, filp);
nfs_fscache_open_file(inode, filp);
err = 0;
out_put_ctx:


@ -137,6 +137,7 @@ static size_t nfs_parse_server_name(char *string, size_t len,
/**
* nfs_find_best_sec - Find a security mechanism supported locally
* @server: NFS server struct
* @flavors: List of security tuples returned by SECINFO procedure
*
* Return the pseudoflavor of the first security mechanism in
@ -145,7 +146,8 @@ static size_t nfs_parse_server_name(char *string, size_t len,
* is searched in the order returned from the server, per RFC 3530
* recommendation.
*/
rpc_authflavor_t nfs_find_best_sec(struct nfs4_secinfo_flavors *flavors)
static rpc_authflavor_t nfs_find_best_sec(struct nfs_server *server,
struct nfs4_secinfo_flavors *flavors)
{
rpc_authflavor_t pseudoflavor;
struct nfs4_secinfo4 *secinfo;
@ -160,12 +162,19 @@ rpc_authflavor_t nfs_find_best_sec(struct nfs4_secinfo_flavors *flavors)
case RPC_AUTH_GSS:
pseudoflavor = rpcauth_get_pseudoflavor(secinfo->flavor,
&secinfo->flavor_info);
if (pseudoflavor != RPC_AUTH_MAXFLAVOR)
/* make sure pseudoflavor matches sec= mount opt */
if (pseudoflavor != RPC_AUTH_MAXFLAVOR &&
nfs_auth_info_match(&server->auth_info,
pseudoflavor))
return pseudoflavor;
break;
}
}
/* if there were any sec= options then nothing matched */
if (server->auth_info.flavor_len > 0)
return -EPERM;
return RPC_AUTH_UNIX;
}
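
The selection rule above can be modelled outside the kernel: walk the flavors in the order the server returned them and take the first one the sec= list allows; if a sec= list was given and nothing matched, fail rather than silently fall back. A minimal user-space sketch follows, with invented flavor constants and a simplified auth_info structure; it illustrates only the matching rule, not the kernel code, which additionally converts GSS tuples via rpcauth_get_pseudoflavor().

#include <stdio.h>
#include <errno.h>

/* Invented stand-ins for RPC pseudoflavor numbers. */
enum { AUTH_UNIX = 1, KRB5 = 390003, KRB5I = 390004, KRB5P = 390005 };

struct auth_info { unsigned int flavor_len; int flavors[12]; };

/* Empty sec= list allows any flavor; otherwise only listed ones match. */
static int auth_info_match(const struct auth_info *ai, int flavor)
{
    unsigned int i;

    if (ai->flavor_len == 0)
        return 1;
    for (i = 0; i < ai->flavor_len; i++)
        if (ai->flavors[i] == flavor)
            return 1;
    return 0;
}

/* First server-offered flavor the mount options allow, or an error. */
static int find_best_sec(const struct auth_info *ai,
                         const int *server_list, unsigned int count)
{
    unsigned int i;

    for (i = 0; i < count; i++)
        if (auth_info_match(ai, server_list[i]))
            return server_list[i];
    return ai->flavor_len ? -EPERM : AUTH_UNIX;
}

int main(void)
{
    int server_list[] = { KRB5P, KRB5I, AUTH_UNIX };
    struct auth_info sec = { 2, { KRB5I, KRB5 } };   /* as if sec=krb5i:krb5 */

    printf("chosen flavor: %d\n", find_best_sec(&sec, server_list, 3));
    return 0;
}
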
@ -187,7 +196,7 @@ static rpc_authflavor_t nfs4_negotiate_security(struct inode *inode, struct qstr
goto out;
}
flavor = nfs_find_best_sec(flavors);
flavor = nfs_find_best_sec(NFS_SERVER(inode), flavors);
out:
put_page(page);
@ -390,7 +399,7 @@ struct vfsmount *nfs4_submount(struct nfs_server *server, struct dentry *dentry,
if (client->cl_auth->au_flavor != flavor)
flavor = client->cl_auth->au_flavor;
else if (!(server->flags & NFS_MOUNT_SECFLAVOUR)) {
else {
rpc_authflavor_t new = nfs4_negotiate_security(dir, name);
if ((int)new >= 0)
flavor = new;
@ -400,3 +409,104 @@ struct vfsmount *nfs4_submount(struct nfs_server *server, struct dentry *dentry,
rpc_shutdown_client(client);
return mnt;
}
/*
* Try one location from the fs_locations array.
*
* Returns zero on success, or a negative errno value.
*/
static int nfs4_try_replacing_one_location(struct nfs_server *server,
char *page, char *page2,
const struct nfs4_fs_location *location)
{
const size_t addr_bufsize = sizeof(struct sockaddr_storage);
struct sockaddr *sap;
unsigned int s;
size_t salen;
int error;
sap = kmalloc(addr_bufsize, GFP_KERNEL);
if (sap == NULL)
return -ENOMEM;
error = -ENOENT;
for (s = 0; s < location->nservers; s++) {
const struct nfs4_string *buf = &location->servers[s];
char *hostname;
if (buf->len <= 0 || buf->len > PAGE_SIZE)
continue;
if (memchr(buf->data, IPV6_SCOPE_DELIMITER, buf->len) != NULL)
continue;
salen = nfs_parse_server_name(buf->data, buf->len,
sap, addr_bufsize, server);
if (salen == 0)
continue;
rpc_set_port(sap, NFS_PORT);
error = -ENOMEM;
hostname = kstrndup(buf->data, buf->len, GFP_KERNEL);
if (hostname == NULL)
break;
error = nfs4_update_server(server, hostname, sap, salen);
kfree(hostname);
if (error == 0)
break;
}
kfree(sap);
return error;
}
/**
* nfs4_replace_transport - set up transport to destination server
*
* @server: export being migrated
* @locations: fs_locations array
*
* Returns zero on success, or a negative errno value.
*
* The client tries all the entries in the "locations" array, in the
* order returned by the server, until one works or the end of the
* array is reached.
*/
int nfs4_replace_transport(struct nfs_server *server,
const struct nfs4_fs_locations *locations)
{
char *page = NULL, *page2 = NULL;
int loc, error;
error = -ENOENT;
if (locations == NULL || locations->nlocations <= 0)
goto out;
error = -ENOMEM;
page = (char *) __get_free_page(GFP_USER);
if (!page)
goto out;
page2 = (char *) __get_free_page(GFP_USER);
if (!page2)
goto out;
for (loc = 0; loc < locations->nlocations; loc++) {
const struct nfs4_fs_location *location =
&locations->locations[loc];
if (location == NULL || location->nservers <= 0 ||
location->rootpath.ncomponents == 0)
continue;
error = nfs4_try_replacing_one_location(server, page,
page2, location);
if (error == 0)
break;
}
out:
free_page((unsigned long)page);
free_page((unsigned long)page2);
return error;
}


@ -105,9 +105,6 @@ nfs4_label_init_security(struct inode *dir, struct dentry *dentry,
if (nfs_server_capable(dir, NFS_CAP_SECURITY_LABEL) == 0)
return NULL;
if (NFS_SERVER(dir)->nfs_client->cl_minorversion < 2)
return NULL;
err = security_dentry_init_security(dentry, sattr->ia_mode,
&dentry->d_name, (void **)&label->label, &label->len);
if (err == 0)
@ -384,6 +381,14 @@ static int nfs4_handle_exception(struct nfs_server *server, int errorcode, struc
case -NFS4ERR_STALE_CLIENTID:
nfs4_schedule_lease_recovery(clp);
goto wait_on_recovery;
case -NFS4ERR_MOVED:
ret = nfs4_schedule_migration_recovery(server);
if (ret < 0)
break;
goto wait_on_recovery;
case -NFS4ERR_LEASE_MOVED:
nfs4_schedule_lease_moved_recovery(clp);
goto wait_on_recovery;
#if defined(CONFIG_NFS_V4_1)
case -NFS4ERR_BADSESSION:
case -NFS4ERR_BADSLOT:
@ -431,6 +436,8 @@ static int nfs4_handle_exception(struct nfs_server *server, int errorcode, struc
return nfs4_map_errors(ret);
wait_on_recovery:
ret = nfs4_wait_clnt_recover(clp);
if (test_bit(NFS_MIG_FAILED, &server->mig_status))
return -EIO;
if (ret == 0)
exception->retry = 1;
return ret;
@ -1318,31 +1325,24 @@ _nfs4_opendata_reclaim_to_nfs4_state(struct nfs4_opendata *data)
int ret;
if (!data->rpc_done) {
ret = data->rpc_status;
goto err;
if (data->rpc_status) {
ret = data->rpc_status;
goto err;
}
/* cached opens have already been processed */
goto update;
}
ret = -ESTALE;
if (!(data->f_attr.valid & NFS_ATTR_FATTR_TYPE) ||
!(data->f_attr.valid & NFS_ATTR_FATTR_FILEID) ||
!(data->f_attr.valid & NFS_ATTR_FATTR_CHANGE))
goto err;
ret = -ENOMEM;
state = nfs4_get_open_state(inode, data->owner);
if (state == NULL)
goto err;
ret = nfs_refresh_inode(inode, &data->f_attr);
if (ret)
goto err;
nfs_setsecurity(inode, &data->f_attr, data->f_label);
if (data->o_res.delegation_type != 0)
nfs4_opendata_check_deleg(data, state);
update:
update_open_stateid(state, &data->o_res.stateid, NULL,
data->o_arg.fmode);
atomic_inc(&state->count);
return state;
err:
@ -1575,6 +1575,12 @@ static int nfs4_handle_delegation_recall_error(struct nfs_server *server, struct
/* Don't recall a delegation if it was lost */
nfs4_schedule_lease_recovery(server->nfs_client);
return -EAGAIN;
case -NFS4ERR_MOVED:
nfs4_schedule_migration_recovery(server);
return -EAGAIN;
case -NFS4ERR_LEASE_MOVED:
nfs4_schedule_lease_moved_recovery(server->nfs_client);
return -EAGAIN;
case -NFS4ERR_DELEG_REVOKED:
case -NFS4ERR_ADMIN_REVOKED:
case -NFS4ERR_BAD_STATEID:
@ -2697,6 +2703,10 @@ static void nfs4_close_context(struct nfs_open_context *ctx, int is_sync)
nfs4_close_state(ctx->state, ctx->mode);
}
#define FATTR4_WORD1_NFS40_MASK (2*FATTR4_WORD1_MOUNTED_ON_FILEID - 1UL)
#define FATTR4_WORD2_NFS41_MASK (2*FATTR4_WORD2_SUPPATTR_EXCLCREAT - 1UL)
#define FATTR4_WORD2_NFS42_MASK (2*FATTR4_WORD2_CHANGE_SECURITY_LABEL - 1UL)
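
These three masks use a small bit trick: assuming the flag is a single-bit constant (as the FATTR4 word bits are), 2*flag - 1 yields that bit plus every lower bit, i.e. "every attribute defined up to and including this one". A quick stand-alone check, with a made-up flag value:

#include <stdio.h>

int main(void)
{
    unsigned long flag = 1UL << 23;          /* pretend single-bit attribute */
    unsigned long mask = 2 * flag - 1UL;     /* that bit and all lower bits */

    printf("flag 0x%08lx -> mask 0x%08lx\n", flag, mask);
    return 0;
}
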
static int _nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *fhandle)
{
struct nfs4_server_caps_arg args = {
@ -2712,12 +2722,25 @@ static int _nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *f
status = nfs4_call_sync(server->client, server, &msg, &args.seq_args, &res.seq_res, 0);
if (status == 0) {
/* Sanity check the server answers */
switch (server->nfs_client->cl_minorversion) {
case 0:
res.attr_bitmask[1] &= FATTR4_WORD1_NFS40_MASK;
res.attr_bitmask[2] = 0;
break;
case 1:
res.attr_bitmask[2] &= FATTR4_WORD2_NFS41_MASK;
break;
case 2:
res.attr_bitmask[2] &= FATTR4_WORD2_NFS42_MASK;
}
memcpy(server->attr_bitmask, res.attr_bitmask, sizeof(server->attr_bitmask));
server->caps &= ~(NFS_CAP_ACLS|NFS_CAP_HARDLINKS|
NFS_CAP_SYMLINKS|NFS_CAP_FILEID|
NFS_CAP_MODE|NFS_CAP_NLINK|NFS_CAP_OWNER|
NFS_CAP_OWNER_GROUP|NFS_CAP_ATIME|
NFS_CAP_CTIME|NFS_CAP_MTIME);
NFS_CAP_CTIME|NFS_CAP_MTIME|
NFS_CAP_SECURITY_LABEL);
if (res.attr_bitmask[0] & FATTR4_WORD0_ACL)
server->caps |= NFS_CAP_ACLS;
if (res.has_links != 0)
@ -2746,14 +2769,12 @@ static int _nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *f
#endif
memcpy(server->attr_bitmask_nl, res.attr_bitmask,
sizeof(server->attr_bitmask));
server->attr_bitmask_nl[2] &= ~FATTR4_WORD2_SECURITY_LABEL;
if (server->caps & NFS_CAP_SECURITY_LABEL) {
server->attr_bitmask_nl[2] &= ~FATTR4_WORD2_SECURITY_LABEL;
res.attr_bitmask[2] &= ~FATTR4_WORD2_SECURITY_LABEL;
}
memcpy(server->cache_consistency_bitmask, res.attr_bitmask, sizeof(server->cache_consistency_bitmask));
server->cache_consistency_bitmask[0] &= FATTR4_WORD0_CHANGE|FATTR4_WORD0_SIZE;
server->cache_consistency_bitmask[1] &= FATTR4_WORD1_TIME_METADATA|FATTR4_WORD1_TIME_MODIFY;
server->cache_consistency_bitmask[2] = 0;
server->acl_bitmask = res.acl_bitmask;
server->fh_expire_type = res.fh_expire_type;
}
@ -2864,11 +2885,24 @@ static int nfs4_find_root_sec(struct nfs_server *server, struct nfs_fh *fhandle,
int status = -EPERM;
size_t i;
for (i = 0; i < ARRAY_SIZE(flav_array); i++) {
status = nfs4_lookup_root_sec(server, fhandle, info, flav_array[i]);
if (status == -NFS4ERR_WRONGSEC || status == -EACCES)
continue;
break;
if (server->auth_info.flavor_len > 0) {
/* try each flavor specified by user */
for (i = 0; i < server->auth_info.flavor_len; i++) {
status = nfs4_lookup_root_sec(server, fhandle, info,
server->auth_info.flavors[i]);
if (status == -NFS4ERR_WRONGSEC || status == -EACCES)
continue;
break;
}
} else {
/* no flavors specified by user, try default list */
for (i = 0; i < ARRAY_SIZE(flav_array); i++) {
status = nfs4_lookup_root_sec(server, fhandle, info,
flav_array[i]);
if (status == -NFS4ERR_WRONGSEC || status == -EACCES)
continue;
break;
}
}
/*
@ -2910,9 +2944,6 @@ int nfs4_proc_get_rootfh(struct nfs_server *server, struct nfs_fh *fhandle,
status = nfs4_lookup_root(server, fhandle, info);
if (status != -NFS4ERR_WRONGSEC)
break;
/* Did user force a 'sec=' mount option? */
if (server->flags & NFS_MOUNT_SECFLAVOUR)
break;
default:
status = nfs4_do_find_root_sec(server, fhandle, info);
}
@ -2981,11 +3012,16 @@ static int nfs4_get_referral(struct rpc_clnt *client, struct inode *dir,
status = nfs4_proc_fs_locations(client, dir, name, locations, page);
if (status != 0)
goto out;
/* Make sure server returned a different fsid for the referral */
/*
* If the fsid didn't change, this is a migration event, not a
* referral. Cause us to drop into the exception handler, which
* will kick off migration recovery.
*/
if (nfs_fsid_equal(&NFS_SERVER(dir)->fsid, &locations->fattr.fsid)) {
dprintk("%s: server did not return a different fsid for"
" a referral at %s\n", __func__, name->name);
status = -EIO;
status = -NFS4ERR_MOVED;
goto out;
}
/* Fixup attributes for the nfs_lookup() call to nfs_fhget() */
@ -3165,9 +3201,6 @@ static int nfs4_proc_lookup_common(struct rpc_clnt **clnt, struct inode *dir,
err = -EPERM;
if (client != *clnt)
goto out;
/* No security negotiation if the user specified 'sec=' */
if (NFS_SERVER(dir)->flags & NFS_MOUNT_SECFLAVOUR)
goto out;
client = nfs4_create_sec_client(client, dir, name);
if (IS_ERR(client))
return PTR_ERR(client);
@ -4221,7 +4254,13 @@ static void nfs4_renew_done(struct rpc_task *task, void *calldata)
unsigned long timestamp = data->timestamp;
trace_nfs4_renew_async(clp, task->tk_status);
if (task->tk_status < 0) {
switch (task->tk_status) {
case 0:
break;
case -NFS4ERR_LEASE_MOVED:
nfs4_schedule_lease_moved_recovery(clp);
break;
default:
/* Unless we're shutting down, schedule state recovery! */
if (test_bit(NFS_CS_RENEWD, &clp->cl_res_state) == 0)
return;
@ -4575,7 +4614,7 @@ static int _nfs4_get_security_label(struct inode *inode, void *buf,
struct nfs4_label label = {0, 0, buflen, buf};
u32 bitmask[3] = { 0, 0, FATTR4_WORD2_SECURITY_LABEL };
struct nfs4_getattr_arg args = {
struct nfs4_getattr_arg arg = {
.fh = NFS_FH(inode),
.bitmask = bitmask,
};
@ -4586,14 +4625,14 @@ static int _nfs4_get_security_label(struct inode *inode, void *buf,
};
struct rpc_message msg = {
.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_GETATTR],
.rpc_argp = &args,
.rpc_argp = &arg,
.rpc_resp = &res,
};
int ret;
nfs_fattr_init(&fattr);
ret = rpc_call_sync(server->client, &msg, 0);
ret = nfs4_call_sync(server->client, server, &msg, &arg.seq_args, &res.seq_res, 0);
if (ret)
return ret;
if (!(fattr.valid & NFS_ATTR_FATTR_V4_SECURITY_LABEL))
@ -4630,7 +4669,7 @@ static int _nfs4_do_set_security_label(struct inode *inode,
struct iattr sattr = {0};
struct nfs_server *server = NFS_SERVER(inode);
const u32 bitmask[3] = { 0, 0, FATTR4_WORD2_SECURITY_LABEL };
struct nfs_setattrargs args = {
struct nfs_setattrargs arg = {
.fh = NFS_FH(inode),
.iap = &sattr,
.server = server,
@ -4644,14 +4683,14 @@ static int _nfs4_do_set_security_label(struct inode *inode,
};
struct rpc_message msg = {
.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_SETATTR],
.rpc_argp = &args,
.rpc_argp = &arg,
.rpc_resp = &res,
};
int status;
nfs4_stateid_copy(&args.stateid, &zero_stateid);
nfs4_stateid_copy(&arg.stateid, &zero_stateid);
status = rpc_call_sync(server->client, &msg, 0);
status = nfs4_call_sync(server->client, server, &msg, &arg.seq_args, &res.seq_res, 1);
if (status)
dprintk("%s failed: %d\n", __func__, status);
@ -4735,17 +4774,24 @@ nfs4_async_handle_error(struct rpc_task *task, const struct nfs_server *server,
if (state == NULL)
break;
if (nfs4_schedule_stateid_recovery(server, state) < 0)
goto stateid_invalid;
goto recovery_failed;
goto wait_on_recovery;
case -NFS4ERR_EXPIRED:
if (state != NULL) {
if (nfs4_schedule_stateid_recovery(server, state) < 0)
goto stateid_invalid;
goto recovery_failed;
}
case -NFS4ERR_STALE_STATEID:
case -NFS4ERR_STALE_CLIENTID:
nfs4_schedule_lease_recovery(clp);
goto wait_on_recovery;
case -NFS4ERR_MOVED:
if (nfs4_schedule_migration_recovery(server) < 0)
goto recovery_failed;
goto wait_on_recovery;
case -NFS4ERR_LEASE_MOVED:
nfs4_schedule_lease_moved_recovery(clp);
goto wait_on_recovery;
#if defined(CONFIG_NFS_V4_1)
case -NFS4ERR_BADSESSION:
case -NFS4ERR_BADSLOT:
@ -4757,29 +4803,28 @@ nfs4_async_handle_error(struct rpc_task *task, const struct nfs_server *server,
dprintk("%s ERROR %d, Reset session\n", __func__,
task->tk_status);
nfs4_schedule_session_recovery(clp->cl_session, task->tk_status);
task->tk_status = 0;
return -EAGAIN;
goto restart_call;
#endif /* CONFIG_NFS_V4_1 */
case -NFS4ERR_DELAY:
nfs_inc_server_stats(server, NFSIOS_DELAY);
case -NFS4ERR_GRACE:
rpc_delay(task, NFS4_POLL_RETRY_MAX);
task->tk_status = 0;
return -EAGAIN;
case -NFS4ERR_RETRY_UNCACHED_REP:
case -NFS4ERR_OLD_STATEID:
task->tk_status = 0;
return -EAGAIN;
goto restart_call;
}
task->tk_status = nfs4_map_errors(task->tk_status);
return 0;
stateid_invalid:
recovery_failed:
task->tk_status = -EIO;
return 0;
wait_on_recovery:
rpc_sleep_on(&clp->cl_rpcwaitq, task, NULL);
if (test_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) == 0)
rpc_wake_up_queued_task(&clp->cl_rpcwaitq, task);
if (test_bit(NFS_MIG_FAILED, &server->mig_status))
goto recovery_failed;
restart_call:
task->tk_status = 0;
return -EAGAIN;
}
@ -5106,6 +5151,7 @@ static int _nfs4_proc_getlk(struct nfs4_state *state, int cmd, struct file_lock
status = 0;
}
request->fl_ops->fl_release_private(request);
request->fl_ops = NULL;
out:
return status;
}
@ -5779,6 +5825,7 @@ struct nfs_release_lockowner_data {
struct nfs_release_lockowner_args args;
struct nfs4_sequence_args seq_args;
struct nfs4_sequence_res seq_res;
unsigned long timestamp;
};
static void nfs4_release_lockowner_prepare(struct rpc_task *task, void *calldata)
@ -5786,12 +5833,27 @@ static void nfs4_release_lockowner_prepare(struct rpc_task *task, void *calldata
struct nfs_release_lockowner_data *data = calldata;
nfs40_setup_sequence(data->server,
&data->seq_args, &data->seq_res, task);
data->timestamp = jiffies;
}
static void nfs4_release_lockowner_done(struct rpc_task *task, void *calldata)
{
struct nfs_release_lockowner_data *data = calldata;
struct nfs_server *server = data->server;
nfs40_sequence_done(task, &data->seq_res);
switch (task->tk_status) {
case 0:
renew_lease(server, data->timestamp);
break;
case -NFS4ERR_STALE_CLIENTID:
case -NFS4ERR_EXPIRED:
case -NFS4ERR_LEASE_MOVED:
case -NFS4ERR_DELAY:
if (nfs4_async_handle_error(task, server, NULL) == -EAGAIN)
rpc_restart_call_prepare(task);
}
}
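
The timestamp is captured in the prepare callback, when the request is actually transmitted, so a successful reply renews the lease as of send time rather than completion time, which is the conservative choice. A toy model of that pattern, with time() standing in for jiffies and invented structure names:

#include <stdio.h>
#include <time.h>

struct call { time_t sent; };

static void call_prepare(struct call *c)
{
    c->sent = time(NULL);                 /* data->timestamp = jiffies */
}

static void call_done(struct call *c, int status, time_t *lease_renewed_at)
{
    if (status == 0)
        *lease_renewed_at = c->sent;      /* renew_lease(server, data->timestamp) */
}

int main(void)
{
    struct call c;
    time_t lease = 0;

    call_prepare(&c);
    /* ... RPC round trip happens here ... */
    call_done(&c, 0, &lease);
    printf("lease treated as renewed at %ld\n", (long)lease);
    return 0;
}
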
static void nfs4_release_lockowner_release(void *calldata)
@ -5990,6 +6052,283 @@ int nfs4_proc_fs_locations(struct rpc_clnt *client, struct inode *dir,
return err;
}
/*
* This operation also signals the server that this client is
* performing migration recovery. The server can stop returning
* NFS4ERR_LEASE_MOVED to this client. A RENEW operation is
* appended to this compound to identify the client ID which is
* performing recovery.
*/
static int _nfs40_proc_get_locations(struct inode *inode,
struct nfs4_fs_locations *locations,
struct page *page, struct rpc_cred *cred)
{
struct nfs_server *server = NFS_SERVER(inode);
struct rpc_clnt *clnt = server->client;
u32 bitmask[2] = {
[0] = FATTR4_WORD0_FSID | FATTR4_WORD0_FS_LOCATIONS,
};
struct nfs4_fs_locations_arg args = {
.clientid = server->nfs_client->cl_clientid,
.fh = NFS_FH(inode),
.page = page,
.bitmask = bitmask,
.migration = 1, /* skip LOOKUP */
.renew = 1, /* append RENEW */
};
struct nfs4_fs_locations_res res = {
.fs_locations = locations,
.migration = 1,
.renew = 1,
};
struct rpc_message msg = {
.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_FS_LOCATIONS],
.rpc_argp = &args,
.rpc_resp = &res,
.rpc_cred = cred,
};
unsigned long now = jiffies;
int status;
nfs_fattr_init(&locations->fattr);
locations->server = server;
locations->nlocations = 0;
nfs4_init_sequence(&args.seq_args, &res.seq_res, 0);
nfs4_set_sequence_privileged(&args.seq_args);
status = nfs4_call_sync_sequence(clnt, server, &msg,
&args.seq_args, &res.seq_res);
if (status)
return status;
renew_lease(server, now);
return 0;
}
#ifdef CONFIG_NFS_V4_1
/*
* This operation also signals the server that this client is
* performing migration recovery. The server can stop asserting
* SEQ4_STATUS_LEASE_MOVED for this client. The client ID
* performing this operation is identified in the SEQUENCE
* operation in this compound.
*
* When the client supports GETATTR(fs_locations_info), it can
* be plumbed in here.
*/
static int _nfs41_proc_get_locations(struct inode *inode,
struct nfs4_fs_locations *locations,
struct page *page, struct rpc_cred *cred)
{
struct nfs_server *server = NFS_SERVER(inode);
struct rpc_clnt *clnt = server->client;
u32 bitmask[2] = {
[0] = FATTR4_WORD0_FSID | FATTR4_WORD0_FS_LOCATIONS,
};
struct nfs4_fs_locations_arg args = {
.fh = NFS_FH(inode),
.page = page,
.bitmask = bitmask,
.migration = 1, /* skip LOOKUP */
};
struct nfs4_fs_locations_res res = {
.fs_locations = locations,
.migration = 1,
};
struct rpc_message msg = {
.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_FS_LOCATIONS],
.rpc_argp = &args,
.rpc_resp = &res,
.rpc_cred = cred,
};
int status;
nfs_fattr_init(&locations->fattr);
locations->server = server;
locations->nlocations = 0;
nfs4_init_sequence(&args.seq_args, &res.seq_res, 0);
nfs4_set_sequence_privileged(&args.seq_args);
status = nfs4_call_sync_sequence(clnt, server, &msg,
&args.seq_args, &res.seq_res);
if (status == NFS4_OK &&
res.seq_res.sr_status_flags & SEQ4_STATUS_LEASE_MOVED)
status = -NFS4ERR_LEASE_MOVED;
return status;
}
#endif /* CONFIG_NFS_V4_1 */
/**
* nfs4_proc_get_locations - discover locations for a migrated FSID
* @inode: inode on FSID that is migrating
* @locations: result of query
* @page: buffer
* @cred: credential to use for this operation
*
* Returns NFS4_OK on success, a negative NFS4ERR status code if the
* operation failed, or a negative errno if a local error occurred.
*
* On success, "locations" is filled in, but if the server has
* no locations information, NFS_ATTR_FATTR_V4_LOCATIONS is not
* asserted.
*
* -NFS4ERR_LEASE_MOVED is returned if the server still has leases
* from this client that require migration recovery.
*/
int nfs4_proc_get_locations(struct inode *inode,
struct nfs4_fs_locations *locations,
struct page *page, struct rpc_cred *cred)
{
struct nfs_server *server = NFS_SERVER(inode);
struct nfs_client *clp = server->nfs_client;
const struct nfs4_mig_recovery_ops *ops =
clp->cl_mvops->mig_recovery_ops;
struct nfs4_exception exception = { };
int status;
dprintk("%s: FSID %llx:%llx on \"%s\"\n", __func__,
(unsigned long long)server->fsid.major,
(unsigned long long)server->fsid.minor,
clp->cl_hostname);
nfs_display_fhandle(NFS_FH(inode), __func__);
do {
status = ops->get_locations(inode, locations, page, cred);
if (status != -NFS4ERR_DELAY)
break;
nfs4_handle_exception(server, status, &exception);
} while (exception.retry);
return status;
}
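
Both minor-version variants are driven through the same retry loop: only NFS4ERR_DELAY is retried, every other status is returned to the caller. A self-contained model of that loop, with a stub in place of the real RPC and the note that the kernel additionally backs off via rpc_delay():

#include <stdio.h>

#define NFS4ERR_DELAY 10008              /* shown only for illustration */

struct exception { int retry; };

static int fake_get_locations(int *calls)
{
    return ++(*calls) < 3 ? -NFS4ERR_DELAY : 0;   /* succeed on third call */
}

static void handle_exception(int status, struct exception *exc)
{
    exc->retry = (status == -NFS4ERR_DELAY);      /* real handler also sleeps */
}

int main(void)
{
    struct exception exc = { 0 };
    int calls = 0, status;

    do {
        status = fake_get_locations(&calls);
        if (status != -NFS4ERR_DELAY)
            break;
        handle_exception(status, &exc);
    } while (exc.retry);

    printf("done after %d call(s), status %d\n", calls, status);
    return 0;
}
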
/*
* This operation also signals the server that this client is
* performing "lease moved" recovery. The server can stop
* returning NFS4ERR_LEASE_MOVED to this client. A RENEW operation
* is appended to this compound to identify the client ID which is
* performing recovery.
*/
static int _nfs40_proc_fsid_present(struct inode *inode, struct rpc_cred *cred)
{
struct nfs_server *server = NFS_SERVER(inode);
struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
struct rpc_clnt *clnt = server->client;
struct nfs4_fsid_present_arg args = {
.fh = NFS_FH(inode),
.clientid = clp->cl_clientid,
.renew = 1, /* append RENEW */
};
struct nfs4_fsid_present_res res = {
.renew = 1,
};
struct rpc_message msg = {
.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_FSID_PRESENT],
.rpc_argp = &args,
.rpc_resp = &res,
.rpc_cred = cred,
};
unsigned long now = jiffies;
int status;
res.fh = nfs_alloc_fhandle();
if (res.fh == NULL)
return -ENOMEM;
nfs4_init_sequence(&args.seq_args, &res.seq_res, 0);
nfs4_set_sequence_privileged(&args.seq_args);
status = nfs4_call_sync_sequence(clnt, server, &msg,
&args.seq_args, &res.seq_res);
nfs_free_fhandle(res.fh);
if (status)
return status;
do_renew_lease(clp, now);
return 0;
}
#ifdef CONFIG_NFS_V4_1
/*
* This operation also signals the server that this client is
* performing "lease moved" recovery. The server can stop asserting
* SEQ4_STATUS_LEASE_MOVED for this client. The client ID performing
* this operation is identified in the SEQUENCE operation in this
* compound.
*/
static int _nfs41_proc_fsid_present(struct inode *inode, struct rpc_cred *cred)
{
struct nfs_server *server = NFS_SERVER(inode);
struct rpc_clnt *clnt = server->client;
struct nfs4_fsid_present_arg args = {
.fh = NFS_FH(inode),
};
struct nfs4_fsid_present_res res = {
};
struct rpc_message msg = {
.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_FSID_PRESENT],
.rpc_argp = &args,
.rpc_resp = &res,
.rpc_cred = cred,
};
int status;
res.fh = nfs_alloc_fhandle();
if (res.fh == NULL)
return -ENOMEM;
nfs4_init_sequence(&args.seq_args, &res.seq_res, 0);
nfs4_set_sequence_privileged(&args.seq_args);
status = nfs4_call_sync_sequence(clnt, server, &msg,
&args.seq_args, &res.seq_res);
nfs_free_fhandle(res.fh);
if (status == NFS4_OK &&
res.seq_res.sr_status_flags & SEQ4_STATUS_LEASE_MOVED)
status = -NFS4ERR_LEASE_MOVED;
return status;
}
#endif /* CONFIG_NFS_V4_1 */
/**
* nfs4_proc_fsid_present - Is this FSID present or absent on server?
* @inode: inode on FSID to check
* @cred: credential to use for this operation
*
* Server indicates whether the FSID is present, moved, or not
* recognized. This operation is necessary to clear a LEASE_MOVED
* condition for this client ID.
*
* Returns NFS4_OK if the FSID is present on this server,
* -NFS4ERR_MOVED if the FSID is no longer present, a negative
* NFS4ERR code if some error occurred on the server, or a
* negative errno if a local failure occurred.
*/
int nfs4_proc_fsid_present(struct inode *inode, struct rpc_cred *cred)
{
struct nfs_server *server = NFS_SERVER(inode);
struct nfs_client *clp = server->nfs_client;
const struct nfs4_mig_recovery_ops *ops =
clp->cl_mvops->mig_recovery_ops;
struct nfs4_exception exception = { };
int status;
dprintk("%s: FSID %llx:%llx on \"%s\"\n", __func__,
(unsigned long long)server->fsid.major,
(unsigned long long)server->fsid.minor,
clp->cl_hostname);
nfs_display_fhandle(NFS_FH(inode), __func__);
do {
status = ops->fsid_present(inode, cred);
if (status != -NFS4ERR_DELAY)
break;
nfs4_handle_exception(server, status, &exception);
} while (exception.retry);
return status;
}
/**
* If 'use_integrity' is true and the state management nfs_client
* cl_rpcclient is using krb5i/p, use the integrity protected cl_rpcclient
@ -6276,8 +6615,14 @@ static int _nfs4_proc_exchange_id(struct nfs_client *clp, struct rpc_cred *cred,
struct nfs41_exchange_id_args args = {
.verifier = &verifier,
.client = clp,
#ifdef CONFIG_NFS_V4_1_MIGRATION
.flags = EXCHGID4_FLAG_SUPP_MOVED_REFER |
EXCHGID4_FLAG_BIND_PRINC_STATEID,
EXCHGID4_FLAG_BIND_PRINC_STATEID |
EXCHGID4_FLAG_SUPP_MOVED_MIGR,
#else
.flags = EXCHGID4_FLAG_SUPP_MOVED_REFER |
EXCHGID4_FLAG_BIND_PRINC_STATEID,
#endif
};
struct nfs41_exchange_id_res res = {
0
@ -7616,6 +7961,9 @@ nfs41_find_root_sec(struct nfs_server *server, struct nfs_fh *fhandle,
break;
}
if (!nfs_auth_info_match(&server->auth_info, flavor))
flavor = RPC_AUTH_MAXFLAVOR;
if (flavor != RPC_AUTH_MAXFLAVOR) {
err = nfs4_lookup_root_sec(server, fhandle,
info, flavor);
@ -7887,6 +8235,18 @@ static const struct nfs4_state_maintenance_ops nfs41_state_renewal_ops = {
};
#endif
static const struct nfs4_mig_recovery_ops nfs40_mig_recovery_ops = {
.get_locations = _nfs40_proc_get_locations,
.fsid_present = _nfs40_proc_fsid_present,
};
#if defined(CONFIG_NFS_V4_1)
static const struct nfs4_mig_recovery_ops nfs41_mig_recovery_ops = {
.get_locations = _nfs41_proc_get_locations,
.fsid_present = _nfs41_proc_fsid_present,
};
#endif /* CONFIG_NFS_V4_1 */
static const struct nfs4_minor_version_ops nfs_v4_0_minor_ops = {
.minor_version = 0,
.init_caps = NFS_CAP_READDIRPLUS
@ -7902,6 +8262,7 @@ static const struct nfs4_minor_version_ops nfs_v4_0_minor_ops = {
.reboot_recovery_ops = &nfs40_reboot_recovery_ops,
.nograce_recovery_ops = &nfs40_nograce_recovery_ops,
.state_renewal_ops = &nfs40_state_renewal_ops,
.mig_recovery_ops = &nfs40_mig_recovery_ops,
};
#if defined(CONFIG_NFS_V4_1)
@ -7922,6 +8283,7 @@ static const struct nfs4_minor_version_ops nfs_v4_1_minor_ops = {
.reboot_recovery_ops = &nfs41_reboot_recovery_ops,
.nograce_recovery_ops = &nfs41_nograce_recovery_ops,
.state_renewal_ops = &nfs41_state_renewal_ops,
.mig_recovery_ops = &nfs41_mig_recovery_ops,
};
#endif


@ -239,8 +239,6 @@ static void nfs4_end_drain_session(struct nfs_client *clp)
}
}
#if defined(CONFIG_NFS_V4_1)
static int nfs4_drain_slot_tbl(struct nfs4_slot_table *tbl)
{
set_bit(NFS4_SLOT_TBL_DRAINING, &tbl->slot_tbl_state);
@ -270,6 +268,8 @@ static int nfs4_begin_drain_session(struct nfs_client *clp)
return nfs4_drain_slot_tbl(&ses->fc_slot_table);
}
#if defined(CONFIG_NFS_V4_1)
static int nfs41_setup_state_renewal(struct nfs_client *clp)
{
int status;
@ -1197,20 +1197,74 @@ void nfs4_schedule_lease_recovery(struct nfs_client *clp)
}
EXPORT_SYMBOL_GPL(nfs4_schedule_lease_recovery);
/**
* nfs4_schedule_migration_recovery - trigger migration recovery
*
* @server: FSID that is migrating
*
* Returns zero if recovery has started, otherwise a negative NFS4ERR
* value is returned.
*/
int nfs4_schedule_migration_recovery(const struct nfs_server *server)
{
struct nfs_client *clp = server->nfs_client;
if (server->fh_expire_type != NFS4_FH_PERSISTENT) {
pr_err("NFS: volatile file handles not supported (server %s)\n",
clp->cl_hostname);
return -NFS4ERR_IO;
}
if (test_bit(NFS_MIG_FAILED, &server->mig_status))
return -NFS4ERR_IO;
dprintk("%s: scheduling migration recovery for (%llx:%llx) on %s\n",
__func__,
(unsigned long long)server->fsid.major,
(unsigned long long)server->fsid.minor,
clp->cl_hostname);
set_bit(NFS_MIG_IN_TRANSITION,
&((struct nfs_server *)server)->mig_status);
set_bit(NFS4CLNT_MOVED, &clp->cl_state);
nfs4_schedule_state_manager(clp);
return 0;
}
EXPORT_SYMBOL_GPL(nfs4_schedule_migration_recovery);
/**
* nfs4_schedule_lease_moved_recovery - start lease-moved recovery
*
* @clp: server to check for moved leases
*
*/
void nfs4_schedule_lease_moved_recovery(struct nfs_client *clp)
{
dprintk("%s: scheduling lease-moved recovery for client ID %llx on %s\n",
__func__, clp->cl_clientid, clp->cl_hostname);
set_bit(NFS4CLNT_LEASE_MOVED, &clp->cl_state);
nfs4_schedule_state_manager(clp);
}
EXPORT_SYMBOL_GPL(nfs4_schedule_lease_moved_recovery);
int nfs4_wait_clnt_recover(struct nfs_client *clp)
{
int res;
might_sleep();
atomic_inc(&clp->cl_count);
res = wait_on_bit(&clp->cl_state, NFS4CLNT_MANAGER_RUNNING,
nfs_wait_bit_killable, TASK_KILLABLE);
if (res)
return res;
goto out;
if (clp->cl_cons_state < 0)
return clp->cl_cons_state;
return 0;
res = clp->cl_cons_state;
out:
nfs_put_client(clp);
return res;
}
int nfs4_client_recover_expired_lease(struct nfs_client *clp)
@ -1375,8 +1429,8 @@ static int nfs4_reclaim_locks(struct nfs4_state *state, const struct nfs4_state_
case -NFS4ERR_CONN_NOT_BOUND_TO_SESSION:
goto out;
default:
printk(KERN_ERR "NFS: %s: unhandled error %d. "
"Zeroing state\n", __func__, status);
printk(KERN_ERR "NFS: %s: unhandled error %d\n",
__func__, status);
case -ENOMEM:
case -NFS4ERR_DENIED:
case -NFS4ERR_RECLAIM_BAD:
@ -1422,7 +1476,7 @@ static int nfs4_reclaim_open_state(struct nfs4_state_owner *sp, const struct nfs
if (status >= 0) {
status = nfs4_reclaim_locks(state, ops);
if (status >= 0) {
if (test_bit(NFS_DELEGATED_STATE, &state->flags) != 0) {
if (!test_bit(NFS_DELEGATED_STATE, &state->flags)) {
spin_lock(&state->state_lock);
list_for_each_entry(lock, &state->lock_states, ls_locks) {
if (!test_bit(NFS_LOCK_INITIALIZED, &lock->ls_flags))
@ -1439,15 +1493,12 @@ static int nfs4_reclaim_open_state(struct nfs4_state_owner *sp, const struct nfs
}
switch (status) {
default:
printk(KERN_ERR "NFS: %s: unhandled error %d. "
"Zeroing state\n", __func__, status);
printk(KERN_ERR "NFS: %s: unhandled error %d\n",
__func__, status);
case -ENOENT:
case -ENOMEM:
case -ESTALE:
/*
* Open state on this file cannot be recovered
* All we can do is revert to using the zero stateid.
*/
/* Open state on this file cannot be recovered */
nfs4_state_mark_recovery_failed(state, status);
break;
case -EAGAIN:
@ -1628,7 +1679,6 @@ static int nfs4_recovery_handle_error(struct nfs_client *clp, int error)
nfs4_state_end_reclaim_reboot(clp);
break;
case -NFS4ERR_STALE_CLIENTID:
case -NFS4ERR_LEASE_MOVED:
set_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state);
nfs4_state_clear_reclaim_reboot(clp);
nfs4_state_start_reclaim_reboot(clp);
@ -1829,6 +1879,168 @@ static int nfs4_purge_lease(struct nfs_client *clp)
return 0;
}
/*
* Try remote migration of one FSID from a source server to a
* destination server. The source server provides a list of
* potential destinations.
*
* Returns zero or a negative NFS4ERR status code.
*/
static int nfs4_try_migration(struct nfs_server *server, struct rpc_cred *cred)
{
struct nfs_client *clp = server->nfs_client;
struct nfs4_fs_locations *locations = NULL;
struct inode *inode;
struct page *page;
int status, result;
dprintk("--> %s: FSID %llx:%llx on \"%s\"\n", __func__,
(unsigned long long)server->fsid.major,
(unsigned long long)server->fsid.minor,
clp->cl_hostname);
result = 0;
page = alloc_page(GFP_KERNEL);
locations = kmalloc(sizeof(struct nfs4_fs_locations), GFP_KERNEL);
if (page == NULL || locations == NULL) {
dprintk("<-- %s: no memory\n", __func__);
goto out;
}
inode = server->super->s_root->d_inode;
result = nfs4_proc_get_locations(inode, locations, page, cred);
if (result) {
dprintk("<-- %s: failed to retrieve fs_locations: %d\n",
__func__, result);
goto out;
}
result = -NFS4ERR_NXIO;
if (!(locations->fattr.valid & NFS_ATTR_FATTR_V4_LOCATIONS)) {
dprintk("<-- %s: No fs_locations data, migration skipped\n",
__func__);
goto out;
}
nfs4_begin_drain_session(clp);
status = nfs4_replace_transport(server, locations);
if (status != 0) {
dprintk("<-- %s: failed to replace transport: %d\n",
__func__, status);
goto out;
}
result = 0;
dprintk("<-- %s: migration succeeded\n", __func__);
out:
if (page != NULL)
__free_page(page);
kfree(locations);
if (result) {
pr_err("NFS: migration recovery failed (server %s)\n",
clp->cl_hostname);
set_bit(NFS_MIG_FAILED, &server->mig_status);
}
return result;
}
/*
* Returns zero or a negative NFS4ERR status code.
*/
static int nfs4_handle_migration(struct nfs_client *clp)
{
const struct nfs4_state_maintenance_ops *ops =
clp->cl_mvops->state_renewal_ops;
struct nfs_server *server;
struct rpc_cred *cred;
dprintk("%s: migration reported on \"%s\"\n", __func__,
clp->cl_hostname);
spin_lock(&clp->cl_lock);
cred = ops->get_state_renewal_cred_locked(clp);
spin_unlock(&clp->cl_lock);
if (cred == NULL)
return -NFS4ERR_NOENT;
clp->cl_mig_gen++;
restart:
rcu_read_lock();
list_for_each_entry_rcu(server, &clp->cl_superblocks, client_link) {
int status;
if (server->mig_gen == clp->cl_mig_gen)
continue;
server->mig_gen = clp->cl_mig_gen;
if (!test_and_clear_bit(NFS_MIG_IN_TRANSITION,
&server->mig_status))
continue;
rcu_read_unlock();
status = nfs4_try_migration(server, cred);
if (status < 0) {
put_rpccred(cred);
return status;
}
goto restart;
}
rcu_read_unlock();
put_rpccred(cred);
return 0;
}
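
The walk above must drop the RCU read lock before doing the blocking migration work, so it stamps each nfs_server with a per-client generation number (cl_mig_gen) and simply restarts the list walk afterwards; servers already stamped with the current generation are skipped. A hedged user-space sketch of the same restart pattern over a plain array, with invented names and no real locking:

#include <stdio.h>

struct srv { int gen; int needs_work; const char *name; };

static void handle_all(struct srv *list, int n, int gen)
{
restart:
    for (int i = 0; i < n; i++) {
        struct srv *s = &list[i];

        if (s->gen == gen)
            continue;                    /* already visited this pass */
        s->gen = gen;
        if (!s->needs_work)
            continue;
        /* the kernel drops rcu_read_lock() here and migrates the FSID */
        printf("recovering %s\n", s->name);
        goto restart;                    /* list may have changed meanwhile */
    }
}

int main(void)
{
    struct srv servers[] = { {0, 1, "fs1"}, {0, 0, "fs2"}, {0, 1, "fs3"} };

    handle_all(servers, 3, 1);
    return 0;
}
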
/*
* Test each nfs_server on the clp's cl_superblocks list to see
* if it's moved to another server. Stop when the server no longer
* returns NFS4ERR_LEASE_MOVED.
*/
static int nfs4_handle_lease_moved(struct nfs_client *clp)
{
const struct nfs4_state_maintenance_ops *ops =
clp->cl_mvops->state_renewal_ops;
struct nfs_server *server;
struct rpc_cred *cred;
dprintk("%s: lease moved reported on \"%s\"\n", __func__,
clp->cl_hostname);
spin_lock(&clp->cl_lock);
cred = ops->get_state_renewal_cred_locked(clp);
spin_unlock(&clp->cl_lock);
if (cred == NULL)
return -NFS4ERR_NOENT;
clp->cl_mig_gen++;
restart:
rcu_read_lock();
list_for_each_entry_rcu(server, &clp->cl_superblocks, client_link) {
struct inode *inode;
int status;
if (server->mig_gen == clp->cl_mig_gen)
continue;
server->mig_gen = clp->cl_mig_gen;
rcu_read_unlock();
inode = server->super->s_root->d_inode;
status = nfs4_proc_fsid_present(inode, cred);
if (status != -NFS4ERR_MOVED)
goto restart; /* wasn't this one */
if (nfs4_try_migration(server, cred) == -NFS4ERR_LEASE_MOVED)
goto restart; /* there are more */
goto out;
}
rcu_read_unlock();
out:
put_rpccred(cred);
return 0;
}
/**
* nfs4_discover_server_trunking - Detect server IP address trunking
*
@ -2017,9 +2229,10 @@ void nfs41_handle_sequence_flag_errors(struct nfs_client *clp, u32 flags)
nfs41_handle_server_reboot(clp);
if (flags & (SEQ4_STATUS_EXPIRED_ALL_STATE_REVOKED |
SEQ4_STATUS_EXPIRED_SOME_STATE_REVOKED |
SEQ4_STATUS_ADMIN_STATE_REVOKED |
SEQ4_STATUS_LEASE_MOVED))
SEQ4_STATUS_ADMIN_STATE_REVOKED))
nfs41_handle_state_revoked(clp);
if (flags & SEQ4_STATUS_LEASE_MOVED)
nfs4_schedule_lease_moved_recovery(clp);
if (flags & SEQ4_STATUS_RECALLABLE_STATE_REVOKED)
nfs41_handle_recallable_state_revoked(clp);
if (flags & SEQ4_STATUS_BACKCHANNEL_FAULT)
@ -2157,7 +2370,20 @@ static void nfs4_state_manager(struct nfs_client *clp)
status = nfs4_check_lease(clp);
if (status < 0)
goto out_error;
continue;
}
if (test_and_clear_bit(NFS4CLNT_MOVED, &clp->cl_state)) {
section = "migration";
status = nfs4_handle_migration(clp);
if (status < 0)
goto out_error;
}
if (test_and_clear_bit(NFS4CLNT_LEASE_MOVED, &clp->cl_state)) {
section = "lease moved";
status = nfs4_handle_lease_moved(clp);
if (status < 0)
goto out_error;
}
/* First recover reboot state... */


@ -261,9 +261,9 @@ struct dentry *nfs4_try_mount(int flags, const char *dev_name,
res = nfs_follow_remote_path(root_mnt, export_path);
dfprintk(MOUNT, "<-- nfs4_try_mount() = %ld%s\n",
IS_ERR(res) ? PTR_ERR(res) : 0,
IS_ERR(res) ? " [error]" : "");
dfprintk(MOUNT, "<-- nfs4_try_mount() = %d%s\n",
PTR_ERR_OR_ZERO(res),
IS_ERR(res) ? " [error]" : "");
return res;
}
@ -319,9 +319,9 @@ static struct dentry *nfs4_referral_mount(struct file_system_type *fs_type,
data->mnt_path = export_path;
res = nfs_follow_remote_path(root_mnt, export_path);
dprintk("<-- nfs4_referral_mount() = %ld%s\n",
IS_ERR(res) ? PTR_ERR(res) : 0,
IS_ERR(res) ? " [error]" : "");
dprintk("<-- nfs4_referral_mount() = %d%s\n",
PTR_ERR_OR_ZERO(res),
IS_ERR(res) ? " [error]" : "");
return res;
}
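
PTR_ERR_OR_ZERO() collapses the old IS_ERR()/PTR_ERR() ternary and returns an int, which is why the format specifier changes from %ld to %d. A simplified stand-alone model of the err.h convention (error pointers occupy the top 4095 addresses); this is an approximation for illustration, not the kernel header:

#include <stdio.h>

#define MAX_ERRNO 4095
#define IS_ERR(p)          ((unsigned long)(p) >= (unsigned long)-MAX_ERRNO)
#define PTR_ERR(p)         ((long)(p))
#define PTR_ERR_OR_ZERO(p) (IS_ERR(p) ? (int)PTR_ERR(p) : 0)

int main(void)
{
    int x = 0;
    void *ok  = &x;
    void *bad = (void *)(long)-2;        /* like an ERR_PTR(-ENOENT) */

    printf("ok  -> %d\n", PTR_ERR_OR_ZERO(ok));
    printf("bad -> %d\n", PTR_ERR_OR_ZERO(bad));
    return 0;
}
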


@ -105,12 +105,8 @@ static int nfs4_stat_to_errno(int);
#ifdef CONFIG_NFS_V4_SECURITY_LABEL
/* PI(4 bytes) + LFS(4 bytes) + 1(for null terminator?) + MAXLABELLEN */
#define nfs4_label_maxsz (4 + 4 + 1 + XDR_QUADLEN(NFS4_MAXLABELLEN))
#define encode_readdir_space 24
#define encode_readdir_bitmask_sz 3
#else
#define nfs4_label_maxsz 0
#define encode_readdir_space 20
#define encode_readdir_bitmask_sz 2
#endif
/* We support only one layout type per file system */
#define decode_mdsthreshold_maxsz (1 + 1 + nfs4_fattr_bitmap_maxsz + 1 + 8)
@ -595,11 +591,13 @@ static int nfs4_stat_to_errno(int);
#define NFS4_enc_getattr_sz (compound_encode_hdr_maxsz + \
encode_sequence_maxsz + \
encode_putfh_maxsz + \
encode_getattr_maxsz)
encode_getattr_maxsz + \
encode_renew_maxsz)
#define NFS4_dec_getattr_sz (compound_decode_hdr_maxsz + \
decode_sequence_maxsz + \
decode_putfh_maxsz + \
decode_getattr_maxsz)
decode_getattr_maxsz + \
decode_renew_maxsz)
#define NFS4_enc_lookup_sz (compound_encode_hdr_maxsz + \
encode_sequence_maxsz + \
encode_putfh_maxsz + \
@ -736,13 +734,15 @@ static int nfs4_stat_to_errno(int);
encode_sequence_maxsz + \
encode_putfh_maxsz + \
encode_lookup_maxsz + \
encode_fs_locations_maxsz)
encode_fs_locations_maxsz + \
encode_renew_maxsz)
#define NFS4_dec_fs_locations_sz \
(compound_decode_hdr_maxsz + \
decode_sequence_maxsz + \
decode_putfh_maxsz + \
decode_lookup_maxsz + \
decode_fs_locations_maxsz)
decode_fs_locations_maxsz + \
decode_renew_maxsz)
#define NFS4_enc_secinfo_sz (compound_encode_hdr_maxsz + \
encode_sequence_maxsz + \
encode_putfh_maxsz + \
@ -751,6 +751,18 @@ static int nfs4_stat_to_errno(int);
decode_sequence_maxsz + \
decode_putfh_maxsz + \
decode_secinfo_maxsz)
#define NFS4_enc_fsid_present_sz \
(compound_encode_hdr_maxsz + \
encode_sequence_maxsz + \
encode_putfh_maxsz + \
encode_getfh_maxsz + \
encode_renew_maxsz)
#define NFS4_dec_fsid_present_sz \
(compound_decode_hdr_maxsz + \
decode_sequence_maxsz + \
decode_putfh_maxsz + \
decode_getfh_maxsz + \
decode_renew_maxsz)
#if defined(CONFIG_NFS_V4_1)
#define NFS4_enc_bind_conn_to_session_sz \
(compound_encode_hdr_maxsz + \
@ -1565,6 +1577,8 @@ static void encode_readdir(struct xdr_stream *xdr, const struct nfs4_readdir_arg
};
uint32_t dircount = readdir->count >> 1;
__be32 *p, verf[2];
uint32_t attrlen = 0;
unsigned int i;
if (readdir->plus) {
attrs[0] |= FATTR4_WORD0_TYPE|FATTR4_WORD0_CHANGE|FATTR4_WORD0_SIZE|
@ -1573,26 +1587,27 @@ static void encode_readdir(struct xdr_stream *xdr, const struct nfs4_readdir_arg
FATTR4_WORD1_OWNER_GROUP|FATTR4_WORD1_RAWDEV|
FATTR4_WORD1_SPACE_USED|FATTR4_WORD1_TIME_ACCESS|
FATTR4_WORD1_TIME_METADATA|FATTR4_WORD1_TIME_MODIFY;
attrs[2] |= FATTR4_WORD2_SECURITY_LABEL;
dircount >>= 1;
}
/* Use mounted_on_fileid only if the server supports it */
if (!(readdir->bitmask[1] & FATTR4_WORD1_MOUNTED_ON_FILEID))
attrs[0] |= FATTR4_WORD0_FILEID;
for (i = 0; i < ARRAY_SIZE(attrs); i++) {
attrs[i] &= readdir->bitmask[i];
if (attrs[i] != 0)
attrlen = i+1;
}
encode_op_hdr(xdr, OP_READDIR, decode_readdir_maxsz, hdr);
encode_uint64(xdr, readdir->cookie);
encode_nfs4_verifier(xdr, &readdir->verifier);
p = reserve_space(xdr, encode_readdir_space);
p = reserve_space(xdr, 12 + (attrlen << 2));
*p++ = cpu_to_be32(dircount);
*p++ = cpu_to_be32(readdir->count);
*p++ = cpu_to_be32(encode_readdir_bitmask_sz);
*p++ = cpu_to_be32(attrs[0] & readdir->bitmask[0]);
*p = cpu_to_be32(attrs[1] & readdir->bitmask[1]);
if (encode_readdir_bitmask_sz > 2) {
if (hdr->minorversion > 1)
attrs[2] |= FATTR4_WORD2_SECURITY_LABEL;
p++, *p++ = cpu_to_be32(attrs[2] & readdir->bitmask[2]);
}
*p++ = cpu_to_be32(attrlen);
for (i = 0; i < attrlen; i++)
*p++ = cpu_to_be32(attrs[i]);
memcpy(verf, readdir->verifier.data, sizeof(verf));
dprintk("%s: cookie = %llu, verifier = %08x:%08x, bitmap = %08x:%08x:%08x\n",
@ -2687,11 +2702,20 @@ static void nfs4_xdr_enc_fs_locations(struct rpc_rqst *req,
encode_compound_hdr(xdr, req, &hdr);
encode_sequence(xdr, &args->seq_args, &hdr);
encode_putfh(xdr, args->dir_fh, &hdr);
encode_lookup(xdr, args->name, &hdr);
replen = hdr.replen; /* get the attribute into args->page */
encode_fs_locations(xdr, args->bitmask, &hdr);
if (args->migration) {
encode_putfh(xdr, args->fh, &hdr);
replen = hdr.replen;
encode_fs_locations(xdr, args->bitmask, &hdr);
if (args->renew)
encode_renew(xdr, args->clientid, &hdr);
} else {
encode_putfh(xdr, args->dir_fh, &hdr);
encode_lookup(xdr, args->name, &hdr);
replen = hdr.replen;
encode_fs_locations(xdr, args->bitmask, &hdr);
}
/* Set up reply kvec to capture returned fs_locations array. */
xdr_inline_pages(&req->rq_rcv_buf, replen << 2, &args->page,
0, PAGE_SIZE);
encode_nops(&hdr);
@ -2715,6 +2739,26 @@ static void nfs4_xdr_enc_secinfo(struct rpc_rqst *req,
encode_nops(&hdr);
}
/*
* Encode FSID_PRESENT request
*/
static void nfs4_xdr_enc_fsid_present(struct rpc_rqst *req,
struct xdr_stream *xdr,
struct nfs4_fsid_present_arg *args)
{
struct compound_hdr hdr = {
.minorversion = nfs4_xdr_minorversion(&args->seq_args),
};
encode_compound_hdr(xdr, req, &hdr);
encode_sequence(xdr, &args->seq_args, &hdr);
encode_putfh(xdr, args->fh, &hdr);
encode_getfh(xdr, &hdr);
if (args->renew)
encode_renew(xdr, args->clientid, &hdr);
encode_nops(&hdr);
}
#if defined(CONFIG_NFS_V4_1)
/*
* BIND_CONN_TO_SESSION request
@ -6824,13 +6868,26 @@ static int nfs4_xdr_dec_fs_locations(struct rpc_rqst *req,
status = decode_putfh(xdr);
if (status)
goto out;
status = decode_lookup(xdr);
if (status)
goto out;
xdr_enter_page(xdr, PAGE_SIZE);
status = decode_getfattr_generic(xdr, &res->fs_locations->fattr,
if (res->migration) {
xdr_enter_page(xdr, PAGE_SIZE);
status = decode_getfattr_generic(xdr,
&res->fs_locations->fattr,
NULL, res->fs_locations,
NULL, res->fs_locations->server);
if (status)
goto out;
if (res->renew)
status = decode_renew(xdr);
} else {
status = decode_lookup(xdr);
if (status)
goto out;
xdr_enter_page(xdr, PAGE_SIZE);
status = decode_getfattr_generic(xdr,
&res->fs_locations->fattr,
NULL, res->fs_locations,
NULL, res->fs_locations->server);
}
out:
return status;
}
@ -6859,6 +6916,34 @@ static int nfs4_xdr_dec_secinfo(struct rpc_rqst *rqstp,
return status;
}
/*
* Decode FSID_PRESENT response
*/
static int nfs4_xdr_dec_fsid_present(struct rpc_rqst *rqstp,
struct xdr_stream *xdr,
struct nfs4_fsid_present_res *res)
{
struct compound_hdr hdr;
int status;
status = decode_compound_hdr(xdr, &hdr);
if (status)
goto out;
status = decode_sequence(xdr, &res->seq_res, rqstp);
if (status)
goto out;
status = decode_putfh(xdr);
if (status)
goto out;
status = decode_getfh(xdr, res->fh);
if (status)
goto out;
if (res->renew)
status = decode_renew(xdr);
out:
return status;
}
#if defined(CONFIG_NFS_V4_1)
/*
* Decode BIND_CONN_TO_SESSION response
@ -7373,6 +7458,7 @@ struct rpc_procinfo nfs4_procedures[] = {
PROC(FS_LOCATIONS, enc_fs_locations, dec_fs_locations),
PROC(RELEASE_LOCKOWNER, enc_release_lockowner, dec_release_lockowner),
PROC(SECINFO, enc_secinfo, dec_secinfo),
PROC(FSID_PRESENT, enc_fsid_present, dec_fsid_present),
#if defined(CONFIG_NFS_V4_1)
PROC(EXCHANGE_ID, enc_exchange_id, dec_exchange_id),
PROC(CREATE_SESSION, enc_create_session, dec_create_session),


@ -497,7 +497,8 @@ static const char *nfs_pseudoflavour_to_name(rpc_authflavor_t flavour)
static const struct {
rpc_authflavor_t flavour;
const char *str;
} sec_flavours[] = {
} sec_flavours[NFS_AUTH_INFO_MAX_FLAVORS] = {
/* update NFS_AUTH_INFO_MAX_FLAVORS when this list changes! */
{ RPC_AUTH_NULL, "null" },
{ RPC_AUTH_UNIX, "sys" },
{ RPC_AUTH_GSS_KRB5, "krb5" },
@ -923,8 +924,7 @@ static struct nfs_parsed_mount_data *nfs_alloc_parsed_mount_data(void)
data->mount_server.port = NFS_UNSPEC_PORT;
data->nfs_server.port = NFS_UNSPEC_PORT;
data->nfs_server.protocol = XPRT_TRANSPORT_TCP;
data->auth_flavors[0] = RPC_AUTH_MAXFLAVOR;
data->auth_flavor_len = 0;
data->selected_flavor = RPC_AUTH_MAXFLAVOR;
data->minorversion = 0;
data->need_mount = true;
data->net = current->nsproxy->net_ns;
@ -1019,13 +1019,52 @@ static void nfs_set_mount_transport_protocol(struct nfs_parsed_mount_data *mnt)
}
}
static void nfs_set_auth_parsed_mount_data(struct nfs_parsed_mount_data *data,
rpc_authflavor_t pseudoflavor)
/*
* Add 'flavor' to 'auth_info' if not already present.
* Returns true if 'flavor' ends up in the list, false otherwise
*/
static bool nfs_auth_info_add(struct nfs_auth_info *auth_info,
rpc_authflavor_t flavor)
{
data->auth_flavors[0] = pseudoflavor;
data->auth_flavor_len = 1;
unsigned int i;
unsigned int max_flavor_len = (sizeof(auth_info->flavors) /
sizeof(auth_info->flavors[0]));
/* make sure this flavor isn't already in the list */
for (i = 0; i < auth_info->flavor_len; i++) {
if (flavor == auth_info->flavors[i])
return true;
}
if (auth_info->flavor_len + 1 >= max_flavor_len) {
dfprintk(MOUNT, "NFS: too many sec= flavors\n");
return false;
}
auth_info->flavors[auth_info->flavor_len++] = flavor;
return true;
}
/*
* Return true if 'match' is in auth_info or auth_info is empty.
* Return false otherwise.
*/
bool nfs_auth_info_match(const struct nfs_auth_info *auth_info,
rpc_authflavor_t match)
{
int i;
if (!auth_info->flavor_len)
return true;
for (i = 0; i < auth_info->flavor_len; i++) {
if (auth_info->flavors[i] == match)
return true;
}
return false;
}
EXPORT_SYMBOL_GPL(nfs_auth_info_match);
/*
* Parse the value of the 'sec=' option.
*/
@ -1034,49 +1073,55 @@ static int nfs_parse_security_flavors(char *value,
{
substring_t args[MAX_OPT_ARGS];
rpc_authflavor_t pseudoflavor;
char *p;
dfprintk(MOUNT, "NFS: parsing sec=%s option\n", value);
switch (match_token(value, nfs_secflavor_tokens, args)) {
case Opt_sec_none:
pseudoflavor = RPC_AUTH_NULL;
break;
case Opt_sec_sys:
pseudoflavor = RPC_AUTH_UNIX;
break;
case Opt_sec_krb5:
pseudoflavor = RPC_AUTH_GSS_KRB5;
break;
case Opt_sec_krb5i:
pseudoflavor = RPC_AUTH_GSS_KRB5I;
break;
case Opt_sec_krb5p:
pseudoflavor = RPC_AUTH_GSS_KRB5P;
break;
case Opt_sec_lkey:
pseudoflavor = RPC_AUTH_GSS_LKEY;
break;
case Opt_sec_lkeyi:
pseudoflavor = RPC_AUTH_GSS_LKEYI;
break;
case Opt_sec_lkeyp:
pseudoflavor = RPC_AUTH_GSS_LKEYP;
break;
case Opt_sec_spkm:
pseudoflavor = RPC_AUTH_GSS_SPKM;
break;
case Opt_sec_spkmi:
pseudoflavor = RPC_AUTH_GSS_SPKMI;
break;
case Opt_sec_spkmp:
pseudoflavor = RPC_AUTH_GSS_SPKMP;
break;
default:
return 0;
while ((p = strsep(&value, ":")) != NULL) {
switch (match_token(p, nfs_secflavor_tokens, args)) {
case Opt_sec_none:
pseudoflavor = RPC_AUTH_NULL;
break;
case Opt_sec_sys:
pseudoflavor = RPC_AUTH_UNIX;
break;
case Opt_sec_krb5:
pseudoflavor = RPC_AUTH_GSS_KRB5;
break;
case Opt_sec_krb5i:
pseudoflavor = RPC_AUTH_GSS_KRB5I;
break;
case Opt_sec_krb5p:
pseudoflavor = RPC_AUTH_GSS_KRB5P;
break;
case Opt_sec_lkey:
pseudoflavor = RPC_AUTH_GSS_LKEY;
break;
case Opt_sec_lkeyi:
pseudoflavor = RPC_AUTH_GSS_LKEYI;
break;
case Opt_sec_lkeyp:
pseudoflavor = RPC_AUTH_GSS_LKEYP;
break;
case Opt_sec_spkm:
pseudoflavor = RPC_AUTH_GSS_SPKM;
break;
case Opt_sec_spkmi:
pseudoflavor = RPC_AUTH_GSS_SPKMI;
break;
case Opt_sec_spkmp:
pseudoflavor = RPC_AUTH_GSS_SPKMP;
break;
default:
dfprintk(MOUNT,
"NFS: sec= option '%s' not recognized\n", p);
return 0;
}
if (!nfs_auth_info_add(&mnt->auth_info, pseudoflavor))
return 0;
}
mnt->flags |= NFS_MOUNT_SECFLAVOUR;
nfs_set_auth_parsed_mount_data(mnt, pseudoflavor);
return 1;
}
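
With this change a mount can pass several flavors at once, e.g. sec=krb5i:krb5p:sys, and the client falls back along that list via SECINFO. The strsep() loop is easy to model in user space; the name table below is abbreviated and the flavor numbers are shown only for illustration:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>

static int name_to_flavor(const char *name)
{
    if (strcmp(name, "sys") == 0)   return 1;       /* AUTH_UNIX */
    if (strcmp(name, "krb5") == 0)  return 390003;
    if (strcmp(name, "krb5i") == 0) return 390004;
    if (strcmp(name, "krb5p") == 0) return 390005;
    return -1;
}

int main(void)
{
    char value[] = "krb5i:krb5p:sys";                /* from -o sec=... */
    char *rest = value, *p;

    while ((p = strsep(&rest, ":")) != NULL) {
        int flavor = name_to_flavor(p);

        if (flavor < 0) {
            fprintf(stderr, "sec= option '%s' not recognized\n", p);
            return 1;
        }
        printf("adding flavor %d (%s)\n", flavor, p);
    }
    return 0;
}
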
@ -1623,12 +1668,14 @@ static int nfs_parse_mount_options(char *raw,
}
/*
* Ensure that the specified authtype in args->auth_flavors[0] is supported by
* the server. Returns 0 if it's ok, and -EACCES if not.
* Ensure that a specified authtype in args->auth_info is supported by
* the server. Returns 0 and sets args->selected_flavor if it's ok, and
* -EACCES if not.
*/
static int nfs_verify_authflavor(struct nfs_parsed_mount_data *args,
static int nfs_verify_authflavors(struct nfs_parsed_mount_data *args,
rpc_authflavor_t *server_authlist, unsigned int count)
{
rpc_authflavor_t flavor = RPC_AUTH_MAXFLAVOR;
unsigned int i;
/*
@ -1640,17 +1687,20 @@ static int nfs_verify_authflavor(struct nfs_parsed_mount_data *args,
* can be used.
*/
for (i = 0; i < count; i++) {
if (args->auth_flavors[0] == server_authlist[i] ||
server_authlist[i] == RPC_AUTH_NULL)
flavor = server_authlist[i];
if (nfs_auth_info_match(&args->auth_info, flavor) ||
flavor == RPC_AUTH_NULL)
goto out;
}
dfprintk(MOUNT, "NFS: auth flavor %u not supported by server\n",
args->auth_flavors[0]);
dfprintk(MOUNT,
"NFS: specified auth flavors not supported by server\n");
return -EACCES;
out:
dfprintk(MOUNT, "NFS: using auth flavor %u\n", args->auth_flavors[0]);
args->selected_flavor = flavor;
dfprintk(MOUNT, "NFS: using auth flavor %u\n", args->selected_flavor);
return 0;
}
@ -1738,9 +1788,10 @@ static struct nfs_server *nfs_try_mount_request(struct nfs_mount_info *mount_inf
* Was a sec= authflavor specified in the options? First, verify
* whether the server supports it, and then just try to use it if so.
*/
if (args->auth_flavor_len > 0) {
status = nfs_verify_authflavor(args, authlist, authlist_len);
dfprintk(MOUNT, "NFS: using auth flavor %u\n", args->auth_flavors[0]);
if (args->auth_info.flavor_len > 0) {
status = nfs_verify_authflavors(args, authlist, authlist_len);
dfprintk(MOUNT, "NFS: using auth flavor %u\n",
args->selected_flavor);
if (status)
return ERR_PTR(status);
return nfs_mod->rpc_ops->create_server(mount_info, nfs_mod);
@ -1769,7 +1820,7 @@ static struct nfs_server *nfs_try_mount_request(struct nfs_mount_info *mount_inf
/* Fallthrough */
}
dfprintk(MOUNT, "NFS: attempting to use auth flavor %u\n", flavor);
nfs_set_auth_parsed_mount_data(args, flavor);
args->selected_flavor = flavor;
server = nfs_mod->rpc_ops->create_server(mount_info, nfs_mod);
if (!IS_ERR(server))
return server;
@ -1785,7 +1836,7 @@ static struct nfs_server *nfs_try_mount_request(struct nfs_mount_info *mount_inf
/* Last chance! Try AUTH_UNIX */
dfprintk(MOUNT, "NFS: attempting to use auth flavor %u\n", RPC_AUTH_UNIX);
nfs_set_auth_parsed_mount_data(args, RPC_AUTH_UNIX);
args->selected_flavor = RPC_AUTH_UNIX;
return nfs_mod->rpc_ops->create_server(mount_info, nfs_mod);
}
@ -1972,9 +2023,9 @@ static int nfs23_validate_mount_data(void *options,
args->bsize = data->bsize;
if (data->flags & NFS_MOUNT_SECFLAVOUR)
nfs_set_auth_parsed_mount_data(args, data->pseudoflavor);
args->selected_flavor = data->pseudoflavor;
else
nfs_set_auth_parsed_mount_data(args, RPC_AUTH_UNIX);
args->selected_flavor = RPC_AUTH_UNIX;
if (!args->nfs_server.hostname)
goto out_nomem;
@ -2108,9 +2159,6 @@ static int nfs_validate_text_mount_data(void *options,
nfs_set_port(sap, &args->nfs_server.port, port);
if (args->auth_flavor_len > 1)
goto out_bad_auth;
return nfs_parse_devname(dev_name,
&args->nfs_server.hostname,
max_namelen,
@ -2130,10 +2178,6 @@ static int nfs_validate_text_mount_data(void *options,
out_no_address:
dfprintk(MOUNT, "NFS: mount program didn't pass remote address\n");
return -EINVAL;
out_bad_auth:
dfprintk(MOUNT, "NFS: Too many RPC auth flavours specified\n");
return -EINVAL;
}
static int
@ -2143,8 +2187,10 @@ nfs_compare_remount_data(struct nfs_server *nfss,
if (data->flags != nfss->flags ||
data->rsize != nfss->rsize ||
data->wsize != nfss->wsize ||
data->version != nfss->nfs_client->rpc_ops->version ||
data->minorversion != nfss->nfs_client->cl_minorversion ||
data->retrans != nfss->client->cl_timeout->to_retries ||
data->auth_flavors[0] != nfss->client->cl_auth->au_flavor ||
data->selected_flavor != nfss->client->cl_auth->au_flavor ||
data->acregmin != nfss->acregmin / HZ ||
data->acregmax != nfss->acregmax / HZ ||
data->acdirmin != nfss->acdirmin / HZ ||
@ -2189,7 +2235,8 @@ nfs_remount(struct super_block *sb, int *flags, char *raw_data)
data->rsize = nfss->rsize;
data->wsize = nfss->wsize;
data->retrans = nfss->client->cl_timeout->to_retries;
nfs_set_auth_parsed_mount_data(data, nfss->client->cl_auth->au_flavor);
data->selected_flavor = nfss->client->cl_auth->au_flavor;
data->auth_info = nfss->auth_info;
data->acregmin = nfss->acregmin / HZ;
data->acregmax = nfss->acregmax / HZ;
data->acdirmin = nfss->acdirmin / HZ;
@ -2197,12 +2244,14 @@ nfs_remount(struct super_block *sb, int *flags, char *raw_data)
data->timeo = 10U * nfss->client->cl_timeout->to_initval / HZ;
data->nfs_server.port = nfss->port;
data->nfs_server.addrlen = nfss->nfs_client->cl_addrlen;
data->version = nfsvers;
data->minorversion = nfss->nfs_client->cl_minorversion;
memcpy(&data->nfs_server.address, &nfss->nfs_client->cl_addr,
data->nfs_server.addrlen);
/* overwrite those values with any that were specified */
error = nfs_parse_mount_options((char *)options, data);
if (error < 0)
error = -EINVAL;
if (!nfs_parse_mount_options((char *)options, data))
goto out;
/*
@ -2332,7 +2381,7 @@ static int nfs_compare_mount_options(const struct super_block *s, const struct n
goto Ebusy;
if (a->acdirmax != b->acdirmax)
goto Ebusy;
if (b->flags & NFS_MOUNT_SECFLAVOUR &&
if (b->auth_info.flavor_len > 0 &&
clnt_a->cl_auth->au_flavor != clnt_b->cl_auth->au_flavor)
goto Ebusy;
return 1;
@ -2530,6 +2579,7 @@ struct dentry *nfs_fs_mount_common(struct nfs_server *server,
mntroot = ERR_PTR(error);
goto error_splat_bdi;
}
server->super = s;
}
if (!s->s_root) {
@ -2713,9 +2763,9 @@ static int nfs4_validate_mount_data(void *options,
data->auth_flavours,
sizeof(pseudoflavor)))
return -EFAULT;
nfs_set_auth_parsed_mount_data(args, pseudoflavor);
args->selected_flavor = pseudoflavor;
} else
nfs_set_auth_parsed_mount_data(args, RPC_AUTH_UNIX);
args->selected_flavor = RPC_AUTH_UNIX;
c = strndup_user(data->hostname.data, NFS4_MAXNAMLEN);
if (IS_ERR(c))


@ -493,7 +493,7 @@ nfs_sillyrename(struct inode *dir, struct dentry *dentry)
unsigned long long fileid;
struct dentry *sdentry;
struct rpc_task *task;
int error = -EIO;
int error = -EBUSY;
dfprintk(VFS, "NFS: silly-rename(%s/%s, ct=%d)\n",
dentry->d_parent->d_name.name, dentry->d_name.name,
@ -503,7 +503,6 @@ nfs_sillyrename(struct inode *dir, struct dentry *dentry)
/*
* We don't allow a dentry to be silly-renamed twice.
*/
error = -EBUSY;
if (dentry->d_flags & DCACHE_NFSFS_RENAMED)
goto out;


@ -2292,6 +2292,11 @@ static inline void allow_write_access(struct file *file)
if (file)
atomic_inc(&file_inode(file)->i_writecount);
}
static inline bool inode_is_open_for_write(const struct inode *inode)
{
return atomic_read(&inode->i_writecount) > 0;
}
#ifdef CONFIG_IMA
static inline void i_readcount_dec(struct inode *inode)
{


@ -308,36 +308,6 @@ struct fscache_cache_ops {
void (*dissociate_pages)(struct fscache_cache *cache);
};
/*
* data file or index object cookie
* - a file will only appear in one cache
* - a request to cache a file may or may not be honoured, subject to
* constraints such as disk space
* - indices are created on disk just-in-time
*/
struct fscache_cookie {
atomic_t usage; /* number of users of this cookie */
atomic_t n_children; /* number of children of this cookie */
atomic_t n_active; /* number of active users of netfs ptrs */
spinlock_t lock;
spinlock_t stores_lock; /* lock on page store tree */
struct hlist_head backing_objects; /* object(s) backing this file/index */
const struct fscache_cookie_def *def; /* definition */
struct fscache_cookie *parent; /* parent of this entry */
void *netfs_data; /* back pointer to netfs */
struct radix_tree_root stores; /* pages to be stored on this cookie */
#define FSCACHE_COOKIE_PENDING_TAG 0 /* pages tag: pending write to cache */
#define FSCACHE_COOKIE_STORING_TAG 1 /* pages tag: writing to cache */
unsigned long flags;
#define FSCACHE_COOKIE_LOOKING_UP 0 /* T if non-index cookie being looked up still */
#define FSCACHE_COOKIE_NO_DATA_YET 1 /* T if new object with no cached data yet */
#define FSCACHE_COOKIE_UNAVAILABLE 2 /* T if cookie is unavailable (error, etc) */
#define FSCACHE_COOKIE_INVALIDATING 3 /* T if cookie is being invalidated */
#define FSCACHE_COOKIE_RELINQUISHED 4 /* T if cookie has been relinquished */
#define FSCACHE_COOKIE_RETIRED 5 /* T if cookie was retired */
};
extern struct fscache_cookie fscache_fsdef_index;
/*
@ -400,6 +370,7 @@ struct fscache_object {
#define FSCACHE_OBJECT_IS_LIVE 3 /* T if object is not withdrawn or relinquished */
#define FSCACHE_OBJECT_IS_LOOKED_UP 4 /* T if object has been looked up */
#define FSCACHE_OBJECT_IS_AVAILABLE 5 /* T if object has become active */
#define FSCACHE_OBJECT_RETIRED 6 /* T if object was retired on relinquishment */
struct list_head cache_link; /* link in cache->object_list */
struct hlist_node cookie_link; /* link in cookie->backing_objects */
@ -511,6 +482,11 @@ static inline void fscache_end_io(struct fscache_retrieval *op,
op->end_io_func(page, op->context, error);
}
static inline void __fscache_use_cookie(struct fscache_cookie *cookie)
{
atomic_inc(&cookie->n_active);
}
/**
* fscache_use_cookie - Request usage of cookie attached to an object
* @object: Object description
@ -524,6 +500,16 @@ static inline bool fscache_use_cookie(struct fscache_object *object)
return atomic_inc_not_zero(&cookie->n_active) != 0;
}
static inline bool __fscache_unuse_cookie(struct fscache_cookie *cookie)
{
return atomic_dec_and_test(&cookie->n_active);
}
static inline void __fscache_wake_unused_cookie(struct fscache_cookie *cookie)
{
wake_up_atomic_t(&cookie->n_active);
}
/**
* fscache_unuse_cookie - Cease usage of cookie attached to an object
* @object: Object description
@ -534,8 +520,8 @@ static inline bool fscache_use_cookie(struct fscache_object *object)
static inline void fscache_unuse_cookie(struct fscache_object *object)
{
struct fscache_cookie *cookie = object->cookie;
if (atomic_dec_and_test(&cookie->n_active))
wake_up_atomic_t(&cookie->n_active);
if (__fscache_unuse_cookie(cookie))
__fscache_wake_unused_cookie(cookie);
}
/*


@ -166,6 +166,42 @@ struct fscache_netfs {
struct list_head link; /* internal link */
};
/*
* data file or index object cookie
* - a file will only appear in one cache
* - a request to cache a file may or may not be honoured, subject to
* constraints such as disk space
* - indices are created on disk just-in-time
*/
struct fscache_cookie {
atomic_t usage; /* number of users of this cookie */
atomic_t n_children; /* number of children of this cookie */
atomic_t n_active; /* number of active users of netfs ptrs */
spinlock_t lock;
spinlock_t stores_lock; /* lock on page store tree */
struct hlist_head backing_objects; /* object(s) backing this file/index */
const struct fscache_cookie_def *def; /* definition */
struct fscache_cookie *parent; /* parent of this entry */
void *netfs_data; /* back pointer to netfs */
struct radix_tree_root stores; /* pages to be stored on this cookie */
#define FSCACHE_COOKIE_PENDING_TAG 0 /* pages tag: pending write to cache */
#define FSCACHE_COOKIE_STORING_TAG 1 /* pages tag: writing to cache */
unsigned long flags;
#define FSCACHE_COOKIE_LOOKING_UP 0 /* T if non-index cookie being looked up still */
#define FSCACHE_COOKIE_NO_DATA_YET 1 /* T if new object with no cached data yet */
#define FSCACHE_COOKIE_UNAVAILABLE 2 /* T if cookie is unavailable (error, etc) */
#define FSCACHE_COOKIE_INVALIDATING 3 /* T if cookie is being invalidated */
#define FSCACHE_COOKIE_RELINQUISHED 4 /* T if cookie has been relinquished */
#define FSCACHE_COOKIE_ENABLED 5 /* T if cookie is enabled */
#define FSCACHE_COOKIE_ENABLEMENT_LOCK 6 /* T if cookie is being en/disabled */
};
static inline bool fscache_cookie_enabled(struct fscache_cookie *cookie)
{
return test_bit(FSCACHE_COOKIE_ENABLED, &cookie->flags);
}
/*
* slow-path functions for when there is actually caching available, and the
* netfs does actually have a valid token
@ -181,8 +217,8 @@ extern void __fscache_release_cache_tag(struct fscache_cache_tag *);
extern struct fscache_cookie *__fscache_acquire_cookie(
struct fscache_cookie *,
const struct fscache_cookie_def *,
void *);
extern void __fscache_relinquish_cookie(struct fscache_cookie *, int);
void *, bool);
extern void __fscache_relinquish_cookie(struct fscache_cookie *, bool);
extern int __fscache_check_consistency(struct fscache_cookie *);
extern void __fscache_update_cookie(struct fscache_cookie *);
extern int __fscache_attr_changed(struct fscache_cookie *);
@ -211,6 +247,9 @@ extern void __fscache_uncache_all_inode_pages(struct fscache_cookie *,
struct inode *);
extern void __fscache_readpages_cancel(struct fscache_cookie *cookie,
struct list_head *pages);
extern void __fscache_disable_cookie(struct fscache_cookie *, bool);
extern void __fscache_enable_cookie(struct fscache_cookie *,
bool (*)(void *), void *);
/**
* fscache_register_netfs - Register a filesystem as desiring caching services
@ -289,6 +328,7 @@ void fscache_release_cache_tag(struct fscache_cache_tag *tag)
* @def: A description of the cache object, including callback operations
* @netfs_data: An arbitrary piece of data to be kept in the cookie to
* represent the cache object to the netfs
* @enable: Whether or not to enable a data cookie immediately
*
* This function is used to inform FS-Cache about part of an index hierarchy
* that can be used to locate files. This is done by requesting a cookie for
@ -301,10 +341,12 @@ static inline
struct fscache_cookie *fscache_acquire_cookie(
struct fscache_cookie *parent,
const struct fscache_cookie_def *def,
void *netfs_data)
void *netfs_data,
bool enable)
{
if (fscache_cookie_valid(parent))
return __fscache_acquire_cookie(parent, def, netfs_data);
if (fscache_cookie_valid(parent) && fscache_cookie_enabled(parent))
return __fscache_acquire_cookie(parent, def, netfs_data,
enable);
else
return NULL;
}
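The new @enable argument lets a netfs bring an index cookie up immediately while acquiring a per-file data cookie "cold", to be enabled later at open time. A minimal sketch, assuming nothing beyond the prototype shown above; example_acquire_file_cookie and its parameters are placeholders, not names from this series:

#include <linux/fscache.h>

/* Sketch only: acquire a per-file data cookie in the disabled state so
 * that it accepts no reads or writes until fscache_enable_cookie() is
 * called later (for example at open time). */
static struct fscache_cookie *
example_acquire_file_cookie(struct fscache_cookie *parent,
                            const struct fscache_cookie_def *file_def,
                            void *netfs_data)
{
        /* enable == false: cookie exists but stays inert for now */
        return fscache_acquire_cookie(parent, file_def, netfs_data, false);
}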
@ -322,7 +364,7 @@ struct fscache_cookie *fscache_acquire_cookie(
* description.
*/
static inline
void fscache_relinquish_cookie(struct fscache_cookie *cookie, int retire)
void fscache_relinquish_cookie(struct fscache_cookie *cookie, bool retire)
{
if (fscache_cookie_valid(cookie))
__fscache_relinquish_cookie(cookie, retire);
@ -341,7 +383,7 @@ void fscache_relinquish_cookie(struct fscache_cookie *cookie, int retire)
static inline
int fscache_check_consistency(struct fscache_cookie *cookie)
{
if (fscache_cookie_valid(cookie))
if (fscache_cookie_valid(cookie) && fscache_cookie_enabled(cookie))
return __fscache_check_consistency(cookie);
else
return 0;
@ -360,7 +402,7 @@ int fscache_check_consistency(struct fscache_cookie *cookie)
static inline
void fscache_update_cookie(struct fscache_cookie *cookie)
{
if (fscache_cookie_valid(cookie))
if (fscache_cookie_valid(cookie) && fscache_cookie_enabled(cookie))
__fscache_update_cookie(cookie);
}
@ -407,7 +449,7 @@ void fscache_unpin_cookie(struct fscache_cookie *cookie)
static inline
int fscache_attr_changed(struct fscache_cookie *cookie)
{
if (fscache_cookie_valid(cookie))
if (fscache_cookie_valid(cookie) && fscache_cookie_enabled(cookie))
return __fscache_attr_changed(cookie);
else
return -ENOBUFS;
@ -429,7 +471,7 @@ int fscache_attr_changed(struct fscache_cookie *cookie)
static inline
void fscache_invalidate(struct fscache_cookie *cookie)
{
if (fscache_cookie_valid(cookie))
if (fscache_cookie_valid(cookie) && fscache_cookie_enabled(cookie))
__fscache_invalidate(cookie);
}
@ -503,7 +545,7 @@ int fscache_read_or_alloc_page(struct fscache_cookie *cookie,
void *context,
gfp_t gfp)
{
if (fscache_cookie_valid(cookie))
if (fscache_cookie_valid(cookie) && fscache_cookie_enabled(cookie))
return __fscache_read_or_alloc_page(cookie, page, end_io_func,
context, gfp);
else
@ -554,7 +596,7 @@ int fscache_read_or_alloc_pages(struct fscache_cookie *cookie,
void *context,
gfp_t gfp)
{
if (fscache_cookie_valid(cookie))
if (fscache_cookie_valid(cookie) && fscache_cookie_enabled(cookie))
return __fscache_read_or_alloc_pages(cookie, mapping, pages,
nr_pages, end_io_func,
context, gfp);
@ -585,7 +627,7 @@ int fscache_alloc_page(struct fscache_cookie *cookie,
struct page *page,
gfp_t gfp)
{
if (fscache_cookie_valid(cookie))
if (fscache_cookie_valid(cookie) && fscache_cookie_enabled(cookie))
return __fscache_alloc_page(cookie, page, gfp);
else
return -ENOBUFS;
@ -634,7 +676,7 @@ int fscache_write_page(struct fscache_cookie *cookie,
struct page *page,
gfp_t gfp)
{
if (fscache_cookie_valid(cookie))
if (fscache_cookie_valid(cookie) && fscache_cookie_enabled(cookie))
return __fscache_write_page(cookie, page, gfp);
else
return -ENOBUFS;
@ -744,4 +786,47 @@ void fscache_uncache_all_inode_pages(struct fscache_cookie *cookie,
__fscache_uncache_all_inode_pages(cookie, inode);
}
/**
* fscache_disable_cookie - Disable a cookie
* @cookie: The cookie representing the cache object
* @invalidate: Invalidate the backing object
*
* Disable a cookie from accepting further alloc, read, write, invalidate,
* update or acquire operations. Outstanding operations can still be waited
* upon and pages can still be uncached and the cookie relinquished.
*
* This will not return until all outstanding operations have completed.
*
* If @invalidate is set, then the backing object will be invalidated and
* detached, otherwise it will just be detached.
*/
static inline
void fscache_disable_cookie(struct fscache_cookie *cookie, bool invalidate)
{
if (fscache_cookie_valid(cookie) && fscache_cookie_enabled(cookie))
__fscache_disable_cookie(cookie, invalidate);
}
/**
* fscache_enable_cookie - Reenable a cookie
* @cookie: The cookie representing the cache object
* @can_enable: A function to permit enablement once lock is held
* @data: Data for can_enable()
*
* Reenable a previously disabled cookie, allowing it to accept further alloc,
* read, write, invalidate, update or acquire operations. An attempt will be
* made to immediately reattach the cookie to a backing object.
*
* The can_enable() function is called (if not NULL) once the enablement lock
* is held to rule on whether enablement is still permitted to go ahead.
*/
static inline
void fscache_enable_cookie(struct fscache_cookie *cookie,
bool (*can_enable)(void *data),
void *data)
{
if (fscache_cookie_valid(cookie) && !fscache_cookie_enabled(cookie))
__fscache_enable_cookie(cookie, can_enable, data);
}
#endif /* _LINUX_FSCACHE_H */
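Taken together with the inode_is_open_for_write() helper added to linux/fs.h earlier in this series, the enable/disable pair gives a netfs a way to keep caching off while a file has writers. A hedged sketch of that pattern; the example_* names are illustrative only, although the NFS client's fscache glue in this series does something along these lines:

#include <linux/fs.h>
#include <linux/fscache.h>

/* can_enable() callback: only permit enablement if nobody currently
 * holds the file open for writing.  Invoked with the cookie's
 * enablement lock held, as described above. */
static bool example_can_enable(void *data)
{
        const struct inode *inode = data;

        return !inode_is_open_for_write(inode);
}

/* Called at open time: a writer forces the cache object off (and
 * invalidates it), a reader tries to (re)enable caching. */
static void example_fscache_open_file(struct inode *inode,
                                      struct fscache_cookie *cookie,
                                      bool open_for_write)
{
        if (open_for_write)
                fscache_disable_cookie(cookie, true);   /* invalidate too */
        else
                fscache_enable_cookie(cookie, example_can_enable, inode);
}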


@ -395,7 +395,9 @@ enum lock_type4 {
#define FATTR4_WORD1_FS_LAYOUT_TYPES (1UL << 30)
#define FATTR4_WORD2_LAYOUT_BLKSIZE (1UL << 1)
#define FATTR4_WORD2_MDSTHRESHOLD (1UL << 4)
#define FATTR4_WORD2_SECURITY_LABEL (1UL << 17)
#define FATTR4_WORD2_SECURITY_LABEL (1UL << 16)
#define FATTR4_WORD2_CHANGE_SECURITY_LABEL \
(1UL << 17)
/* MDS threshold bitmap bits */
#define THRESHOLD_RD (1UL << 0)
@ -460,6 +462,7 @@ enum {
NFSPROC4_CLNT_FS_LOCATIONS,
NFSPROC4_CLNT_RELEASE_LOCKOWNER,
NFSPROC4_CLNT_SECINFO,
NFSPROC4_CLNT_FSID_PRESENT,
/* nfs41 */
NFSPROC4_CLNT_EXCHANGE_ID,


@ -269,9 +269,13 @@ static inline int NFS_STALE(const struct inode *inode)
return test_bit(NFS_INO_STALE, &NFS_I(inode)->flags);
}
static inline int NFS_FSCACHE(const struct inode *inode)
static inline struct fscache_cookie *nfs_i_fscache(struct inode *inode)
{
return test_bit(NFS_INO_FSCACHE, &NFS_I(inode)->flags);
#ifdef CONFIG_NFS_FSCACHE
return NFS_I(inode)->fscache;
#else
return NULL;
#endif
}
static inline __u64 NFS_FILEID(const struct inode *inode)


@ -41,6 +41,7 @@ struct nfs_client {
#define NFS_CS_DISCRTRY 1 /* - disconnect on RPC retry */
#define NFS_CS_MIGRATION 2 /* - transparent state migr */
#define NFS_CS_INFINITE_SLOTS 3 /* - don't limit TCP slots */
#define NFS_CS_NO_RETRANS_TIMEOUT 4 /* - Disable retransmit timeouts */
struct sockaddr_storage cl_addr; /* server identifier */
size_t cl_addrlen;
char * cl_hostname; /* hostname of server */
@ -78,6 +79,7 @@ struct nfs_client {
char cl_ipaddr[48];
u32 cl_cb_ident; /* v4.0 callback identifier */
const struct nfs4_minor_version_ops *cl_mvops;
unsigned long cl_mig_gen;
/* NFSv4.0 transport blocking */
struct nfs4_slot_table *cl_slot_tbl;
@ -147,7 +149,9 @@ struct nfs_server {
__u64 maxfilesize; /* maximum file size */
struct timespec time_delta; /* smallest time granularity */
unsigned long mount_time; /* when this fs was mounted */
struct super_block *super; /* VFS super block */
dev_t s_dev; /* superblock dev numbers */
struct nfs_auth_info auth_info; /* parsed auth flavors */
#ifdef CONFIG_NFS_FSCACHE
struct nfs_fscache_key *fscache_key; /* unique key for superblock */
@ -187,6 +191,12 @@ struct nfs_server {
struct list_head state_owners_lru;
struct list_head layouts;
struct list_head delegations;
unsigned long mig_gen;
unsigned long mig_status;
#define NFS_MIG_IN_TRANSITION (1)
#define NFS_MIG_FAILED (2)
void (*destroy)(struct nfs_server *);
atomic_t active; /* Keep trace of any activity to this server */


@ -591,6 +591,13 @@ struct nfs_renameres {
struct nfs_fattr *new_fattr;
};
/* parsed sec= options */
#define NFS_AUTH_INFO_MAX_FLAVORS 12 /* see fs/nfs/super.c */
struct nfs_auth_info {
unsigned int flavor_len;
rpc_authflavor_t flavors[NFS_AUTH_INFO_MAX_FLAVORS];
};
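With "sec=" now accepting a list (e.g. sec=krb5:krb5i:krb5p fills flavors[] in order with flavor_len == 3), the mount path needs a membership test against the parsed flavours rather than a single auth_flavors[0] comparison. A sketch of that test; example_auth_info_match is a stand-in name, and the real helper in fs/nfs/super.c may differ in detail:

#include <linux/nfs_xdr.h>

/* Does the flavour on offer appear in the parsed sec= list?  An empty
 * list means the user expressed no preference, so accept anything. */
static bool example_auth_info_match(const struct nfs_auth_info *auth_info,
                                    rpc_authflavor_t flavor)
{
        unsigned int i;

        if (auth_info->flavor_len == 0)
                return true;
        for (i = 0; i < auth_info->flavor_len; i++)
                if (auth_info->flavors[i] == flavor)
                        return true;
        return false;
}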
/*
* Argument struct for decode_entry function
*/
@ -1053,14 +1060,18 @@ struct nfs4_fs_locations {
struct nfs4_fs_locations_arg {
struct nfs4_sequence_args seq_args;
const struct nfs_fh *dir_fh;
const struct nfs_fh *fh;
const struct qstr *name;
struct page *page;
const u32 *bitmask;
clientid4 clientid;
unsigned char migration:1, renew:1;
};
struct nfs4_fs_locations_res {
struct nfs4_sequence_res seq_res;
struct nfs4_fs_locations *fs_locations;
unsigned char migration:1, renew:1;
};
struct nfs4_secinfo4 {
@ -1084,6 +1095,19 @@ struct nfs4_secinfo_res {
struct nfs4_secinfo_flavors *flavors;
};
struct nfs4_fsid_present_arg {
struct nfs4_sequence_args seq_args;
const struct nfs_fh *fh;
clientid4 clientid;
unsigned char renew:1;
};
struct nfs4_fsid_present_res {
struct nfs4_sequence_res seq_res;
struct nfs_fh *fh;
unsigned char renew:1;
};
#endif /* CONFIG_NFS_V4 */
struct nfstime4 {


@ -49,6 +49,7 @@ struct rpc_clnt {
unsigned int cl_softrtry : 1,/* soft timeouts */
cl_discrtry : 1,/* disconnect before retry */
cl_noretranstimeo: 1,/* No retransmit timeouts */
cl_autobind : 1,/* use getport() */
cl_chatty : 1;/* be verbose */
@ -126,6 +127,7 @@ struct rpc_create_args {
#define RPC_CLNT_CREATE_QUIET (1UL << 6)
#define RPC_CLNT_CREATE_INFINITE_SLOTS (1UL << 7)
#define RPC_CLNT_CREATE_NO_IDLE_TIMEOUT (1UL << 8)
#define RPC_CLNT_CREATE_NO_RETRANS_TIMEOUT (1UL << 9)
struct rpc_clnt *rpc_create(struct rpc_create_args *args);
struct rpc_clnt *rpc_bind_new_program(struct rpc_clnt *,
@ -134,6 +136,10 @@ void rpc_task_reset_client(struct rpc_task *task, struct rpc_clnt *clnt);
struct rpc_clnt *rpc_clone_client(struct rpc_clnt *);
struct rpc_clnt *rpc_clone_client_set_auth(struct rpc_clnt *,
rpc_authflavor_t);
int rpc_switch_client_transport(struct rpc_clnt *,
struct xprt_create *,
const struct rpc_timeout *);
void rpc_shutdown_client(struct rpc_clnt *);
void rpc_release_client(struct rpc_clnt *);
void rpc_task_release_client(struct rpc_task *);


@ -122,6 +122,7 @@ struct rpc_task_setup {
#define RPC_TASK_SENT 0x0800 /* message was sent */
#define RPC_TASK_TIMEOUT 0x1000 /* fail with ETIMEDOUT on timeout */
#define RPC_TASK_NOCONNECT 0x2000 /* return ENOTCONN if not connected */
#define RPC_TASK_NO_RETRANS_TIMEOUT 0x4000 /* wait forever for a reply */
#define RPC_IS_ASYNC(t) ((t)->tk_flags & RPC_TASK_ASYNC)
#define RPC_IS_SWAPPER(t) ((t)->tk_flags & RPC_TASK_SWAPPER)


@ -288,7 +288,7 @@ int xprt_reserve_xprt(struct rpc_xprt *xprt, struct rpc_task *task);
int xprt_reserve_xprt_cong(struct rpc_xprt *xprt, struct rpc_task *task);
void xprt_alloc_slot(struct rpc_xprt *xprt, struct rpc_task *task);
void xprt_lock_and_alloc_slot(struct rpc_xprt *xprt, struct rpc_task *task);
int xprt_prepare_transmit(struct rpc_task *task);
bool xprt_prepare_transmit(struct rpc_task *task);
void xprt_transmit(struct rpc_task *task);
void xprt_end_transmit(struct rpc_task *task);
int xprt_adjust_timeout(struct rpc_rqst *req);


@ -60,7 +60,7 @@ struct nfs_mount_data {
#define NFS_MOUNT_BROKEN_SUID 0x0400 /* 4 */
#define NFS_MOUNT_NOACL 0x0800 /* 4 */
#define NFS_MOUNT_STRICTLOCK 0x1000 /* reserved for NFSv4 */
#define NFS_MOUNT_SECFLAVOUR 0x2000 /* 5 */
#define NFS_MOUNT_SECFLAVOUR 0x2000 /* 5 non-text parsed mount data only */
#define NFS_MOUNT_NORDIRPLUS 0x4000 /* 5 */
#define NFS_MOUNT_UNSHARED 0x8000 /* 5 */
#define NFS_MOUNT_FLAGMASK 0xFFFF


@ -420,41 +420,53 @@ static void gss_encode_v0_msg(struct gss_upcall_msg *gss_msg)
memcpy(gss_msg->databuf, &uid, sizeof(uid));
gss_msg->msg.data = gss_msg->databuf;
gss_msg->msg.len = sizeof(uid);
BUG_ON(sizeof(uid) > UPCALL_BUF_LEN);
BUILD_BUG_ON(sizeof(uid) > sizeof(gss_msg->databuf));
}
static void gss_encode_v1_msg(struct gss_upcall_msg *gss_msg,
static int gss_encode_v1_msg(struct gss_upcall_msg *gss_msg,
const char *service_name,
const char *target_name)
{
struct gss_api_mech *mech = gss_msg->auth->mech;
char *p = gss_msg->databuf;
int len = 0;
size_t buflen = sizeof(gss_msg->databuf);
int len;
gss_msg->msg.len = sprintf(gss_msg->databuf, "mech=%s uid=%d ",
mech->gm_name,
from_kuid(&init_user_ns, gss_msg->uid));
p += gss_msg->msg.len;
len = scnprintf(p, buflen, "mech=%s uid=%d ", mech->gm_name,
from_kuid(&init_user_ns, gss_msg->uid));
buflen -= len;
p += len;
gss_msg->msg.len = len;
if (target_name) {
len = sprintf(p, "target=%s ", target_name);
len = scnprintf(p, buflen, "target=%s ", target_name);
buflen -= len;
p += len;
gss_msg->msg.len += len;
}
if (service_name != NULL) {
len = sprintf(p, "service=%s ", service_name);
len = scnprintf(p, buflen, "service=%s ", service_name);
buflen -= len;
p += len;
gss_msg->msg.len += len;
}
if (mech->gm_upcall_enctypes) {
len = sprintf(p, "enctypes=%s ", mech->gm_upcall_enctypes);
len = scnprintf(p, buflen, "enctypes=%s ",
mech->gm_upcall_enctypes);
buflen -= len;
p += len;
gss_msg->msg.len += len;
}
len = sprintf(p, "\n");
len = scnprintf(p, buflen, "\n");
if (len == 0)
goto out_overflow;
gss_msg->msg.len += len;
gss_msg->msg.data = gss_msg->databuf;
BUG_ON(gss_msg->msg.len > UPCALL_BUF_LEN);
return 0;
out_overflow:
WARN_ON_ONCE(1);
return -ENOMEM;
}
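The rework above replaces unchecked sprintf() calls with scnprintf() against a running byte count, so the upcall message can never overrun databuf; running out of room is detected when even the final newline no longer fits (len == 0). A userspace approximation of the same pattern, using ordinary snprintf() with an explicit clamp since scnprintf() is kernel-only:

#include <stdio.h>

/* Append formatted text to p without writing past the remaining space,
 * and report how many bytes were actually stored (scnprintf()-style). */
static size_t bounded_append(char *p, size_t space, const char *fmt,
                             const char *val)
{
        int n = snprintf(p, space, fmt, val);

        if (n < 0)
                return 0;
        /* snprintf() reports the would-be length; clamp to what fit. */
        if ((size_t)n >= space)
                return space ? space - 1 : 0;
        return (size_t)n;
}

int main(void)
{
        char buf[32];
        char *p = buf;
        size_t space = sizeof(buf), len;

        len = bounded_append(p, space, "mech=%s ", "krb5");
        p += len; space -= len;
        len = bounded_append(p, space, "target=%s ", "nfs@server.example");
        p += len; space -= len;

        /* The trailing newline is the overflow sentinel: if it cannot be
         * written, the message was truncated and must be rejected. */
        if (bounded_append(p, space, "%s", "\n") == 0)
                fprintf(stderr, "upcall message would overflow\n");
        else
                fputs(buf, stdout);
        return 0;
}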
static struct gss_upcall_msg *
@ -463,15 +475,15 @@ gss_alloc_msg(struct gss_auth *gss_auth,
{
struct gss_upcall_msg *gss_msg;
int vers;
int err = -ENOMEM;
gss_msg = kzalloc(sizeof(*gss_msg), GFP_NOFS);
if (gss_msg == NULL)
return ERR_PTR(-ENOMEM);
goto err;
vers = get_pipe_version(gss_auth->net);
if (vers < 0) {
kfree(gss_msg);
return ERR_PTR(vers);
}
err = vers;
if (err < 0)
goto err_free_msg;
gss_msg->pipe = gss_auth->gss_pipe[vers]->pipe;
INIT_LIST_HEAD(&gss_msg->list);
rpc_init_wait_queue(&gss_msg->rpc_waitqueue, "RPCSEC_GSS upcall waitq");
@ -482,10 +494,17 @@ gss_alloc_msg(struct gss_auth *gss_auth,
switch (vers) {
case 0:
gss_encode_v0_msg(gss_msg);
break;
default:
gss_encode_v1_msg(gss_msg, service_name, gss_auth->target_name);
err = gss_encode_v1_msg(gss_msg, service_name, gss_auth->target_name);
if (err)
goto err_free_msg;
};
return gss_msg;
err_free_msg:
kfree(gss_msg);
err:
return ERR_PTR(err);
}
static struct gss_upcall_msg *


@ -25,12 +25,12 @@
#include <linux/namei.h>
#include <linux/mount.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>
#include <linux/utsname.h>
#include <linux/workqueue.h>
#include <linux/in.h>
#include <linux/in6.h>
#include <linux/un.h>
#include <linux/rcupdate.h>
#include <linux/sunrpc/clnt.h>
#include <linux/sunrpc/addr.h>
@ -264,6 +264,26 @@ void rpc_clients_notifier_unregister(void)
return rpc_pipefs_notifier_unregister(&rpc_clients_block);
}
static struct rpc_xprt *rpc_clnt_set_transport(struct rpc_clnt *clnt,
struct rpc_xprt *xprt,
const struct rpc_timeout *timeout)
{
struct rpc_xprt *old;
spin_lock(&clnt->cl_lock);
old = rcu_dereference_protected(clnt->cl_xprt,
lockdep_is_held(&clnt->cl_lock));
if (!xprt_bound(xprt))
clnt->cl_autobind = 1;
clnt->cl_timeout = timeout;
rcu_assign_pointer(clnt->cl_xprt, xprt);
spin_unlock(&clnt->cl_lock);
return old;
}
static void rpc_clnt_set_nodename(struct rpc_clnt *clnt, const char *nodename)
{
clnt->cl_nodelen = strlen(nodename);
@ -272,12 +292,13 @@ static void rpc_clnt_set_nodename(struct rpc_clnt *clnt, const char *nodename)
memcpy(clnt->cl_nodename, nodename, clnt->cl_nodelen);
}
static int rpc_client_register(const struct rpc_create_args *args,
struct rpc_clnt *clnt)
static int rpc_client_register(struct rpc_clnt *clnt,
rpc_authflavor_t pseudoflavor,
const char *client_name)
{
struct rpc_auth_create_args auth_args = {
.pseudoflavor = args->authflavor,
.target_name = args->client_name,
.pseudoflavor = pseudoflavor,
.target_name = client_name,
};
struct rpc_auth *auth;
struct net *net = rpc_net_ns(clnt);
@ -298,7 +319,7 @@ static int rpc_client_register(const struct rpc_create_args *args,
auth = rpcauth_create(&auth_args, clnt);
if (IS_ERR(auth)) {
dprintk("RPC: Couldn't create auth handle (flavor %u)\n",
args->authflavor);
pseudoflavor);
err = PTR_ERR(auth);
goto err_auth;
}
@ -337,7 +358,8 @@ static struct rpc_clnt * rpc_new_client(const struct rpc_create_args *args,
{
const struct rpc_program *program = args->program;
const struct rpc_version *version;
struct rpc_clnt *clnt = NULL;
struct rpc_clnt *clnt = NULL;
const struct rpc_timeout *timeout;
int err;
/* sanity check the name before trying to print it */
@ -365,7 +387,6 @@ static struct rpc_clnt * rpc_new_client(const struct rpc_create_args *args,
if (err)
goto out_no_clid;
rcu_assign_pointer(clnt->cl_xprt, xprt);
clnt->cl_procinfo = version->procs;
clnt->cl_maxproc = version->nrprocs;
clnt->cl_prog = args->prognumber ? : program->number;
@ -380,16 +401,15 @@ static struct rpc_clnt * rpc_new_client(const struct rpc_create_args *args,
INIT_LIST_HEAD(&clnt->cl_tasks);
spin_lock_init(&clnt->cl_lock);
if (!xprt_bound(xprt))
clnt->cl_autobind = 1;
clnt->cl_timeout = xprt->timeout;
timeout = xprt->timeout;
if (args->timeout != NULL) {
memcpy(&clnt->cl_timeout_default, args->timeout,
sizeof(clnt->cl_timeout_default));
clnt->cl_timeout = &clnt->cl_timeout_default;
timeout = &clnt->cl_timeout_default;
}
rpc_clnt_set_transport(clnt, xprt, timeout);
clnt->cl_rtt = &clnt->cl_rtt_default;
rpc_init_rtt(&clnt->cl_rtt_default, clnt->cl_timeout->to_initval);
@ -398,7 +418,7 @@ static struct rpc_clnt * rpc_new_client(const struct rpc_create_args *args,
/* save the nodename */
rpc_clnt_set_nodename(clnt, utsname()->nodename);
err = rpc_client_register(args, clnt);
err = rpc_client_register(clnt, args->authflavor, args->client_name);
if (err)
goto out_no_path;
if (parent)
@ -600,6 +620,80 @@ rpc_clone_client_set_auth(struct rpc_clnt *clnt, rpc_authflavor_t flavor)
}
EXPORT_SYMBOL_GPL(rpc_clone_client_set_auth);
/**
* rpc_switch_client_transport: switch the RPC transport on the fly
* @clnt: pointer to a struct rpc_clnt
* @args: pointer to the new transport arguments
* @timeout: pointer to the new timeout parameters
*
* This function allows the caller to switch the RPC transport for the
* rpc_clnt structure 'clnt' to allow it to connect to a mirrored NFS
* server, for instance. It assumes that the caller has ensured that
* there are no active RPC tasks by using some form of locking.
*
* Returns zero if "clnt" is now using the new xprt. Otherwise a
* negative errno is returned, and "clnt" continues to use the old
* xprt.
*/
int rpc_switch_client_transport(struct rpc_clnt *clnt,
struct xprt_create *args,
const struct rpc_timeout *timeout)
{
const struct rpc_timeout *old_timeo;
rpc_authflavor_t pseudoflavor;
struct rpc_xprt *xprt, *old;
struct rpc_clnt *parent;
int err;
xprt = xprt_create_transport(args);
if (IS_ERR(xprt)) {
dprintk("RPC: failed to create new xprt for clnt %p\n",
clnt);
return PTR_ERR(xprt);
}
pseudoflavor = clnt->cl_auth->au_flavor;
old_timeo = clnt->cl_timeout;
old = rpc_clnt_set_transport(clnt, xprt, timeout);
rpc_unregister_client(clnt);
__rpc_clnt_remove_pipedir(clnt);
/*
* A new transport was created. "clnt" therefore
* becomes the root of a new cl_parent tree. clnt's
* children, if it has any, still point to the old xprt.
*/
parent = clnt->cl_parent;
clnt->cl_parent = clnt;
/*
* The old rpc_auth cache cannot be re-used. GSS
* contexts in particular are between a single
* client and server.
*/
err = rpc_client_register(clnt, pseudoflavor, NULL);
if (err)
goto out_revert;
synchronize_rcu();
if (parent != clnt)
rpc_release_client(parent);
xprt_put(old);
dprintk("RPC: replaced xprt for clnt %p\n", clnt);
return 0;
out_revert:
rpc_clnt_set_transport(clnt, old, old_timeo);
clnt->cl_parent = parent;
rpc_client_register(clnt, pseudoflavor, NULL);
xprt_put(xprt);
dprintk("RPC: failed to switch xprt for clnt %p\n", clnt);
return err;
}
EXPORT_SYMBOL_GPL(rpc_switch_client_transport);
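A hedged sketch of a caller, roughly what the NFSv4 migration recovery path needs once it has quiesced RPC traffic on the client. The wrapper and its parameters are illustrative, and the struct xprt_create fields used here (ident, net, dstaddr, addrlen, servername) are assumed from the transport-creation API rather than taken from this diff:

#include <linux/sunrpc/clnt.h>
#include <linux/sunrpc/xprt.h>

/* Point an existing rpc_clnt at a new server address, keeping its
 * current timeout parameters.  The caller must already have ensured
 * that no RPC tasks are running on @clnt. */
static int example_switch_server(struct rpc_clnt *clnt,
                                 struct sockaddr *sap, size_t salen,
                                 const char *hostname)
{
        struct xprt_create xprtargs = {
                .ident          = XPRT_TRANSPORT_TCP,
                .net            = rpc_net_ns(clnt),
                .dstaddr        = sap,
                .addrlen        = salen,
                .servername     = hostname,
        };

        return rpc_switch_client_transport(clnt, &xprtargs, clnt->cl_timeout);
}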
/*
* Kill all tasks for the given client.
* XXX: kill their descendants as well?
@ -772,6 +866,8 @@ void rpc_task_set_client(struct rpc_task *task, struct rpc_clnt *clnt)
atomic_inc(&clnt->cl_count);
if (clnt->cl_softrtry)
task->tk_flags |= RPC_TASK_SOFT;
if (clnt->cl_noretranstimeo)
task->tk_flags |= RPC_TASK_NO_RETRANS_TIMEOUT;
if (sk_memalloc_socks()) {
struct rpc_xprt *xprt;
@ -1690,6 +1786,7 @@ call_connect_status(struct rpc_task *task)
dprint_status(task);
trace_rpc_connect_status(task, status);
task->tk_status = 0;
switch (status) {
/* if soft mounted, test if we've timed out */
case -ETIMEDOUT:
@ -1698,12 +1795,14 @@ call_connect_status(struct rpc_task *task)
case -ECONNREFUSED:
case -ECONNRESET:
case -ENETUNREACH:
/* retry with existing socket, after a delay */
rpc_delay(task, 3*HZ);
if (RPC_IS_SOFTCONN(task))
break;
/* retry with existing socket, after a delay */
case 0:
case -EAGAIN:
task->tk_status = 0;
task->tk_action = call_bind;
return;
case 0:
clnt->cl_stats->netreconn++;
task->tk_action = call_transmit;
return;
@ -1717,13 +1816,14 @@ call_connect_status(struct rpc_task *task)
static void
call_transmit(struct rpc_task *task)
{
int is_retrans = RPC_WAS_SENT(task);
dprint_status(task);
task->tk_action = call_status;
if (task->tk_status < 0)
return;
task->tk_status = xprt_prepare_transmit(task);
if (task->tk_status != 0)
if (!xprt_prepare_transmit(task))
return;
task->tk_action = call_transmit_status;
/* Encode here so that rpcsec_gss can use correct sequence number. */
@ -1742,6 +1842,8 @@ call_transmit(struct rpc_task *task)
xprt_transmit(task);
if (task->tk_status < 0)
return;
if (is_retrans)
task->tk_client->cl_stats->rpcretrans++;
/*
* On success, ensure that we call xprt_end_transmit() before sleeping
* in order to allow access to the socket to other RPC requests.
@ -1811,8 +1913,7 @@ call_bc_transmit(struct rpc_task *task)
{
struct rpc_rqst *req = task->tk_rqstp;
task->tk_status = xprt_prepare_transmit(task);
if (task->tk_status == -EAGAIN) {
if (!xprt_prepare_transmit(task)) {
/*
* Could not reserve the transport. Try again after the
* transport is released.
@ -1900,7 +2001,8 @@ call_status(struct rpc_task *task)
rpc_delay(task, 3*HZ);
case -ETIMEDOUT:
task->tk_action = call_timeout;
if (task->tk_client->cl_discrtry)
if (!(task->tk_flags & RPC_TASK_NO_RETRANS_TIMEOUT)
&& task->tk_client->cl_discrtry)
xprt_conditional_disconnect(req->rq_xprt,
req->rq_connect_cookie);
break;
@ -1982,7 +2084,6 @@ call_timeout(struct rpc_task *task)
rpcauth_invalcred(task);
retry:
clnt->cl_stats->rpcretrans++;
task->tk_action = call_bind;
task->tk_status = 0;
}
@ -2025,7 +2126,6 @@ call_decode(struct rpc_task *task)
if (req->rq_rcv_buf.len < 12) {
if (!RPC_IS_SOFT(task)) {
task->tk_action = call_bind;
clnt->cl_stats->rpcretrans++;
goto out_retry;
}
dprintk("RPC: %s: too small RPC reply size (%d bytes)\n",


@ -205,10 +205,8 @@ int xprt_reserve_xprt(struct rpc_xprt *xprt, struct rpc_task *task)
goto out_sleep;
}
xprt->snd_task = task;
if (req != NULL) {
req->rq_bytes_sent = 0;
if (req != NULL)
req->rq_ntrans++;
}
return 1;
@ -263,7 +261,6 @@ int xprt_reserve_xprt_cong(struct rpc_xprt *xprt, struct rpc_task *task)
}
if (__xprt_get_cong(xprt, task)) {
xprt->snd_task = task;
req->rq_bytes_sent = 0;
req->rq_ntrans++;
return 1;
}
@ -300,10 +297,8 @@ static bool __xprt_lock_write_func(struct rpc_task *task, void *data)
req = task->tk_rqstp;
xprt->snd_task = task;
if (req) {
req->rq_bytes_sent = 0;
if (req)
req->rq_ntrans++;
}
return true;
}
@ -329,7 +324,6 @@ static bool __xprt_lock_write_cong_func(struct rpc_task *task, void *data)
}
if (__xprt_get_cong(xprt, task)) {
xprt->snd_task = task;
req->rq_bytes_sent = 0;
req->rq_ntrans++;
return true;
}
@ -358,6 +352,11 @@ static void __xprt_lock_write_next_cong(struct rpc_xprt *xprt)
void xprt_release_xprt(struct rpc_xprt *xprt, struct rpc_task *task)
{
if (xprt->snd_task == task) {
if (task != NULL) {
struct rpc_rqst *req = task->tk_rqstp;
if (req != NULL)
req->rq_bytes_sent = 0;
}
xprt_clear_locked(xprt);
__xprt_lock_write_next(xprt);
}
@ -375,6 +374,11 @@ EXPORT_SYMBOL_GPL(xprt_release_xprt);
void xprt_release_xprt_cong(struct rpc_xprt *xprt, struct rpc_task *task)
{
if (xprt->snd_task == task) {
if (task != NULL) {
struct rpc_rqst *req = task->tk_rqstp;
if (req != NULL)
req->rq_bytes_sent = 0;
}
xprt_clear_locked(xprt);
__xprt_lock_write_next_cong(xprt);
}
@ -854,24 +858,36 @@ static inline int xprt_has_timer(struct rpc_xprt *xprt)
* @task: RPC task about to send a request
*
*/
int xprt_prepare_transmit(struct rpc_task *task)
bool xprt_prepare_transmit(struct rpc_task *task)
{
struct rpc_rqst *req = task->tk_rqstp;
struct rpc_xprt *xprt = req->rq_xprt;
int err = 0;
bool ret = false;
dprintk("RPC: %5u xprt_prepare_transmit\n", task->tk_pid);
spin_lock_bh(&xprt->transport_lock);
if (req->rq_reply_bytes_recvd && !req->rq_bytes_sent) {
err = req->rq_reply_bytes_recvd;
if (!req->rq_bytes_sent) {
if (req->rq_reply_bytes_recvd) {
task->tk_status = req->rq_reply_bytes_recvd;
goto out_unlock;
}
if ((task->tk_flags & RPC_TASK_NO_RETRANS_TIMEOUT)
&& xprt_connected(xprt)
&& req->rq_connect_cookie == xprt->connect_cookie) {
xprt->ops->set_retrans_timeout(task);
rpc_sleep_on(&xprt->pending, task, xprt_timer);
goto out_unlock;
}
}
if (!xprt->ops->reserve_xprt(xprt, task)) {
task->tk_status = -EAGAIN;
goto out_unlock;
}
if (!xprt->ops->reserve_xprt(xprt, task))
err = -EAGAIN;
ret = true;
out_unlock:
spin_unlock_bh(&xprt->transport_lock);
return err;
return ret;
}
void xprt_end_transmit(struct rpc_task *task)
@ -912,7 +928,6 @@ void xprt_transmit(struct rpc_task *task)
} else if (!req->rq_bytes_sent)
return;
req->rq_connect_cookie = xprt->connect_cookie;
req->rq_xtime = ktime_get();
status = xprt->ops->send_request(task);
if (status != 0) {
@ -938,12 +953,14 @@ void xprt_transmit(struct rpc_task *task)
/* Don't race with disconnect */
if (!xprt_connected(xprt))
task->tk_status = -ENOTCONN;
else if (!req->rq_reply_bytes_recvd && rpc_reply_expected(task)) {
else {
/*
* Sleep on the pending queue since
* we're expecting a reply.
*/
rpc_sleep_on(&xprt->pending, task, xprt_timer);
if (!req->rq_reply_bytes_recvd && rpc_reply_expected(task))
rpc_sleep_on(&xprt->pending, task, xprt_timer);
req->rq_connect_cookie = xprt->connect_cookie;
}
spin_unlock_bh(&xprt->transport_lock);
}
@ -1087,11 +1104,9 @@ struct rpc_xprt *xprt_alloc(struct net *net, size_t size,
for (i = 0; i < num_prealloc; i++) {
req = kzalloc(sizeof(struct rpc_rqst), GFP_KERNEL);
if (!req)
break;
goto out_free;
list_add(&req->rq_list, &xprt->free);
}
if (i < num_prealloc)
goto out_free;
if (max_alloc > num_prealloc)
xprt->max_reqs = max_alloc;
else
@ -1186,6 +1201,12 @@ static void xprt_request_init(struct rpc_task *task, struct rpc_xprt *xprt)
req->rq_xprt = xprt;
req->rq_buffer = NULL;
req->rq_xid = xprt_alloc_xid(xprt);
req->rq_connect_cookie = xprt->connect_cookie - 1;
req->rq_bytes_sent = 0;
req->rq_snd_buf.len = 0;
req->rq_snd_buf.buflen = 0;
req->rq_rcv_buf.len = 0;
req->rq_rcv_buf.buflen = 0;
req->rq_release_snd_buf = NULL;
xprt_reset_majortimeo(req);
dprintk("RPC: %5u reserved req %p xid %08x\n", task->tk_pid,


@ -835,6 +835,8 @@ static void xs_close(struct rpc_xprt *xprt)
dprintk("RPC: xs_close xprt %p\n", xprt);
cancel_delayed_work_sync(&transport->connect_worker);
xs_reset_transport(transport);
xprt->reestablish_timeout = 0;
@ -854,14 +856,6 @@ static void xs_tcp_close(struct rpc_xprt *xprt)
xs_tcp_shutdown(xprt);
}
static void xs_local_destroy(struct rpc_xprt *xprt)
{
xs_close(xprt);
xs_free_peer_addresses(xprt);
xprt_free(xprt);
module_put(THIS_MODULE);
}
/**
* xs_destroy - prepare to shutdown a transport
* @xprt: doomed transport
@ -869,13 +863,12 @@ static void xs_local_destroy(struct rpc_xprt *xprt)
*/
static void xs_destroy(struct rpc_xprt *xprt)
{
struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
dprintk("RPC: xs_destroy xprt %p\n", xprt);
cancel_delayed_work_sync(&transport->connect_worker);
xs_local_destroy(xprt);
xs_close(xprt);
xs_free_peer_addresses(xprt);
xprt_free(xprt);
module_put(THIS_MODULE);
}
static inline struct rpc_xprt *xprt_from_sock(struct sock *sk)
@ -1511,6 +1504,7 @@ static void xs_tcp_state_change(struct sock *sk)
transport->tcp_copied = 0;
transport->tcp_flags =
TCP_RCV_COPY_FRAGHDR | TCP_RCV_COPY_XID;
xprt->connect_cookie++;
xprt_wake_pending_tasks(xprt, -EAGAIN);
}
@ -1816,6 +1810,10 @@ static inline void xs_reclassify_socket(int family, struct socket *sock)
}
#endif
static void xs_dummy_setup_socket(struct work_struct *work)
{
}
static struct socket *xs_create_sock(struct rpc_xprt *xprt,
struct sock_xprt *transport, int family, int type, int protocol)
{
@ -2112,6 +2110,19 @@ static int xs_tcp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
if (!transport->inet) {
struct sock *sk = sock->sk;
unsigned int keepidle = xprt->timeout->to_initval / HZ;
unsigned int keepcnt = xprt->timeout->to_retries + 1;
unsigned int opt_on = 1;
/* TCP Keepalive options */
kernel_setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
(char *)&opt_on, sizeof(opt_on));
kernel_setsockopt(sock, SOL_TCP, TCP_KEEPIDLE,
(char *)&keepidle, sizeof(keepidle));
kernel_setsockopt(sock, SOL_TCP, TCP_KEEPINTVL,
(char *)&keepidle, sizeof(keepidle));
kernel_setsockopt(sock, SOL_TCP, TCP_KEEPCNT,
(char *)&keepcnt, sizeof(keepcnt));
write_lock_bh(&sk->sk_callback_lock);
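The four kernel_setsockopt() calls above derive the keepalive timing from the RPC timeout: probe after to_initval seconds of idleness, at that same interval, and give up after to_retries + 1 failed probes, letting a dead server connection be detected even with retransmit timeouts disabled. For reference, the userspace equivalent of that tuning (plain setsockopt(), not a kernel API) looks like this:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Userspace analogue of the keepalive setup above.  Note that, as in
 * the kernel code, the idle time is reused as the probe interval. */
static int example_tune_keepalive(int fd, int idle, int cnt)
{
        int on = 1;

        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) ||
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) ||
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &idle, sizeof(idle)) ||
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)))
                return -1;
        return 0;
}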
@ -2151,7 +2162,6 @@ static int xs_tcp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
case 0:
case -EINPROGRESS:
/* SYN_SENT! */
xprt->connect_cookie++;
if (xprt->reestablish_timeout < XS_TCP_INIT_REEST_TO)
xprt->reestablish_timeout = XS_TCP_INIT_REEST_TO;
}
@ -2498,7 +2508,7 @@ static struct rpc_xprt_ops xs_local_ops = {
.send_request = xs_local_send_request,
.set_retrans_timeout = xprt_set_retrans_timeout_def,
.close = xs_close,
.destroy = xs_local_destroy,
.destroy = xs_destroy,
.print_stats = xs_local_print_stats,
};
@ -2655,6 +2665,9 @@ static struct rpc_xprt *xs_setup_local(struct xprt_create *args)
xprt->ops = &xs_local_ops;
xprt->timeout = &xs_local_default_timeout;
INIT_DELAYED_WORK(&transport->connect_worker,
xs_dummy_setup_socket);
switch (sun->sun_family) {
case AF_LOCAL:
if (sun->sun_path[0] != '/') {
@ -2859,8 +2872,8 @@ static struct rpc_xprt *xs_setup_bc_tcp(struct xprt_create *args)
if (args->bc_xprt->xpt_bc_xprt) {
/*
* This server connection already has a backchannel
* export; we can't create a new one, as we wouldn't be
* able to match replies based on xid any more. So,
* transport; we can't create a new one, as we wouldn't
* be able to match replies based on xid any more. So,
* reuse the already-existing one:
*/
return args->bc_xprt->xpt_bc_xprt;