NFSD: Reduce svc_rqst::rq_pages churn during READDIR operations
During NFSv2 and NFSv3 READDIR/PLUS operations, NFSD advances rq_next_page to the full size of the client-requested buffer, then releases all those pages at the end of the request. The next request to use that nfsd thread has to refill the pages.

NFSD does this even when the dirlist in the reply is small. With NFSv3 clients that send READDIR operations with large buffer sizes, that can be 256 put_page/alloc_page pairs per READDIR request, even though those pages often remain unused.

We can save some work by not releasing dirlist buffer pages that were not used to form the READDIR Reply. I've left the NFSv2 code alone since there are never more than three pages involved in an NFSv2 READDIR Reply.

Eventually we should nail down why these pages need to be released at all in order to avoid allocating and releasing pages unnecessarily.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
parent 1411934627
commit 76ed0dd96e
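To make the churn concrete: at the end of each request, the server releases every rq_pages entry between rq_respages and rq_next_page, and the thread has to allocate replacements before it can handle its next request. Below is a minimal sketch of that release loop, modeled on the sunrpc response-page cleanup; the function name is illustrative, not the kernel's.

/*
 * Illustrative sketch, not the exact kernel helper. It shows why
 * advancing rq_next_page to cover the full client-requested buffer
 * is costly: every slot between rq_respages and rq_next_page is
 * put_page()'d here and must be re-allocated for the next request.
 */
static void release_response_pages(struct svc_rqst *rqstp)
{
	while (rqstp->rq_next_page != rqstp->rq_respages) {
		struct page **pp = --rqstp->rq_next_page;

		if (*pp) {
			put_page(*pp);
			*pp = NULL;
		}
	}
}

With 4KB pages, a large READDIR buffer can make that range 256 pages wide, which is where the 256 put_page/alloc_page pairs per request come from.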
@@ -493,6 +493,9 @@ nfsd3_proc_readdir(struct svc_rqst *rqstp)
 	memcpy(resp->verf, argp->verf, 8);
 	nfs3svc_encode_cookie3(resp, offset);
 
+	/* Recycle only pages that were part of the reply */
+	rqstp->rq_next_page = resp->xdr.page_ptr + 1;
+
 	return rpc_success;
 }
 
@@ -533,6 +536,9 @@ nfsd3_proc_readdirplus(struct svc_rqst *rqstp)
 	memcpy(resp->verf, argp->verf, 8);
 	nfs3svc_encode_cookie3(resp, offset);
 
+	/* Recycle only pages that were part of the reply */
+	rqstp->rq_next_page = resp->xdr.page_ptr + 1;
+
 out:
 	return rpc_success;
 }