fs: Add aops->release_folio

This replaces aops->releasepage.  Update the documentation, and call it
if it exists.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Author:  Matthew Wilcox (Oracle)
Date:    2022-04-29 17:00:05 -04:00
Commit:  fa29000b6b
Parent:  0795000869
5 changed files with 34 additions and 32 deletions
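
As a quick orientation to the new hook, here is a minimal before/after sketch of the conversion for a buffer-head based filesystem. The myfs_* names are hypothetical and not part of this commit; the helper call mirrors the fallback used in the last hunk below.

	#include <linux/buffer_head.h>	/* try_to_free_buffers() */
	#include <linux/pagemap.h>

	/* Old hook: takes a page and returns int (non-zero when freed). */
	static int myfs_releasepage(struct page *page, gfp_t gfp)
	{
		return try_to_free_buffers(page);
	}

	/* New hook: takes a folio and returns bool. */
	static bool myfs_release_folio(struct folio *folio, gfp_t gfp)
	{
		return try_to_free_buffers(&folio->page);
	}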


@@ -433,11 +433,11 @@ has done a write and then the page it wrote from has been released by the VM,
 after which it *has* to look in the cache.
 
 To inform fscache that a page might now be in the cache, the following function
-should be called from the ``releasepage`` address space op::
+should be called from the ``release_folio`` address space op::
 
 	void fscache_note_page_release(struct fscache_cookie *cookie);
 
-if the page has been released (ie. releasepage returned true).
+if the page has been released (ie. release_folio returned true).
 
 Page release and page invalidation should also wait for any mark left on the
 page to say that a DIO write is underway from that page::
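
A hedged sketch of how a network filesystem might act on this from its release_folio: the mynetfs_* names and the cookie lookup helper are hypothetical, while fscache_note_page_release() is the function quoted above.

	#include <linux/fscache.h>
	#include <linux/pagemap.h>

	/* Hypothetical helper returning the fscache cookie for an inode. */
	static struct fscache_cookie *mynetfs_i_cookie(struct inode *inode);

	static bool mynetfs_release_folio(struct folio *folio, gfp_t gfp)
	{
		if (folio_test_private(folio))
			return false;	/* still carrying per-folio state */

		/* The folio is going away, so a later read has to consider
		 * looking in the cache again. */
		fscache_note_page_release(mynetfs_i_cookie(folio->mapping->host));
		return true;
	}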


@@ -249,7 +249,7 @@ prototypes::
 			struct page *page, void *fsdata);
 	sector_t (*bmap)(struct address_space *, sector_t);
 	void (*invalidate_folio) (struct folio *, size_t start, size_t len);
-	int (*releasepage) (struct page *, int);
+	bool (*release_folio)(struct folio *, gfp_t);
 	void (*freepage)(struct page *);
 	int (*direct_IO)(struct kiocb *, struct iov_iter *iter);
 	bool (*isolate_page) (struct page *, isolate_mode_t);
@@ -270,13 +270,13 @@ ops PageLocked(page) i_rwsem invalidate_lock
 writepage: yes, unlocks (see below)
 read_folio: yes, unlocks shared
 writepages:
 dirty_folio: maybe
 readahead: yes, unlocks shared
 write_begin: locks the page exclusive
 write_end: yes, unlocks exclusive
 bmap:
 invalidate_folio: yes exclusive
-releasepage: yes
+release_folio: yes
 freepage: yes
 direct_IO:
 isolate_page: yes
@@ -372,10 +372,10 @@ invalidate_lock before invalidating page cache in truncate / hole punch
 path (and thus calling into ->invalidate_folio) to block races between page
 cache invalidation and page cache filling functions (fault, read, ...).
 
-->releasepage() is called when the kernel is about to try to drop the
-buffers from the page in preparation for freeing it. It returns zero to
-indicate that the buffers are (or may be) freeable. If ->releasepage is zero,
-the kernel assumes that the fs has no private interest in the buffers.
+->release_folio() is called when the kernel is about to try to drop the
+buffers from the folio in preparation for freeing it. It returns false to
+indicate that the buffers are (or may be) freeable. If ->release_folio is
+NULL, the kernel assumes that the fs has no private interest in the buffers.
 
 ->freepage() is called when the kernel is done dropping the page
 from the page cache.
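
A small hedged sketch of what these rules mean for an implementation: the locking table above only guarantees that the folio is locked, and the gfp_t argument bounds what the filesystem may do. The myfs_* names, the helper, and the gfp policy are illustrative only.

	/* Hypothetical helper that tries to drop myfs's per-folio state. */
	static bool myfs_try_drop_private(struct folio *folio);

	static bool myfs_release_folio(struct folio *folio, gfp_t gfp)
	{
		/* The table above guarantees only that the folio is locked
		 * when this is called. */
		if (!(gfp & __GFP_FS))
			return false;	/* illustrative: no fs work allowed in this context */
		return myfs_try_drop_private(folio);
	}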


@@ -620,9 +620,9 @@ Writeback.
 The first can be used independently to the others. The VM can try to
 either write dirty pages in order to clean them, or release clean pages
 in order to reuse them. To do this it can call the ->writepage method
-on dirty pages, and ->releasepage on clean pages with PagePrivate set.
-Clean pages without PagePrivate and with no external references will be
-released without notice being given to the address_space.
+on dirty pages, and ->release_folio on clean folios with the private
+flag set. Clean pages without PagePrivate and with no external references
+will be released without notice being given to the address_space.
 
 To achieve this functionality, pages need to be placed on an LRU with
 lru_cache_add and mark_page_active needs to be called whenever the page
@@ -734,7 +734,7 @@ cache in your filesystem. The following members are defined:
 			struct page *page, void *fsdata);
 	sector_t (*bmap)(struct address_space *, sector_t);
 	void (*invalidate_folio) (struct folio *, size_t start, size_t len);
-	int (*releasepage) (struct page *, int);
+	bool (*release_folio)(struct folio *, gfp_t);
 	void (*freepage)(struct page *);
 	ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter);
 	/* isolate a page for migration */
@@ -864,33 +864,32 @@ cache in your filesystem. The following members are defined:
 	address space. This generally corresponds to either a
 	truncation, punch hole or a complete invalidation of the address
 	space (in the latter case 'offset' will always be 0 and 'length'
-	will be folio_size()). Any private data associated with the page
+	will be folio_size()). Any private data associated with the folio
 	should be updated to reflect this truncation. If offset is 0
 	and length is folio_size(), then the private data should be
-	released, because the page must be able to be completely
-	discarded. This may be done by calling the ->releasepage
+	released, because the folio must be able to be completely
+	discarded. This may be done by calling the ->release_folio
 	function, but in this case the release MUST succeed.
 
-``releasepage``
-	releasepage is called on PagePrivate pages to indicate that the
-	page should be freed if possible. ->releasepage should remove
-	any private data from the page and clear the PagePrivate flag.
-	If releasepage() fails for some reason, it must indicate failure
-	with a 0 return value. releasepage() is used in two distinct
-	though related cases. The first is when the VM finds a clean
-	page with no active users and wants to make it a free page. If
-	->releasepage succeeds, the page will be removed from the
-	address_space and become free.
+``release_folio``
+	release_folio is called on folios with private data to tell the
+	filesystem that the folio is about to be freed. ->release_folio
+	should remove any private data from the folio and clear the
+	private flag. If release_folio() fails, it should return false.
+	release_folio() is used in two distinct though related cases.
+	The first is when the VM wants to free a clean folio with no
+	active users. If ->release_folio succeeds, the folio will be
+	removed from the address_space and be freed.
 
 	The second case is when a request has been made to invalidate
-	some or all pages in an address_space. This can happen through
-	the fadvise(POSIX_FADV_DONTNEED) system call or by the
-	filesystem explicitly requesting it as nfs and 9fs do (when they
+	some or all folios in an address_space. This can happen
+	through the fadvise(POSIX_FADV_DONTNEED) system call or by the
+	filesystem explicitly requesting it as nfs and 9p do (when they
 	believe the cache may be out of date with storage) by calling
 	invalidate_inode_pages2(). If the filesystem makes such a call,
-	and needs to be certain that all pages are invalidated, then its
-	releasepage will need to ensure this. Possibly it can clear the
-	PageUptodate bit if it cannot free private data yet.
+	and needs to be certain that all folios are invalidated, then
+	its release_folio will need to ensure this. Possibly it can
+	clear the uptodate flag if it cannot free private data yet.
 
 ``freepage``
 	freepage is called once the page is no longer visible in the
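
A hedged sketch of the contract described here, for a hypothetical filesystem that attaches a small private structure to each folio; the myfs_* names are illustrative, while the folio private helpers are the generic ones.

	#include <linux/mm.h>
	#include <linux/pagemap.h>
	#include <linux/slab.h>

	struct myfs_folio_state {		/* hypothetical per-folio data */
		bool busy;
	};

	static bool myfs_release_folio(struct folio *folio, gfp_t gfp)
	{
		struct myfs_folio_state *state;

		if (!folio_test_private(folio))
			return true;		/* nothing attached, trivially freeable */

		state = folio_get_private(folio);
		if (state->busy) {
			/* Cannot free the private data yet; report failure.
			 * A filesystem relying on invalidate_inode_pages2()
			 * could clear the uptodate flag here instead, as the
			 * text above suggests. */
			return false;
		}

		/* Detach the private data and clear the private flag. */
		kfree(folio_detach_private(folio));
		return true;
	}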


@@ -355,6 +355,7 @@ struct address_space_operations {
 	/* Unfortunately this kludge is needed for FIBMAP. Don't use it */
 	sector_t (*bmap)(struct address_space *, sector_t);
 	void (*invalidate_folio) (struct folio *, size_t offset, size_t len);
+	bool (*release_folio)(struct folio *, gfp_t);
 	int (*releasepage) (struct page *, gfp_t);
 	void (*freepage)(struct page *);
 	ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter);
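
For completeness, a hedged sketch of wiring the new entry into an operations table; the myfs_* names are hypothetical, and because the caller below tries ->release_folio before ->releasepage, a converted filesystem sets only the new hook.

	#include <linux/fs.h>

	/* Hypothetical implementations, following the prototypes above. */
	static void myfs_invalidate_folio(struct folio *folio, size_t offset, size_t len);
	static bool myfs_release_folio(struct folio *folio, gfp_t gfp);
	static void myfs_freepage(struct page *page);

	static const struct address_space_operations myfs_aops = {
		.invalidate_folio	= myfs_invalidate_folio,
		.release_folio		= myfs_release_folio,	/* new hook */
		/* .releasepage is left unset once converted */
		.freepage		= myfs_freepage,
	};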


@@ -3955,6 +3955,8 @@ bool filemap_release_folio(struct folio *folio, gfp_t gfp)
 	if (folio_test_writeback(folio))
 		return false;
 
+	if (mapping && mapping->a_ops->release_folio)
+		return mapping->a_ops->release_folio(folio, gfp);
 	if (mapping && mapping->a_ops->releasepage)
 		return mapping->a_ops->releasepage(&folio->page, gfp);
 	return try_to_free_buffers(&folio->page);