xfs: account format bouncing into rmapbt swapext tx reservation

The extent swap mechanism requires a unique implementation for
rmapbt-enabled filesystems. Because the rmapbt tracks extent owner
information, extent swap must individually unmap and remap each
extent between the two inodes.

The rmapbt extent swap transaction block reservation currently
accounts for the worst case bmapbt block and rmapbt block
consumption based on the extent count of each inode. There is a
corner case that exists due to the extent swap implementation that
is not covered by this reservation, however.

If one of the associated inodes has just over the maximum extent
count supported by extent format inodes (i.e., the inode is in btree
format by a single extent), the unmap/remap cycles of the extent
swap can bounce the inode between extent and btree format multiple
times, almost as many times as there are extents in the inode (if
the opposing inode happens to have one extent fewer, for example).
Each back and forth cycle involves a block free and allocation,
which isn't a problem except that the initial transaction
reservation must account for the total number of block allocations
performed by the chain of deferred operations. If it does not, a
block reservation overrun occurs and the filesystem shuts down.

Update the rmapbt extent swap block reservation to check for this
situation and add some block reservation slop to ensure the entire
operation succeeds. We would rarely require the extra reservation
for both inodes, as fsr wouldn't defragment the file in that case,
but the additional reservation is bounded by the data fork size, so
be cautious and check both.

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Brian Foster 2018-03-09 14:01:58 -08:00, committed by Darrick J. Wong
parent 3e78b9a468
commit b3fed43482
1 changed file with 20 additions and 9 deletions

@@ -1899,17 +1899,28 @@ xfs_swap_extents(
 	 * performed with log redo items!
 	 */
 	if (xfs_sb_version_hasrmapbt(&mp->m_sb)) {
+		int		w	= XFS_DATA_FORK;
+		uint32_t	ipnext	= XFS_IFORK_NEXTENTS(ip, w);
+		uint32_t	tipnext	= XFS_IFORK_NEXTENTS(tip, w);
+
 		/*
-		 * Conceptually this shouldn't affect the shape of either
-		 * bmbt, but since we atomically move extents one by one,
-		 * we reserve enough space to rebuild both trees.
+		 * Conceptually this shouldn't affect the shape of either bmbt,
+		 * but since we atomically move extents one by one, we reserve
+		 * enough space to rebuild both trees.
 		 */
-		resblks = XFS_SWAP_RMAP_SPACE_RES(mp,
-				XFS_IFORK_NEXTENTS(ip, XFS_DATA_FORK),
-				XFS_DATA_FORK) +
-			  XFS_SWAP_RMAP_SPACE_RES(mp,
-				XFS_IFORK_NEXTENTS(tip, XFS_DATA_FORK),
-				XFS_DATA_FORK);
+		resblks = XFS_SWAP_RMAP_SPACE_RES(mp, ipnext, w);
+		resblks += XFS_SWAP_RMAP_SPACE_RES(mp, tipnext, w);
+
+		/*
+		 * Handle the corner case where either inode might straddle the
+		 * btree format boundary. If so, the inode could bounce between
+		 * btree <-> extent format on unmap -> remap cycles, freeing and
+		 * allocating a bmapbt block each time.
+		 */
+		if (ipnext == (XFS_IFORK_MAXEXT(ip, w) + 1))
+			resblks += XFS_IFORK_MAXEXT(ip, w);
+		if (tipnext == (XFS_IFORK_MAXEXT(tip, w) + 1))
+			resblks += XFS_IFORK_MAXEXT(tip, w);
 	}
 	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_write, resblks, 0, 0, &tp);
 	if (error)