Btrfs: test_check_exists: Fix infinite loop when searching for free space entries

On a ppc64 machine using 64K as the block size, assume that the RB
tree at btrfs_free_space_ctl->free_space_offset contains following
two entries:

1. A bitmap entry having an offset value of 0 and having the bits
   corresponding to the address range [128M+512K, 128M+768K] set.
2. An extent entry corresponding to the address range
   [128M-256K, 128M-128K]

In such a scenario, test_check_exists() invoked for checking the
existence of address range [128M+768K, 256M] can lead to an
infinite loop as explained below:

- Checking for the extent entry fails.
- Checking for a bitmap entry results in the free space info in
  range [128M+512K, 128M+768K] being returned.
- rb_prev(info) returns NULL because the bitmap entry starting from
  offset 0 comes first in the RB tree.
- current_node = bitmap node.
- while (current_node)
	tmp = rb_next(bitmap_node); /* tmp is the extent-based free space entry */
	Since the extent-based free space entry's last address is smaller
	than the address being searched for (i.e. 128M+768K), we
	incorrectly obtain the extent node again as the "next right node"
	of the RB tree and thus end up looping infinitely.

This patch fixes the issue by advancing the search from the "tmp"
variable, which points to the most recently searched free space node,
instead of from "info".
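
Below is a minimal userspace sketch for illustration only (it is not the
kernel code: struct node, next_of() and the hard-coded values are
simplified stand-ins for the rb-tree entries, rb_next() and the scenario
described above). With buggy set to 1 it models the old loop, which
recomputes the successor of the unchanged "info" node and never
advances; with buggy set to 0 it models the fixed loop, which advances
from "tmp" and terminates after one iteration.

#include <stdio.h>

/* Simplified stand-in for a free space entry; "next" plays the role of
 * rb_next() on the offset-ordered tree. */
struct node {
	unsigned long long offset;	/* start of the free space range */
	unsigned long long bytes;	/* length of the free space range */
	struct node *next;
};

static struct node *next_of(struct node *n)
{
	return n ? n->next : NULL;
}

int main(void)
{
	const unsigned long long M = 1024ULL * 1024, K = 1024ULL;

	/* Extent entry [128M-256K, 128M-128K); it sorts after the bitmap
	 * entry, whose key is the bitmap's base offset 0. */
	struct node extent = { 128 * M - 256 * K, 128 * K, NULL };
	struct node bitmap = { 0, 256 * K, &extent };

	/* Range whose existence is being checked: [128M+768K, 256M). */
	unsigned long long offset = 128 * M + 768 * K;
	unsigned long long bytes = 256 * M - offset;

	int buggy = 1;			/* set to 0 to model the fixed loop */
	struct node *info = &bitmap;	/* bitmap entry found by the search */
	struct node *n = next_of(info);
	int steps = 0;

	while (n && steps < 5) {	/* cap the iterations for the demo */
		struct node *tmp = n;

		steps++;
		if (offset + bytes < tmp->offset)
			break;
		if (tmp->offset + tmp->bytes < offset) {
			/* buggy: next_of(info) yields the extent node again;
			 * fixed: next_of(tmp) yields NULL and the loop ends */
			n = next_of(buggy ? info : tmp);
			continue;
		}
		info = tmp;
		break;
	}
	printf("%s after %d iteration(s)\n",
	       n ? "still looping (capped)" : "terminated", steps);
	return 0;
}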

Reviewed-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: Chandan Rajendra <chandan@linux.vnet.ibm.com>
Signed-off-by: Feifei Xu <xufeifei@linux.vnet.ibm.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Feifei Xu 2016-06-01 19:18:23 +08:00 committed by David Sterba
parent 56244ef151
commit 5473e0c426
1 changed file with 2 additions and 2 deletions

@@ -3662,7 +3662,7 @@ int test_check_exists(struct btrfs_block_group_cache *cache,
 			if (tmp->offset + tmp->bytes < offset)
 				break;
 			if (offset + bytes < tmp->offset) {
-				n = rb_prev(&info->offset_index);
+				n = rb_prev(&tmp->offset_index);
 				continue;
 			}
 			info = tmp;
@@ -3676,7 +3676,7 @@ int test_check_exists(struct btrfs_block_group_cache *cache,
 			if (offset + bytes < tmp->offset)
 				break;
 			if (tmp->offset + tmp->bytes < offset) {
-				n = rb_next(&info->offset_index);
+				n = rb_next(&tmp->offset_index);
 				continue;
 			}
 			info = tmp;