/*
 * multiorder.c: Multi-order radix tree entry testing
 * Copyright (c) 2016 Intel Corporation
 * Author: Ross Zwisler <ross.zwisler@linux.intel.com>
 * Author: Matthew Wilcox <matthew.r.wilcox@intel.com>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms and conditions of the GNU General Public License,
 * version 2, as published by the Free Software Foundation.
 *
 * This program is distributed in the hope it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
 * more details.
 */
#include <linux/radix-tree.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <pthread.h>

#include "test.h"

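/*
 * Insert entries of assorted orders, then iterate the tree from every
 * possible starting index and check that each slot returned belongs to the
 * multi-order entry covering that index and reports the expected shift and
 * order.
 */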
void multiorder_iteration(void)
{
	RADIX_TREE(tree, GFP_KERNEL);
	struct radix_tree_iter iter;
	void **slot;
	int i, j, err;

	printv(1, "Multiorder iteration test\n");

#define NUM_ENTRIES 11
	int index[NUM_ENTRIES] = {0, 2, 4, 8, 16, 32, 34, 36, 64, 72, 128};
	int order[NUM_ENTRIES] = {1, 1, 2, 3, 4, 1, 0, 1, 3, 0, 7};

	for (i = 0; i < NUM_ENTRIES; i++) {
		err = item_insert_order(&tree, index[i], order[i]);
		assert(!err);
	}

	for (j = 0; j < 256; j++) {
		/* Find the first entry that a walk starting at j should see. */
		for (i = 0; i < NUM_ENTRIES; i++)
			if (j <= (index[i] | ((1 << order[i]) - 1)))
				break;

		radix_tree_for_each_slot(slot, &tree, &iter, j) {
			int height = order[i] / RADIX_TREE_MAP_SHIFT;
			int shift = height * RADIX_TREE_MAP_SHIFT;
			unsigned long mask = (1UL << order[i]) - 1;
			struct item *item = *slot;

			assert((iter.index | mask) == (index[i] | mask));
			assert(iter.shift == shift);
			assert(!radix_tree_is_internal_node(item));
			assert((item->index | mask) == (index[i] | mask));
			assert(item->order == order[i]);
			i++;
		}
	}

	item_kill_tree(&tree);
}

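/*
 * As above, but only a subset of the entries is tagged; the walks use
 * radix_tree_for_each_tagged(), and tag_tagged_items() propagates one tag
 * to another between passes.
 */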
void multiorder_tagged_iteration(void)
{
	RADIX_TREE(tree, GFP_KERNEL);
	struct radix_tree_iter iter;
	void **slot;
	int i, j;

	printv(1, "Multiorder tagged iteration test\n");

#define MT_NUM_ENTRIES 9
	int index[MT_NUM_ENTRIES] = {0, 2, 4, 16, 32, 40, 64, 72, 128};
	int order[MT_NUM_ENTRIES] = {1, 0, 2, 4, 3, 1, 3, 0, 7};

#define TAG_ENTRIES 7
	int tag_index[TAG_ENTRIES] = {0, 4, 16, 40, 64, 72, 128};

	for (i = 0; i < MT_NUM_ENTRIES; i++)
		assert(!item_insert_order(&tree, index[i], order[i]));

	assert(!radix_tree_tagged(&tree, 1));

	for (i = 0; i < TAG_ENTRIES; i++)
		assert(radix_tree_tag_set(&tree, tag_index[i], 1));

	for (j = 0; j < 256; j++) {
		int k;

		for (i = 0; i < TAG_ENTRIES; i++) {
			/* 'k' is the position of tag_index[i] within index[]. */
			for (k = i; index[k] < tag_index[i]; k++)
				;
			if (j <= (index[k] | ((1 << order[k]) - 1)))
				break;
		}

		radix_tree_for_each_tagged(slot, &tree, &iter, j, 1) {
			unsigned long mask;
			struct item *item = *slot;
			for (k = i; index[k] < tag_index[i]; k++)
				;
			mask = (1UL << order[k]) - 1;

			assert((iter.index | mask) == (tag_index[i] | mask));
			assert(!radix_tree_is_internal_node(item));
			assert((item->index | mask) == (tag_index[i] | mask));
			assert(item->order == order[k]);
			i++;
		}
	}

	/* Copy tag 1 to tag 2 on the tagged entries, then walk using tag 2. */
	assert(tag_tagged_items(&tree, 0, ~0UL, TAG_ENTRIES, XA_MARK_1,
				XA_MARK_2) == TAG_ENTRIES);

	for (j = 0; j < 256; j++) {
		int mask, k;

		for (i = 0; i < TAG_ENTRIES; i++) {
			for (k = i; index[k] < tag_index[i]; k++)
				;
			if (j <= (index[k] | ((1 << order[k]) - 1)))
				break;
		}

		radix_tree_for_each_tagged(slot, &tree, &iter, j, 2) {
			struct item *item = *slot;
			for (k = i; index[k] < tag_index[i]; k++)
				;
			mask = (1 << order[k]) - 1;

			assert((iter.index | mask) == (tag_index[i] | mask));
			assert(!radix_tree_is_internal_node(item));
			assert((item->index | mask) == (tag_index[i] | mask));
			assert(item->order == order[k]);
			i++;
		}
	}

	/* Copy tag 1 to tag 0 and check that every tagged index is seen. */
	assert(tag_tagged_items(&tree, 1, ~0UL, MT_NUM_ENTRIES * 2, XA_MARK_1,
				XA_MARK_0) == TAG_ENTRIES);
	i = 0;
	radix_tree_for_each_tagged(slot, &tree, &iter, 0, 0) {
		assert(iter.index == tag_index[i]);
		i++;
	}

	item_kill_tree(&tree);
}

/*
 * Regression test for a race in multi-order iteration, originally hit in
 * production on a v4.15-based kernel using order-9 PMD DAX entries.
 *
 * An order-2 entry occupies four slots in a struct radix_tree_node:
 *
 *	slots[] = [entry][sibling][sibling][sibling]
 *
 * where the three sibling slots point back at 'entry'.  When 'entry' is
 * deleted (radix_tree_delete() -> ... -> replace_slot()), the siblings are
 * cleared first and only then is 'entry' replaced with NULL, so a reader
 * iterating with radix_tree_for_each_slot() under nothing more than
 * rcu_read_lock() (the common pattern in mm/filemap.c) can briefly observe:
 *
 *	slots[] = [entry][NULL][sibling][sibling]
 *
 * skip_siblings() detected siblings by comparing each slot against the one
 * directly preceding it, so the NULL broke the chain: the remaining sibling
 * pointers, which point all the way back to 'entry', were mistaken for
 * internal node pointers and the iterator tried to descend into 'entry' as
 * if it were a struct radix_tree_node.  On a real kernel that crashes the
 * thread with a GP fault; in this test suite it shows up as a
 * heap-buffer-overflow report from AddressSanitizer.
 *
 * The creator/iterator threads below reliably reproduce the race.
 */

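/* Set by creator_func() once its insert/delete loop has finished. */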
bool stop_iteration = false;

static void *creator_func(void *ptr)
{
	/* 'order' is set up to ensure we have sibling entries */
	unsigned int order = RADIX_TREE_MAP_SHIFT - 1;
	struct radix_tree_root *tree = ptr;
	int i;

	for (i = 0; i < 10000; i++) {
		item_insert_order(tree, 0, order);
		item_delete_rcu(tree, 0);
	}

	stop_iteration = true;
	return NULL;
}

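/*
 * Walk the tree over and over under rcu_read_lock() while creator_func()
 * repeatedly inserts and deletes a multi-order entry at index 0.
 */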
static void *iterator_func(void *ptr)
{
	struct radix_tree_root *tree = ptr;
	struct radix_tree_iter iter;
	struct item *item;
	void **slot;

	while (!stop_iteration) {
		rcu_read_lock();
		radix_tree_for_each_slot(slot, tree, &iter, 0) {
			item = radix_tree_deref_slot(slot);

			/* The entry was deleted under us; move on. */
			if (!item)
				continue;
			/* The tree changed shape; retry this index. */
			if (radix_tree_deref_retry(item)) {
				slot = radix_tree_iter_retry(&iter);
				continue;
			}

			item_sanity(item, iter.index);
		}
		rcu_read_unlock();
	}

	return NULL;
}

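/*
 * One creator thread plus an iterator thread for each remaining online CPU
 * race against each other.  Before skip_siblings() was fixed, this reliably
 * triggered an AddressSanitizer heap-buffer-overflow report, typically
 * within a second.
 */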
static void multiorder_iteration_race(void)
{
	const int num_threads = sysconf(_SC_NPROCESSORS_ONLN);
	pthread_t worker_thread[num_threads];
	RADIX_TREE(tree, GFP_KERNEL);
	int i;

	pthread_create(&worker_thread[0], NULL, &creator_func, &tree);
	for (i = 1; i < num_threads; i++)
		pthread_create(&worker_thread[i], NULL, &iterator_func, &tree);

	for (i = 0; i < num_threads; i++)
		pthread_join(worker_thread[i], NULL);

	item_kill_tree(&tree);
}

void multiorder_checks(void)
{
	multiorder_iteration();
	multiorder_tagged_iteration();
	multiorder_iteration_race();

	radix_tree_cpu_dead(0);
}

int __weak main(void)
{
	radix_tree_init();
	multiorder_checks();
	return 0;
}