License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default, all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
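As an illustrative sketch only (the exact boilerplate wording varies from
file to file), the identifier condenses a header such as:

    /*
     * This program is free software; you can redistribute it and/or modify
     * it under the terms of the GNU General Public License version 2 as
     * published by the Free Software Foundation.
     */

down to a single first line:

    // SPDX-License-Identifier: GPL-2.0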
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier                              # files
---------------------------------------------------|-------
GPL-2.0                                                11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". Results of that were:
SPDX license identifier                              # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                          930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier                              # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                          270
GPL-2.0+ WITH Linux-syscall-note                         169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
LGPL-2.1+ WITH Linux-syscall-note                         15
GPL-1.0+ WITH Linux-syscall-note                          14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
LGPL-2.0+ WITH Linux-syscall-note                          4
LGPL-2.1 WITH Linux-syscall-note                           3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later.
In total, over 70 hours of logged manual review of the spreadsheet was
done by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some cases
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors, and they have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 files patched in the initial version of this
series earlier in the week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
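As a sketch of the comment styles involved (the file names below are
hypothetical), the tag added as the first line of a source file versus a
header looks like:

    // SPDX-License-Identifier: GPL-2.0                    (e.g. foo.c)
    /* SPDX-License-Identifier: GPL-2.0 */                 (e.g. foo.h)

with */uapi/* headers carrying the syscall note exception instead, e.g.:

    /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */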
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
// SPDX-License-Identifier: GPL-2.0
/*
 * background writeback - scan btree for dirty data and write it to the backing
 * device
 *
 * Copyright 2010, 2011 Kent Overstreet <kent.overstreet@gmail.com>
 * Copyright 2012 Google, Inc.
 */

#include "bcache.h"
#include "btree.h"
#include "debug.h"
#include "writeback.h"

#include <linux/delay.h>
#include <linux/kthread.h>
#include <linux/sched/clock.h>
#include <trace/events/bcache.h>

/* Rate limiting */

static uint64_t __calc_target_rate(struct cached_dev *dc)
{
        struct cache_set *c = dc->disk.c;

        /*
         * This is the size of the cache, minus the amount used for
         * flash-only devices
         */
        uint64_t cache_sectors = c->nbuckets * c->sb.bucket_size -
                                atomic_long_read(&c->flash_dev_dirty_sectors);

        /*
         * Unfortunately there is no control of global dirty data. If the
         * user states that they want 10% dirty data in the cache, and has,
         * e.g., 5 backing volumes of equal size, we try and ensure each
         * backing volume uses about 2% of the cache for dirty data.
         */
        uint32_t bdev_share =
                div64_u64(bdev_sectors(dc->bdev) << WRITEBACK_SHARE_SHIFT,
                                c->cached_dev_sectors);

        uint64_t cache_dirty_target =
                div_u64(cache_sectors * dc->writeback_percent, 100);

        /* Ensure each backing dev gets at least one dirty share */
        if (bdev_share < 1)
                bdev_share = 1;

        return (cache_dirty_target * bdev_share) >> WRITEBACK_SHARE_SHIFT;
}

static void __update_writeback_rate(struct cached_dev *dc)
{
        /*
         * PI controller:
         * Figures out the amount that should be written per second.
         *
         * First, the error (number of sectors that are dirty beyond our
         * target) is calculated. The error is accumulated (numerically
         * integrated).
         *
         * Then, the proportional value and integral value are scaled
         * based on configured values. These are stored as inverses to
         * avoid fixed point math and to make configuration easy-- e.g.
         * the default value of 40 for writeback_rate_p_term_inverse
         * attempts to write at a rate that would retire all the dirty
         * blocks in 40 seconds.
         *
         * The writeback_rate_i_inverse value of 10000 means that 1/10000th
         * of the error is accumulated in the integral term per second.
         * This acts as a slow, long-term average that is not subject to
         * variations in usage like the p term.
         */
        int64_t target = __calc_target_rate(dc);
        int64_t dirty = bcache_dev_sectors_dirty(&dc->disk);
        int64_t error = dirty - target;
        int64_t proportional_scaled =
                div_s64(error, dc->writeback_rate_p_term_inverse);
        int64_t integral_scaled;
        uint32_t new_rate;

        if ((error < 0 && dc->writeback_rate_integral > 0) ||
            (error > 0 && time_before64(local_clock(),
                         dc->writeback_rate.next + NSEC_PER_MSEC))) {
                /*
                 * Only decrease the integral term if it's more than
                 * zero. Only increase the integral term if the device
                 * is keeping up. (Don't wind up the integral
                 * ineffectively in either case).
                 *
                 * It's necessary to scale this by
                 * writeback_rate_update_seconds to keep the integral
                 * term dimensioned properly.
                 */
                dc->writeback_rate_integral += error *
                        dc->writeback_rate_update_seconds;
        }

        integral_scaled = div_s64(dc->writeback_rate_integral,
                        dc->writeback_rate_i_term_inverse);

        new_rate = clamp_t(int32_t, (proportional_scaled + integral_scaled),
                        dc->writeback_rate_minimum, NSEC_PER_SEC);

        dc->writeback_rate_proportional = proportional_scaled;
        dc->writeback_rate_integral_scaled = integral_scaled;
        dc->writeback_rate_change = new_rate - dc->writeback_rate.rate;
        dc->writeback_rate.rate = new_rate;
        dc->writeback_rate_target = target;
}

static void update_writeback_rate(struct work_struct *work)
{
        struct cached_dev *dc = container_of(to_delayed_work(work),
                                             struct cached_dev,
                                             writeback_rate_update);
        struct cache_set *c = dc->disk.c;

        /*
         * should check BCACHE_DEV_RATE_DW_RUNNING before calling
         * cancel_delayed_work_sync().
         */
        set_bit(BCACHE_DEV_RATE_DW_RUNNING, &dc->disk.flags);
        /* paired with where BCACHE_DEV_RATE_DW_RUNNING is tested */
        smp_mb();

        /*
         * CACHE_SET_IO_DISABLE might be set via sysfs interface,
         * check it here too.
         */
        if (!test_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags) ||
            test_bit(CACHE_SET_IO_DISABLE, &c->flags)) {
                clear_bit(BCACHE_DEV_RATE_DW_RUNNING, &dc->disk.flags);
                /* paired with where BCACHE_DEV_RATE_DW_RUNNING is tested */
                smp_mb();
                return;
        }

        down_read(&dc->writeback_lock);

        if (atomic_read(&dc->has_dirty) &&
            dc->writeback_percent)
                __update_writeback_rate(dc);

        up_read(&dc->writeback_lock);

        /*
         * CACHE_SET_IO_DISABLE might be set via sysfs interface,
         * check it here too.
         */
        if (test_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags) &&
            !test_bit(CACHE_SET_IO_DISABLE, &c->flags)) {
                schedule_delayed_work(&dc->writeback_rate_update,
                              dc->writeback_rate_update_seconds * HZ);
        }

        /*
         * should check BCACHE_DEV_RATE_DW_RUNNING before calling
         * cancel_delayed_work_sync().
         */
        clear_bit(BCACHE_DEV_RATE_DW_RUNNING, &dc->disk.flags);
        /* paired with where BCACHE_DEV_RATE_DW_RUNNING is tested */
        smp_mb();
}

static unsigned writeback_delay(struct cached_dev *dc, unsigned sectors)
{
        if (test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) ||
            !dc->writeback_percent)
                return 0;

        return bch_next_delay(&dc->writeback_rate, sectors);
}

struct dirty_io {
        struct closure cl;
        struct cached_dev *dc;
        uint16_t sequence;
        struct bio bio;
};

static void dirty_init(struct keybuf_key *w)
{
        struct dirty_io *io = w->private;
        struct bio *bio = &io->bio;

        bio_init(bio, bio->bi_inline_vecs,
                 DIV_ROUND_UP(KEY_SIZE(&w->key), PAGE_SECTORS));
        if (!io->dc->writeback_percent)
                bio_set_prio(bio, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0));

        bio->bi_iter.bi_size = KEY_SIZE(&w->key) << 9;
        bio->bi_private = w;
        bch_bio_map(bio, NULL);
}

static void dirty_io_destructor(struct closure *cl)
{
        struct dirty_io *io = container_of(cl, struct dirty_io, cl);
        kfree(io);
}

static void write_dirty_finish(struct closure *cl)
{
        struct dirty_io *io = container_of(cl, struct dirty_io, cl);
        struct keybuf_key *w = io->bio.bi_private;
        struct cached_dev *dc = io->dc;

        bio_free_pages(&io->bio);

        /* This is kind of a dumb way of signalling errors. */
        if (KEY_DIRTY(&w->key)) {
                int ret;
                unsigned i;
                struct keylist keys;

                bch_keylist_init(&keys);

                bkey_copy(keys.top, &w->key);
                SET_KEY_DIRTY(keys.top, false);
                bch_keylist_push(&keys);

                for (i = 0; i < KEY_PTRS(&w->key); i++)
                        atomic_inc(&PTR_BUCKET(dc->disk.c, &w->key, i)->pin);

                ret = bch_btree_insert(dc->disk.c, &keys, NULL, &w->key);

                if (ret)
                        trace_bcache_writeback_collision(&w->key);

                atomic_long_inc(ret
                                ? &dc->disk.c->writeback_keys_failed
                                : &dc->disk.c->writeback_keys_done);
        }

        bch_keybuf_del(&dc->writeback_keys, w);
        up(&dc->in_flight);

        closure_return_with_destructor(cl, dirty_io_destructor);
}

static void dirty_endio(struct bio *bio)
{
        struct keybuf_key *w = bio->bi_private;
        struct dirty_io *io = w->private;

        if (bio->bi_status) {
                SET_KEY_DIRTY(&w->key, false);
                bch_count_backing_io_errors(io->dc, bio);
        }

        closure_put(&io->cl);
}

static void write_dirty(struct closure *cl)
{
        struct dirty_io *io = container_of(cl, struct dirty_io, cl);
        struct keybuf_key *w = io->bio.bi_private;
        struct cached_dev *dc = io->dc;

        uint16_t next_sequence;

        if (atomic_read(&dc->writeback_sequence_next) != io->sequence) {
                /* Not our turn to write; wait for a write to complete */
                closure_wait(&dc->writeback_ordering_wait, cl);

                if (atomic_read(&dc->writeback_sequence_next) == io->sequence) {
                        /*
                         * Edge case-- it happened in indeterminate order
                         * relative to when we were added to wait list..
                         */
                        closure_wake_up(&dc->writeback_ordering_wait);
                }

                continue_at(cl, write_dirty, io->dc->writeback_write_wq);
                return;
        }

        next_sequence = io->sequence + 1;

        /*
         * IO errors are signalled using the dirty bit on the key.
         * If we failed to read, we should not attempt to write to the
         * backing device. Instead, immediately go to write_dirty_finish
         * to clean up.
         */
        if (KEY_DIRTY(&w->key)) {
                dirty_init(w);
                bio_set_op_attrs(&io->bio, REQ_OP_WRITE, 0);
                io->bio.bi_iter.bi_sector = KEY_START(&w->key);
                bio_set_dev(&io->bio, io->dc->bdev);
                io->bio.bi_end_io = dirty_endio;

                /* I/O request sent to backing device */
                closure_bio_submit(io->dc->disk.c, &io->bio, cl);
        }

        atomic_set(&dc->writeback_sequence_next, next_sequence);
        closure_wake_up(&dc->writeback_ordering_wait);

        continue_at(cl, write_dirty_finish, io->dc->writeback_write_wq);
}

static void read_dirty_endio(struct bio *bio)
{
        struct keybuf_key *w = bio->bi_private;
        struct dirty_io *io = w->private;

        /* is_read = 1 */
        bch_count_io_errors(PTR_CACHE(io->dc->disk.c, &w->key, 0),
                            bio->bi_status, 1,
                            "reading dirty data from cache");

        dirty_endio(bio);
}

static void read_dirty_submit(struct closure *cl)
{
        struct dirty_io *io = container_of(cl, struct dirty_io, cl);

        closure_bio_submit(io->dc->disk.c, &io->bio, cl);

        continue_at(cl, write_dirty, io->dc->writeback_write_wq);
}

static void read_dirty(struct cached_dev *dc)
{
        unsigned delay = 0;
        struct keybuf_key *next, *keys[MAX_WRITEBACKS_IN_PASS], *w;
        size_t size;
        int nk, i;
        struct dirty_io *io;
        struct closure cl;
        uint16_t sequence = 0;

        BUG_ON(!llist_empty(&dc->writeback_ordering_wait.list));
        atomic_set(&dc->writeback_sequence_next, sequence);
        closure_init_stack(&cl);

        /*
         * XXX: if we error, background writeback just spins. Should use some
         * mempools.
         */

        next = bch_keybuf_next(&dc->writeback_keys);

	while (!kthread_should_stop() &&
	       !test_bit(CACHE_SET_IO_DISABLE, &dc->disk.c->flags) &&
	       next) {
		size = 0;
		nk = 0;

		do {
			BUG_ON(ptr_stale(dc->disk.c, &next->key, 0));

			/*
			 * Don't combine too many operations, even if they
			 * are all small.
			 */
			if (nk >= MAX_WRITEBACKS_IN_PASS)
				break;

			/*
			 * If the current operation is very large, don't
			 * further combine operations.
			 */
			if (size >= MAX_WRITESIZE_IN_PASS)
				break;

			/*
			 * Operations are only eligible to be combined
			 * if they are contiguous.
			 *
			 * TODO: add a heuristic willing to fire a
			 * certain amount of non-contiguous IO per pass,
			 * so that we can benefit from backing device
			 * command queueing.
			 */
			if ((nk != 0) && bkey_cmp(&keys[nk-1]->key,
						  &START_KEY(&next->key)))
				break;

			size += KEY_SIZE(&next->key);
			keys[nk++] = next;
		} while ((next = bch_keybuf_next(&dc->writeback_keys)));

		/* Now we have gathered a set of 1..5 keys to write back. */
		for (i = 0; i < nk; i++) {
			w = keys[i];

			io = kzalloc(sizeof(struct dirty_io) +
				     sizeof(struct bio_vec) *
				     DIV_ROUND_UP(KEY_SIZE(&w->key), PAGE_SECTORS),
				     GFP_KERNEL);
			if (!io)
				goto err;

			w->private	= io;
			io->dc		= dc;
			io->sequence	= sequence++;

			dirty_init(w);
			bio_set_op_attrs(&io->bio, REQ_OP_READ, 0);
			io->bio.bi_iter.bi_sector = PTR_OFFSET(&w->key, 0);
			bio_set_dev(&io->bio,
				    PTR_CACHE(dc->disk.c, &w->key, 0)->bdev);
			io->bio.bi_end_io	= read_dirty_endio;

			if (bch_bio_alloc_pages(&io->bio, GFP_KERNEL))
				goto err_free;

			trace_bcache_writeback(&w->key);

			down(&dc->in_flight);

			/* We've acquired a semaphore for the maximum
			 * simultaneous number of writebacks; from here
			 * everything happens asynchronously.
			 */
			closure_call(&io->cl, read_dirty_submit, NULL, &cl);
		}

		delay = writeback_delay(dc, size);

		/* If the control system would wait for at least half a
		 * second, and there's been no reqs hitting the backing disk
		 * for awhile: use an alternate mode where we have at most
		 * one contiguous set of writebacks in flight at a time. If
		 * someone wants to do IO it will be quick, as it will only
		 * have to contend with one operation in flight, and we'll
		 * be round-tripping data to the backing disk as quickly as
		 * it can accept it.
		 */
		if (delay >= HZ / 2) {
			/* 3 means at least 1.5 seconds, up to 7.5 if we
			 * have slowed way down.
			 */
			if (atomic_inc_return(&dc->backing_idle) >= 3) {
				/* Wait for current I/Os to finish */
				closure_sync(&cl);
				/* And immediately launch a new set. */
				delay = 0;
			}
		}

		while (!kthread_should_stop() &&
		       !test_bit(CACHE_SET_IO_DISABLE, &dc->disk.c->flags) &&
		       delay) {
			schedule_timeout_interruptible(delay);
			delay = writeback_delay(dc, 0);
		}
	}

	if (0) {
err_free:
		kfree(w->private);
err:
		bch_keybuf_del(&dc->writeback_keys, w);
	}

	/*
	 * Wait for outstanding writeback IOs to finish (and keybuf slots to be
	 * freed) before refilling again
	 */
	closure_sync(&cl);
}

/* Scan for dirty data */

void bcache_dev_sectors_dirty_add(struct cache_set *c, unsigned inode,
				  uint64_t offset, int nr_sectors)
{
	struct bcache_device *d = c->devices[inode];
	unsigned stripe_offset, stripe, sectors_dirty;

	if (!d)
		return;

	if (UUID_FLASH_ONLY(&c->uuids[inode]))
		atomic_long_add(nr_sectors, &c->flash_dev_dirty_sectors);

	stripe = offset_to_stripe(d, offset);
	stripe_offset = offset & (d->stripe_size - 1);

	while (nr_sectors) {
		int s = min_t(unsigned, abs(nr_sectors),
			      d->stripe_size - stripe_offset);

		if (nr_sectors < 0)
			s = -s;

		if (stripe >= d->nr_stripes)
			return;

		sectors_dirty = atomic_add_return(s,
					d->stripe_sectors_dirty + stripe);
		if (sectors_dirty == d->stripe_size)
			set_bit(stripe, d->full_dirty_stripes);
		else
			clear_bit(stripe, d->full_dirty_stripes);

		nr_sectors -= s;
		stripe_offset = 0;
		stripe++;
	}
}

static bool dirty_pred(struct keybuf *buf, struct bkey *k)
{
	struct cached_dev *dc = container_of(buf, struct cached_dev, writeback_keys);

	BUG_ON(KEY_INODE(k) != dc->disk.id);

	return KEY_DIRTY(k);
}

static void refill_full_stripes(struct cached_dev *dc)
{
	struct keybuf *buf = &dc->writeback_keys;
	unsigned start_stripe, stripe, next_stripe;
	bool wrapped = false;

	stripe = offset_to_stripe(&dc->disk, KEY_OFFSET(&buf->last_scanned));

	if (stripe >= dc->disk.nr_stripes)
		stripe = 0;

	start_stripe = stripe;

	while (1) {
		stripe = find_next_bit(dc->disk.full_dirty_stripes,
				       dc->disk.nr_stripes, stripe);

		if (stripe == dc->disk.nr_stripes)
			goto next;

		next_stripe = find_next_zero_bit(dc->disk.full_dirty_stripes,
						 dc->disk.nr_stripes, stripe);

		buf->last_scanned = KEY(dc->disk.id,
					stripe * dc->disk.stripe_size, 0);

		bch_refill_keybuf(dc->disk.c, buf,
				  &KEY(dc->disk.id,
				       next_stripe * dc->disk.stripe_size, 0),
				  dirty_pred);

		if (array_freelist_empty(&buf->freelist))
			return;

		stripe = next_stripe;
next:
		if (wrapped && stripe > start_stripe)
			return;

		if (stripe == dc->disk.nr_stripes) {
			stripe = 0;
			wrapped = true;
		}
	}
}

/*
 * Returns true if we scanned the entire disk
 */
static bool refill_dirty(struct cached_dev *dc)
{
	struct keybuf *buf = &dc->writeback_keys;
	struct bkey start = KEY(dc->disk.id, 0, 0);
	struct bkey end = KEY(dc->disk.id, MAX_KEY_OFFSET, 0);
	struct bkey start_pos;

	/*
	 * make sure keybuf pos is inside the range for this disk - at bringup
	 * we might not be attached yet so this disk's inode nr isn't
	 * initialized then
	 */
	if (bkey_cmp(&buf->last_scanned, &start) < 0 ||
	    bkey_cmp(&buf->last_scanned, &end) > 0)
		buf->last_scanned = start;

	if (dc->partial_stripes_expensive) {
		refill_full_stripes(dc);
		if (array_freelist_empty(&buf->freelist))
			return false;
	}

	start_pos = buf->last_scanned;
	bch_refill_keybuf(dc->disk.c, buf, &end, dirty_pred);

	if (bkey_cmp(&buf->last_scanned, &end) < 0)
		return false;

	/*
	 * If we get to the end start scanning again from the beginning, and
	 * only scan up to where we initially started scanning from:
	 */
	buf->last_scanned = start;
	bch_refill_keybuf(dc->disk.c, buf, &start_pos, dirty_pred);

	return bkey_cmp(&buf->last_scanned, &start_pos) >= 0;
}

static int bch_writeback_thread(void *arg)
{
	struct cached_dev *dc = arg;
	struct cache_set *c = dc->disk.c;
	bool searched_full_index;

	bch_ratelimit_reset(&dc->writeback_rate);

	while (!kthread_should_stop() &&
	       !test_bit(CACHE_SET_IO_DISABLE, &c->flags)) {
		down_write(&dc->writeback_lock);

bcache: properly set task state in bch_writeback_thread()
Kernel thread routine bch_writeback_thread() has the following code block,
447 down_write(&dc->writeback_lock);
448~450 if (check conditions) {
451 up_write(&dc->writeback_lock);
452 set_current_state(TASK_INTERRUPTIBLE);
453
454 if (kthread_should_stop())
455 return 0;
456
457 schedule();
458 continue;
459 }
If the condition check is true, the task state is set to TASK_INTERRUPTIBLE
and schedule() is called to wait for others to wake the thread up.
There are 2 issues in the current code,
1, The task state is set to TASK_INTERRUPTIBLE after the condition checks. If
another process changes the condition and calls wake_up_process(dc->
writeback_thread), then at line 452 the task state is set back to
TASK_INTERRUPTIBLE and the writeback kernel thread loses a chance to be
woken up.
2, At line 454, if kthread_should_stop() is true, the writeback kernel thread
returns to kernel/kthread.c:kthread() with state TASK_INTERRUPTIBLE and
calls do_exit(). It is not good to enter do_exit() with task state
TASK_INTERRUPTIBLE; in the following code path might_sleep() is called and a
warning message is reported by __might_sleep(): "WARNING: do not call
blocking ops when !TASK_RUNNING; state=1 set at [xxxx]".
For the first issue, the task state should be set before the condition checks.
Indeed, because dc->writeback_lock is required when modifying all the
conditions, calling set_current_state() inside the code block where dc->
writeback_lock is held is safe. But this is quite implicit, so I still move
set_current_state() before all the condition checks.
For the second issue, frankly speaking it does not hurt when a kernel thread
exits in the TASK_INTERRUPTIBLE state, but this warning message scares users,
making them feel there might be something risky with bcache that could hurt
their data. Setting the task state to TASK_RUNNING before returning fixes
this problem.
In alloc.c:allocator_wait() there is also a similar issue, which is fixed in
this patch as well.
Changelog:
v3: merge two similar fixes into one patch
v2: fix the race issue in the v1 patch.
v1: initial buggy fix.
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-02-08 03:41:41 +08:00
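
As a worked illustration of the corrected ordering described above, here is a minimal sketch of a kernel thread that declares its intent to sleep before checking its wait condition and restores TASK_RUNNING before returning; should_do_work() and do_work() are hypothetical placeholders, not bcache functions.

/*
 * Sketch only: the sleep pattern the commit above argues for.
 * should_do_work()/do_work() stand in for the real writeback conditions.
 */
static int example_kthread(void *arg)
{
	while (true) {
		/* Set TASK_INTERRUPTIBLE *before* testing the condition,
		 * so a concurrent wake_up_process() cannot be lost. */
		set_current_state(TASK_INTERRUPTIBLE);

		if (kthread_should_stop()) {
			/* Never return to kthread()/do_exit() while still
			 * in TASK_INTERRUPTIBLE. */
			set_current_state(TASK_RUNNING);
			return 0;
		}

		if (!should_do_work(arg)) {
			schedule();
			continue;
		}

		set_current_state(TASK_RUNNING);
		do_work(arg);
	}
}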

		set_current_state(TASK_INTERRUPTIBLE);

bcache: quit dc->writeback_thread when BCACHE_DEV_DETACHING is set
In patch "bcache: fix cached_dev->count usage for bch_cache_set_error()",
cached_dev_get() is called when creating dc->writeback_thread, and
cached_dev_put() is called when exiting dc->writeback_thread. This
modification works well unless people detach the bcache device manually by
'echo 1 > /sys/block/bcache<N>/bcache/detach'
because this sysfs interface only calls bch_cached_dev_detach(), which wakes
up dc->writeback_thread but does not stop it. The reason is, before patch
"bcache: fix cached_dev->count usage for bch_cache_set_error()", inside
bch_writeback_thread(), if the cache was not dirty after writeback,
cached_dev_put() would be called here. And in cached_dev_make_request(), when
a new write request turned the cache from clean to dirty, cached_dev_get()
would be called there. Since we no longer operate on dc->count in these
locations, the refcount dc->count cannot be dropped after the cache becomes
clean, and cached_dev_detach_finish() won't be called to detach the bcache
device.
This patch fixes the issue by checking whether BCACHE_DEV_DETACHING is
set inside bch_writeback_thread(). If this bit is set and the cache is clean
(no existing writeback_keys), break the while-loop, call cached_dev_put()
and quit the writeback thread.
Please note that if the cache is still dirty, the writeback thread should
continue to perform writeback even when BCACHE_DEV_DETACHING is set; this is
the original design of manual detach.
It is safe to do the following check without locking; let me explain why,
+ if (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) &&
+ (!atomic_read(&dc->has_dirty) || !dc->writeback_running)) {
If the kernel thread does not sleep and continues to run because the
conditions are not yet visible on the running CPU core, it just consumes more
CPU cycles and does no harm. This should-sleep-but-run case is safe here. We
just focus on the should-run-but-sleep case, which means the writeback thread
goes to sleep by mistake while it should continue to run.
1, First of all, no matter whether the writeback thread is hung or not,
kthread_stop() from cached_dev_detach_finish() will wake it up and
terminate it by making kthread_should_stop() return true. And in normal
run time, the bit at index BCACHE_DEV_DETACHING is always cleared, so the
condition
!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags)
is always true and can be treated as a constant.
2, If one of the following conditions is true, the writeback thread should
go to sleep,
"!atomic_read(&dc->has_dirty)" or "!dc->writeback_running)"
each of them independently controls whether the writeback thread should
sleep; let's analyse them one by one.
2.1 condition "!atomic_read(&dc->has_dirty)"
If dc->has_dirty is set from 0 to 1 on another CPU core, bcache will
call bch_writeback_queue() immediately or call bch_writeback_add(), which
indirectly calls bch_writeback_queue() too. In bch_writeback_queue(),
wake_up_process(dc->writeback_thread) is called. It sets the writeback
thread's task state to TASK_RUNNING, with an implicit memory barrier, and
then tries to wake up the writeback thread.
In the writeback thread, its task state is set to TASK_INTERRUPTIBLE before
doing the condition check. If the other CPU core sets TASK_RUNNING after the
writeback thread sets TASK_INTERRUPTIBLE, the writeback thread will be
scheduled to run very soon because its state is not TASK_INTERRUPTIBLE. If
the other CPU core sets TASK_RUNNING before the writeback thread sets
TASK_INTERRUPTIBLE, the implicit memory barrier of wake_up_process() will
make sure the modification of dc->has_dirty on the other CPU core is updated
and observed on the CPU core of the writeback thread. Therefore the condition
check will correctly be false, and the writeback code continues without
sleeping.
2.2 condition "!dc->writeback_running)"
dc->writeback_running can be changed via a sysfs file; every time it is
modified, a bch_writeback_queue() call always follows. So the change is
always observed on the CPU core of the writeback thread. If
dc->writeback_running is changed from 0 to 1 on another CPU core, this
condition check will observe the modification and allow the writeback
thread to continue to run without sleeping.
Now we can see that even without locking protection, the multiple-condition
check is safe here; no deadlock or process hang-up will happen.
I compose a separate patch because the patch "bcache: fix cached_dev->count
usage for bch_cache_set_error()" already has a "Reviewed-by:" from Hannes
Reinecke. Also this fix is not trivial and deserves a separate patch.
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Huijun Tang <tang.junhui@zte.com.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-03-19 08:36:15 +08:00
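
For reference, a small sketch of the waker side this reasoning depends on; the body below is reconstructed from the description in the commit (check for a live thread, then wake_up_process()), so treat it as an assumption rather than a quote of the source.

/*
 * Waker side sketch: the producer updates dc->has_dirty or
 * dc->writeback_running and then wakes the thread. The barrier implied
 * by wake_up_process() pairs with the sleeper setting TASK_INTERRUPTIBLE
 * before it re-checks those conditions, so the should-run-but-sleep case
 * cannot happen.
 */
static void example_writeback_queue(struct cached_dev *dc)
{
	if (!IS_ERR_OR_NULL(dc->writeback_thread))
		wake_up_process(dc->writeback_thread);
}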

		/*
		 * If the bcache device is detaching, skip here and continue
		 * to perform writeback. Otherwise, if no dirty data on cache,
		 * or there is dirty data on cache but writeback is disabled,
		 * the writeback thread should sleep here and wait for others
		 * to wake it up.
		 */
		if (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) &&
		    (!atomic_read(&dc->has_dirty) || !dc->writeback_running)) {
			up_write(&dc->writeback_lock);

			if (kthread_should_stop() ||
			    test_bit(CACHE_SET_IO_DISABLE, &c->flags)) {
				set_current_state(TASK_RUNNING);

bcache: fix cached_dev->count usage for bch_cache_set_error()
When bcache metadata I/O fails, bcache will call bch_cache_set_error()
to retire the whole cache set. The expected behavior when retiring a cache
set is to unregister the cache set, unregister all backing devices
attached to this cache set, then remove the sysfs entries of the cache set
and all attached backing devices, and finally release the memory of structs
cache_set, cache, cached_dev and bcache_device.
In my testing, when a journal I/O failure is triggered by a disconnected
cache device, sometimes the cache set cannot be retired, its sysfs
entry /sys/fs/bcache/<uuid> still exists and the backing device still
references it. This is not the expected behavior.
When metadata I/O fails, the call sequence to retire the whole cache set is,
bch_cache_set_error()
bch_cache_set_unregister()
bch_cache_set_stop()
__cache_set_unregister() <- called as callback by calling
closure_queue(&c->caching)
cache_set_flush() <- called as a callback when refcount
of cache_set->caching is 0
cache_set_free() <- called as a callback when refcount
of cache_set->cl is 0
bch_cache_set_release() <- called as a callback when refcount
of cache_set->kobj is 0
I find that if the kernel thread bch_writeback_thread() quits the while-loop
when kthread_should_stop() is true and searched_full_index is false, the
closure callback cache_set_flush() set by continue_at() will never be called.
The result is that bcache fails to retire the whole cache set.
cache_set_flush() will be called when the refcount of closure c->caching is
0, and in function bcache_device_detach() the refcount of closure c->caching
is released to 0 by closure_put(). In the metadata error code path, function
bcache_device_detach() is called by cached_dev_detach_finish(). This is a
callback routine called when cached_dev->count is 0, and this refcount
is decreased by cached_dev_put().
The above dependency indicates that cache_set_flush() will be called when the
refcount of cache_set->cl is 0, and the refcount of cache_set->cl reaches 0
when the refcount of cached_dev->count is 0.
The reason why cached_dev->count is sometimes not 0 (when metadata I/O fails
and bch_cache_set_error() is called) is that, in bch_writeback_thread(), the
refcount of cached_dev is not decreased properly.
In bch_writeback_thread(), cached_dev_put() is called only when
searched_full_index is true and cached_dev->writeback_keys is empty, a.k.a
there is no dirty data on cache. Most of the run time this is correct, but
when bch_writeback_thread() quits the while-loop while the cache is still
dirty, the current code forgets to call cached_dev_put() before this kernel
thread exits. This is why sometimes cache_set_flush() is not executed and the
cache set fails to be retired.
The reason cached_dev_put() is called in bch_writeback_rate() is that, when
the cache device changes from clean to dirty, cached_dev_get() is called, to
make sure that during writeback operations neither the backing nor the cache
device is released.
Adding the following code in bch_writeback_thread() does not work,
static int bch_writeback_thread(void *arg)
}
+ if (atomic_read(&dc->has_dirty))
+ cached_dev_put()
+
return 0;
}
because the writeback kernel thread can be woken up and started via the sysfs
entry:
echo 1 > /sys/block/bcache<N>/bcache/writeback_running
It is difficult to check whether the backing device is dirty without races or
an extra lock, so the above modification would introduce potential refcount
underflow under some conditions.
The correct fix is to take the cached dev refcount when creating the kernel
thread, and to put it before the kernel thread exits. Then bcache does not
need to take a cached dev refcount when the cache turns from clean to dirty,
or to put a cached dev refcount when the cache turns from dirty to clean. The
writeback kernel thread is always safe to reference data structures from the
cache set, cache and cached device (because a refcount of the cached device
is already taken for it), and no matter whether the kernel thread is stopped
by I/O errors or a system reboot, cached_dev->count can always be used
correctly.
The patch is simple, but understanding how it works is quite complicated.
Changelog:
v2: set dc->writeback_thread to NULL in this patch, as suggested by Hannes.
v1: initial version for review.
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-03-19 08:36:14 +08:00
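
A brief sketch of the refcount discipline the commit settles on: take the cached_dev reference once when the writeback thread is created and drop it only when the thread exits. The start-up helper name below is illustrative; only cached_dev_get()/cached_dev_put() and bch_writeback_thread() come from the text, the rest uses the standard kthread_create() API under stated assumptions.

/*
 * Sketch only: thread-lifetime refcounting instead of holding/dropping
 * dc->count on clean<->dirty transitions.
 */
static int example_start_writeback_thread(struct cached_dev *dc)
{
	cached_dev_get(dc);	/* held for the whole thread lifetime */
	dc->writeback_thread = kthread_create(bch_writeback_thread, dc,
					      "bcache_writeback");
	if (IS_ERR(dc->writeback_thread)) {
		cached_dev_put(dc);	/* creation failed, drop it again */
		return PTR_ERR(dc->writeback_thread);
	}
	return 0;
}

bch_writeback_thread() then calls cached_dev_put(dc) right before it returns, as the code at the end of this function shows.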

				break;
			}

			schedule();
			continue;
		}
		set_current_state(TASK_RUNNING);

		searched_full_index = refill_dirty(dc);

		if (searched_full_index &&
		    RB_EMPTY_ROOT(&dc->writeback_keys.keys)) {
			atomic_set(&dc->has_dirty, 0);
			SET_BDEV_STATE(&dc->sb, BDEV_STATE_CLEAN);
			bch_write_bdev_super(dc, NULL);
			/*
			 * If bcache device is detaching via sysfs interface,
			 * writeback thread should stop after there is no dirty
			 * data on cache. BCACHE_DEV_DETACHING flag is set in
			 * bch_cached_dev_detach().
			 */
			if (test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags))
				break;
		}

		up_write(&dc->writeback_lock);

		read_dirty(dc);

		if (searched_full_index) {
			unsigned delay = dc->writeback_delay * HZ;

			while (delay &&
			       !kthread_should_stop() &&
			       !test_bit(CACHE_SET_IO_DISABLE, &c->flags) &&
			       !test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags))
				delay = schedule_timeout_interruptible(delay);

			bch_ratelimit_reset(&dc->writeback_rate);
		}
	}

	cached_dev_put(dc);
	wait_for_kthread_stop();
bcache: fix cached_dev->count usage for bch_cache_set_error()
When bcache metadata I/O fails, bcache calls bch_cache_set_error()
to retire the whole cache set. The expected behavior when retiring a cache
set is to unregister the cache set, unregister all backing devices
attached to this cache set, then remove the sysfs entries of the cache set
and all attached backing devices, and finally release the memory of the
structs cache_set, cache, cached_dev and bcache_device.
In my testing, when a journal I/O failure is triggered by a disconnected
cache device, sometimes the cache set cannot be retired, its sysfs
entry /sys/fs/bcache/<uuid> still exists and the backing device still
references it. This is not the expected behavior.
When metadata I/O fails, the call sequence to retire the whole cache set is,
bch_cache_set_error()
bch_cache_set_unregister()
bch_cache_set_stop()
__cache_set_unregister()   <- called as a callback by calling
                              closure_queue(&c->caching)
cache_set_flush()          <- called as a callback when the refcount
                              of cache_set->caching is 0
cache_set_free()           <- called as a callback when the refcount
                              of cache_set->cl is 0
bch_cache_set_release()    <- called as a callback when the refcount
                              of cache_set->kobj is 0
I find that if the kernel thread bch_writeback_thread() quits its while-loop
when kthread_should_stop() is true and searched_full_index is false, the
closure callback cache_set_flush() set by continue_at() is never called. As
a result, bcache fails to retire the whole cache set.
cache_set_flush() is called when the refcount of closure c->caching is 0,
and in function bcache_device_detach() the refcount of closure c->caching is
released to 0 by closure_put(). In the metadata error code path, function
bcache_device_detach() is called by cached_dev_detach_finish(). This is a
callback routine called when cached_dev->count is 0. This refcount
is decreased by cached_dev_put().
The above dependency indicates that cache_set_flush() is called when the
refcount of cache_set->cl is 0, and the refcount of cache_set->cl drops to 0
when the refcount of cached_dev->count is 0.
The reason why sometimes cached_dev->count is not 0 (when metadata I/O fails
and bch_cache_set_error() is called) is that in bch_writeback_thread() the
refcount of cached_dev is not decreased properly.
In bch_writeback_thread(), cached_dev_put() is called only when
searched_full_index is true and cached_dev->writeback_keys is empty, i.e.
there is no dirty data on the cache. Most of the time this is correct, but
when bch_writeback_thread() quits the while-loop while the cache is still
dirty, the current code forgets to call cached_dev_put() before the kernel
thread exits. This is why sometimes cache_set_flush() is not executed and
the cache set fails to be retired.
The reason cached_dev_put() is called in bch_writeback_rate() is that when
the cache device changes from clean to dirty, cached_dev_get() is called to
make sure that during writeback operations neither the backing nor the cache
device is released.
Adding the following code in bch_writeback_thread() does not work,
static int bch_writeback_thread(void *arg)
	}
+	if (atomic_read(&dc->has_dirty))
+		cached_dev_put()
+
	return 0;
}
because the writeback kernel thread can be woken up and started via the
sysfs entry:
echo 1 > /sys/block/bcache<N>/bcache/writeback_running
It is difficult to check whether the backing device is dirty without races
and extra locks, so the above modification would introduce a potential
refcount underflow in some conditions.
The correct fix is to take the cached dev refcount when creating the kernel
thread and to put it before the kernel thread exits. Then bcache does not
need to take a cached dev refcount when the cache turns from clean to dirty,
or to put a cached dev refcount when the cache turns from dirty to clean.
The writeback kernel thread is always safe to reference data structures from
the cache set, cache and cached device (because a refcount of the cached
device is already taken for it), and no matter whether the kernel thread is
stopped by I/O errors or a system reboot, cached_dev->count can always be
used correctly.
The patch is simple, but understanding how it works is quite complicated.
Changelog:
v2: set dc->writeback_thread to NULL in this patch, as suggested by Hannes.
v1: initial version for review.
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-03-19 08:36:14 +08:00
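A minimal sketch of the refcount rule this fix establishes, assuming the
helper names from the commit text; the real bch_writeback_thread() does much
more in its loop:

    /*
     * Sketch only: the creator takes a reference (cached_dev_get() before
     * kthread_create()), and the thread drops it exactly once on its way
     * out, no matter why it stops.
     */
    static int bch_writeback_thread(void *arg)
    {
            struct cached_dev *dc = arg;

            while (!kthread_should_stop()) {
                    /* ... write back dirty data ... */
            }

            dc->writeback_thread = NULL;
            cached_dev_put(dc);    /* balances the creator's cached_dev_get() */
            return 0;
    }

Since the reference is tied to the thread's lifetime rather than to the
clean/dirty transitions, the writeback_running sysfs race described above can
no longer underflow cached_dev->count.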
|
|
|
|
2013-07-25 08:50:06 +08:00
|
|
|
return 0;
|
2013-03-24 07:11:31 +08:00
|
|
|
}
|
|
|
|
|
2013-05-12 08:07:26 +08:00
|
|
|
/* Init */
|
bcache: fix I/O significant decline while backend devices registering
I attached several backing devices to the same cache set and produced lots
of dirty data by running small random I/O writes for a long time. Then I
kept running I/O on the other cached devices and stopped one cached device;
after a while I registered the stopped device again, and I saw the running
I/O on the other cached devices drop significantly, sometimes even to zero.
In the current code, bcache traverses each key and btree node to count the
dirty data while holding the btree read lock, so the write threads cannot
take the btree write lock. When the registering device has many keys and
btree nodes, this can last several seconds, so the write I/Os on the other
cached devices are blocked and decline significantly.
In this patch, when a device registers to a cache set while other cached
devices have running I/Os, we count the amount of dirty data of the device
incrementally, so that the other cached devices are not blocked the whole
time.
Patch v2: Rename some variables and macro names as Coly suggested.
Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-07-26 12:17:36 +08:00
|
|
|
#define INIT_KEYS_EACH_TIME 500000
|
|
|
|
#define INIT_KEYS_SLEEP_MS 100
|
2013-05-12 08:07:26 +08:00
|
|
|
|
2013-07-25 08:44:17 +08:00
|
|
|
struct sectors_dirty_init {
|
|
|
|
struct btree_op op;
|
|
|
|
unsigned inode;
|
2018-07-26 12:17:36 +08:00
|
|
|
size_t count;
|
|
|
|
struct bkey start;
|
2013-07-25 08:44:17 +08:00
|
|
|
};
|
|
|
|
|
|
|
|
static int sectors_dirty_init_fn(struct btree_op *_op, struct btree *b,
|
2013-09-11 09:48:51 +08:00
|
|
|
struct bkey *k)
|
2013-05-12 08:07:26 +08:00
|
|
|
{
|
2013-07-25 08:44:17 +08:00
|
|
|
struct sectors_dirty_init *op = container_of(_op,
|
|
|
|
struct sectors_dirty_init, op);
|
2013-09-11 09:48:51 +08:00
|
|
|
if (KEY_INODE(k) > op->inode)
|
|
|
|
return MAP_DONE;
|
2013-05-12 08:07:26 +08:00
|
|
|
|
2013-09-11 09:48:51 +08:00
|
|
|
if (KEY_DIRTY(k))
|
|
|
|
bcache_dev_sectors_dirty_add(b->c, KEY_INODE(k),
|
|
|
|
KEY_START(k), KEY_SIZE(k));
|
|
|
|
|
2018-07-26 12:17:36 +08:00
|
|
|
op->count++;
|
|
|
|
if (atomic_read(&b->c->search_inflight) &&
|
|
|
|
!(op->count % INIT_KEYS_EACH_TIME)) {
|
|
|
|
bkey_copy_key(&op->start, k);
|
|
|
|
return -EAGAIN;
|
|
|
|
}
|
|
|
|
|
2013-09-11 09:48:51 +08:00
|
|
|
return MAP_CONTINUE;
|
2013-05-12 08:07:26 +08:00
|
|
|
}
|
|
|
|
|
2017-09-07 01:28:53 +08:00
|
|
|
void bch_sectors_dirty_init(struct bcache_device *d)
|
2013-05-12 08:07:26 +08:00
|
|
|
{
|
2013-07-25 08:44:17 +08:00
|
|
|
struct sectors_dirty_init op;
|
2018-07-26 12:17:36 +08:00
|
|
|
int ret;
|
2013-05-12 08:07:26 +08:00
|
|
|
|
2013-07-25 09:04:18 +08:00
|
|
|
bch_btree_op_init(&op.op, -1);
|
2017-09-07 01:28:53 +08:00
|
|
|
op.inode = d->id;
|
2018-07-26 12:17:36 +08:00
|
|
|
op.count = 0;
|
|
|
|
op.start = KEY(op.inode, 0, 0);
|
|
|
|
|
|
|
|
do {
|
|
|
|
ret = bch_btree_map_keys(&op.op, d->c, &op.start,
|
|
|
|
sectors_dirty_init_fn, 0);
|
|
|
|
if (ret == -EAGAIN)
|
|
|
|
schedule_timeout_interruptible(
|
|
|
|
msecs_to_jiffies(INIT_KEYS_SLEEP_MS));
|
|
|
|
else if (ret < 0) {
|
|
|
|
pr_warn("sectors dirty init failed, ret=%d!", ret);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
} while (ret == -EAGAIN);
|
2013-05-12 08:07:26 +08:00
|
|
|
}
|
|
|
|
|
2014-05-02 04:48:57 +08:00
|
|
|
void bch_cached_dev_writeback_init(struct cached_dev *dc)
|
2013-03-24 07:11:31 +08:00
|
|
|
{
|
2013-09-24 14:17:31 +08:00
|
|
|
sema_init(&dc->in_flight, 64);
|
2013-03-24 07:11:31 +08:00
|
|
|
init_rwsem(&dc->writeback_lock);
|
2013-06-05 21:24:39 +08:00
|
|
|
bch_keybuf_init(&dc->writeback_keys);
|
2013-03-24 07:11:31 +08:00
|
|
|
|
|
|
|
dc->writeback_metadata = true;
|
|
|
|
dc->writeback_running = true;
|
|
|
|
dc->writeback_percent = 10;
|
|
|
|
dc->writeback_delay = 30;
|
|
|
|
dc->writeback_rate.rate = 1024;
|
2017-10-14 07:35:37 +08:00
|
|
|
dc->writeback_rate_minimum = 8;
|
2013-03-24 07:11:31 +08:00
|
|
|
|
2018-02-08 03:41:44 +08:00
|
|
|
dc->writeback_rate_update_seconds = WRITEBACK_RATE_UPDATE_SECS_DEFAULT;
|
bcache: implement PI controller for writeback rate
bcache uses a control system to attempt to keep the amount of dirty data
in cache at a user-configured level, while not responding excessively to
transients and variations in write rate. Previously, the system was a
PD controller; but the output from it was integrated, turning the
Proportional term into an Integral term, and turning the Derivative term
into a crude Proportional term. Performance of the controller has been
uneven in production, and it has tended to respond slowly, oscillate,
and overshoot.
This patch set replaces the current control system with an explicit PI
controller and tuning that should be correct for most hardware. By
default, it attempts to write at a rate that would retire 1/40th of the
current excess blocks per second. An integral term in turn works to
remove steady state errors.
IMO, this yields benefits in simplicity (removing weighted average
filtering, etc) and system performance.
Another small change is that a tunable parameter is introduced to allow the
user to specify a minimum rate at which dirty blocks are retired.
There is a slight difference from earlier versions of the patch in
integral handling to prevent excessive negative integral windup.
Signed-off-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-10-14 07:35:36 +08:00
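A minimal standalone sketch of the control law described above, assuming
p_term_inverse = 40 and i_term_inverse = 10000 (the names and clamping
details of the real __update_writeback_rate() differ; the integral
accumulator here is kept by the caller):

    /* Returns a new writeback rate in sectors/sec; run once per update period. */
    static int64_t pi_writeback_rate(int64_t dirty, int64_t target,
                                     int64_t *integral,        /* persistent */
                                     int64_t p_term_inverse,   /* e.g. 40    */
                                     int64_t i_term_inverse,   /* e.g. 10000 */
                                     int64_t update_seconds,
                                     int64_t rate_minimum)
    {
            int64_t error = dirty - target;
            int64_t proportional = error / p_term_inverse;
            int64_t rate;

            /* accumulate error over time, scaled down to limit windup */
            *integral += error * update_seconds;
            rate = proportional + *integral / i_term_inverse;

            return rate > rate_minimum ? rate : rate_minimum;
    }

With p_term_inverse = 40, the proportional term alone tries to retire 1/40th
of the current excess per second, matching the description above; the
integral term then removes any remaining steady-state error.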
|
|
|
dc->writeback_rate_p_term_inverse = 40;
|
|
|
|
dc->writeback_rate_i_term_inverse = 10000;
|
2013-03-24 07:11:31 +08:00
|
|
|
|
bcache: stop dc->writeback_rate_update properly
struct delayed_work writeback_rate_update in struct cached_dev is a delayed
work which calls update_writeback_rate() periodically (the interval is
defined by dc->writeback_rate_update_seconds).
When a metadata I/O error happens on the cache device, the bcache error
handling routine bch_cache_set_error() calls bch_cache_set_unregister() to
retire the whole cache set. On the unregister code path, this delayed work is
stopped by calling cancel_delayed_work_sync(&dc->writeback_rate_update).
dc->writeback_rate_update is a delayed work unlike others in bcache: in its
routine update_writeback_rate(), the delayed work re-arms itself. That means
that when cancel_delayed_work_sync() returns, this delayed work can still be
executed after the interval defined by dc->writeback_rate_update_seconds.
The problem is, after cancel_delayed_work_sync() returns, the cache set
unregister code path continues and releases the memory of struct cache_set.
When the delayed work is then scheduled to run, __update_writeback_rate()
references the already released cache_set memory and triggers a NULL
pointer dereference fault.
This patch introduces two more bcache device flags,
- BCACHE_DEV_WB_RUNNING
bit set: the bcache device is in writeback mode and running, it is OK for
dc->writeback_rate_update to re-arm itself.
bit clear: the bcache device is trying to stop dc->writeback_rate_update,
this delayed work should not re-arm itself and should quit.
- BCACHE_DEV_RATE_DW_RUNNING
bit set: routine update_writeback_rate() is executing.
bit clear: routine update_writeback_rate() has quit.
This patch also adds a function cancel_writeback_rate_update_dwork() which
waits for dc->writeback_rate_update to quit before canceling it by calling
cancel_delayed_work_sync(). In order to avoid a deadlock if
dc->writeback_rate_update unexpectedly fails to quit, after time_out seconds
this function gives up and calls cancel_delayed_work_sync() anyway.
Here is how this patch stops the self re-arming delayed work properly with
the above flags.
update_writeback_rate() sets BCACHE_DEV_RATE_DW_RUNNING at its beginning
and clears BCACHE_DEV_RATE_DW_RUNNING at its end. Before calling
cancel_writeback_rate_update_dwork(), clear flag BCACHE_DEV_WB_RUNNING.
Before calling cancel_delayed_work_sync(), wait until flag
BCACHE_DEV_RATE_DW_RUNNING is clear. So when cancel_delayed_work_sync() is
called, dc->writeback_rate_update has either already re-armed itself or has
quit after seeing BCACHE_DEV_WB_RUNNING cleared. In both cases the delayed
work routine update_writeback_rate() won't be executed after
cancel_delayed_work_sync() returns.
Inside update_writeback_rate(), flag BCACHE_DEV_WB_RUNNING is checked before
calling schedule_delayed_work(). If this flag is cleared, it means someone
is about to stop the delayed work. Because flag BCACHE_DEV_RATE_DW_RUNNING
is already set and cancel_delayed_work_sync() has to wait for this flag to
be cleared, we don't need to worry about a race condition here.
If update_writeback_rate() is scheduled to run after checking
BCACHE_DEV_RATE_DW_RUNNING and before calling cancel_delayed_work_sync()
in cancel_writeback_rate_update_dwork(), it is also safe, because at this
moment BCACHE_DEV_WB_RUNNING is cleared with a memory barrier. As mentioned
previously, update_writeback_rate() will see that BCACHE_DEV_WB_RUNNING is
clear and quit immediately.
Because update_writeback_rate() has more dependencies on struct cache_set
memory, dc->writeback_rate_update is not a simple self re-arming delayed
work. After trying many different methods (e.g. holding dc->count, or using
locks), this is the only way I found that properly stops the
dc->writeback_rate_update delayed work.
Changelog:
v3: change values of BCACHE_DEV_WB_RUNNING and BCACHE_DEV_RATE_DW_RUNNING
to bit indexes, for test_bit().
v2: Try to fix the race issue which was pointed out by Junhui.
v1: The initial version for review
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Junhui Tang <tang.junhui@zte.com.cn>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-03-19 08:36:16 +08:00
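A sketch of the flag protocol described above, under the assumption that the
flags live in dc->disk.flags; the real update_writeback_rate() also
recomputes the rate, and the real cancel helper bounds its wait differently:

    static void update_writeback_rate(struct work_struct *work)
    {
            struct cached_dev *dc = container_of(to_delayed_work(work),
                                                 struct cached_dev,
                                                 writeback_rate_update);

            set_bit(BCACHE_DEV_RATE_DW_RUNNING, &dc->disk.flags);
            smp_mb__after_atomic();

            if (!test_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags))
                    goto out;       /* someone is stopping us: do not re-arm */

            /* ... recompute dc->writeback_rate here ... */

            schedule_delayed_work(&dc->writeback_rate_update,
                                  dc->writeback_rate_update_seconds * HZ);
    out:
            clear_bit(BCACHE_DEV_RATE_DW_RUNNING, &dc->disk.flags);
            smp_mb__after_atomic();
    }

    static void cancel_writeback_rate_update_dwork(struct cached_dev *dc)
    {
            int timeout = 60 * HZ;  /* arbitrary bound for this sketch */

            clear_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags);
            smp_mb__after_atomic();

            /* wait for a running update_writeback_rate() to finish */
            while (test_bit(BCACHE_DEV_RATE_DW_RUNNING, &dc->disk.flags) &&
                   timeout-- > 0)
                    schedule_timeout_interruptible(1);

            cancel_delayed_work_sync(&dc->writeback_rate_update);
    }

Because BCACHE_DEV_RATE_DW_RUNNING stays set for the whole body of the
worker, the cancel path cannot slip in between the check of
BCACHE_DEV_WB_RUNNING and the re-arm.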
|
|
|
WARN_ON(test_and_clear_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags));
|
2014-05-02 04:48:57 +08:00
|
|
|
INIT_DELAYED_WORK(&dc->writeback_rate_update, update_writeback_rate);
|
|
|
|
}
|
|
|
|
|
|
|
|
int bch_cached_dev_writeback_start(struct cached_dev *dc)
|
|
|
|
{
|
2017-09-06 14:25:59 +08:00
|
|
|
dc->writeback_write_wq = alloc_workqueue("bcache_writeback_wq",
|
|
|
|
WQ_MEM_RECLAIM, 0);
|
|
|
|
if (!dc->writeback_write_wq)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
2018-03-19 08:36:14 +08:00
|
|
|
cached_dev_get(dc);
|
2013-07-25 08:50:06 +08:00
|
|
|
dc->writeback_thread = kthread_create(bch_writeback_thread, dc,
|
|
|
|
"bcache_writeback");
|
2018-03-19 08:36:14 +08:00
|
|
|
if (IS_ERR(dc->writeback_thread)) {
|
|
|
|
cached_dev_put(dc);
|
2013-07-25 08:50:06 +08:00
|
|
|
return PTR_ERR(dc->writeback_thread);
|
2018-03-19 08:36:14 +08:00
|
|
|
}
|
2013-07-25 08:50:06 +08:00
|
|
|
|
2018-03-19 08:36:16 +08:00
|
|
|
WARN_ON(test_and_set_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags));
|
2013-03-24 07:11:31 +08:00
|
|
|
schedule_delayed_work(&dc->writeback_rate_update,
|
|
|
|
dc->writeback_rate_update_seconds * HZ);
|
|
|
|
|
2014-05-02 04:48:57 +08:00
|
|
|
bch_writeback_queue(dc);
|
|
|
|
|
2013-03-24 07:11:31 +08:00
|
|
|
return 0;
|
|
|
|
}
|