workqueue: lock cwq access in drain_workqueue

Take cwq->gcwq->lock in drain_workqueue() so that its check that the
workqueue is empty cannot race with cwq_dec_nr_in_flight(), which
decrements nr_active and then increments it again when it activates a
delayed work.
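
To make the window concrete, below is a minimal userspace sketch of the
interleaving; it is an illustration only, not kernel code.  The pthread
mutex stands in for gcwq->lock, and the nr_active / n_delayed counters
stand in for cwq->nr_active and the cwq->delayed_works list.  The
worker() thread models cwq_dec_nr_in_flight() finishing one work item
and immediately activating a delayed one; an unlocked check, like the
old drain_workqueue() test, can observe the transient state where both
counters read zero even though one work item is still pending.

/* Simplified model only: these are not the kernel's data structures. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static int nr_active = 1;	/* one work item currently executing */
static int n_delayed = 1;	/* one delayed work item waiting */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* "gcwq->lock" */

/* Models cwq_dec_nr_in_flight(): finish one work, activate a delayed one. */
static void *worker(void *arg)
{
	pthread_mutex_lock(&lock);
	nr_active--;		/* current work item is done ...         */
	if (n_delayed) {	/* ... but a delayed one gets activated, */
		n_delayed--;	/* so the queue is still not empty       */
		nr_active++;
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

/* Models the old, unlocked drain_workqueue() check. */
static void *check_unlocked(void *arg)
{
	/* unsynchronized reads: may land between the -- and ++ above */
	bool drained = !nr_active && !n_delayed;
	printf("unlocked: drained=%d (can be a false positive)\n", drained);
	return NULL;
}

/* Models the fixed check: the same reads, done under the lock. */
static void *check_locked(void *arg)
{
	bool drained;

	pthread_mutex_lock(&lock);
	drained = !nr_active && !n_delayed;
	pthread_mutex_unlock(&lock);
	printf("locked:   drained=%d (always consistent)\n", drained);
	return NULL;
}

int main(void)
{
	pthread_t t[3];

	pthread_create(&t[0], NULL, worker, NULL);
	pthread_create(&t[1], NULL, check_unlocked, NULL);
	pthread_create(&t[2], NULL, check_locked, NULL);
	for (int i = 0; i < 3; i++)
		pthread_join(t[i], NULL);
	return 0;
}

Taking the lock around the check, as the hunk below does, makes the two
reads atomic with respect to the worker's decrement/re-activate
sequence, so a false "drained" result can no longer be observed.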

We discovered this when a corner case in one of our drivers resulted in
us trying to destroy a workqueue whose remaining work would always
requeue itself on the same workqueue.  We would hit this race condition
and trip the BUG_ON at workqueue.c:3080.

Signed-off-by: Thomas Tuttle <ttuttle@chromium.org>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit fa2563e41c (parent df4e33ad24)
Author: Thomas Tuttle, 2011-09-14 16:22:28 -07:00, committed by Linus Torvalds
1 file changed, 6 insertions(+), 1 deletion(-)

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2412,8 +2412,13 @@ void drain_workqueue(struct workqueue_struct *wq)
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
+		bool drained;
 
-		if (!cwq->nr_active && list_empty(&cwq->delayed_works))
+		spin_lock_irq(&cwq->gcwq->lock);
+		drained = !cwq->nr_active && list_empty(&cwq->delayed_works);
+		spin_unlock_irq(&cwq->gcwq->lock);
+
+		if (drained)
 			continue;
 
 		if (++flush_cnt == 10 ||