drm/mm: Break long searches in fragmented address spaces

We try hard to select a suitable hole in the drm_mm the first time. But
if that is unsuccessful, we then have to look at neighbouring nodes,
which requires traversing the rbtree. Walking the rbtree can be slow
(much slower than a linear list for deep trees), and if the drm_mm has
been purposefully fragmented our search can be trapped for a long, long
time. For non-preemptible kernels, we need to break up long CPU-bound
sections by manually calling cond_resched(); similarly, we should also
bail out if we have been told to terminate. (In an ideal world, we
would break for any signal, but we need to trade that off against
having to perform the search again after ERESTARTSYS, which may itself
become a trap in which we make no forward progress.)
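
For illustration, the pattern applied below is the usual kernel idiom
for breaking up a long CPU-bound loop (a minimal sketch, not the patch
itself: struct candidate, first_candidate(), next_candidate() and
suitable() are hypothetical stand-ins for the drm_mm hole search, while
cond_resched(), current and fatal_signal_pending() are the real kernel
APIs being used):

	#include <linux/sched.h>	/* cond_resched(), current */
	#include <linux/sched/signal.h>	/* fatal_signal_pending() */

	static struct candidate *long_search(struct search_space *s)
	{
		struct candidate *c;

		for (c = first_candidate(s); c; c = next_candidate(c)) {
			/* Let other tasks run on non-preemptible kernels. */
			cond_resched();

			/* Abandon the walk if this task is being killed. */
			if (fatal_signal_pending(current))
				return NULL;

			if (suitable(c))
				return c;
		}

		return NULL;
	}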

Reported-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200207151720.2812125-1-chris@chris-wilson.co.uk
commit 7be1b9b8e9
parent 78a7b61aef
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   2020-02-07 15:17:20 +00:00
---
 drivers/gpu/drm/drm_mm.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -45,6 +45,7 @@
 #include <linux/export.h>
 #include <linux/interval_tree_generic.h>
 #include <linux/seq_file.h>
+#include <linux/sched/signal.h>
 #include <linux/slab.h>
 #include <linux/stacktrace.h>
@@ -366,6 +367,11 @@ next_hole(struct drm_mm *mm,
 	  struct drm_mm_node *node,
 	  enum drm_mm_insert_mode mode)
 {
+	/* Searching is slow; check if we ran out of time/patience */
+	cond_resched();
+	if (fatal_signal_pending(current))
+		return NULL;
+
 	switch (mode) {
 	default:
 	case DRM_MM_INSERT_BEST:
@@ -557,7 +563,7 @@ int drm_mm_insert_node_in_range(struct drm_mm * const mm,
 		return 0;
 	}
 
-	return -ENOSPC;
+	return signal_pending(current) ? -ERESTARTSYS : -ENOSPC;
 }
 EXPORT_SYMBOL(drm_mm_insert_node_in_range);
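
With this change, drm_mm_insert_node_in_range() can return -ERESTARTSYS
in addition to -ENOSPC once a signal is pending. A hedged sketch of
caller-side handling (the surrounding caller and its evict_and_retry()
fallback are hypothetical; the insert call itself matches the exported
signature):

	err = drm_mm_insert_node_in_range(mm, node, size, alignment, color,
					  start, end, DRM_MM_INSERT_BEST);
	if (err == -ERESTARTSYS)
		return err;	/* propagate; the syscall can be restarted */
	if (err == -ENOSPC)
		err = evict_and_retry(mm, node);	/* hypothetical fallback */

Propagating -ERESTARTSYS rather than silently retrying leaves it to the
caller to decide whether repeating the search is worthwhile, matching
the trade-off described in the commit message.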