drm/i915/shrinker: Report "unevictable" pages

Inside the shrinker we call can_release_pages() to determine whether we
can make forward progress in freeing up memory by unbinding an object.
When adding our report to the oom notifier, we should use the same
logic.
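
For reference, can_release_pages() boils down to three checks: the
object must be shmemfs-backed (otherwise there is no backing store to
return pages to), its pages must not be pinned for any reason beyond
their GPU bindings, and we must be able to either swap the contents out
or discard them. A minimal sketch, assuming a swap_available() helper
alongside the num_vma_bound() used in the diff below; the exact form
here is illustrative rather than a verbatim copy of the source:

	static bool can_release_pages(struct drm_i915_gem_object *obj)
	{
		/* Only shmemfs objects are backed by swap */
		if (!obj->base.filp)
			return false;

		/* Pages pinned for any reason other than being bound to
		 * the GPU cannot be freed by unbinding alone; we would
		 * not drop the final pin on the backing storage.
		 */
		if (obj->pages_pin_count != num_vma_bound(obj))
			return false;

		/* We can only return pages to the system if we can swap
		 * them out, or if the user marked them as purgeable.
		 */
		return swap_available() || obj->madv == I915_MADV_DONTNEED;
	}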

Whilst here, change the reporting from bytes to pages so that it looks
smaller to the user(!), is consistent with the neighbouring oom report
itself (which displays its counts in pages), and makes an unsigned long
overflow less likely.
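
As a concrete illustration of the conversion (assuming the common 4 KiB
page size, i.e. PAGE_SHIFT == 12): an 8 MiB object now contributes 2048
to the report rather than 8388608, and a 32-bit unsigned long counting
pages no longer wraps at a mere 4 GiB worth of objects:

	/* Illustrative only: the same bytes-to-pages conversion as the diff */
	unsigned long bytes = 8ul << 20;		/* an 8 MiB object */
	unsigned long pages = bytes >> PAGE_SHIFT;	/* 2048 with 4 KiB pages */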

v2: Split oversized format string into two lines

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461150592-27818-2-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

@@ -336,7 +336,7 @@ i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
 		container_of(nb, struct drm_i915_private, mm.oom_notifier);
 	struct shrinker_lock_uninterruptible slu;
 	struct drm_i915_gem_object *obj;
-	unsigned long pinned, bound, unbound, freed_pages;
+	unsigned long unevictable, bound, unbound, freed_pages;
 
 	if (!i915_gem_shrinker_lock_uninterruptible(dev_priv, &slu, 5000))
 		return NOTIFY_DONE;
@@ -347,33 +347,34 @@ i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
 	 * assert that there are no objects with pinned pages that are not
 	 * being pointed to by hardware.
	 */
-	unbound = bound = pinned = 0;
+	unbound = bound = unevictable = 0;
 	list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list) {
 		if (!obj->base.filp) /* not backed by a freeable object */
 			continue;
 
-		if (obj->pages_pin_count)
-			pinned += obj->base.size;
+		if (!can_release_pages(obj))
+			unevictable += obj->base.size >> PAGE_SHIFT;
 		else
-			unbound += obj->base.size;
+			unbound += obj->base.size >> PAGE_SHIFT;
 	}
 	list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
 		if (!obj->base.filp)
 			continue;
 
-		if (obj->pages_pin_count > num_vma_bound(obj))
-			pinned += obj->base.size;
+		if (!can_release_pages(obj))
+			unevictable += obj->base.size >> PAGE_SHIFT;
 		else
-			bound += obj->base.size;
+			bound += obj->base.size >> PAGE_SHIFT;
 	}
 	i915_gem_shrinker_unlock_uninterruptible(dev_priv, &slu);
 
 	if (freed_pages || unbound || bound)
-		pr_info("Purging GPU memory, %lu bytes freed, %lu bytes still pinned.\n",
-			freed_pages << PAGE_SHIFT, pinned);
+		pr_info("Purging GPU memory, %lu pages freed, "
+			"%lu pages still pinned.\n",
+			freed_pages, unevictable);
 	if (unbound || bound)
-		pr_err("%lu and %lu bytes still available in the "
+		pr_err("%lu and %lu pages still available in the "
 		       "bound and unbound GPU page lists.\n",
 		       bound, unbound);