Merge tag 'topic/i915-gem-next-2021-03-26' of ssh://git.freedesktop.org/git/drm/drm into drm-next

special i915-gem-next pull as requested

- Conversion to dma_resv locking, obj->mm.lock is gone (Maarten, with
  help from Thomas Hellström); see the ww locking sketch after this list
- watchdog (Tvrtko, plus one patch from Chris to cancel an individual request)
- legacy ioctl cleanup (Jason+Ashutosh)
- i915-gem TODO and RFC process doc (me)
- i915_ prefix for vma_lookup (Liam Howlett), included just because I
  spotted it
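
As context for the first item, the dma_resv conversion replaces obj->mm.lock
with the ww acquire/backoff idiom below. This is a minimal sketch distilled
from the i915_gem_object_pin_pages_unlocked() hunk in this pull, with the
pin bookkeeping elided:

	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, true);	/* true: interruptible waits */
retry:
	err = i915_gem_object_lock(obj, &ww);	/* takes the dma_resv ww lock */
	if (!err)
		err = i915_gem_object_pin_pages(obj); /* any op needing the lock */
	if (err == -EDEADLK) {
		/* Drop all held locks, wait for the contended one, retry. */
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);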

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/YF24MHoOSjpKFEXA@phenom.ffwll.local
Committed by Dave Airlie, 2021-04-01 06:24:05 +10:00
commit 2f835b5dd8
117 files changed, 2754 insertions(+), 2170 deletions(-)


@ -16,6 +16,7 @@ Linux GPU Driver Developer's Guide
vga-switcheroo
vgaarbiter
todo
rfc/index
.. only:: subproject and html


@ -0,0 +1,17 @@
===============
GPU RFC Section
===============
For complex work, especially new uapi, it is often good to nail down the
high level design issues before getting lost in the code details. This
section is meant to host such documentation:

* Each RFC should be a section in this file, explaining the goal and main
  design considerations. Especially for uapi make sure you Cc: all relevant
  project mailing lists and involved people outside of dri-devel.

* For uapi structures add a file to this directory and then pull the
  kerneldoc in like with real uapi headers.

* Once the code has landed move all the documentation to the right places in
  the main core, helper or driver sections.


@ -1,3 +1,17 @@
config DRM_I915_REQUEST_TIMEOUT
int "Default timeout for requests (ms)"
default 20000 # milliseconds
help
Configures the default timeout after which any user submissions will
be forcefully terminated.
Beware of setting this value lower than, or close to, three times the
heartbeat interval rounded to whole seconds, as that may allow misbehaving
applications to cause total rendering failure in unrelated clients.
May be 0 to disable the timeout.
config DRM_I915_FENCE_TIMEOUT
int "Timeout for unsignaled foreign fences (ms, jiffy granularity)"
default 10000 # milliseconds
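
As an aside, the new DRM_I915_REQUEST_TIMEOUT symbol above surfaces as a
plain integer option, so a kernel config can override the 20000 ms default;
a sketch, with the value chosen purely for illustration:

	CONFIG_DRM_I915_REQUEST_TIMEOUT=60000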


@ -139,7 +139,6 @@ gem-y += \
gem/i915_gem_dmabuf.o \
gem/i915_gem_domain.o \
gem/i915_gem_execbuffer.o \
gem/i915_gem_fence.o \
gem/i915_gem_internal.o \
gem/i915_gem_object.o \
gem/i915_gem_object_blt.o \


@ -0,0 +1,41 @@
gem/gt TODO items
-----------------
- For discrete memory manager, merge enough dg1 to be able to refactor it to
TTM. Then land pci ids (just in case that turns up an uapi problem). TTM has
improved a lot the past 2 years, there's no reason anymore not to use it.
- Come up with a plan what to do with drm/scheduler and how to get there.
- Roll out dma_fence critical section annotations; see the sketch after this
list.
- A lot of complexity has been added over the past few years to make
relocations faster. That doesn't make sense given that hw and gpu apis moved
away from this model years ago:
1. Land a modern pre-bound uapi like VM_BIND
2. Any complexity added in this area over the past few years which can't be
justified by VM_BIND-using userspace should be removed. Looking at amdgpu,
dma_resv on the bo and vm plus some lru locks is all that's needed. No
complex rcu, refcounts, caching, ... on everything.
This is the matching task on the vm side compared to ttm/dma_resv on the
backing storage side.
- i915_sw_fence seems to be the main structure for the i915-gem dma_fence model.
How-to-dma_fence is core and drivers really shouldn't build their own world
here, treating everything else as a fixed platform. i915_sw_fence concepts
should be moved to dma_fence, drm/scheduler or atomic commit helpers, or
dropped if dri-devel consensus is that they're not a good idea. Once that's
done, i915_sw_fence itself can be removed if there's nothing left.
Smaller things:
- i915_utils.h needs to be moved to the right places.
- dma_fence_work should be in drivers/dma-buf
- i915_mm.c should be moved to the right places. Some of the helpers also look a
bit fishy:
https://lore.kernel.org/linux-mm/20210301083320.943079-1-hch@lst.de/
- tasklet helpers in i915_gem.h also look a bit misplaced and should
probably be moved to tasklet headers.
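
A sketch of the dma_fence critical section annotations mentioned in the list
above; the __eb_parse() hunk later in this pull uses exactly this pattern
(do_signalling_work() is a hypothetical stand-in):

	bool cookie;
	int ret;

	cookie = dma_fence_begin_signalling();
	/*
	 * Everything between begin/end is treated by lockdep as being on
	 * the dma_fence signalling critical path, so waits or allocations
	 * that could deadlock against fence completion get flagged.
	 */
	ret = do_signalling_work();
	dma_fence_end_signalling(cookie);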


@ -1091,6 +1091,7 @@ static bool intel_plane_uses_fence(const struct intel_plane_state *plane_state)
struct i915_vma *
intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
bool phys_cursor,
const struct i915_ggtt_view *view,
bool uses_fence,
unsigned long *out_flags)
@ -1099,14 +1100,19 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
intel_wakeref_t wakeref;
struct i915_gem_ww_ctx ww;
struct i915_vma *vma;
unsigned int pinctl;
u32 alignment;
int ret;
if (drm_WARN_ON(dev, !i915_gem_object_is_framebuffer(obj)))
return ERR_PTR(-EINVAL);
alignment = intel_surf_alignment(fb, 0);
if (phys_cursor)
alignment = intel_cursor_alignment(dev_priv);
else
alignment = intel_surf_alignment(fb, 0);
if (drm_WARN_ON(dev, alignment && !is_power_of_2(alignment)))
return ERR_PTR(-EINVAL);
@ -1141,14 +1147,26 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
if (HAS_GMCH(dev_priv))
pinctl |= PIN_MAPPABLE;
vma = i915_gem_object_pin_to_display_plane(obj,
alignment, view, pinctl);
if (IS_ERR(vma))
i915_gem_ww_ctx_init(&ww, true);
retry:
ret = i915_gem_object_lock(obj, &ww);
if (!ret && phys_cursor)
ret = i915_gem_object_attach_phys(obj, alignment);
if (!ret)
ret = i915_gem_object_pin_pages(obj);
if (ret)
goto err;
if (uses_fence && i915_vma_is_map_and_fenceable(vma)) {
int ret;
if (!ret) {
vma = i915_gem_object_pin_to_display_plane(obj, &ww, alignment,
view, pinctl);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
goto err_unpin;
}
}
if (uses_fence && i915_vma_is_map_and_fenceable(vma)) {
/*
* Install a fence for tiled scan-out. Pre-i965 always needs a
* fence, whereas 965+ only requires a fence if using
@ -1169,16 +1187,28 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
ret = i915_vma_pin_fence(vma);
if (ret != 0 && INTEL_GEN(dev_priv) < 4) {
i915_vma_unpin(vma);
vma = ERR_PTR(ret);
goto err;
goto err_unpin;
}
ret = 0;
if (ret == 0 && vma->fence)
if (vma->fence)
*out_flags |= PLANE_HAS_FENCE;
}
i915_vma_get(vma);
err_unpin:
i915_gem_object_unpin_pages(obj);
err:
if (ret == -EDEADLK) {
ret = i915_gem_ww_ctx_backoff(&ww);
if (!ret)
goto retry;
}
i915_gem_ww_ctx_fini(&ww);
if (ret)
vma = ERR_PTR(ret);
atomic_dec(&dev_priv->gpu_error.pending_fb_pin);
intel_runtime_pm_put(&dev_priv->runtime_pm, wakeref);
return vma;
@ -11333,19 +11363,11 @@ int intel_plane_pin_fb(struct intel_plane_state *plane_state)
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
struct drm_framebuffer *fb = plane_state->hw.fb;
struct i915_vma *vma;
bool phys_cursor =
plane->id == PLANE_CURSOR &&
INTEL_INFO(dev_priv)->display.cursor_needs_physical;
if (plane->id == PLANE_CURSOR &&
INTEL_INFO(dev_priv)->display.cursor_needs_physical) {
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
const int align = intel_cursor_alignment(dev_priv);
int err;
err = i915_gem_object_attach_phys(obj, align);
if (err)
return err;
}
vma = intel_pin_and_fence_fb_obj(fb,
vma = intel_pin_and_fence_fb_obj(fb, phys_cursor,
&plane_state->view,
intel_plane_uses_fence(plane_state),
&plane_state->flags);
@ -11437,13 +11459,8 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
if (!obj)
return 0;
ret = i915_gem_object_pin_pages(obj);
if (ret)
return ret;
ret = intel_plane_pin_fb(new_plane_state);
i915_gem_object_unpin_pages(obj);
if (ret)
return ret;
@ -11905,7 +11922,7 @@ static int intel_user_framebuffer_create_handle(struct drm_framebuffer *fb,
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
struct drm_i915_private *i915 = to_i915(obj->base.dev);
if (obj->userptr.mm) {
if (i915_gem_object_is_userptr(obj)) {
drm_dbg(&i915->drm,
"attempting to use a userptr for a framebuffer, denied\n");
return -EINVAL;


@ -573,7 +573,7 @@ void intel_release_load_detect_pipe(struct drm_connector *connector,
struct intel_load_detect_pipe *old,
struct drm_modeset_acquire_ctx *ctx);
struct i915_vma *
intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb, bool phys_cursor,
const struct i915_ggtt_view *view,
bool uses_fence,
unsigned long *out_flags);


@ -293,7 +293,7 @@ void intel_dsb_prepare(struct intel_crtc_state *crtc_state)
goto out;
}
buf = i915_gem_object_pin_map(vma->obj, I915_MAP_WC);
buf = i915_gem_object_pin_map_unlocked(vma->obj, I915_MAP_WC);
if (IS_ERR(buf)) {
drm_err(&i915->drm, "Command buffer creation failed\n");
i915_vma_unpin_and_release(&vma, I915_VMA_RELEASE_MAP);


@ -211,7 +211,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
* This also validates that any existing fb inherited from the
* BIOS is suitable for own access.
*/
vma = intel_pin_and_fence_fb_obj(&ifbdev->fb->base,
vma = intel_pin_and_fence_fb_obj(&ifbdev->fb->base, false,
&view, false, &flags);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);


@ -755,6 +755,32 @@ static u32 overlay_cmd_reg(struct drm_intel_overlay_put_image *params)
return cmd;
}
static struct i915_vma *intel_overlay_pin_fb(struct drm_i915_gem_object *new_bo)
{
struct i915_gem_ww_ctx ww;
struct i915_vma *vma;
int ret;
i915_gem_ww_ctx_init(&ww, true);
retry:
ret = i915_gem_object_lock(new_bo, &ww);
if (!ret) {
vma = i915_gem_object_pin_to_display_plane(new_bo, &ww, 0,
NULL, PIN_MAPPABLE);
ret = PTR_ERR_OR_ZERO(vma);
}
if (ret == -EDEADLK) {
ret = i915_gem_ww_ctx_backoff(&ww);
if (!ret)
goto retry;
}
i915_gem_ww_ctx_fini(&ww);
if (ret)
return ERR_PTR(ret);
return vma;
}
static int intel_overlay_do_put_image(struct intel_overlay *overlay,
struct drm_i915_gem_object *new_bo,
struct drm_intel_overlay_put_image *params)
@ -776,12 +802,10 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay,
atomic_inc(&dev_priv->gpu_error.pending_fb_pin);
vma = i915_gem_object_pin_to_display_plane(new_bo,
0, NULL, PIN_MAPPABLE);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
vma = intel_overlay_pin_fb(new_bo);
if (IS_ERR(vma))
goto out_pin_section;
}
i915_gem_object_flush_frontbuffer(new_bo, ORIGIN_DIRTYFB);
if (!overlay->active) {


@ -27,15 +27,8 @@ static void __do_clflush(struct drm_i915_gem_object *obj)
static int clflush_work(struct dma_fence_work *base)
{
struct clflush *clflush = container_of(base, typeof(*clflush), base);
struct drm_i915_gem_object *obj = clflush->obj;
int err;
err = i915_gem_object_pin_pages(obj);
if (err)
return err;
__do_clflush(obj);
i915_gem_object_unpin_pages(obj);
__do_clflush(clflush->obj);
return 0;
}
@ -44,6 +37,7 @@ static void clflush_release(struct dma_fence_work *base)
{
struct clflush *clflush = container_of(base, typeof(*clflush), base);
i915_gem_object_unpin_pages(clflush->obj);
i915_gem_object_put(clflush->obj);
}
@ -63,6 +57,11 @@ static struct clflush *clflush_work_create(struct drm_i915_gem_object *obj)
if (!clflush)
return NULL;
if (__i915_gem_object_get_pages(obj) < 0) {
kfree(clflush);
return NULL;
}
dma_fence_work_init(&clflush->base, &clflush_ops);
clflush->obj = i915_gem_object_get(obj); /* obj <-> clflush cycle */


@ -232,6 +232,8 @@ static void intel_context_set_gem(struct intel_context *ce,
if (ctx->sched.priority >= I915_PRIORITY_NORMAL &&
intel_engine_has_timeslices(ce->engine))
__set_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
intel_context_set_watchdog_us(ce, ctx->watchdog.timeout_us);
}
static void __free_engines(struct i915_gem_engines *e, unsigned int count)
@ -386,38 +388,6 @@ static bool __cancel_engine(struct intel_engine_cs *engine)
return intel_engine_pulse(engine) == 0;
}
static bool
__active_engine(struct i915_request *rq, struct intel_engine_cs **active)
{
struct intel_engine_cs *engine, *locked;
bool ret = false;
/*
* Serialise with __i915_request_submit() so that it sees
* is-banned?, or we know the request is already inflight.
*
* Note that rq->engine is unstable, and so we double
* check that we have acquired the lock on the final engine.
*/
locked = READ_ONCE(rq->engine);
spin_lock_irq(&locked->active.lock);
while (unlikely(locked != (engine = READ_ONCE(rq->engine)))) {
spin_unlock(&locked->active.lock);
locked = engine;
spin_lock(&locked->active.lock);
}
if (i915_request_is_active(rq)) {
if (!__i915_request_is_complete(rq))
*active = locked;
ret = true;
}
spin_unlock_irq(&locked->active.lock);
return ret;
}
static struct intel_engine_cs *active_engine(struct intel_context *ce)
{
struct intel_engine_cs *engine = NULL;
@ -445,7 +415,7 @@ static struct intel_engine_cs *active_engine(struct intel_context *ce)
/* Check with the backend if the request is inflight */
found = true;
if (likely(rcu_access_pointer(rq->timeline) == ce->timeline))
found = __active_engine(rq, &engine);
found = i915_request_active_engine(rq, &engine);
i915_request_put(rq);
if (found)
@ -822,6 +792,41 @@ static void __assign_timeline(struct i915_gem_context *ctx,
context_apply_all(ctx, __apply_timeline, timeline);
}
static int __apply_watchdog(struct intel_context *ce, void *timeout_us)
{
return intel_context_set_watchdog_us(ce, (uintptr_t)timeout_us);
}
static int
__set_watchdog(struct i915_gem_context *ctx, unsigned long timeout_us)
{
int ret;
ret = context_apply_all(ctx, __apply_watchdog,
(void *)(uintptr_t)timeout_us);
if (!ret)
ctx->watchdog.timeout_us = timeout_us;
return ret;
}
static void __set_default_fence_expiry(struct i915_gem_context *ctx)
{
struct drm_i915_private *i915 = ctx->i915;
int ret;
if (!IS_ACTIVE(CONFIG_DRM_I915_REQUEST_TIMEOUT) ||
!i915->params.request_timeout_ms)
return;
/* Default expiry for user fences. */
ret = __set_watchdog(ctx, i915->params.request_timeout_ms * 1000);
if (ret)
drm_notice(&i915->drm,
"Failed to configure default fence expiry! (%d)",
ret);
}
static struct i915_gem_context *
i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
{
@ -866,6 +871,8 @@ i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
intel_timeline_put(timeline);
}
__set_default_fence_expiry(ctx);
trace_i915_context_create(ctx);
return ctx;


@ -154,6 +154,10 @@ struct i915_gem_context {
*/
atomic_t active_count;
struct {
u64 timeout_us;
} watchdog;
/**
* @hang_timestamp: The last time(s) this context caused a GPU hang
*/


@ -25,7 +25,7 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
struct scatterlist *src, *dst;
int ret, i;
ret = i915_gem_object_pin_pages(obj);
ret = i915_gem_object_pin_pages_unlocked(obj);
if (ret)
goto err;
@ -82,7 +82,7 @@ static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map
struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
void *vaddr;
vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
if (IS_ERR(vaddr))
return PTR_ERR(vaddr);
@ -123,42 +123,48 @@ static int i915_gem_begin_cpu_access(struct dma_buf *dma_buf, enum dma_data_dire
{
struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
bool write = (direction == DMA_BIDIRECTIONAL || direction == DMA_TO_DEVICE);
struct i915_gem_ww_ctx ww;
int err;
err = i915_gem_object_pin_pages(obj);
if (err)
return err;
err = i915_gem_object_lock_interruptible(obj, NULL);
if (err)
goto out;
err = i915_gem_object_set_to_cpu_domain(obj, write);
i915_gem_object_unlock(obj);
out:
i915_gem_object_unpin_pages(obj);
i915_gem_ww_ctx_init(&ww, true);
retry:
err = i915_gem_object_lock(obj, &ww);
if (!err)
err = i915_gem_object_pin_pages(obj);
if (!err) {
err = i915_gem_object_set_to_cpu_domain(obj, write);
i915_gem_object_unpin_pages(obj);
}
if (err == -EDEADLK) {
err = i915_gem_ww_ctx_backoff(&ww);
if (!err)
goto retry;
}
i915_gem_ww_ctx_fini(&ww);
return err;
}
static int i915_gem_end_cpu_access(struct dma_buf *dma_buf, enum dma_data_direction direction)
{
struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
struct i915_gem_ww_ctx ww;
int err;
err = i915_gem_object_pin_pages(obj);
if (err)
return err;
err = i915_gem_object_lock_interruptible(obj, NULL);
if (err)
goto out;
err = i915_gem_object_set_to_gtt_domain(obj, false);
i915_gem_object_unlock(obj);
out:
i915_gem_object_unpin_pages(obj);
i915_gem_ww_ctx_init(&ww, true);
retry:
err = i915_gem_object_lock(obj, &ww);
if (!err)
err = i915_gem_object_pin_pages(obj);
if (!err) {
err = i915_gem_object_set_to_gtt_domain(obj, false);
i915_gem_object_unpin_pages(obj);
}
if (err == -EDEADLK) {
err = i915_gem_ww_ctx_backoff(&ww);
if (!err)
goto retry;
}
i915_gem_ww_ctx_fini(&ww);
return err;
}
@ -258,7 +264,7 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
}
drm_gem_private_object_init(dev, &obj->base, dma_buf->size);
i915_gem_object_init(obj, &i915_gem_object_dmabuf_ops, &lock_class);
i915_gem_object_init(obj, &i915_gem_object_dmabuf_ops, &lock_class, 0);
obj->base.import_attach = attach;
obj->base.resv = dma_buf->resv;


@ -335,7 +335,14 @@ int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data,
* not allowed to be changed by userspace.
*/
if (i915_gem_object_is_proxy(obj)) {
ret = -ENXIO;
/*
* Silently allow cached for userptr; the vulkan driver
* sets all objects to cached
*/
if (!i915_gem_object_is_userptr(obj) ||
args->caching != I915_CACHING_CACHED)
ret = -ENXIO;
goto out;
}
@ -359,12 +366,12 @@ int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data,
*/
struct i915_vma *
i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
struct i915_gem_ww_ctx *ww,
u32 alignment,
const struct i915_ggtt_view *view,
unsigned int flags)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct i915_gem_ww_ctx ww;
struct i915_vma *vma;
int ret;
@ -372,11 +379,6 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
if (HAS_LMEM(i915) && !i915_gem_object_is_lmem(obj))
return ERR_PTR(-EINVAL);
i915_gem_ww_ctx_init(&ww, true);
retry:
ret = i915_gem_object_lock(obj, &ww);
if (ret)
goto err;
/*
* The display engine is not coherent with the LLC cache on gen6. As
* a result, we make sure that the pinning that is about to occur is
@ -391,7 +393,7 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
HAS_WT(i915) ?
I915_CACHE_WT : I915_CACHE_NONE);
if (ret)
goto err;
return ERR_PTR(ret);
/*
* As the user may map the buffer once pinned in the display plane
@ -404,33 +406,20 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
vma = ERR_PTR(-ENOSPC);
if ((flags & PIN_MAPPABLE) == 0 &&
(!view || view->type == I915_GGTT_VIEW_NORMAL))
vma = i915_gem_object_ggtt_pin_ww(obj, &ww, view, 0, alignment,
vma = i915_gem_object_ggtt_pin_ww(obj, ww, view, 0, alignment,
flags | PIN_MAPPABLE |
PIN_NONBLOCK);
if (IS_ERR(vma) && vma != ERR_PTR(-EDEADLK))
vma = i915_gem_object_ggtt_pin_ww(obj, &ww, view, 0,
vma = i915_gem_object_ggtt_pin_ww(obj, ww, view, 0,
alignment, flags);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
goto err;
}
if (IS_ERR(vma))
return vma;
vma->display_alignment = max_t(u64, vma->display_alignment, alignment);
i915_vma_mark_scanout(vma);
i915_gem_object_flush_if_display_locked(obj);
err:
if (ret == -EDEADLK) {
ret = i915_gem_ww_ctx_backoff(&ww);
if (!ret)
goto retry;
}
i915_gem_ww_ctx_fini(&ww);
if (ret)
return ERR_PTR(ret);
return vma;
}
@ -526,6 +515,21 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
if (err)
goto out;
if (i915_gem_object_is_userptr(obj)) {
/*
* Try to grab userptr pages, iris uses set_domain to check
* userptr validity
*/
err = i915_gem_object_userptr_validate(obj);
if (!err)
err = i915_gem_object_wait(obj,
I915_WAIT_INTERRUPTIBLE |
I915_WAIT_PRIORITY |
(write_domain ? I915_WAIT_ALL : 0),
MAX_SCHEDULE_TIMEOUT);
goto out;
}
/*
* Proxy objects do not control access to the backing storage, ergo
* they cannot be used as a means to manipulate the cache domain
@ -537,6 +541,10 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
goto out;
}
err = i915_gem_object_lock_interruptible(obj, NULL);
if (err)
goto out;
/*
* Flush and acquire obj->pages so that we are coherent through
* direct access in memory with previous cached writes through
@ -548,7 +556,7 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
*/
err = i915_gem_object_pin_pages(obj);
if (err)
goto out;
goto out_unlock;
/*
* Already in the desired write domain? Nothing for us to do!
@ -563,10 +571,6 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
if (READ_ONCE(obj->write_domain) == read_domains)
goto out_unpin;
err = i915_gem_object_lock_interruptible(obj, NULL);
if (err)
goto out_unpin;
if (read_domains & I915_GEM_DOMAIN_WC)
err = i915_gem_object_set_to_wc_domain(obj, write_domain);
else if (read_domains & I915_GEM_DOMAIN_GTT)
@ -574,13 +578,15 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
else
err = i915_gem_object_set_to_cpu_domain(obj, write_domain);
i915_gem_object_unlock(obj);
if (write_domain)
i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU);
out_unpin:
i915_gem_object_unpin_pages(obj);
out_unlock:
i915_gem_object_unlock(obj);
if (!err && write_domain)
i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU);
out:
i915_gem_object_put(obj);
return err;


@ -28,6 +28,7 @@
#include "i915_sw_fence_work.h"
#include "i915_trace.h"
#include "i915_user_extensions.h"
#include "i915_memcpy.h"
struct eb_vma {
struct i915_vma *vma;
@ -49,16 +50,19 @@ enum {
#define DBG_FORCE_RELOC 0 /* choose one of the above! */
};
#define __EXEC_OBJECT_HAS_PIN BIT(31)
#define __EXEC_OBJECT_HAS_FENCE BIT(30)
#define __EXEC_OBJECT_NEEDS_MAP BIT(29)
#define __EXEC_OBJECT_NEEDS_BIAS BIT(28)
#define __EXEC_OBJECT_INTERNAL_FLAGS (~0u << 28) /* all of the above */
/* __EXEC_OBJECT_NO_RESERVE is BIT(31), defined in i915_vma.h */
#define __EXEC_OBJECT_HAS_PIN BIT(30)
#define __EXEC_OBJECT_HAS_FENCE BIT(29)
#define __EXEC_OBJECT_USERPTR_INIT BIT(28)
#define __EXEC_OBJECT_NEEDS_MAP BIT(27)
#define __EXEC_OBJECT_NEEDS_BIAS BIT(26)
#define __EXEC_OBJECT_INTERNAL_FLAGS (~0u << 26) /* all of the above + */
#define __EXEC_OBJECT_RESERVED (__EXEC_OBJECT_HAS_PIN | __EXEC_OBJECT_HAS_FENCE)
#define __EXEC_HAS_RELOC BIT(31)
#define __EXEC_ENGINE_PINNED BIT(30)
#define __EXEC_INTERNAL_FLAGS (~0u << 30)
#define __EXEC_USERPTR_USED BIT(29)
#define __EXEC_INTERNAL_FLAGS (~0u << 29)
#define UPDATE PIN_OFFSET_FIXED
#define BATCH_OFFSET_BIAS (256*1024)
@ -419,13 +423,14 @@ static u64 eb_pin_flags(const struct drm_i915_gem_exec_object2 *entry,
return pin_flags;
}
static inline bool
static inline int
eb_pin_vma(struct i915_execbuffer *eb,
const struct drm_i915_gem_exec_object2 *entry,
struct eb_vma *ev)
{
struct i915_vma *vma = ev->vma;
u64 pin_flags;
int err;
if (vma->node.size)
pin_flags = vma->node.start;
@ -437,24 +442,29 @@ eb_pin_vma(struct i915_execbuffer *eb,
pin_flags |= PIN_GLOBAL;
/* Attempt to reuse the current location if available */
/* TODO: Add -EDEADLK handling here */
if (unlikely(i915_vma_pin_ww(vma, &eb->ww, 0, 0, pin_flags))) {
err = i915_vma_pin_ww(vma, &eb->ww, 0, 0, pin_flags);
if (err == -EDEADLK)
return err;
if (unlikely(err)) {
if (entry->flags & EXEC_OBJECT_PINNED)
return false;
return err;
/* Failing that pick any _free_ space if suitable */
if (unlikely(i915_vma_pin_ww(vma, &eb->ww,
err = i915_vma_pin_ww(vma, &eb->ww,
entry->pad_to_size,
entry->alignment,
eb_pin_flags(entry, ev->flags) |
PIN_USER | PIN_NOEVICT)))
return false;
PIN_USER | PIN_NOEVICT);
if (unlikely(err))
return err;
}
if (unlikely(ev->flags & EXEC_OBJECT_NEEDS_FENCE)) {
if (unlikely(i915_vma_pin_fence(vma))) {
err = i915_vma_pin_fence(vma);
if (unlikely(err)) {
i915_vma_unpin(vma);
return false;
return err;
}
if (vma->fence)
@ -462,7 +472,10 @@ eb_pin_vma(struct i915_execbuffer *eb,
}
ev->flags |= __EXEC_OBJECT_HAS_PIN;
return !eb_vma_misplaced(entry, vma, ev->flags);
if (eb_vma_misplaced(entry, vma, ev->flags))
return -EBADSLT;
return 0;
}
static inline void
@ -483,6 +496,13 @@ eb_validate_vma(struct i915_execbuffer *eb,
struct drm_i915_gem_exec_object2 *entry,
struct i915_vma *vma)
{
/* Relocations are disallowed for all platforms after TGL-LP. This
* also covers all platforms with local memory.
*/
if (entry->relocation_count &&
INTEL_GEN(eb->i915) >= 12 && !IS_TIGERLAKE(eb->i915))
return -EINVAL;
if (unlikely(entry->flags & eb->invalid_flags))
return -EINVAL;
@ -853,6 +873,26 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
}
eb_add_vma(eb, i, batch, vma);
if (i915_gem_object_is_userptr(vma->obj)) {
err = i915_gem_object_userptr_submit_init(vma->obj);
if (err) {
if (i + 1 < eb->buffer_count) {
/*
* Execbuffer code expects last vma entry to be NULL,
* since we already initialized this entry,
* set the next value to NULL or we mess up
* cleanup handling.
*/
eb->vma[i + 1].vma = NULL;
}
return err;
}
eb->vma[i].flags |= __EXEC_OBJECT_USERPTR_INIT;
eb->args->flags |= __EXEC_USERPTR_USED;
}
}
if (unlikely(eb->batch->flags & EXEC_OBJECT_WRITE)) {
@ -898,7 +938,11 @@ static int eb_validate_vmas(struct i915_execbuffer *eb)
if (err)
return err;
if (eb_pin_vma(eb, entry, ev)) {
err = eb_pin_vma(eb, entry, ev);
if (err == -EDEADLK)
return err;
if (!err) {
if (entry->offset != vma->node.start) {
entry->offset = vma->node.start | UPDATE;
eb->args->flags |= __EXEC_HAS_RELOC;
@ -914,6 +958,12 @@ static int eb_validate_vmas(struct i915_execbuffer *eb)
}
}
if (!(ev->flags & EXEC_OBJECT_WRITE)) {
err = dma_resv_reserve_shared(vma->resv, 1);
if (err)
return err;
}
GEM_BUG_ON(drm_mm_node_allocated(&vma->node) &&
eb_vma_misplaced(&eb->exec[i], vma, ev->flags));
}
@ -944,7 +994,7 @@ eb_get_vma(const struct i915_execbuffer *eb, unsigned long handle)
}
}
static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
static void eb_release_vmas(struct i915_execbuffer *eb, bool final, bool release_userptr)
{
const unsigned int count = eb->buffer_count;
unsigned int i;
@ -958,6 +1008,11 @@ static void eb_release_vmas(struct i915_execbuffer *eb, bool final)
eb_unreserve_vma(ev);
if (release_userptr && ev->flags & __EXEC_OBJECT_USERPTR_INIT) {
ev->flags &= ~__EXEC_OBJECT_USERPTR_INIT;
i915_gem_object_userptr_submit_fini(vma->obj);
}
if (final)
i915_vma_put(vma);
}
@ -1294,6 +1349,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
err = PTR_ERR(cmd);
goto err_pool;
}
intel_gt_buffer_pool_mark_used(pool);
memset32(cmd, 0, pool->obj->base.size / sizeof(u32));
@ -1895,6 +1951,31 @@ static int eb_prefault_relocations(const struct i915_execbuffer *eb)
return 0;
}
static int eb_reinit_userptr(struct i915_execbuffer *eb)
{
const unsigned int count = eb->buffer_count;
unsigned int i;
int ret;
if (likely(!(eb->args->flags & __EXEC_USERPTR_USED)))
return 0;
for (i = 0; i < count; i++) {
struct eb_vma *ev = &eb->vma[i];
if (!i915_gem_object_is_userptr(ev->vma->obj))
continue;
ret = i915_gem_object_userptr_submit_init(ev->vma->obj);
if (ret)
return ret;
ev->flags |= __EXEC_OBJECT_USERPTR_INIT;
}
return 0;
}
static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
struct i915_request *rq)
{
@ -1909,7 +1990,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
}
/* We may process another execbuffer during the unlock... */
eb_release_vmas(eb, false);
eb_release_vmas(eb, false, true);
i915_gem_ww_ctx_fini(&eb->ww);
if (rq) {
@ -1951,7 +2032,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
}
if (!err)
flush_workqueue(eb->i915->mm.userptr_wq);
err = eb_reinit_userptr(eb);
err_relock:
i915_gem_ww_ctx_init(&eb->ww, true);
@ -2013,7 +2094,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb,
err:
if (err == -EDEADLK) {
eb_release_vmas(eb, false);
eb_release_vmas(eb, false, false);
err = i915_gem_ww_ctx_backoff(&eb->ww);
if (!err)
goto repeat_validate;
@ -2110,7 +2191,7 @@ static int eb_relocate_parse(struct i915_execbuffer *eb)
err:
if (err == -EDEADLK) {
eb_release_vmas(eb, false);
eb_release_vmas(eb, false, false);
err = i915_gem_ww_ctx_backoff(&eb->ww);
if (!err)
goto retry;
@ -2181,9 +2262,34 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
}
if (err == 0)
err = i915_vma_move_to_active(vma, eb->request, flags);
err = i915_vma_move_to_active(vma, eb->request,
flags | __EXEC_OBJECT_NO_RESERVE);
}
#ifdef CONFIG_MMU_NOTIFIER
if (!err && (eb->args->flags & __EXEC_USERPTR_USED)) {
spin_lock(&eb->i915->mm.notifier_lock);
/*
* count is always at least 1, otherwise __EXEC_USERPTR_USED
* could not have been set
*/
for (i = 0; i < count; i++) {
struct eb_vma *ev = &eb->vma[i];
struct drm_i915_gem_object *obj = ev->vma->obj;
if (!i915_gem_object_is_userptr(obj))
continue;
err = i915_gem_object_userptr_submit_done(obj);
if (err)
break;
}
spin_unlock(&eb->i915->mm.notifier_lock);
}
#endif
if (unlikely(err))
goto err_skip;
@ -2274,24 +2380,45 @@ struct eb_parse_work {
struct i915_vma *trampoline;
unsigned long batch_offset;
unsigned long batch_length;
unsigned long *jump_whitelist;
const void *batch_map;
void *shadow_map;
};
static int __eb_parse(struct dma_fence_work *work)
{
struct eb_parse_work *pw = container_of(work, typeof(*pw), base);
int ret;
bool cookie;
return intel_engine_cmd_parser(pw->engine,
pw->batch,
pw->batch_offset,
pw->batch_length,
pw->shadow,
pw->trampoline);
cookie = dma_fence_begin_signalling();
ret = intel_engine_cmd_parser(pw->engine,
pw->batch,
pw->batch_offset,
pw->batch_length,
pw->shadow,
pw->jump_whitelist,
pw->shadow_map,
pw->batch_map);
dma_fence_end_signalling(cookie);
return ret;
}
static void __eb_parse_release(struct dma_fence_work *work)
{
struct eb_parse_work *pw = container_of(work, typeof(*pw), base);
if (!IS_ERR_OR_NULL(pw->jump_whitelist))
kfree(pw->jump_whitelist);
if (pw->batch_map)
i915_gem_object_unpin_map(pw->batch->obj);
else
i915_gem_object_unpin_pages(pw->batch->obj);
i915_gem_object_unpin_map(pw->shadow->obj);
if (pw->trampoline)
i915_active_release(&pw->trampoline->active);
i915_active_release(&pw->shadow->active);
@ -2341,6 +2468,8 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
struct i915_vma *trampoline)
{
struct eb_parse_work *pw;
struct drm_i915_gem_object *batch = eb->batch->vma->obj;
bool needs_clflush;
int err;
GEM_BUG_ON(overflows_type(eb->batch_start_offset, pw->batch_offset));
@ -2364,6 +2493,34 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
goto err_shadow;
}
pw->shadow_map = i915_gem_object_pin_map(shadow->obj, I915_MAP_WB);
if (IS_ERR(pw->shadow_map)) {
err = PTR_ERR(pw->shadow_map);
goto err_trampoline;
}
needs_clflush =
!(batch->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ);
pw->batch_map = ERR_PTR(-ENODEV);
if (needs_clflush && i915_has_memcpy_from_wc())
pw->batch_map = i915_gem_object_pin_map(batch, I915_MAP_WC);
if (IS_ERR(pw->batch_map)) {
err = i915_gem_object_pin_pages(batch);
if (err)
goto err_unmap_shadow;
pw->batch_map = NULL;
}
pw->jump_whitelist =
intel_engine_cmd_parser_alloc_jump_whitelist(eb->batch_len,
trampoline);
if (IS_ERR(pw->jump_whitelist)) {
err = PTR_ERR(pw->jump_whitelist);
goto err_unmap_batch;
}
dma_fence_work_init(&pw->base, &eb_parse_ops);
pw->engine = eb->engine;
@ -2382,6 +2539,10 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
if (err)
goto err_commit;
err = dma_resv_reserve_shared(shadow->resv, 1);
if (err)
goto err_commit;
/* Wait for all writes (and relocs) into the batch to complete */
err = i915_sw_fence_await_reservation(&pw->base.chain,
pw->batch->resv, NULL, false,
@ -2403,6 +2564,16 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
dma_fence_work_commit_imm(&pw->base);
return err;
err_unmap_batch:
if (pw->batch_map)
i915_gem_object_unpin_map(batch);
else
i915_gem_object_unpin_pages(batch);
err_unmap_shadow:
i915_gem_object_unpin_map(shadow->obj);
err_trampoline:
if (trampoline)
i915_active_release(&trampoline->active);
err_shadow:
i915_active_release(&shadow->active);
err_batch:
@ -2474,6 +2645,7 @@ static int eb_parse(struct i915_execbuffer *eb)
err = PTR_ERR(shadow);
goto err;
}
intel_gt_buffer_pool_mark_used(pool);
i915_gem_object_set_readonly(shadow->obj);
shadow->private = pool;
@ -3263,7 +3435,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
err = eb_lookup_vmas(&eb);
if (err) {
eb_release_vmas(&eb, true);
eb_release_vmas(&eb, true, true);
goto err_engine;
}
@ -3335,6 +3507,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
trace_i915_request_queue(eb.request, eb.batch_flags);
err = eb_submit(&eb, batch);
err_request:
i915_request_get(eb.request);
err = eb_request_add(&eb, err);
@ -3355,7 +3528,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
i915_request_put(eb.request);
err_vma:
eb_release_vmas(&eb, true);
eb_release_vmas(&eb, true, true);
if (eb.trampoline)
i915_vma_unpin(eb.trampoline);
WARN_ON(err == -EDEADLK);
@ -3401,106 +3574,6 @@ static bool check_buffer_count(size_t count)
return !(count < 1 || count > INT_MAX || count > SIZE_MAX / sz - 1);
}
/*
* Legacy execbuffer just creates an exec2 list from the original exec object
* list array and passes it to the real function.
*/
int
i915_gem_execbuffer_ioctl(struct drm_device *dev, void *data,
struct drm_file *file)
{
struct drm_i915_private *i915 = to_i915(dev);
struct drm_i915_gem_execbuffer *args = data;
struct drm_i915_gem_execbuffer2 exec2;
struct drm_i915_gem_exec_object *exec_list = NULL;
struct drm_i915_gem_exec_object2 *exec2_list = NULL;
const size_t count = args->buffer_count;
unsigned int i;
int err;
if (!check_buffer_count(count)) {
drm_dbg(&i915->drm, "execbuf2 with %zd buffers\n", count);
return -EINVAL;
}
exec2.buffers_ptr = args->buffers_ptr;
exec2.buffer_count = args->buffer_count;
exec2.batch_start_offset = args->batch_start_offset;
exec2.batch_len = args->batch_len;
exec2.DR1 = args->DR1;
exec2.DR4 = args->DR4;
exec2.num_cliprects = args->num_cliprects;
exec2.cliprects_ptr = args->cliprects_ptr;
exec2.flags = I915_EXEC_RENDER;
i915_execbuffer2_set_context_id(exec2, 0);
err = i915_gem_check_execbuffer(&exec2);
if (err)
return err;
/* Copy in the exec list from userland */
exec_list = kvmalloc_array(count, sizeof(*exec_list),
__GFP_NOWARN | GFP_KERNEL);
/* Allocate extra slots for use by the command parser */
exec2_list = kvmalloc_array(count + 2, eb_element_size(),
__GFP_NOWARN | GFP_KERNEL);
if (exec_list == NULL || exec2_list == NULL) {
drm_dbg(&i915->drm,
"Failed to allocate exec list for %d buffers\n",
args->buffer_count);
kvfree(exec_list);
kvfree(exec2_list);
return -ENOMEM;
}
err = copy_from_user(exec_list,
u64_to_user_ptr(args->buffers_ptr),
sizeof(*exec_list) * count);
if (err) {
drm_dbg(&i915->drm, "copy %d exec entries failed %d\n",
args->buffer_count, err);
kvfree(exec_list);
kvfree(exec2_list);
return -EFAULT;
}
for (i = 0; i < args->buffer_count; i++) {
exec2_list[i].handle = exec_list[i].handle;
exec2_list[i].relocation_count = exec_list[i].relocation_count;
exec2_list[i].relocs_ptr = exec_list[i].relocs_ptr;
exec2_list[i].alignment = exec_list[i].alignment;
exec2_list[i].offset = exec_list[i].offset;
if (INTEL_GEN(to_i915(dev)) < 4)
exec2_list[i].flags = EXEC_OBJECT_NEEDS_FENCE;
else
exec2_list[i].flags = 0;
}
err = i915_gem_do_execbuffer(dev, file, &exec2, exec2_list);
if (exec2.flags & __EXEC_HAS_RELOC) {
struct drm_i915_gem_exec_object __user *user_exec_list =
u64_to_user_ptr(args->buffers_ptr);
/* Copy the new buffer offsets back to the user's exec list. */
for (i = 0; i < args->buffer_count; i++) {
if (!(exec2_list[i].offset & UPDATE))
continue;
exec2_list[i].offset =
gen8_canonical_addr(exec2_list[i].offset & PIN_OFFSET_MASK);
exec2_list[i].offset &= PIN_OFFSET_MASK;
if (__copy_to_user(&user_exec_list[i].offset,
&exec2_list[i].offset,
sizeof(user_exec_list[i].offset)))
break;
}
}
kvfree(exec_list);
kvfree(exec2_list);
return err;
}
int
i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data,
struct drm_file *file)


@ -1,95 +0,0 @@
/*
* SPDX-License-Identifier: MIT
*
* Copyright © 2019 Intel Corporation
*/
#include "i915_drv.h"
#include "i915_gem_object.h"
struct stub_fence {
struct dma_fence dma;
struct i915_sw_fence chain;
};
static int __i915_sw_fence_call
stub_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state)
{
struct stub_fence *stub = container_of(fence, typeof(*stub), chain);
switch (state) {
case FENCE_COMPLETE:
dma_fence_signal(&stub->dma);
break;
case FENCE_FREE:
dma_fence_put(&stub->dma);
break;
}
return NOTIFY_DONE;
}
static const char *stub_driver_name(struct dma_fence *fence)
{
return DRIVER_NAME;
}
static const char *stub_timeline_name(struct dma_fence *fence)
{
return "object";
}
static void stub_release(struct dma_fence *fence)
{
struct stub_fence *stub = container_of(fence, typeof(*stub), dma);
i915_sw_fence_fini(&stub->chain);
BUILD_BUG_ON(offsetof(typeof(*stub), dma));
dma_fence_free(&stub->dma);
}
static const struct dma_fence_ops stub_fence_ops = {
.get_driver_name = stub_driver_name,
.get_timeline_name = stub_timeline_name,
.release = stub_release,
};
struct dma_fence *
i915_gem_object_lock_fence(struct drm_i915_gem_object *obj)
{
struct stub_fence *stub;
assert_object_held(obj);
stub = kmalloc(sizeof(*stub), GFP_KERNEL);
if (!stub)
return NULL;
i915_sw_fence_init(&stub->chain, stub_notify);
dma_fence_init(&stub->dma, &stub_fence_ops, &stub->chain.wait.lock,
0, 0);
if (i915_sw_fence_await_reservation(&stub->chain,
obj->base.resv, NULL, true,
i915_fence_timeout(to_i915(obj->base.dev)),
I915_FENCE_GFP) < 0)
goto err;
dma_resv_add_excl_fence(obj->base.resv, &stub->dma);
return &stub->dma;
err:
stub_release(&stub->dma);
return NULL;
}
void i915_gem_object_unlock_fence(struct drm_i915_gem_object *obj,
struct dma_fence *fence)
{
struct stub_fence *stub = container_of(fence, typeof(*stub), dma);
i915_sw_fence_commit(&stub->chain);
}


@ -138,8 +138,7 @@ static void i915_gem_object_put_pages_internal(struct drm_i915_gem_object *obj,
static const struct drm_i915_gem_object_ops i915_gem_object_internal_ops = {
.name = "i915_gem_object_internal",
.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
I915_GEM_OBJECT_IS_SHRINKABLE,
.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
.get_pages = i915_gem_object_get_pages_internal,
.put_pages = i915_gem_object_put_pages_internal,
};
@ -178,7 +177,8 @@ i915_gem_object_create_internal(struct drm_i915_private *i915,
return ERR_PTR(-ENOMEM);
drm_gem_private_object_init(&i915->drm, &obj->base, size);
i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class);
i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class,
I915_BO_ALLOC_STRUCT_PAGE);
/*
* Mark the object as volatile, such that the pages are marked as


@ -14,8 +14,6 @@ int i915_gem_busy_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int i915_gem_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int i915_gem_execbuffer_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,


@ -40,13 +40,13 @@ int __i915_gem_lmem_object_init(struct intel_memory_region *mem,
struct drm_i915_private *i915 = mem->i915;
drm_gem_private_object_init(&i915->drm, &obj->base, size);
i915_gem_object_init(obj, &i915_gem_lmem_obj_ops, &lock_class);
i915_gem_object_init(obj, &i915_gem_lmem_obj_ops, &lock_class, flags);
obj->read_domains = I915_GEM_DOMAIN_WC | I915_GEM_DOMAIN_GTT;
i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
i915_gem_object_init_memory_region(obj, mem, flags);
i915_gem_object_init_memory_region(obj, mem);
return 0;
}


@ -246,12 +246,15 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
area->vm_flags & VM_WRITE))
return VM_FAULT_SIGBUS;
if (i915_gem_object_lock_interruptible(obj, NULL))
return VM_FAULT_NOPAGE;
err = i915_gem_object_pin_pages(obj);
if (err)
goto out;
iomap = -1;
if (!i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_STRUCT_PAGE)) {
if (!i915_gem_object_has_struct_page(obj)) {
iomap = obj->mm.region->iomap.base;
iomap -= obj->mm.region->region.start;
}
@ -269,6 +272,7 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
i915_gem_object_unpin_pages(obj);
out:
i915_gem_object_unlock(obj);
return i915_error_to_vmf_fault(err);
}
@ -417,7 +421,9 @@ vm_access(struct vm_area_struct *area, unsigned long addr,
{
struct i915_mmap_offset *mmo = area->vm_private_data;
struct drm_i915_gem_object *obj = mmo->obj;
struct i915_gem_ww_ctx ww;
void *vaddr;
int err = 0;
if (i915_gem_object_is_readonly(obj) && write)
return -EACCES;
@ -426,10 +432,18 @@ vm_access(struct vm_area_struct *area, unsigned long addr,
if (addr >= obj->base.size)
return -EINVAL;
i915_gem_ww_ctx_init(&ww, true);
retry:
err = i915_gem_object_lock(obj, &ww);
if (err)
goto out;
/* As this is primarily for debugging, let's focus on simplicity */
vaddr = i915_gem_object_pin_map(obj, I915_MAP_FORCE_WC);
if (IS_ERR(vaddr))
return PTR_ERR(vaddr);
if (IS_ERR(vaddr)) {
err = PTR_ERR(vaddr);
goto out;
}
if (write) {
memcpy(vaddr + addr, buf, len);
@ -439,6 +453,16 @@ vm_access(struct vm_area_struct *area, unsigned long addr,
}
i915_gem_object_unpin_map(obj);
out:
if (err == -EDEADLK) {
err = i915_gem_ww_ctx_backoff(&ww);
if (!err)
goto retry;
}
i915_gem_ww_ctx_fini(&ww);
if (err)
return err;
return len;
}
@ -653,9 +677,8 @@ __assign_mmap_offset(struct drm_file *file,
}
if (mmap_type != I915_MMAP_TYPE_GTT &&
!i915_gem_object_type_has(obj,
I915_GEM_OBJECT_HAS_STRUCT_PAGE |
I915_GEM_OBJECT_HAS_IOMEM)) {
!i915_gem_object_has_struct_page(obj) &&
!i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM)) {
err = -ENODEV;
goto out;
}


@ -60,10 +60,8 @@ void i915_gem_object_free(struct drm_i915_gem_object *obj)
void i915_gem_object_init(struct drm_i915_gem_object *obj,
const struct drm_i915_gem_object_ops *ops,
struct lock_class_key *key)
struct lock_class_key *key, unsigned flags)
{
__mutex_init(&obj->mm.lock, ops->name ?: "obj->mm.lock", key);
spin_lock_init(&obj->vma.lock);
INIT_LIST_HEAD(&obj->vma.list);
@ -78,16 +76,14 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
init_rcu_head(&obj->rcu);
obj->ops = ops;
GEM_BUG_ON(flags & ~I915_BO_ALLOC_FLAGS);
obj->flags = flags;
obj->mm.madv = I915_MADV_WILLNEED;
INIT_RADIX_TREE(&obj->mm.get_page.radix, GFP_KERNEL | __GFP_NOWARN);
mutex_init(&obj->mm.get_page.lock);
INIT_RADIX_TREE(&obj->mm.get_dma_page.radix, GFP_KERNEL | __GFP_NOWARN);
mutex_init(&obj->mm.get_dma_page.lock);
if (IS_ENABLED(CONFIG_LOCKDEP) && i915_gem_object_is_shrinkable(obj))
i915_gem_shrinker_taints_mutex(to_i915(obj->base.dev),
&obj->mm.lock);
}
/**


@ -23,7 +23,8 @@ void i915_gem_object_free(struct drm_i915_gem_object *obj);
void i915_gem_object_init(struct drm_i915_gem_object *obj,
const struct drm_i915_gem_object_ops *ops,
struct lock_class_key *key);
struct lock_class_key *key,
unsigned alloc_flags);
struct drm_i915_gem_object *
i915_gem_object_create_shmem(struct drm_i915_private *i915,
resource_size_t size);
@ -32,11 +33,21 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *i915,
const void *data, resource_size_t size);
extern const struct drm_i915_gem_object_ops i915_gem_shmem_ops;
void __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj,
struct sg_table *pages,
bool needs_clflush);
int i915_gem_object_pwrite_phys(struct drm_i915_gem_object *obj,
const struct drm_i915_gem_pwrite *args);
int i915_gem_object_pread_phys(struct drm_i915_gem_object *obj,
const struct drm_i915_gem_pread *args);
int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align);
void i915_gem_object_put_pages_shmem(struct drm_i915_gem_object *obj,
struct sg_table *pages);
void i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
struct sg_table *pages);
void i915_gem_flush_free_objects(struct drm_i915_private *i915);
@ -107,6 +118,20 @@ i915_gem_object_put(struct drm_i915_gem_object *obj)
#define assert_object_held(obj) dma_resv_assert_held((obj)->base.resv)
/*
* If more than one potential simultaneous locker, assert held.
*/
static inline void assert_object_held_shared(struct drm_i915_gem_object *obj)
{
/*
* Note mm list lookup is protected by
* kref_get_unless_zero().
*/
if (IS_ENABLED(CONFIG_LOCKDEP) &&
kref_read(&obj->base.refcount) > 0)
assert_object_held(obj);
}
static inline int __i915_gem_object_lock(struct drm_i915_gem_object *obj,
struct i915_gem_ww_ctx *ww,
bool intr)
@ -152,11 +177,6 @@ static inline void i915_gem_object_unlock(struct drm_i915_gem_object *obj)
dma_resv_unlock(obj->base.resv);
}
struct dma_fence *
i915_gem_object_lock_fence(struct drm_i915_gem_object *obj);
void i915_gem_object_unlock_fence(struct drm_i915_gem_object *obj,
struct dma_fence *fence);
static inline void
i915_gem_object_set_readonly(struct drm_i915_gem_object *obj)
{
@ -215,7 +235,7 @@ i915_gem_object_type_has(const struct drm_i915_gem_object *obj,
static inline bool
i915_gem_object_has_struct_page(const struct drm_i915_gem_object *obj)
{
return i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_STRUCT_PAGE);
return obj->flags & I915_BO_ALLOC_STRUCT_PAGE;
}
static inline bool
@ -242,12 +262,6 @@ i915_gem_object_never_mmap(const struct drm_i915_gem_object *obj)
return i915_gem_object_type_has(obj, I915_GEM_OBJECT_NO_MMAP);
}
static inline bool
i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj)
{
return i915_gem_object_type_has(obj, I915_GEM_OBJECT_ASYNC_CANCEL);
}
static inline bool
i915_gem_object_is_framebuffer(const struct drm_i915_gem_object *obj)
{
@ -299,22 +313,22 @@ struct scatterlist *
__i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
struct i915_gem_object_page_iter *iter,
unsigned int n,
unsigned int *offset);
unsigned int *offset, bool allow_alloc);
static inline struct scatterlist *
i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
unsigned int n,
unsigned int *offset)
unsigned int *offset, bool allow_alloc)
{
return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset);
return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset, allow_alloc);
}
static inline struct scatterlist *
i915_gem_object_get_sg_dma(struct drm_i915_gem_object *obj,
unsigned int n,
unsigned int *offset)
unsigned int *offset, bool allow_alloc)
{
return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset);
return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset, allow_alloc);
}
struct page *
@ -341,27 +355,10 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
enum i915_mm_subclass { /* lockdep subclass for obj->mm.lock/struct_mutex */
I915_MM_NORMAL = 0,
/*
* Only used by struct_mutex, when called "recursively" from
* direct-reclaim-esque. Safe because there is only every one
* struct_mutex in the entire system.
*/
I915_MM_SHRINKER = 1,
/*
* Used for obj->mm.lock when allocating pages. Safe because the object
* isn't yet on any LRU, and therefore the shrinker can't deadlock on
* it. As soon as the object has pages, obj->mm.lock nests within
* fs_reclaim.
*/
I915_MM_GET_PAGES = 1,
};
static inline int __must_check
i915_gem_object_pin_pages(struct drm_i915_gem_object *obj)
{
might_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
assert_object_held(obj);
if (atomic_inc_not_zero(&obj->mm.pages_pin_count))
return 0;
@ -369,6 +366,8 @@ i915_gem_object_pin_pages(struct drm_i915_gem_object *obj)
return __i915_gem_object_get_pages(obj);
}
int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj);
static inline bool
i915_gem_object_has_pages(struct drm_i915_gem_object *obj)
{
@ -427,6 +426,9 @@ void i915_gem_object_writeback(struct drm_i915_gem_object *obj);
void *__must_check i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
enum i915_map_type type);
void *__must_check i915_gem_object_pin_map_unlocked(struct drm_i915_gem_object *obj,
enum i915_map_type type);
void __i915_gem_object_flush_map(struct drm_i915_gem_object *obj,
unsigned long offset,
unsigned long size);
@ -495,6 +497,7 @@ int __must_check
i915_gem_object_set_to_cpu_domain(struct drm_i915_gem_object *obj, bool write);
struct i915_vma * __must_check
i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
struct i915_gem_ww_ctx *ww,
u32 alignment,
const struct i915_ggtt_view *view,
unsigned int flags);
@ -558,4 +561,25 @@ int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset,
bool i915_gem_object_is_shmem(const struct drm_i915_gem_object *obj);
#ifdef CONFIG_MMU_NOTIFIER
static inline bool
i915_gem_object_is_userptr(struct drm_i915_gem_object *obj)
{
return obj->userptr.notifier.mm;
}
int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj);
int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj);
void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj);
int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj);
#else
static inline bool i915_gem_object_is_userptr(struct drm_i915_gem_object *obj) { return false; }
static inline int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
static inline int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
static inline void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); }
static inline int i915_gem_object_userptr_validate(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; }
#endif
#endif


@ -55,6 +55,9 @@ struct i915_vma *intel_emit_vma_fill_blt(struct intel_context *ce,
if (unlikely(err))
goto out_put;
/* we pinned the pool, mark it as such */
intel_gt_buffer_pool_mark_used(pool);
cmd = i915_gem_object_pin_map(pool->obj, pool->type);
if (IS_ERR(cmd)) {
err = PTR_ERR(cmd);
@ -277,6 +280,9 @@ struct i915_vma *intel_emit_vma_copy_blt(struct intel_context *ce,
if (unlikely(err))
goto out_put;
/* we pinned the pool, mark it as such */
intel_gt_buffer_pool_mark_used(pool);
cmd = i915_gem_object_pin_map(pool->obj, pool->type);
if (IS_ERR(cmd)) {
err = PTR_ERR(cmd);


@ -7,6 +7,8 @@
#ifndef __I915_GEM_OBJECT_TYPES_H__
#define __I915_GEM_OBJECT_TYPES_H__
#include <linux/mmu_notifier.h>
#include <drm/drm_gem.h>
#include <uapi/drm/i915_drm.h>
@ -30,12 +32,10 @@ struct i915_lut_handle {
struct drm_i915_gem_object_ops {
unsigned int flags;
#define I915_GEM_OBJECT_HAS_STRUCT_PAGE BIT(0)
#define I915_GEM_OBJECT_HAS_IOMEM BIT(1)
#define I915_GEM_OBJECT_IS_SHRINKABLE BIT(2)
#define I915_GEM_OBJECT_IS_PROXY BIT(3)
#define I915_GEM_OBJECT_NO_MMAP BIT(4)
#define I915_GEM_OBJECT_ASYNC_CANCEL BIT(5)
/* Interface between the GEM object and its backing storage.
* get_pages() is called once prior to the use of the associated set
@ -171,9 +171,12 @@ struct drm_i915_gem_object {
unsigned long flags;
#define I915_BO_ALLOC_CONTIGUOUS BIT(0)
#define I915_BO_ALLOC_VOLATILE BIT(1)
#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | I915_BO_ALLOC_VOLATILE)
#define I915_BO_READONLY BIT(2)
#define I915_TILING_QUIRK_BIT 3 /* unknown swizzling; do not release! */
#define I915_BO_ALLOC_STRUCT_PAGE BIT(2)
#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | \
I915_BO_ALLOC_VOLATILE | \
I915_BO_ALLOC_STRUCT_PAGE)
#define I915_BO_READONLY BIT(3)
#define I915_TILING_QUIRK_BIT 4 /* unknown swizzling; do not release! */
/*
* Is the object to be mapped as read-only to the GPU
@ -213,7 +216,6 @@ struct drm_i915_gem_object {
* Protects the pages and their use. Do not use directly, but
* instead go through the pin/unpin interfaces.
*/
struct mutex lock;
atomic_t pages_pin_count;
atomic_t shrink_pin;
@ -288,13 +290,16 @@ struct drm_i915_gem_object {
unsigned long *bit_17;
union {
#ifdef CONFIG_MMU_NOTIFIER
struct i915_gem_userptr {
uintptr_t ptr;
unsigned long notifier_seq;
struct i915_mm_struct *mm;
struct i915_mmu_object *mmu_object;
struct work_struct *work;
struct mmu_interval_notifier notifier;
struct page **pvec;
int page_ref;
} userptr;
#endif
struct drm_mm_node *stolen;


@ -19,7 +19,7 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
bool shrinkable;
int i;
lockdep_assert_held(&obj->mm.lock);
assert_object_held_shared(obj);
if (i915_gem_object_is_volatile(obj))
obj->mm.madv = I915_MADV_DONTNEED;
@ -70,6 +70,7 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
struct list_head *list;
unsigned long flags;
assert_object_held(obj);
spin_lock_irqsave(&i915->mm.obj_lock, flags);
i915->mm.shrink_count++;
@ -91,6 +92,8 @@ int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
struct drm_i915_private *i915 = to_i915(obj->base.dev);
int err;
assert_object_held_shared(obj);
if (unlikely(obj->mm.madv != I915_MADV_WILLNEED)) {
drm_dbg(&i915->drm,
"Attempting to obtain a purgeable object\n");
@ -114,23 +117,41 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
{
int err;
err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
if (err)
return err;
assert_object_held(obj);
assert_object_held_shared(obj);
if (unlikely(!i915_gem_object_has_pages(obj))) {
GEM_BUG_ON(i915_gem_object_has_pinned_pages(obj));
err = ____i915_gem_object_get_pages(obj);
if (err)
goto unlock;
return err;
smp_mb__before_atomic();
}
atomic_inc(&obj->mm.pages_pin_count);
unlock:
mutex_unlock(&obj->mm.lock);
return 0;
}
int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj)
{
struct i915_gem_ww_ctx ww;
int err;
i915_gem_ww_ctx_init(&ww, true);
retry:
err = i915_gem_object_lock(obj, &ww);
if (!err)
err = i915_gem_object_pin_pages(obj);
if (err == -EDEADLK) {
err = i915_gem_ww_ctx_backoff(&ww);
if (!err)
goto retry;
}
i915_gem_ww_ctx_fini(&ww);
return err;
}
@ -145,7 +166,7 @@ void i915_gem_object_truncate(struct drm_i915_gem_object *obj)
/* Try to discard unwanted pages */
void i915_gem_object_writeback(struct drm_i915_gem_object *obj)
{
lockdep_assert_held(&obj->mm.lock);
assert_object_held_shared(obj);
GEM_BUG_ON(i915_gem_object_has_pages(obj));
if (obj->ops->writeback)
@ -176,6 +197,8 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
{
struct sg_table *pages;
assert_object_held_shared(obj);
pages = fetch_and_zero(&obj->mm.pages);
if (IS_ERR_OR_NULL(pages))
return pages;
@ -199,17 +222,12 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
{
struct sg_table *pages;
int err;
if (i915_gem_object_has_pinned_pages(obj))
return -EBUSY;
/* May be called by shrinker from within get_pages() (on another bo) */
mutex_lock(&obj->mm.lock);
if (unlikely(atomic_read(&obj->mm.pages_pin_count))) {
err = -EBUSY;
goto unlock;
}
assert_object_held_shared(obj);
i915_gem_object_release_mmap_offset(obj);
@ -226,17 +244,10 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
* get_pages backends we should be better able to handle the
* cancellation of the async task in a more uniform manner.
*/
if (!pages && !i915_gem_object_needs_async_cancel(obj))
pages = ERR_PTR(-EINVAL);
if (!IS_ERR(pages))
if (!IS_ERR_OR_NULL(pages))
obj->ops->put_pages(obj, pages);
err = 0;
unlock:
mutex_unlock(&obj->mm.lock);
return err;
return 0;
}
/* The 'mapping' part of i915_gem_object_pin_map() below */
@ -333,18 +344,15 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
enum i915_map_type type)
{
enum i915_map_type has_type;
unsigned int flags;
bool pinned;
void *ptr;
int err;
flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE | I915_GEM_OBJECT_HAS_IOMEM;
if (!i915_gem_object_type_has(obj, flags))
if (!i915_gem_object_has_struct_page(obj) &&
!i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM))
return ERR_PTR(-ENXIO);
err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
if (err)
return ERR_PTR(err);
assert_object_held(obj);
pinned = !(type & I915_MAP_OVERRIDE);
type &= ~I915_MAP_OVERRIDE;
@ -354,10 +362,8 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
GEM_BUG_ON(i915_gem_object_has_pinned_pages(obj));
err = ____i915_gem_object_get_pages(obj);
if (err) {
ptr = ERR_PTR(err);
goto out_unlock;
}
if (err)
return ERR_PTR(err);
smp_mb__before_atomic();
}
@ -392,13 +398,23 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
obj->mm.mapping = page_pack_bits(ptr, type);
}
out_unlock:
mutex_unlock(&obj->mm.lock);
return ptr;
err_unpin:
atomic_dec(&obj->mm.pages_pin_count);
goto out_unlock;
return ptr;
}
void *i915_gem_object_pin_map_unlocked(struct drm_i915_gem_object *obj,
enum i915_map_type type)
{
void *ret;
i915_gem_object_lock(obj, NULL);
ret = i915_gem_object_pin_map(obj, type);
i915_gem_object_unlock(obj);
return ret;
}
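
Note: i915_gem_object_pin_map_unlocked() is the convenience wrapper for callers that do not yet hold any dma-resv lock (mostly selftests, as the conversions further down show): it takes the single-object lock, pins the map, and drops the lock again, relying on the pin count to keep the pages. A sketch of a typical user (the helper and fill pattern are illustrative; I915_MAP_WB is one of the existing map types):

/* CPU-fill an object without the caller holding its dma-resv lock. */
static int cpu_fill(struct drm_i915_gem_object *obj, u32 value)
{
        unsigned long n;
        u32 *vaddr;

        vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
        if (IS_ERR(vaddr))
                return PTR_ERR(vaddr);

        for (n = 0; n < obj->base.size / sizeof(*vaddr); n++)
                vaddr[n] = value;

        i915_gem_object_flush_map(obj); /* flush CPU writes */
        i915_gem_object_unpin_map(obj);
        return 0;
}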
void __i915_gem_object_flush_map(struct drm_i915_gem_object *obj,
@ -448,7 +464,8 @@ struct scatterlist *
__i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
struct i915_gem_object_page_iter *iter,
unsigned int n,
unsigned int *offset)
unsigned int *offset,
bool allow_alloc)
{
const bool dma = iter == &obj->mm.get_dma_page;
struct scatterlist *sg;
@ -470,6 +487,9 @@ __i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
if (n < READ_ONCE(iter->sg_idx))
goto lookup;
if (!allow_alloc)
goto manual_lookup;
mutex_lock(&iter->lock);
/* We prefer to reuse the last sg so that repeated lookup of this
@ -519,7 +539,16 @@ __i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
if (unlikely(n < idx)) /* insertion completed by another thread */
goto lookup;
/* In case we failed to insert the entry into the radixtree, we need
goto manual_walk;
manual_lookup:
idx = 0;
sg = obj->mm.pages->sgl;
count = __sg_page_count(sg);
manual_walk:
/*
* In case we failed to insert the entry into the radixtree, we need
* to look beyond the current sg.
*/
while (idx + count <= n) {
@ -566,7 +595,7 @@ i915_gem_object_get_page(struct drm_i915_gem_object *obj, unsigned int n)
GEM_BUG_ON(!i915_gem_object_has_struct_page(obj));
sg = i915_gem_object_get_sg(obj, n, &offset);
sg = i915_gem_object_get_sg(obj, n, &offset, true);
return nth_page(sg_page(sg), offset);
}
@ -592,7 +621,7 @@ i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj,
struct scatterlist *sg;
unsigned int offset;
sg = i915_gem_object_get_sg_dma(obj, n, &offset);
sg = i915_gem_object_get_sg_dma(obj, n, &offset, true);
if (len)
*len = sg_dma_len(sg) - (offset << PAGE_SHIFT);
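
Note: the new allow_alloc parameter gives callers a lookup that touches neither iter->lock nor the radix-tree cache: with allow_alloc=false the helper falls back to the plain linear sg walk (the manual_lookup path above), so it is usable where taking that mutex or allocating cache nodes is not an option. Illustrative use of the dma variant (the wrapper is hypothetical):

/* Find the sg chunk backing dma page n without populating the
 * lookup cache (no allocation, no iter->lock taken).
 */
static struct scatterlist *
lookup_sg_dma_noalloc(struct drm_i915_gem_object *obj,
                      unsigned int n, unsigned int *offset)
{
        return i915_gem_object_get_sg_dma(obj, n, offset, false);
}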



@ -76,6 +76,8 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
intel_gt_chipset_flush(&to_i915(obj->base.dev)->gt);
/* We're no longer struct page backed */
obj->flags &= ~I915_BO_ALLOC_STRUCT_PAGE;
__i915_gem_object_set_pages(obj, st, sg->length);
return 0;
@ -89,7 +91,7 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
return -ENOMEM;
}
static void
void
i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
struct sg_table *pages)
{
@ -134,9 +136,8 @@ i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
vaddr, dma);
}
static int
phys_pwrite(struct drm_i915_gem_object *obj,
const struct drm_i915_gem_pwrite *args)
int i915_gem_object_pwrite_phys(struct drm_i915_gem_object *obj,
const struct drm_i915_gem_pwrite *args)
{
void *vaddr = sg_page(obj->mm.pages->sgl) + args->offset;
char __user *user_data = u64_to_user_ptr(args->data_ptr);
@ -165,9 +166,8 @@ phys_pwrite(struct drm_i915_gem_object *obj,
return 0;
}
static int
phys_pread(struct drm_i915_gem_object *obj,
const struct drm_i915_gem_pread *args)
int i915_gem_object_pread_phys(struct drm_i915_gem_object *obj,
const struct drm_i915_gem_pread *args)
{
void *vaddr = sg_page(obj->mm.pages->sgl) + args->offset;
char __user *user_data = u64_to_user_ptr(args->data_ptr);
@ -186,62 +186,14 @@ phys_pread(struct drm_i915_gem_object *obj,
return 0;
}
static void phys_release(struct drm_i915_gem_object *obj)
{
fput(obj->base.filp);
}
static const struct drm_i915_gem_object_ops i915_gem_phys_ops = {
.name = "i915_gem_object_phys",
.get_pages = i915_gem_object_get_pages_phys,
.put_pages = i915_gem_object_put_pages_phys,
.pread = phys_pread,
.pwrite = phys_pwrite,
.release = phys_release,
};
int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
static int i915_gem_object_shmem_to_phys(struct drm_i915_gem_object *obj)
{
struct sg_table *pages;
int err;
if (align > obj->base.size)
return -EINVAL;
if (obj->ops == &i915_gem_phys_ops)
return 0;
if (!i915_gem_object_is_shmem(obj))
return -EINVAL;
err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE);
if (err)
return err;
mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
if (obj->mm.madv != I915_MADV_WILLNEED) {
err = -EFAULT;
goto err_unlock;
}
if (i915_gem_object_has_tiling_quirk(obj)) {
err = -EFAULT;
goto err_unlock;
}
if (obj->mm.mapping) {
err = -EBUSY;
goto err_unlock;
}
pages = __i915_gem_object_unset_pages(obj);
obj->ops = &i915_gem_phys_ops;
err = ____i915_gem_object_get_pages(obj);
err = i915_gem_object_get_pages_phys(obj);
if (err)
goto err_xfer;
@ -249,25 +201,57 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
__i915_gem_object_pin_pages(obj);
if (!IS_ERR_OR_NULL(pages))
i915_gem_shmem_ops.put_pages(obj, pages);
i915_gem_object_put_pages_shmem(obj, pages);
i915_gem_object_release_memory_region(obj);
mutex_unlock(&obj->mm.lock);
return 0;
err_xfer:
obj->ops = &i915_gem_shmem_ops;
if (!IS_ERR_OR_NULL(pages)) {
unsigned int sg_page_sizes = i915_sg_page_sizes(pages->sgl);
__i915_gem_object_set_pages(obj, pages, sg_page_sizes);
}
err_unlock:
mutex_unlock(&obj->mm.lock);
return err;
}
int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
{
int err;
assert_object_held(obj);
if (align > obj->base.size)
return -EINVAL;
if (!i915_gem_object_is_shmem(obj))
return -EINVAL;
if (!i915_gem_object_has_struct_page(obj))
return 0;
err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE);
if (err)
return err;
if (obj->mm.madv != I915_MADV_WILLNEED)
return -EFAULT;
if (i915_gem_object_has_tiling_quirk(obj))
return -EFAULT;
if (obj->mm.mapping || i915_gem_object_has_pinned_pages(obj))
return -EBUSY;
if (unlikely(obj->mm.madv != I915_MADV_WILLNEED)) {
drm_dbg(obj->base.dev,
"Attempting to obtain a purgeable object\n");
return -EFAULT;
}
return i915_gem_object_shmem_to_phys(obj);
}
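
Note: with obj->mm.lock gone, i915_gem_object_attach_phys() no longer takes any lock of its own; the assert_object_held() at the top makes callers responsible for holding the dma-resv lock across the shmem-to-phys transition (the selftest hunk further down shows exactly this wrapping). Sketch:

/* Convert a shmem object to a contiguous phys object under the
 * object lock (hypothetical caller; align as before).
 */
static int attach_phys_locked(struct drm_i915_gem_object *obj, int align)
{
        int err;

        i915_gem_object_lock(obj, NULL); /* NULL: no ww acquire context */
        err = i915_gem_object_attach_phys(obj, align);
        i915_gem_object_unlock(obj);

        return err;
}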
#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
#include "selftests/i915_gem_phys.c"
#endif


@ -116,7 +116,7 @@ int i915_gem_freeze_late(struct drm_i915_private *i915)
*/
with_intel_runtime_pm(&i915->runtime_pm, wakeref)
i915_gem_shrink(i915, -1UL, NULL, ~0);
i915_gem_shrink(NULL, i915, -1UL, NULL, ~0);
i915_gem_drain_freed_objects(i915);
wbinvd_on_all_cpus();


@ -106,13 +106,11 @@ i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj)
}
void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj,
struct intel_memory_region *mem,
unsigned long flags)
struct intel_memory_region *mem)
{
INIT_LIST_HEAD(&obj->mm.blocks);
obj->mm.region = intel_memory_region_get(mem);
obj->flags |= flags;
if (obj->base.size <= mem->min_page_size)
obj->flags |= I915_BO_ALLOC_CONTIGUOUS;


@ -17,8 +17,7 @@ void i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj,
struct sg_table *pages);
void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj,
struct intel_memory_region *mem,
unsigned long flags);
struct intel_memory_region *mem);
void i915_gem_object_release_memory_region(struct drm_i915_gem_object *obj);
struct drm_i915_gem_object *


@ -99,7 +99,7 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
goto err_sg;
}
i915_gem_shrink(i915, 2 * page_count, NULL, *s++);
i915_gem_shrink(NULL, i915, 2 * page_count, NULL, *s++);
/*
* We've tried hard to allocate the memory by reaping
@ -296,8 +296,7 @@ __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj,
__start_cpu_write(obj);
}
static void
shmem_put_pages(struct drm_i915_gem_object *obj, struct sg_table *pages)
void i915_gem_object_put_pages_shmem(struct drm_i915_gem_object *obj, struct sg_table *pages)
{
struct sgt_iter sgt_iter;
struct pagevec pvec;
@ -331,6 +330,15 @@ shmem_put_pages(struct drm_i915_gem_object *obj, struct sg_table *pages)
kfree(pages);
}
static void
shmem_put_pages(struct drm_i915_gem_object *obj, struct sg_table *pages)
{
if (likely(i915_gem_object_has_struct_page(obj)))
i915_gem_object_put_pages_shmem(obj, pages);
else
i915_gem_object_put_pages_phys(obj, pages);
}
static int
shmem_pwrite(struct drm_i915_gem_object *obj,
const struct drm_i915_gem_pwrite *arg)
@ -343,6 +351,9 @@ shmem_pwrite(struct drm_i915_gem_object *obj,
/* Caller already validated user args */
GEM_BUG_ON(!access_ok(user_data, arg->size));
if (!i915_gem_object_has_struct_page(obj))
return i915_gem_object_pwrite_phys(obj, arg);
/*
* Before we instantiate/pin the backing store for our use, we
* can prepopulate the shmemfs filp efficiently using a write into
@ -421,17 +432,27 @@ shmem_pwrite(struct drm_i915_gem_object *obj,
return 0;
}
static int
shmem_pread(struct drm_i915_gem_object *obj,
const struct drm_i915_gem_pread *arg)
{
if (!i915_gem_object_has_struct_page(obj))
return i915_gem_object_pread_phys(obj, arg);
return -ENODEV;
}
static void shmem_release(struct drm_i915_gem_object *obj)
{
i915_gem_object_release_memory_region(obj);
if (obj->flags & I915_BO_ALLOC_STRUCT_PAGE)
i915_gem_object_release_memory_region(obj);
fput(obj->base.filp);
}
const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
.name = "i915_gem_object_shmem",
.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
I915_GEM_OBJECT_IS_SHRINKABLE,
.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
.get_pages = shmem_get_pages,
.put_pages = shmem_put_pages,
@ -439,6 +460,7 @@ const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
.writeback = shmem_writeback,
.pwrite = shmem_pwrite,
.pread = shmem_pread,
.release = shmem_release,
};
@ -491,7 +513,8 @@ static int shmem_object_init(struct intel_memory_region *mem,
mapping_set_gfp_mask(mapping, mask);
GEM_BUG_ON(!(mapping_gfp_mask(mapping) & __GFP_RECLAIM));
i915_gem_object_init(obj, &i915_gem_shmem_ops, &lock_class);
i915_gem_object_init(obj, &i915_gem_shmem_ops, &lock_class,
I915_BO_ALLOC_STRUCT_PAGE);
obj->write_domain = I915_GEM_DOMAIN_CPU;
obj->read_domains = I915_GEM_DOMAIN_CPU;
@ -515,7 +538,7 @@ static int shmem_object_init(struct intel_memory_region *mem,
i915_gem_object_set_cache_coherency(obj, cache_level);
i915_gem_object_init_memory_region(obj, mem, 0);
i915_gem_object_init_memory_region(obj, mem);
return 0;
}


@ -49,9 +49,9 @@ static bool unsafe_drop_pages(struct drm_i915_gem_object *obj,
flags = I915_GEM_OBJECT_UNBIND_TEST;
if (i915_gem_object_unbind(obj, flags) == 0)
__i915_gem_object_put_pages(obj);
return true;
return !i915_gem_object_has_pages(obj);
return false;
}
static void try_to_writeback(struct drm_i915_gem_object *obj,
@ -94,7 +94,8 @@ static void try_to_writeback(struct drm_i915_gem_object *obj,
* The number of pages of backing storage actually released.
*/
unsigned long
i915_gem_shrink(struct drm_i915_private *i915,
i915_gem_shrink(struct i915_gem_ww_ctx *ww,
struct drm_i915_private *i915,
unsigned long target,
unsigned long *nr_scanned,
unsigned int shrink)
@ -113,6 +114,7 @@ i915_gem_shrink(struct drm_i915_private *i915,
intel_wakeref_t wakeref = 0;
unsigned long count = 0;
unsigned long scanned = 0;
int err;
trace_i915_gem_shrink(i915, target, shrink);
@ -200,25 +202,40 @@ i915_gem_shrink(struct drm_i915_private *i915,
spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
err = 0;
if (unsafe_drop_pages(obj, shrink)) {
/* May arrive from get_pages on another bo */
mutex_lock(&obj->mm.lock);
if (!i915_gem_object_has_pages(obj)) {
if (!ww) {
if (!i915_gem_object_trylock(obj))
goto skip;
} else {
err = i915_gem_object_lock(obj, ww);
if (err)
goto skip;
}
if (!__i915_gem_object_put_pages(obj)) {
try_to_writeback(obj, shrink);
count += obj->base.size >> PAGE_SHIFT;
}
mutex_unlock(&obj->mm.lock);
if (!ww)
i915_gem_object_unlock(obj);
}
dma_resv_prune(obj->base.resv);
scanned += obj->base.size >> PAGE_SHIFT;
skip:
i915_gem_object_put(obj);
spin_lock_irqsave(&i915->mm.obj_lock, flags);
if (err)
break;
}
list_splice_tail(&still_in_list, phase->list);
spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
if (err)
return err;
}
if (shrink & I915_SHRINK_BOUND)
@ -249,7 +266,7 @@ unsigned long i915_gem_shrink_all(struct drm_i915_private *i915)
unsigned long freed = 0;
with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
freed = i915_gem_shrink(i915, -1UL, NULL,
freed = i915_gem_shrink(NULL, i915, -1UL, NULL,
I915_SHRINK_BOUND |
I915_SHRINK_UNBOUND);
}
@ -295,7 +312,7 @@ i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
sc->nr_scanned = 0;
freed = i915_gem_shrink(i915,
freed = i915_gem_shrink(NULL, i915,
sc->nr_to_scan,
&sc->nr_scanned,
I915_SHRINK_BOUND |
@ -304,7 +321,7 @@ i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
intel_wakeref_t wakeref;
with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
freed += i915_gem_shrink(i915,
freed += i915_gem_shrink(NULL, i915,
sc->nr_to_scan - sc->nr_scanned,
&sc->nr_scanned,
I915_SHRINK_ACTIVE |
@ -329,7 +346,7 @@ i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
freed_pages = 0;
with_intel_runtime_pm(&i915->runtime_pm, wakeref)
freed_pages += i915_gem_shrink(i915, -1UL, NULL,
freed_pages += i915_gem_shrink(NULL, i915, -1UL, NULL,
I915_SHRINK_BOUND |
I915_SHRINK_UNBOUND |
I915_SHRINK_WRITEBACK);
@ -367,7 +384,7 @@ i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr
intel_wakeref_t wakeref;
with_intel_runtime_pm(&i915->runtime_pm, wakeref)
freed_pages += i915_gem_shrink(i915, -1UL, NULL,
freed_pages += i915_gem_shrink(NULL, i915, -1UL, NULL,
I915_SHRINK_BOUND |
I915_SHRINK_UNBOUND |
I915_SHRINK_VMAPS);


@ -9,10 +9,12 @@
#include <linux/bits.h>
struct drm_i915_private;
struct i915_gem_ww_ctx;
struct mutex;
/* i915_gem_shrinker.c */
unsigned long i915_gem_shrink(struct drm_i915_private *i915,
unsigned long i915_gem_shrink(struct i915_gem_ww_ctx *ww,
struct drm_i915_private *i915,
unsigned long target,
unsigned long *nr_scanned,
unsigned flags);


@ -630,20 +630,22 @@ static int __i915_gem_object_create_stolen(struct intel_memory_region *mem,
int err;
drm_gem_private_object_init(&mem->i915->drm, &obj->base, stolen->size);
i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class);
i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class, 0);
obj->stolen = stolen;
obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT;
cache_level = HAS_LLC(mem->i915) ? I915_CACHE_LLC : I915_CACHE_NONE;
i915_gem_object_set_cache_coherency(obj, cache_level);
if (WARN_ON(!i915_gem_object_trylock(obj)))
return -EBUSY;
err = i915_gem_object_pin_pages(obj);
if (err)
return err;
if (!err)
i915_gem_object_init_memory_region(obj, mem);
i915_gem_object_unlock(obj);
i915_gem_object_init_memory_region(obj, mem, 0);
return 0;
return err;
}
static int _i915_gem_object_stolen_init(struct intel_memory_region *mem,


@ -265,7 +265,6 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
* pages to prevent them being swapped out and causing corruption
* due to the change in swizzling.
*/
mutex_lock(&obj->mm.lock);
if (i915_gem_object_has_pages(obj) &&
obj->mm.madv == I915_MADV_WILLNEED &&
i915->quirks & QUIRK_PIN_SWIZZLED_PAGES) {
@ -280,7 +279,6 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
i915_gem_object_set_tiling_quirk(obj);
}
}
mutex_unlock(&obj->mm.lock);
spin_lock(&obj->vma.lock);
for_each_ggtt_vma(vma, obj) {

File diff suppressed because it is too large.


@ -89,7 +89,6 @@ static void huge_put_pages(struct drm_i915_gem_object *obj,
static const struct drm_i915_gem_object_ops huge_ops = {
.name = "huge-gem",
.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE,
.get_pages = huge_get_pages,
.put_pages = huge_put_pages,
};
@ -115,7 +114,8 @@ huge_gem_object(struct drm_i915_private *i915,
return ERR_PTR(-ENOMEM);
drm_gem_private_object_init(&i915->drm, &obj->base, dma_size);
i915_gem_object_init(obj, &huge_ops, &lock_class);
i915_gem_object_init(obj, &huge_ops, &lock_class,
I915_BO_ALLOC_STRUCT_PAGE);
obj->read_domains = I915_GEM_DOMAIN_CPU;
obj->write_domain = I915_GEM_DOMAIN_CPU;


@ -140,8 +140,7 @@ static void put_huge_pages(struct drm_i915_gem_object *obj,
static const struct drm_i915_gem_object_ops huge_page_ops = {
.name = "huge-gem",
.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
I915_GEM_OBJECT_IS_SHRINKABLE,
.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
.get_pages = get_huge_pages,
.put_pages = put_huge_pages,
};
@ -168,7 +167,8 @@ huge_pages_object(struct drm_i915_private *i915,
return ERR_PTR(-ENOMEM);
drm_gem_private_object_init(&i915->drm, &obj->base, size);
i915_gem_object_init(obj, &huge_page_ops, &lock_class);
i915_gem_object_init(obj, &huge_page_ops, &lock_class,
I915_BO_ALLOC_STRUCT_PAGE);
i915_gem_object_set_volatile(obj);
@ -319,9 +319,9 @@ fake_huge_pages_object(struct drm_i915_private *i915, u64 size, bool single)
drm_gem_private_object_init(&i915->drm, &obj->base, size);
if (single)
i915_gem_object_init(obj, &fake_ops_single, &lock_class);
i915_gem_object_init(obj, &fake_ops_single, &lock_class, 0);
else
i915_gem_object_init(obj, &fake_ops, &lock_class);
i915_gem_object_init(obj, &fake_ops, &lock_class, 0);
i915_gem_object_set_volatile(obj);
@ -589,7 +589,7 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
goto out_put;
}
err = i915_gem_object_pin_pages(obj);
err = i915_gem_object_pin_pages_unlocked(obj);
if (err)
goto out_put;
@ -653,15 +653,19 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
break;
}
i915_gem_object_lock(obj, NULL);
i915_gem_object_unpin_pages(obj);
__i915_gem_object_put_pages(obj);
i915_gem_object_unlock(obj);
i915_gem_object_put(obj);
}
return 0;
out_unpin:
i915_gem_object_lock(obj, NULL);
i915_gem_object_unpin_pages(obj);
i915_gem_object_unlock(obj);
out_put:
i915_gem_object_put(obj);
@ -675,8 +679,10 @@ static void close_object_list(struct list_head *objects,
list_for_each_entry_safe(obj, on, objects, st_link) {
list_del(&obj->st_link);
i915_gem_object_lock(obj, NULL);
i915_gem_object_unpin_pages(obj);
__i915_gem_object_put_pages(obj);
i915_gem_object_unlock(obj);
i915_gem_object_put(obj);
}
}
@ -713,7 +719,7 @@ static int igt_mock_ppgtt_huge_fill(void *arg)
break;
}
err = i915_gem_object_pin_pages(obj);
err = i915_gem_object_pin_pages_unlocked(obj);
if (err) {
i915_gem_object_put(obj);
break;
@ -889,7 +895,7 @@ static int igt_mock_ppgtt_64K(void *arg)
if (IS_ERR(obj))
return PTR_ERR(obj);
err = i915_gem_object_pin_pages(obj);
err = i915_gem_object_pin_pages_unlocked(obj);
if (err)
goto out_object_put;
@ -943,8 +949,10 @@ static int igt_mock_ppgtt_64K(void *arg)
}
i915_vma_unpin(vma);
i915_gem_object_lock(obj, NULL);
i915_gem_object_unpin_pages(obj);
__i915_gem_object_put_pages(obj);
i915_gem_object_unlock(obj);
i915_gem_object_put(obj);
}
}
@ -954,7 +962,9 @@ static int igt_mock_ppgtt_64K(void *arg)
out_vma_unpin:
i915_vma_unpin(vma);
out_object_unpin:
i915_gem_object_lock(obj, NULL);
i915_gem_object_unpin_pages(obj);
i915_gem_object_unlock(obj);
out_object_put:
i915_gem_object_put(obj);
@ -1024,7 +1034,7 @@ static int __cpu_check_vmap(struct drm_i915_gem_object *obj, u32 dword, u32 val)
if (err)
return err;
ptr = i915_gem_object_pin_map(obj, I915_MAP_WC);
ptr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
if (IS_ERR(ptr))
return PTR_ERR(ptr);
@ -1304,7 +1314,7 @@ static int igt_ppgtt_smoke_huge(void *arg)
return err;
}
err = i915_gem_object_pin_pages(obj);
err = i915_gem_object_pin_pages_unlocked(obj);
if (err) {
if (err == -ENXIO || err == -E2BIG) {
i915_gem_object_put(obj);
@ -1327,8 +1337,10 @@ static int igt_ppgtt_smoke_huge(void *arg)
__func__, size, i);
}
out_unpin:
i915_gem_object_lock(obj, NULL);
i915_gem_object_unpin_pages(obj);
__i915_gem_object_put_pages(obj);
i915_gem_object_unlock(obj);
out_put:
i915_gem_object_put(obj);
@ -1402,7 +1414,7 @@ static int igt_ppgtt_sanity_check(void *arg)
return err;
}
err = i915_gem_object_pin_pages(obj);
err = i915_gem_object_pin_pages_unlocked(obj);
if (err) {
i915_gem_object_put(obj);
goto out;
@ -1416,8 +1428,10 @@ static int igt_ppgtt_sanity_check(void *arg)
err = igt_write_huge(ctx, obj);
i915_gem_object_lock(obj, NULL);
i915_gem_object_unpin_pages(obj);
__i915_gem_object_put_pages(obj);
i915_gem_object_unlock(obj);
i915_gem_object_put(obj);
if (err) {
@ -1462,7 +1476,7 @@ static int igt_tmpfs_fallback(void *arg)
goto out_restore;
}
vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
if (IS_ERR(vaddr)) {
err = PTR_ERR(vaddr);
goto out_put;


@ -45,7 +45,7 @@ static int __igt_client_fill(struct intel_engine_cs *engine)
goto err_flush;
}
vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
if (IS_ERR(vaddr)) {
err = PTR_ERR(vaddr);
goto err_put;
@ -157,7 +157,7 @@ static int prepare_blit(const struct tiled_blits *t,
u32 src_pitch, dst_pitch;
u32 cmd, *cs;
cs = i915_gem_object_pin_map(batch, I915_MAP_WC);
cs = i915_gem_object_pin_map_unlocked(batch, I915_MAP_WC);
if (IS_ERR(cs))
return PTR_ERR(cs);
@ -377,7 +377,7 @@ static int verify_buffer(const struct tiled_blits *t,
y = i915_prandom_u32_max_state(t->height, prng);
p = y * t->width + x;
vaddr = i915_gem_object_pin_map(buf->vma->obj, I915_MAP_WC);
vaddr = i915_gem_object_pin_map_unlocked(buf->vma->obj, I915_MAP_WC);
if (IS_ERR(vaddr))
return PTR_ERR(vaddr);
@ -564,7 +564,7 @@ static int tiled_blits_prepare(struct tiled_blits *t,
int err;
int i;
map = i915_gem_object_pin_map(t->scratch.vma->obj, I915_MAP_WC);
map = i915_gem_object_pin_map_unlocked(t->scratch.vma->obj, I915_MAP_WC);
if (IS_ERR(map))
return PTR_ERR(map);


@ -160,7 +160,7 @@ static int wc_set(struct context *ctx, unsigned long offset, u32 v)
if (err)
return err;
map = i915_gem_object_pin_map(ctx->obj, I915_MAP_WC);
map = i915_gem_object_pin_map_unlocked(ctx->obj, I915_MAP_WC);
if (IS_ERR(map))
return PTR_ERR(map);
@ -183,7 +183,7 @@ static int wc_get(struct context *ctx, unsigned long offset, u32 *v)
if (err)
return err;
map = i915_gem_object_pin_map(ctx->obj, I915_MAP_WC);
map = i915_gem_object_pin_map_unlocked(ctx->obj, I915_MAP_WC);
if (IS_ERR(map))
return PTR_ERR(map);
@ -200,17 +200,15 @@ static int gpu_set(struct context *ctx, unsigned long offset, u32 v)
u32 *cs;
int err;
vma = i915_gem_object_ggtt_pin(ctx->obj, NULL, 0, 0, 0);
if (IS_ERR(vma))
return PTR_ERR(vma);
i915_gem_object_lock(ctx->obj, NULL);
err = i915_gem_object_set_to_gtt_domain(ctx->obj, true);
if (err)
goto out_unlock;
vma = i915_gem_object_ggtt_pin(ctx->obj, NULL, 0, 0, 0);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto out_unlock;
}
rq = intel_engine_create_kernel_request(ctx->engine);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);


@ -1094,7 +1094,7 @@ __read_slice_count(struct intel_context *ce,
if (ret < 0)
return ret;
buf = i915_gem_object_pin_map(obj, I915_MAP_WB);
buf = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
if (IS_ERR(buf)) {
ret = PTR_ERR(buf);
return ret;
@ -1511,7 +1511,7 @@ static int write_to_scratch(struct i915_gem_context *ctx,
if (IS_ERR(obj))
return PTR_ERR(obj);
cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
if (IS_ERR(cmd)) {
err = PTR_ERR(cmd);
goto out;
@ -1622,7 +1622,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
if (err)
goto out_vm;
cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
if (IS_ERR(cmd)) {
err = PTR_ERR(cmd);
goto out;
@ -1658,7 +1658,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
if (err)
goto out_vm;
cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
if (IS_ERR(cmd)) {
err = PTR_ERR(cmd);
goto out;
@ -1715,7 +1715,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
if (err)
goto out_vm;
cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
if (IS_ERR(cmd)) {
err = PTR_ERR(cmd);
goto out_vm;


@ -194,7 +194,7 @@ static int igt_dmabuf_import_ownership(void *arg)
dma_buf_put(dmabuf);
err = i915_gem_object_pin_pages(obj);
err = i915_gem_object_pin_pages_unlocked(obj);
if (err) {
pr_err("i915_gem_object_pin_pages failed with err=%d\n", err);
goto out_obj;


@ -116,7 +116,7 @@ static int igt_gpu_reloc(void *arg)
if (IS_ERR(scratch))
return PTR_ERR(scratch);
map = i915_gem_object_pin_map(scratch, I915_MAP_WC);
map = i915_gem_object_pin_map_unlocked(scratch, I915_MAP_WC);
if (IS_ERR(map)) {
err = PTR_ERR(map);
goto err_scratch;


@ -322,7 +322,7 @@ static int igt_partial_tiling(void *arg)
if (IS_ERR(obj))
return PTR_ERR(obj);
err = i915_gem_object_pin_pages(obj);
err = i915_gem_object_pin_pages_unlocked(obj);
if (err) {
pr_err("Failed to allocate %u pages (%lu total), err=%d\n",
nreal, obj->base.size / PAGE_SIZE, err);
@ -459,7 +459,7 @@ static int igt_smoke_tiling(void *arg)
if (IS_ERR(obj))
return PTR_ERR(obj);
err = i915_gem_object_pin_pages(obj);
err = i915_gem_object_pin_pages_unlocked(obj);
if (err) {
pr_err("Failed to allocate %u pages (%lu total), err=%d\n",
nreal, obj->base.size / PAGE_SIZE, err);
@ -798,7 +798,7 @@ static int wc_set(struct drm_i915_gem_object *obj)
{
void *vaddr;
vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
if (IS_ERR(vaddr))
return PTR_ERR(vaddr);
@ -814,7 +814,7 @@ static int wc_check(struct drm_i915_gem_object *obj)
void *vaddr;
int err = 0;
vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
if (IS_ERR(vaddr))
return PTR_ERR(vaddr);
@ -835,9 +835,8 @@ static bool can_mmap(struct drm_i915_gem_object *obj, enum i915_mmap_type type)
return false;
if (type != I915_MMAP_TYPE_GTT &&
!i915_gem_object_type_has(obj,
I915_GEM_OBJECT_HAS_STRUCT_PAGE |
I915_GEM_OBJECT_HAS_IOMEM))
!i915_gem_object_has_struct_page(obj) &&
!i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM))
return false;
return true;
@ -977,10 +976,8 @@ static const char *repr_mmap_type(enum i915_mmap_type type)
static bool can_access(const struct drm_i915_gem_object *obj)
{
unsigned int flags =
I915_GEM_OBJECT_HAS_STRUCT_PAGE | I915_GEM_OBJECT_HAS_IOMEM;
return i915_gem_object_type_has(obj, flags);
return i915_gem_object_has_struct_page(obj) ||
i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM);
}
static int __igt_mmap_access(struct drm_i915_private *i915,
@ -1319,7 +1316,9 @@ static int __igt_mmap_revoke(struct drm_i915_private *i915,
}
if (type != I915_MMAP_TYPE_GTT) {
i915_gem_object_lock(obj, NULL);
__i915_gem_object_put_pages(obj);
i915_gem_object_unlock(obj);
if (i915_gem_object_has_pages(obj)) {
pr_err("Failed to put-pages object!\n");
err = -EINVAL;


@ -47,7 +47,7 @@ static int igt_gem_huge(void *arg)
if (IS_ERR(obj))
return PTR_ERR(obj);
err = i915_gem_object_pin_pages(obj);
err = i915_gem_object_pin_pages_unlocked(obj);
if (err) {
pr_err("Failed to allocate %u pages (%lu total), err=%d\n",
nreal, obj->base.size / PAGE_SIZE, err);


@ -262,7 +262,7 @@ static int igt_fill_blt_thread(void *arg)
goto err_flush;
}
vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
if (IS_ERR(vaddr)) {
err = PTR_ERR(vaddr);
goto err_put;
@ -380,7 +380,7 @@ static int igt_copy_blt_thread(void *arg)
goto err_flush;
}
vaddr = i915_gem_object_pin_map(src, I915_MAP_WB);
vaddr = i915_gem_object_pin_map_unlocked(src, I915_MAP_WB);
if (IS_ERR(vaddr)) {
err = PTR_ERR(vaddr);
goto err_put_src;
@ -400,7 +400,7 @@ static int igt_copy_blt_thread(void *arg)
goto err_put_src;
}
vaddr = i915_gem_object_pin_map(dst, I915_MAP_WB);
vaddr = i915_gem_object_pin_map_unlocked(dst, I915_MAP_WB);
if (IS_ERR(vaddr)) {
err = PTR_ERR(vaddr);
goto err_put_dst;


@ -25,13 +25,21 @@ static int mock_phys_object(void *arg)
goto out;
}
if (!i915_gem_object_has_struct_page(obj)) {
err = -EINVAL;
pr_err("shmem has no struct page\n");
goto out_obj;
}
i915_gem_object_lock(obj, NULL);
err = i915_gem_object_attach_phys(obj, PAGE_SIZE);
i915_gem_object_unlock(obj);
if (err) {
pr_err("i915_gem_object_attach_phys failed, err=%d\n", err);
goto out_obj;
}
if (obj->ops != &i915_gem_phys_ops) {
if (i915_gem_object_has_struct_page(obj)) {
pr_err("i915_gem_object_attach_phys did not create a phys object\n");
err = -EINVAL;
goto out_obj;


@ -55,7 +55,7 @@ igt_emit_store_dw(struct i915_vma *vma,
if (IS_ERR(obj))
return ERR_CAST(obj);
cmd = i915_gem_object_pin_map(obj, I915_MAP_WC);
cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
if (IS_ERR(cmd)) {
err = PTR_ERR(cmd);
goto err;


@ -143,7 +143,7 @@ static u32 *__gen2_emit_breadcrumb(struct i915_request *rq, u32 *cs,
int flush, int post)
{
GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
*cs++ = MI_FLUSH;


@ -161,7 +161,7 @@ u32 *gen6_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
PIPE_CONTROL_DC_FLUSH_ENABLE |
PIPE_CONTROL_QW_WRITE |
PIPE_CONTROL_CS_STALL);
*cs++ = i915_request_active_timeline(rq)->hwsp_offset |
*cs++ = i915_request_active_seqno(rq) |
PIPE_CONTROL_GLOBAL_GTT;
*cs++ = rq->fence.seqno;
@ -359,7 +359,7 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
PIPE_CONTROL_QW_WRITE |
PIPE_CONTROL_GLOBAL_GTT_IVB |
PIPE_CONTROL_CS_STALL);
*cs++ = i915_request_active_timeline(rq)->hwsp_offset;
*cs++ = i915_request_active_seqno(rq);
*cs++ = rq->fence.seqno;
*cs++ = MI_USER_INTERRUPT;
@ -374,7 +374,7 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
u32 *gen6_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
{
GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
*cs++ = MI_FLUSH_DW | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
*cs++ = I915_GEM_HWS_SEQNO_ADDR | MI_FLUSH_DW_USE_GTT;
@ -394,7 +394,7 @@ u32 *gen7_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
int i;
GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != rq->engine->status_page.vma);
GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != I915_GEM_HWS_SEQNO_ADDR);
GEM_BUG_ON(offset_in_page(rq->hwsp_seqno) != I915_GEM_HWS_SEQNO_ADDR);
*cs++ = MI_FLUSH_DW | MI_INVALIDATE_TLB |
MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;


@ -338,15 +338,14 @@ static u32 preempt_address(struct intel_engine_cs *engine)
static u32 hwsp_offset(const struct i915_request *rq)
{
const struct intel_timeline_cacheline *cl;
const struct intel_timeline *tl;
/* Before the request is executed, the timeline/cachline is fixed */
/* Before the request is executed, the timeline is fixed */
tl = rcu_dereference_protected(rq->timeline,
!i915_request_signaled(rq));
cl = rcu_dereference_protected(rq->hwsp_cacheline, 1);
if (cl)
return cl->ggtt_offset;
return rcu_dereference_protected(rq->timeline, 1)->hwsp_offset;
/* See the comment in i915_request_active_seqno(). */
return page_mask_bits(tl->hwsp_offset) + offset_in_page(rq->hwsp_seqno);
}
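
Note: both the gen6/7 breadcrumb emitters above (via i915_request_active_seqno()) and this gen8 helper now compute the seqno address the same way, the timeline's HWSP page base plus the request's fixed slot within that page, with no cacheline indirection left. Stated as the shared invariant (this restates the code above, it is not an additional API):

/* ggtt address of a request's seqno slot, per the helpers above */
static u32 seqno_ggtt_address(const struct i915_request *rq)
{
        const struct intel_timeline *tl =
                rcu_dereference_protected(rq->timeline, 1);

        return page_mask_bits(tl->hwsp_offset) +
               offset_in_page(rq->hwsp_seqno);
}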
int gen8_emit_init_breadcrumb(struct i915_request *rq)


@ -6,9 +6,18 @@
#ifndef INTEL_CONTEXT_PARAM_H
#define INTEL_CONTEXT_PARAM_H
struct intel_context;
#include <linux/types.h>
#include "intel_context.h"
int intel_context_set_ring_size(struct intel_context *ce, long sz);
long intel_context_get_ring_size(struct intel_context *ce);
static inline int
intel_context_set_watchdog_us(struct intel_context *ce, u64 timeout_us)
{
ce->watchdog.timeout_us = timeout_us;
return 0;
}
#endif /* INTEL_CONTEXT_PARAM_H */
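
Note: setting the watchdog is deliberately just a store; the request machinery is expected to sample ce->watchdog.timeout_us when a request is created against the context and arm a timer, whose expiry feeds intel_gt_watchdog_work() shown later in this pull. Sketch of a user (the 20ms budget is illustrative):

/* Cap every request on this context at roughly 20ms of wall time. */
static void cap_context_runtime(struct intel_context *ce)
{
        intel_context_set_watchdog_us(ce, 20 * USEC_PER_MSEC);
}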


@ -97,6 +97,10 @@ struct intel_context {
#define CONTEXT_FORCE_SINGLE_SUBMISSION 7
#define CONTEXT_NOPREEMPT 8
struct {
u64 timeout_us;
} watchdog;
u32 *lrc_reg_state;
union {
struct {


@ -619,6 +619,7 @@ static void cleanup_status_page(struct intel_engine_cs *engine)
}
static int pin_ggtt_status_page(struct intel_engine_cs *engine,
struct i915_gem_ww_ctx *ww,
struct i915_vma *vma)
{
unsigned int flags;
@ -639,12 +640,13 @@ static int pin_ggtt_status_page(struct intel_engine_cs *engine,
else
flags = PIN_HIGH;
return i915_ggtt_pin(vma, NULL, 0, flags);
return i915_ggtt_pin(vma, ww, 0, flags);
}
static int init_status_page(struct intel_engine_cs *engine)
{
struct drm_i915_gem_object *obj;
struct i915_gem_ww_ctx ww;
struct i915_vma *vma;
void *vaddr;
int ret;
@ -670,30 +672,39 @@ static int init_status_page(struct intel_engine_cs *engine)
vma = i915_vma_instance(obj, &engine->gt->ggtt->vm, NULL);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
goto err;
goto err_put;
}
i915_gem_ww_ctx_init(&ww, true);
retry:
ret = i915_gem_object_lock(obj, &ww);
if (!ret && !HWS_NEEDS_PHYSICAL(engine->i915))
ret = pin_ggtt_status_page(engine, &ww, vma);
if (ret)
goto err;
vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
if (IS_ERR(vaddr)) {
ret = PTR_ERR(vaddr);
goto err;
goto err_unpin;
}
engine->status_page.addr = memset(vaddr, 0, PAGE_SIZE);
engine->status_page.vma = vma;
if (!HWS_NEEDS_PHYSICAL(engine->i915)) {
ret = pin_ggtt_status_page(engine, vma);
if (ret)
goto err_unpin;
}
return 0;
err_unpin:
i915_gem_object_unpin_map(obj);
if (ret)
i915_vma_unpin(vma);
err:
i915_gem_object_put(obj);
if (ret == -EDEADLK) {
ret = i915_gem_ww_ctx_backoff(&ww);
if (!ret)
goto retry;
}
i915_gem_ww_ctx_fini(&ww);
err_put:
if (ret)
i915_gem_object_put(obj);
return ret;
}
@ -763,6 +774,7 @@ static int measure_breadcrumb_dw(struct intel_context *ce)
frame->rq.engine = engine;
frame->rq.context = ce;
rcu_assign_pointer(frame->rq.timeline, ce->timeline);
frame->rq.hwsp_seqno = ce->timeline->hwsp_seqno;
frame->ring.vaddr = frame->cs;
frame->ring.size = sizeof(frame->cs);


@ -279,6 +279,7 @@ int intel_engine_pulse(struct intel_engine_cs *engine)
mutex_unlock(&ce->timeline->mutex);
}
intel_engine_flush_submission(engine);
intel_engine_pm_put(engine);
return err;
}


@ -27,12 +27,16 @@ static void dbg_poison_ce(struct intel_context *ce)
int type = i915_coherent_map_type(ce->engine->i915);
void *map;
if (!i915_gem_object_trylock(obj))
return;
map = i915_gem_object_pin_map(obj, type);
if (!IS_ERR(map)) {
memset(map, CONTEXT_REDZONE, obj->base.size);
i915_gem_object_flush_map(obj);
i915_gem_object_unpin_map(obj);
}
i915_gem_object_unlock(obj);
}
}


@ -470,6 +470,11 @@ static void reset_active(struct i915_request *rq,
ce->lrc.lrca = lrc_update_regs(ce, engine, head);
}
static bool bad_request(const struct i915_request *rq)
{
return rq->fence.error && i915_request_started(rq);
}
static struct intel_engine_cs *
__execlists_schedule_in(struct i915_request *rq)
{
@ -482,7 +487,7 @@ __execlists_schedule_in(struct i915_request *rq)
!intel_engine_has_heartbeat(engine)))
intel_context_set_banned(ce);
if (unlikely(intel_context_is_banned(ce)))
if (unlikely(intel_context_is_banned(ce) || bad_request(rq)))
reset_active(rq, engine);
if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
@ -752,9 +757,8 @@ assert_pending_valid(const struct intel_engine_execlists *execlists,
{
struct intel_engine_cs *engine =
container_of(execlists, typeof(*engine), execlists);
struct i915_request * const *port, *rq;
struct i915_request * const *port, *rq, *prev = NULL;
struct intel_context *ce = NULL;
bool sentinel = false;
u32 ccid = -1;
trace_ports(execlists, msg, execlists->pending);
@ -804,15 +808,20 @@ assert_pending_valid(const struct intel_engine_execlists *execlists,
* Sentinels are supposed to be the last request so they flush
* the current execution off the HW. Check that they are the only
* request in the pending submission.
*
* NB: Due to the async nature of preempt-to-busy and request
* cancellation we need to handle the case where request
* becomes a sentinel in parallel to CSB processing.
*/
if (sentinel) {
if (prev && i915_request_has_sentinel(prev) &&
!READ_ONCE(prev->fence.error)) {
GEM_TRACE_ERR("%s: context:%llx after sentinel in pending[%zd]\n",
engine->name,
ce->timeline->fence_context,
port - execlists->pending);
return false;
}
sentinel = i915_request_has_sentinel(rq);
prev = rq;
/*
* We want virtual requests to only be in the first slot so
@ -948,7 +957,7 @@ static bool can_merge_rq(const struct i915_request *prev,
if (__i915_request_is_complete(next))
return true;
if (unlikely((i915_request_flags(prev) ^ i915_request_flags(next)) &
if (unlikely((i915_request_flags(prev) | i915_request_flags(next)) &
(BIT(I915_FENCE_FLAG_NOPREEMPT) |
BIT(I915_FENCE_FLAG_SENTINEL))))
return false;
@ -1208,7 +1217,7 @@ static unsigned long active_preempt_timeout(struct intel_engine_cs *engine,
return 0;
/* Force a fast reset for terminated contexts (ignoring sysfs!) */
if (unlikely(intel_context_is_banned(rq->context)))
if (unlikely(intel_context_is_banned(rq->context) || bad_request(rq)))
return 1;
return READ_ONCE(engine->props.preempt_timeout_ms);
@ -2457,11 +2466,31 @@ static void execlists_submit_request(struct i915_request *request)
spin_unlock_irqrestore(&engine->active.lock, flags);
}
static int
__execlists_context_pre_pin(struct intel_context *ce,
struct intel_engine_cs *engine,
struct i915_gem_ww_ctx *ww, void **vaddr)
{
int err;
err = lrc_pre_pin(ce, engine, ww, vaddr);
if (err)
return err;
if (!__test_and_set_bit(CONTEXT_INIT_BIT, &ce->flags)) {
lrc_init_state(ce, engine, *vaddr);
__i915_gem_object_flush_map(ce->state->obj, 0, engine->context_size);
}
return 0;
}
static int execlists_context_pre_pin(struct intel_context *ce,
struct i915_gem_ww_ctx *ww,
void **vaddr)
{
return lrc_pre_pin(ce, ce->engine, ww, vaddr);
return __execlists_context_pre_pin(ce, ce->engine, ww, vaddr);
}
static int execlists_context_pin(struct intel_context *ce, void *vaddr)
@ -3365,8 +3394,8 @@ static int virtual_context_pre_pin(struct intel_context *ce,
{
struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
/* Note: we must use a real engine class for setting up reg state */
return lrc_pre_pin(ce, ve->siblings[0], ww, vaddr);
/* Note: we must use a real engine class for setting up reg state */
return __execlists_context_pre_pin(ce, ve->siblings[0], ww, vaddr);
}
static int virtual_context_pin(struct intel_context *ce, void *vaddr)


@ -6,6 +6,7 @@
#ifndef __INTEL_EXECLISTS_SUBMISSION_H__
#define __INTEL_EXECLISTS_SUBMISSION_H__
#include <linux/llist.h>
#include <linux/types.h>
struct drm_printer;
@ -13,6 +14,7 @@ struct drm_printer;
struct i915_request;
struct intel_context;
struct intel_engine_cs;
struct intel_gt;
enum {
INTEL_CONTEXT_SCHEDULE_IN = 0,


@ -647,7 +647,9 @@ static int init_aliasing_ppgtt(struct i915_ggtt *ggtt)
if (err)
goto err_ppgtt;
i915_gem_object_lock(ppgtt->vm.scratch[0], NULL);
err = i915_vm_pin_pt_stash(&ppgtt->vm, &stash);
i915_gem_object_unlock(ppgtt->vm.scratch[0]);
if (err)
goto err_stash;
@ -734,6 +736,7 @@ static void ggtt_cleanup_hw(struct i915_ggtt *ggtt)
mutex_unlock(&ggtt->vm.mutex);
i915_address_space_fini(&ggtt->vm);
dma_resv_fini(&ggtt->vm.resv);
arch_phys_wc_del(ggtt->mtrr);
@ -1115,6 +1118,7 @@ static int ggtt_probe_hw(struct i915_ggtt *ggtt, struct intel_gt *gt)
ggtt->vm.gt = gt;
ggtt->vm.i915 = i915;
ggtt->vm.dma = i915->drm.dev;
dma_resv_init(&ggtt->vm.resv);
if (INTEL_GEN(i915) <= 5)
ret = i915_gmch_probe(ggtt);
@ -1122,8 +1126,10 @@ static int ggtt_probe_hw(struct i915_ggtt *ggtt, struct intel_gt *gt)
ret = gen6_gmch_probe(ggtt);
else
ret = gen8_gmch_probe(ggtt);
if (ret)
if (ret) {
dma_resv_fini(&ggtt->vm.resv);
return ret;
}
if ((ggtt->vm.total - 1) >> 32) {
drm_err(&i915->drm,
@ -1420,7 +1426,7 @@ intel_partial_pages(const struct i915_ggtt_view *view,
if (ret)
goto err_sg_alloc;
iter = i915_gem_object_get_sg_dma(obj, view->partial.offset, &offset);
iter = i915_gem_object_get_sg_dma(obj, view->partial.offset, &offset, true);
GEM_BUG_ON(!iter);
sg = st->sgl;


@ -29,6 +29,9 @@ void intel_gt_init_early(struct intel_gt *gt, struct drm_i915_private *i915)
INIT_LIST_HEAD(&gt->closed_vma);
spin_lock_init(&gt->closed_lock);
init_llist_head(&gt->watchdog.list);
INIT_WORK(&gt->watchdog.work, intel_gt_watchdog_work);
intel_gt_init_buffer_pool(gt);
intel_gt_init_reset(gt);
intel_gt_init_requests(gt);


@ -77,4 +77,6 @@ static inline bool intel_gt_is_wedged(const struct intel_gt *gt)
void intel_gt_info_print(const struct intel_gt_info *info,
struct drm_printer *p);
void intel_gt_watchdog_work(struct work_struct *work);
#endif /* __INTEL_GT_H__ */


@ -98,28 +98,6 @@ static void pool_free_work(struct work_struct *wrk)
round_jiffies_up_relative(HZ));
}
static int pool_active(struct i915_active *ref)
{
struct intel_gt_buffer_pool_node *node =
container_of(ref, typeof(*node), active);
struct dma_resv *resv = node->obj->base.resv;
int err;
if (dma_resv_trylock(resv)) {
dma_resv_add_excl_fence(resv, NULL);
dma_resv_unlock(resv);
}
err = i915_gem_object_pin_pages(node->obj);
if (err)
return err;
/* Hide this pinned object from the shrinker until retired */
i915_gem_object_make_unshrinkable(node->obj);
return 0;
}
__i915_active_call
static void pool_retire(struct i915_active *ref)
{
@ -129,10 +107,13 @@ static void pool_retire(struct i915_active *ref)
struct list_head *list = bucket_for_size(pool, node->obj->base.size);
unsigned long flags;
i915_gem_object_unpin_pages(node->obj);
if (node->pinned) {
i915_gem_object_unpin_pages(node->obj);
/* Return this object to the shrinker pool */
i915_gem_object_make_purgeable(node->obj);
/* Return this object to the shrinker pool */
i915_gem_object_make_purgeable(node->obj);
node->pinned = false;
}
GEM_BUG_ON(node->age);
spin_lock_irqsave(&pool->lock, flags);
@ -144,6 +125,19 @@ static void pool_retire(struct i915_active *ref)
round_jiffies_up_relative(HZ));
}
void intel_gt_buffer_pool_mark_used(struct intel_gt_buffer_pool_node *node)
{
assert_object_held(node->obj);
if (node->pinned)
return;
__i915_gem_object_pin_pages(node->obj);
/* Hide this pinned object from the shrinker until retired */
i915_gem_object_make_unshrinkable(node->obj);
node->pinned = true;
}
static struct intel_gt_buffer_pool_node *
node_create(struct intel_gt_buffer_pool *pool, size_t sz,
enum i915_map_type type)
@ -159,7 +153,8 @@ node_create(struct intel_gt_buffer_pool *pool, size_t sz,
node->age = 0;
node->pool = pool;
i915_active_init(&node->active, pool_active, pool_retire);
node->pinned = false;
i915_active_init(&node->active, NULL, pool_retire);
obj = i915_gem_object_create_internal(gt->i915, sz);
if (IS_ERR(obj)) {


@ -18,10 +18,15 @@ struct intel_gt_buffer_pool_node *
intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size,
enum i915_map_type type);
void intel_gt_buffer_pool_mark_used(struct intel_gt_buffer_pool_node *node);
static inline int
intel_gt_buffer_pool_mark_active(struct intel_gt_buffer_pool_node *node,
struct i915_request *rq)
{
/* did we call mark_used? */
GEM_WARN_ON(!node->pinned);
return i915_active_add_request(&node->active, rq);
}
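
Note: with pool_active() gone, pinning is explicit; users call intel_gt_buffer_pool_mark_used() under the object lock before tying the node to a request, which the assert_object_held() in mark_used() and the GEM_WARN_ON() above enforce. A sketch of the resulting sequence (the size, map type and request are illustrative):

/* Grab a pool buffer, map it, and keep it alive until rq retires. */
static int use_pool_node(struct intel_gt *gt, struct i915_request *rq)
{
        struct intel_gt_buffer_pool_node *node;
        void *vaddr;
        int err;

        node = intel_gt_get_buffer_pool(gt, SZ_64K, I915_MAP_WC);
        if (IS_ERR(node))
                return PTR_ERR(node);

        err = i915_gem_object_lock(node->obj, NULL);
        if (err)
                goto out_put;

        vaddr = i915_gem_object_pin_map(node->obj, node->type);
        if (IS_ERR(vaddr)) {
                err = PTR_ERR(vaddr);
                goto out_unlock;
        }

        /* ... fill vaddr with commands/data ... */

        intel_gt_buffer_pool_mark_used(node);   /* pinned until retired */
        err = intel_gt_buffer_pool_mark_active(node, rq);

        i915_gem_object_unpin_map(node->obj);
out_unlock:
        i915_gem_object_unlock(node->obj);
out_put:
        intel_gt_buffer_pool_put(node);
        return err;
}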


@ -31,6 +31,7 @@ struct intel_gt_buffer_pool_node {
};
unsigned long age;
enum i915_map_type type;
u32 pinned;
};
#endif /* INTEL_GT_BUFFER_POOL_TYPES_H */


@ -9,6 +9,7 @@
#include "i915_drv.h" /* for_each_engine() */
#include "i915_request.h"
#include "intel_engine_heartbeat.h"
#include "intel_execlists_submission.h"
#include "intel_gt.h"
#include "intel_gt_pm.h"
#include "intel_gt_requests.h"
@ -243,4 +244,31 @@ void intel_gt_fini_requests(struct intel_gt *gt)
{
/* Wait until the work is marked as finished before unloading! */
cancel_delayed_work_sync(&gt->requests.retire_work);
flush_work(&gt->watchdog.work);
}
void intel_gt_watchdog_work(struct work_struct *work)
{
struct intel_gt *gt =
container_of(work, typeof(*gt), watchdog.work);
struct i915_request *rq, *rn;
struct llist_node *first;
first = llist_del_all(&gt->watchdog.list);
if (!first)
return;
llist_for_each_entry_safe(rq, rn, first, watchdog.link) {
if (!i915_request_completed(rq)) {
struct dma_fence *f = &rq->fence;
pr_notice("Fence expiration time out i915-%s:%s:%llx!\n",
f->ops->get_driver_name(f),
f->ops->get_timeline_name(f),
f->seqno);
i915_request_cancel(rq, -EINTR);
}
i915_request_put(rq);
}
}
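
Note: the worker drains an llist fed from timer context, which is why cancellation, a sleeping operation, happens here rather than at expiry. A conceptual sketch of the producer side (the hrtimer field name and the exact arming site are assumptions; the list and work fields match the intel_gt additions below):

/* Fires when a request overruns its context's watchdog budget. */
static enum hrtimer_restart request_watchdog_expired(struct hrtimer *t)
{
        /* watchdog.timer as a field name is an assumption here */
        struct i915_request *rq =
                container_of(t, struct i915_request, watchdog.timer);
        struct intel_gt *gt = rq->engine->gt;

        /* Reference handed over to intel_gt_watchdog_work(), which
         * drops it with i915_request_put() after cancelling. */
        i915_request_get(rq);
        if (llist_add(&rq->watchdog.link, &gt->watchdog.list))
                schedule_work(&gt->watchdog.work);

        return HRTIMER_NORESTART;
}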


@ -8,10 +8,12 @@
#include <linux/ktime.h>
#include <linux/list.h>
#include <linux/llist.h>
#include <linux/mutex.h>
#include <linux/notifier.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/workqueue.h>
#include "uc/intel_uc.h"
@ -39,10 +41,6 @@ struct intel_gt {
struct intel_gt_timelines {
spinlock_t lock; /* protects active_list */
struct list_head active_list;
/* Pack multiple timelines' seqnos into the same page */
spinlock_t hwsp_lock;
struct list_head hwsp_free_list;
} timelines;
struct intel_gt_requests {
@ -56,6 +54,11 @@ struct intel_gt {
struct delayed_work retire_work;
} requests;
struct {
struct llist_head list;
struct work_struct work;
} watchdog;
struct intel_wakeref wakeref;
atomic_t user_wakeref;


@ -13,16 +13,36 @@
struct drm_i915_gem_object *alloc_pt_dma(struct i915_address_space *vm, int sz)
{
struct drm_i915_gem_object *obj;
if (I915_SELFTEST_ONLY(should_fail(&vm->fault_attr, 1)))
i915_gem_shrink_all(vm->i915);
return i915_gem_object_create_internal(vm->i915, sz);
obj = i915_gem_object_create_internal(vm->i915, sz);
/* ensure all dma objects have the same reservation class */
if (!IS_ERR(obj))
obj->base.resv = &vm->resv;
return obj;
}
int pin_pt_dma(struct i915_address_space *vm, struct drm_i915_gem_object *obj)
{
int err;
i915_gem_object_lock(obj, NULL);
err = i915_gem_object_pin_pages(obj);
i915_gem_object_unlock(obj);
if (err)
return err;
i915_gem_object_make_unshrinkable(obj);
return 0;
}
int pin_pt_dma_locked(struct i915_address_space *vm, struct drm_i915_gem_object *obj)
{
int err;
err = i915_gem_object_pin_pages(obj);
if (err)
return err;
@ -56,6 +76,20 @@ void __i915_vm_close(struct i915_address_space *vm)
mutex_unlock(&vm->mutex);
}
/* lock the vm into the current ww, if we lock one, we lock all */
int i915_vm_lock_objects(struct i915_address_space *vm,
struct i915_gem_ww_ctx *ww)
{
if (vm->scratch[0]->base.resv == &vm->resv) {
return i915_gem_object_lock(vm->scratch[0], ww);
} else {
struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
/* We borrowed the scratch page from ggtt, take the top level object */
return i915_gem_object_lock(ppgtt->pd->pt.base, ww);
}
}
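
Note: because alloc_pt_dma() above points every page-table object at the vm's shared reservation object, locking one representative locks the whole tree; the ggtt special case borrows its scratch page, hence the fallback to the top-level page directory. Typical use inside a ww transaction (sketch; stash setup around it is elided):

/* Lock all page-table objects of a vm, then pin a prepared stash. */
static int pin_stash_locked(struct i915_address_space *vm,
                            struct i915_gem_ww_ctx *ww,
                            struct i915_vm_pt_stash *stash)
{
        int err;

        err = i915_vm_lock_objects(vm, ww); /* one lock covers all pds */
        if (err)
                return err;

        return i915_vm_pin_pt_stash(vm, stash);
}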
void i915_address_space_fini(struct i915_address_space *vm)
{
drm_mm_takedown(&vm->mm);
@ -69,6 +103,7 @@ static void __i915_vm_release(struct work_struct *work)
vm->cleanup(vm);
i915_address_space_fini(vm);
dma_resv_fini(&vm->resv);
kfree(vm);
}
@ -98,6 +133,7 @@ void i915_address_space_init(struct i915_address_space *vm, int subclass)
mutex_init(&vm->mutex);
lockdep_set_subclass(&vm->mutex, subclass);
i915_gem_shrinker_taints_mutex(vm->i915, &vm->mutex);
dma_resv_init(&vm->resv);
GEM_BUG_ON(!vm->total);
drm_mm_init(&vm->mm, 0, vm->total);
@ -427,7 +463,6 @@ __vm_create_scratch_for_read(struct i915_address_space *vm, unsigned long size)
{
struct drm_i915_gem_object *obj;
struct i915_vma *vma;
int err;
obj = i915_gem_object_create_internal(vm->i915, PAGE_ALIGN(size));
if (IS_ERR(obj))
@ -441,6 +476,19 @@ __vm_create_scratch_for_read(struct i915_address_space *vm, unsigned long size)
return vma;
}
return vma;
}
struct i915_vma *
__vm_create_scratch_for_read_pinned(struct i915_address_space *vm, unsigned long size)
{
struct i915_vma *vma;
int err;
vma = __vm_create_scratch_for_read(vm, size);
if (IS_ERR(vma))
return vma;
err = i915_vma_pin(vma, 0, 0,
i915_vma_is_ggtt(vma) ? PIN_GLOBAL : PIN_USER);
if (err) {


@ -238,6 +238,7 @@ struct i915_address_space {
atomic_t open;
struct mutex mutex; /* protects vma and our lists */
struct dma_resv resv; /* reservation lock for all pd objects, and buffer pool */
#define VM_CLASS_GGTT 0
#define VM_CLASS_PPGTT 1
@ -346,6 +347,9 @@ struct i915_ppgtt {
#define i915_is_ggtt(vm) ((vm)->is_ggtt)
int __must_check
i915_vm_lock_objects(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww);
static inline bool
i915_vm_is_4lvl(const struct i915_address_space *vm)
{
@ -522,6 +526,7 @@ struct i915_page_directory *alloc_pd(struct i915_address_space *vm);
struct i915_page_directory *__alloc_pd(int npde);
int pin_pt_dma(struct i915_address_space *vm, struct drm_i915_gem_object *obj);
int pin_pt_dma_locked(struct i915_address_space *vm, struct drm_i915_gem_object *obj);
void free_px(struct i915_address_space *vm,
struct i915_page_table *pt, int lvl);
@ -576,6 +581,9 @@ void i915_vm_free_pt_stash(struct i915_address_space *vm,
struct i915_vma *
__vm_create_scratch_for_read(struct i915_address_space *vm, unsigned long size);
struct i915_vma *
__vm_create_scratch_for_read_pinned(struct i915_address_space *vm, unsigned long size);
static inline struct sgt_dma {
struct scatterlist *sg;
dma_addr_t dma, max;


@ -1417,7 +1417,7 @@ gen10_init_indirectctx_bb(struct intel_engine_cs *engine, u32 *batch)
#define CTX_WA_BB_SIZE (PAGE_SIZE)
static int lrc_setup_wa_ctx(struct intel_engine_cs *engine)
static int lrc_create_wa_ctx(struct intel_engine_cs *engine)
{
struct drm_i915_gem_object *obj;
struct i915_vma *vma;
@ -1433,10 +1433,6 @@ static int lrc_setup_wa_ctx(struct intel_engine_cs *engine)
goto err;
}
err = i915_ggtt_pin(vma, NULL, 0, PIN_HIGH);
if (err)
goto err;
engine->wa_ctx.vma = vma;
return 0;
@ -1448,9 +1444,6 @@ static int lrc_setup_wa_ctx(struct intel_engine_cs *engine)
void lrc_fini_wa_ctx(struct intel_engine_cs *engine)
{
i915_vma_unpin_and_release(&engine->wa_ctx.vma, 0);
/* Called on error unwind, clear all flags to prevent further use */
memset(&engine->wa_ctx, 0, sizeof(engine->wa_ctx));
}
typedef u32 *(*wa_bb_func_t)(struct intel_engine_cs *engine, u32 *batch);
@ -1462,6 +1455,7 @@ void lrc_init_wa_ctx(struct intel_engine_cs *engine)
&wa_ctx->indirect_ctx, &wa_ctx->per_ctx
};
wa_bb_func_t wa_bb_fn[ARRAY_SIZE(wa_bb)];
struct i915_gem_ww_ctx ww;
void *batch, *batch_ptr;
unsigned int i;
int err;
@ -1490,7 +1484,7 @@ void lrc_init_wa_ctx(struct intel_engine_cs *engine)
return;
}
err = lrc_setup_wa_ctx(engine);
err = lrc_create_wa_ctx(engine);
if (err) {
/*
* We continue even if we fail to initialize WA batch
@ -1503,7 +1497,22 @@ void lrc_init_wa_ctx(struct intel_engine_cs *engine)
return;
}
if (!engine->wa_ctx.vma)
return;
i915_gem_ww_ctx_init(&ww, true);
retry:
err = i915_gem_object_lock(wa_ctx->vma->obj, &ww);
if (!err)
err = i915_ggtt_pin(wa_ctx->vma, &ww, 0, PIN_HIGH);
if (err)
goto err;
batch = i915_gem_object_pin_map(wa_ctx->vma->obj, I915_MAP_WB);
if (IS_ERR(batch)) {
err = PTR_ERR(batch);
goto err_unpin;
}
/*
* Emit the two workaround batch buffers, recording the offset from the
@ -1528,8 +1537,26 @@ void lrc_init_wa_ctx(struct intel_engine_cs *engine)
__i915_gem_object_release_map(wa_ctx->vma->obj);
/* Verify that we can handle failure to setup the wa_ctx */
if (err || i915_inject_probe_error(engine->i915, -ENODEV))
lrc_fini_wa_ctx(engine);
if (!err)
err = i915_inject_probe_error(engine->i915, -ENODEV);
err_unpin:
if (err)
i915_vma_unpin(wa_ctx->vma);
err:
if (err == -EDEADLK) {
err = i915_gem_ww_ctx_backoff(&ww);
if (!err)
goto retry;
}
i915_gem_ww_ctx_fini(&ww);
if (err) {
i915_vma_put(engine->wa_ctx.vma);
/* Clear all flags to prevent further use */
memset(wa_ctx, 0, sizeof(*wa_ctx));
}
}
static void st_update_runtime_underflow(struct intel_context *ce, s32 dt)


@ -262,7 +262,7 @@ int i915_vm_pin_pt_stash(struct i915_address_space *vm,
for (n = 0; n < ARRAY_SIZE(stash->pt); n++) {
for (pt = stash->pt[n]; pt; pt = pt->stash) {
err = pin_pt_dma(vm, pt->base);
err = pin_pt_dma_locked(vm, pt->base);
if (err)
return err;
}
@ -304,6 +304,7 @@ void ppgtt_init(struct i915_ppgtt *ppgtt, struct intel_gt *gt)
ppgtt->vm.dma = i915->drm.dev;
ppgtt->vm.total = BIT_ULL(INTEL_INFO(i915)->ppgtt_size);
dma_resv_init(&ppgtt->vm.resv);
i915_address_space_init(&ppgtt->vm, VM_CLASS_PPGTT);
ppgtt->vm.vma_ops.bind_vma = ppgtt_bind_vma;


@ -197,7 +197,7 @@ int intel_renderstate_init(struct intel_renderstate *so,
if (err)
goto err_context;
err = i915_vma_pin(so->vma, 0, 0, PIN_GLOBAL | PIN_HIGH);
err = i915_vma_pin_ww(so->vma, &so->ww, 0, 0, PIN_GLOBAL | PIN_HIGH);
if (err)
goto err_context;


@ -974,8 +974,6 @@ static int do_reset(struct intel_gt *gt, intel_engine_mask_t stalled_mask)
{
int err, i;
gt_revoke(gt);
err = __intel_gt_reset(gt, ALL_ENGINES);
for (i = 0; err && i < RESET_MAX_RETRIES; i++) {
msleep(10 * (i + 1));
@ -1030,6 +1028,13 @@ void intel_gt_reset(struct intel_gt *gt,
might_sleep();
GEM_BUG_ON(!test_bit(I915_RESET_BACKOFF, &gt->reset.flags));
/*
* FIXME: Revoking cpu mmap ptes cannot be done from a dma_fence
* critical section like gpu reset.
*/
gt_revoke(gt);
mutex_lock(&gt->reset.mutex);
/* Clear any previous failed attempts at recovery. Time to try again. */


@ -466,6 +466,26 @@ static void ring_context_destroy(struct kref *ref)
intel_context_free(ce);
}
static int ring_context_init_default_state(struct intel_context *ce,
struct i915_gem_ww_ctx *ww)
{
struct drm_i915_gem_object *obj = ce->state->obj;
void *vaddr;
vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
if (IS_ERR(vaddr))
return PTR_ERR(vaddr);
shmem_read(ce->engine->default_state, 0,
vaddr, ce->engine->context_size);
i915_gem_object_flush_map(obj);
__i915_gem_object_release_map(obj);
__set_bit(CONTEXT_VALID_BIT, &ce->flags);
return 0;
}
static int ring_context_pre_pin(struct intel_context *ce,
struct i915_gem_ww_ctx *ww,
void **unused)
@ -473,6 +493,13 @@ static int ring_context_pre_pin(struct intel_context *ce,
struct i915_address_space *vm;
int err = 0;
if (ce->engine->default_state &&
!test_bit(CONTEXT_VALID_BIT, &ce->flags)) {
err = ring_context_init_default_state(ce, ww);
if (err)
return err;
}
vm = vm_alias(ce->vm);
if (vm)
err = gen6_ppgtt_pin(i915_vm_to_ppgtt((vm)), ww);
@ -528,22 +555,6 @@ alloc_context_vma(struct intel_engine_cs *engine)
if (IS_IVYBRIDGE(i915))
i915_gem_object_set_cache_coherency(obj, I915_CACHE_L3_LLC);
if (engine->default_state) {
void *vaddr;
vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
if (IS_ERR(vaddr)) {
err = PTR_ERR(vaddr);
goto err_obj;
}
shmem_read(engine->default_state, 0,
vaddr, engine->context_size);
i915_gem_object_flush_map(obj);
__i915_gem_object_release_map(obj);
}
vma = i915_vma_instance(obj, &engine->gt->ggtt->vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
@ -575,8 +586,6 @@ static int ring_context_alloc(struct intel_context *ce)
return PTR_ERR(vma);
ce->state = vma;
if (engine->default_state)
__set_bit(CONTEXT_VALID_BIT, &ce->flags);
}
return 0;
@ -1176,37 +1185,15 @@ static int gen7_ctx_switch_bb_setup(struct intel_engine_cs * const engine,
return gen7_setup_clear_gpr_bb(engine, vma);
}
static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine)
static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine,
struct i915_gem_ww_ctx *ww,
struct i915_vma *vma)
{
struct drm_i915_gem_object *obj;
struct i915_vma *vma;
int size;
int err;
size = gen7_ctx_switch_bb_setup(engine, NULL /* probe size */);
if (size <= 0)
return size;
size = ALIGN(size, PAGE_SIZE);
obj = i915_gem_object_create_internal(engine->i915, size);
if (IS_ERR(obj))
return PTR_ERR(obj);
vma = i915_vma_instance(obj, engine->gt->vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto err_obj;
}
vma->private = intel_context_create(engine); /* dummy residuals */
if (IS_ERR(vma->private)) {
err = PTR_ERR(vma->private);
goto err_obj;
}
err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_HIGH);
err = i915_vma_pin_ww(vma, ww, 0, 0, PIN_USER | PIN_HIGH);
if (err)
goto err_private;
return err;
err = i915_vma_sync(vma);
if (err)
@ -1221,17 +1208,53 @@ static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine)
err_unpin:
i915_vma_unpin(vma);
err_private:
intel_context_put(vma->private);
err_obj:
i915_gem_object_put(obj);
return err;
}
static struct i915_vma *gen7_ctx_vma(struct intel_engine_cs *engine)
{
struct drm_i915_gem_object *obj;
struct i915_vma *vma;
int size, err;
if (!IS_GEN(engine->i915, 7) || engine->class != RENDER_CLASS)
return 0;
err = gen7_ctx_switch_bb_setup(engine, NULL /* probe size */);
if (err < 0)
return ERR_PTR(err);
if (!err)
return NULL;
size = ALIGN(err, PAGE_SIZE);
obj = i915_gem_object_create_internal(engine->i915, size);
if (IS_ERR(obj))
return ERR_CAST(obj);
vma = i915_vma_instance(obj, engine->gt->vm, NULL);
if (IS_ERR(vma)) {
i915_gem_object_put(obj);
return ERR_CAST(vma);
}
vma->private = intel_context_create(engine); /* dummy residuals */
if (IS_ERR(vma->private)) {
err = PTR_ERR(vma->private);
vma->private = NULL;
i915_gem_object_put(obj);
return ERR_PTR(err);
}
return vma;
}
int intel_ring_submission_setup(struct intel_engine_cs *engine)
{
struct i915_gem_ww_ctx ww;
struct intel_timeline *timeline;
struct intel_ring *ring;
struct i915_vma *gen7_wa_vma;
int err;
setup_common(engine);
@ -1262,43 +1285,72 @@ int intel_ring_submission_setup(struct intel_engine_cs *engine)
}
GEM_BUG_ON(timeline->has_initial_breadcrumb);
err = intel_timeline_pin(timeline, NULL);
if (err)
goto err_timeline;
ring = intel_engine_create_ring(engine, SZ_16K);
if (IS_ERR(ring)) {
err = PTR_ERR(ring);
goto err_timeline_unpin;
goto err_timeline;
}
err = intel_ring_pin(ring, NULL);
if (err)
goto err_ring;
GEM_BUG_ON(engine->legacy.ring);
engine->legacy.ring = ring;
engine->legacy.timeline = timeline;
gen7_wa_vma = gen7_ctx_vma(engine);
if (IS_ERR(gen7_wa_vma)) {
err = PTR_ERR(gen7_wa_vma);
goto err_ring;
}
i915_gem_ww_ctx_init(&ww, false);
retry:
err = i915_gem_object_lock(timeline->hwsp_ggtt->obj, &ww);
if (!err && gen7_wa_vma)
err = i915_gem_object_lock(gen7_wa_vma->obj, &ww);
if (!err && engine->legacy.ring->vma->obj)
err = i915_gem_object_lock(engine->legacy.ring->vma->obj, &ww);
if (!err)
err = intel_timeline_pin(timeline, &ww);
if (!err) {
err = intel_ring_pin(ring, &ww);
if (err)
intel_timeline_unpin(timeline);
}
if (err)
goto out;
GEM_BUG_ON(timeline->hwsp_ggtt != engine->status_page.vma);
if (IS_GEN(engine->i915, 7) && engine->class == RENDER_CLASS) {
err = gen7_ctx_switch_bb_init(engine);
if (err)
goto err_ring_unpin;
if (gen7_wa_vma) {
err = gen7_ctx_switch_bb_init(engine, &ww, gen7_wa_vma);
if (err) {
intel_ring_unpin(ring);
intel_timeline_unpin(timeline);
}
}
out:
if (err == -EDEADLK) {
err = i915_gem_ww_ctx_backoff(&ww);
if (!err)
goto retry;
}
i915_gem_ww_ctx_fini(&ww);
if (err)
goto err_gen7_put;
/* Finally, take ownership and responsibility for cleanup! */
engine->release = ring_release;
return 0;
err_ring_unpin:
intel_ring_unpin(ring);
err_gen7_put:
if (gen7_wa_vma) {
intel_context_put(gen7_wa_vma->private);
i915_gem_object_put(gen7_wa_vma->obj);
}
err_ring:
intel_ring_put(ring);
err_timeline_unpin:
intel_timeline_unpin(timeline);
err_timeline:
intel_timeline_put(timeline);
err:

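The intel_ring_submission_setup() hunk above is one instance of the ww acquire/backoff/retry pattern used throughout this conversion: create a ww context, lock every object that is about to be pinned, pin with the ww context, and on -EDEADLK back off and retry. A minimal sketch of that loop, reusing only helpers visible in the hunk above (error unwinding trimmed, kernel-internal code, not buildable on its own):

static int pin_ring_and_timeline(struct intel_timeline *tl,
                                 struct intel_ring *ring)
{
        struct i915_gem_ww_ctx ww;
        int err;

        i915_gem_ww_ctx_init(&ww, false); /* false: not interruptible */
retry:
        /* Lock every object we intend to pin under this ww context. */
        err = i915_gem_object_lock(tl->hwsp_ggtt->obj, &ww);
        if (!err && ring->vma->obj)
                err = i915_gem_object_lock(ring->vma->obj, &ww);

        /* Pinning now takes the ww context instead of ad-hoc locking. */
        if (!err)
                err = intel_timeline_pin(tl, &ww);
        if (!err) {
                err = intel_ring_pin(ring, &ww);
                if (err)
                        intel_timeline_unpin(tl);
        }

        if (err == -EDEADLK) {
                /* Lost the lock-ordering race: drop all locks and retry. */
                err = i915_gem_ww_ctx_backoff(&ww);
                if (!err)
                        goto retry;
        }
        i915_gem_ww_ctx_fini(&ww);

        return err;
}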

@ -12,21 +12,9 @@
#include "intel_ring.h"
#include "intel_timeline.h"
#define ptr_set_bit(ptr, bit) ((typeof(ptr))((unsigned long)(ptr) | BIT(bit)))
#define ptr_test_bit(ptr, bit) ((unsigned long)(ptr) & BIT(bit))
#define TIMELINE_SEQNO_BYTES 8
#define CACHELINE_BITS 6
#define CACHELINE_FREE CACHELINE_BITS
struct intel_timeline_hwsp {
struct intel_gt *gt;
struct intel_gt_timelines *gt_timelines;
struct list_head free_link;
struct i915_vma *vma;
u64 free_bitmap;
};
static struct i915_vma *__hwsp_alloc(struct intel_gt *gt)
static struct i915_vma *hwsp_alloc(struct intel_gt *gt)
{
struct drm_i915_private *i915 = gt->i915;
struct drm_i915_gem_object *obj;
@ -45,174 +33,42 @@ static struct i915_vma *__hwsp_alloc(struct intel_gt *gt)
return vma;
}
static struct i915_vma *
hwsp_alloc(struct intel_timeline *timeline, unsigned int *cacheline)
{
struct intel_gt_timelines *gt = &timeline->gt->timelines;
struct intel_timeline_hwsp *hwsp;
BUILD_BUG_ON(BITS_PER_TYPE(u64) * CACHELINE_BYTES > PAGE_SIZE);
spin_lock_irq(&gt->hwsp_lock);
/* hwsp_free_list only contains HWSP that have available cachelines */
hwsp = list_first_entry_or_null(&gt->hwsp_free_list,
typeof(*hwsp), free_link);
if (!hwsp) {
struct i915_vma *vma;
spin_unlock_irq(&gt->hwsp_lock);
hwsp = kmalloc(sizeof(*hwsp), GFP_KERNEL);
if (!hwsp)
return ERR_PTR(-ENOMEM);
vma = __hwsp_alloc(timeline->gt);
if (IS_ERR(vma)) {
kfree(hwsp);
return vma;
}
GT_TRACE(timeline->gt, "new HWSP allocated\n");
vma->private = hwsp;
hwsp->gt = timeline->gt;
hwsp->vma = vma;
hwsp->free_bitmap = ~0ull;
hwsp->gt_timelines = gt;
spin_lock_irq(&gt->hwsp_lock);
list_add(&hwsp->free_link, &gt->hwsp_free_list);
}
GEM_BUG_ON(!hwsp->free_bitmap);
*cacheline = __ffs64(hwsp->free_bitmap);
hwsp->free_bitmap &= ~BIT_ULL(*cacheline);
if (!hwsp->free_bitmap)
list_del(&hwsp->free_link);
spin_unlock_irq(&gt->hwsp_lock);
GEM_BUG_ON(hwsp->vma->private != hwsp);
return hwsp->vma;
}
static void __idle_hwsp_free(struct intel_timeline_hwsp *hwsp, int cacheline)
{
struct intel_gt_timelines *gt = hwsp->gt_timelines;
unsigned long flags;
spin_lock_irqsave(&gt->hwsp_lock, flags);
/* As a cacheline becomes available, publish the HWSP on the freelist */
if (!hwsp->free_bitmap)
list_add_tail(&hwsp->free_link, &gt->hwsp_free_list);
GEM_BUG_ON(cacheline >= BITS_PER_TYPE(hwsp->free_bitmap));
hwsp->free_bitmap |= BIT_ULL(cacheline);
/* And if no one is left using it, give the page back to the system */
if (hwsp->free_bitmap == ~0ull) {
i915_vma_put(hwsp->vma);
list_del(&hwsp->free_link);
kfree(hwsp);
}
spin_unlock_irqrestore(&gt->hwsp_lock, flags);
}
static void __rcu_cacheline_free(struct rcu_head *rcu)
{
struct intel_timeline_cacheline *cl =
container_of(rcu, typeof(*cl), rcu);
/* Must wait until after all *rq->hwsp are complete before removing */
i915_gem_object_unpin_map(cl->hwsp->vma->obj);
__idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS));
i915_active_fini(&cl->active);
kfree(cl);
}
static void __idle_cacheline_free(struct intel_timeline_cacheline *cl)
{
GEM_BUG_ON(!i915_active_is_idle(&cl->active));
call_rcu(&cl->rcu, __rcu_cacheline_free);
}
__i915_active_call
static void __cacheline_retire(struct i915_active *active)
static void __timeline_retire(struct i915_active *active)
{
struct intel_timeline_cacheline *cl =
container_of(active, typeof(*cl), active);
struct intel_timeline *tl =
container_of(active, typeof(*tl), active);
i915_vma_unpin(cl->hwsp->vma);
if (ptr_test_bit(cl->vaddr, CACHELINE_FREE))
__idle_cacheline_free(cl);
i915_vma_unpin(tl->hwsp_ggtt);
intel_timeline_put(tl);
}
static int __cacheline_active(struct i915_active *active)
static int __timeline_active(struct i915_active *active)
{
struct intel_timeline_cacheline *cl =
container_of(active, typeof(*cl), active);
struct intel_timeline *tl =
container_of(active, typeof(*tl), active);
__i915_vma_pin(cl->hwsp->vma);
__i915_vma_pin(tl->hwsp_ggtt);
intel_timeline_get(tl);
return 0;
}
static struct intel_timeline_cacheline *
cacheline_alloc(struct intel_timeline_hwsp *hwsp, unsigned int cacheline)
I915_SELFTEST_EXPORT int
intel_timeline_pin_map(struct intel_timeline *timeline)
{
struct intel_timeline_cacheline *cl;
struct drm_i915_gem_object *obj = timeline->hwsp_ggtt->obj;
u32 ofs = offset_in_page(timeline->hwsp_offset);
void *vaddr;
GEM_BUG_ON(cacheline >= BIT(CACHELINE_BITS));
vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
if (IS_ERR(vaddr))
return PTR_ERR(vaddr);
cl = kmalloc(sizeof(*cl), GFP_KERNEL);
if (!cl)
return ERR_PTR(-ENOMEM);
timeline->hwsp_map = vaddr;
timeline->hwsp_seqno = memset(vaddr + ofs, 0, TIMELINE_SEQNO_BYTES);
clflush(vaddr + ofs);
vaddr = i915_gem_object_pin_map(hwsp->vma->obj, I915_MAP_WB);
if (IS_ERR(vaddr)) {
kfree(cl);
return ERR_CAST(vaddr);
}
cl->hwsp = hwsp;
cl->vaddr = page_pack_bits(vaddr, cacheline);
i915_active_init(&cl->active, __cacheline_active, __cacheline_retire);
return cl;
}
static void cacheline_acquire(struct intel_timeline_cacheline *cl,
u32 ggtt_offset)
{
if (!cl)
return;
cl->ggtt_offset = ggtt_offset;
i915_active_acquire(&cl->active);
}
static void cacheline_release(struct intel_timeline_cacheline *cl)
{
if (cl)
i915_active_release(&cl->active);
}
static void cacheline_free(struct intel_timeline_cacheline *cl)
{
if (!i915_active_acquire_if_busy(&cl->active)) {
__idle_cacheline_free(cl);
return;
}
GEM_BUG_ON(ptr_test_bit(cl->vaddr, CACHELINE_FREE));
cl->vaddr = ptr_set_bit(cl->vaddr, CACHELINE_FREE);
i915_active_release(&cl->active);
return 0;
}
static int intel_timeline_init(struct intel_timeline *timeline,
@ -220,45 +76,25 @@ static int intel_timeline_init(struct intel_timeline *timeline,
struct i915_vma *hwsp,
unsigned int offset)
{
void *vaddr;
kref_init(&timeline->kref);
atomic_set(&timeline->pin_count, 0);
timeline->gt = gt;
timeline->has_initial_breadcrumb = !hwsp;
timeline->hwsp_cacheline = NULL;
if (!hwsp) {
struct intel_timeline_cacheline *cl;
unsigned int cacheline;
hwsp = hwsp_alloc(timeline, &cacheline);
if (hwsp) {
timeline->hwsp_offset = offset;
timeline->hwsp_ggtt = i915_vma_get(hwsp);
} else {
timeline->has_initial_breadcrumb = true;
hwsp = hwsp_alloc(gt);
if (IS_ERR(hwsp))
return PTR_ERR(hwsp);
cl = cacheline_alloc(hwsp->private, cacheline);
if (IS_ERR(cl)) {
__idle_hwsp_free(hwsp->private, cacheline);
return PTR_ERR(cl);
}
timeline->hwsp_cacheline = cl;
timeline->hwsp_offset = cacheline * CACHELINE_BYTES;
vaddr = page_mask_bits(cl->vaddr);
} else {
timeline->hwsp_offset = offset;
vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
if (IS_ERR(vaddr))
return PTR_ERR(vaddr);
timeline->hwsp_ggtt = hwsp;
}
timeline->hwsp_seqno =
memset(vaddr + timeline->hwsp_offset, 0, CACHELINE_BYTES);
timeline->hwsp_map = NULL;
timeline->hwsp_seqno = (void *)(long)timeline->hwsp_offset;
timeline->hwsp_ggtt = i915_vma_get(hwsp);
GEM_BUG_ON(timeline->hwsp_offset >= hwsp->size);
timeline->fence_context = dma_fence_context_alloc(1);
@ -269,6 +105,7 @@ static int intel_timeline_init(struct intel_timeline *timeline,
INIT_LIST_HEAD(&timeline->requests);
i915_syncmap_init(&timeline->sync);
i915_active_init(&timeline->active, __timeline_active, __timeline_retire);
return 0;
}
@ -279,23 +116,19 @@ void intel_gt_init_timelines(struct intel_gt *gt)
spin_lock_init(&timelines->lock);
INIT_LIST_HEAD(&timelines->active_list);
spin_lock_init(&timelines->hwsp_lock);
INIT_LIST_HEAD(&timelines->hwsp_free_list);
}
static void intel_timeline_fini(struct intel_timeline *timeline)
static void intel_timeline_fini(struct rcu_head *rcu)
{
GEM_BUG_ON(atomic_read(&timeline->pin_count));
GEM_BUG_ON(!list_empty(&timeline->requests));
GEM_BUG_ON(timeline->retire);
struct intel_timeline *timeline =
container_of(rcu, struct intel_timeline, rcu);
if (timeline->hwsp_cacheline)
cacheline_free(timeline->hwsp_cacheline);
else
if (timeline->hwsp_map)
i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj);
i915_vma_put(timeline->hwsp_ggtt);
i915_active_fini(&timeline->active);
kfree(timeline);
}
struct intel_timeline *
@ -351,6 +184,12 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
if (atomic_add_unless(&tl->pin_count, 1, 0))
return 0;
if (!tl->hwsp_map) {
err = intel_timeline_pin_map(tl);
if (err)
return err;
}
err = i915_ggtt_pin(tl->hwsp_ggtt, ww, 0, PIN_HIGH);
if (err)
return err;
@ -361,9 +200,9 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n",
tl->fence_context, tl->hwsp_offset);
cacheline_acquire(tl->hwsp_cacheline, tl->hwsp_offset);
i915_active_acquire(&tl->active);
if (atomic_fetch_inc(&tl->pin_count)) {
cacheline_release(tl->hwsp_cacheline);
i915_active_release(&tl->active);
__i915_vma_unpin(tl->hwsp_ggtt);
}
@ -372,9 +211,13 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
void intel_timeline_reset_seqno(const struct intel_timeline *tl)
{
u32 *hwsp_seqno = (u32 *)tl->hwsp_seqno;
/* Must be pinned to be writable, and no requests in flight. */
GEM_BUG_ON(!atomic_read(&tl->pin_count));
WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
memset(hwsp_seqno + 1, 0, TIMELINE_SEQNO_BYTES - sizeof(*hwsp_seqno));
WRITE_ONCE(*hwsp_seqno, tl->seqno);
clflush(hwsp_seqno);
}
void intel_timeline_enter(struct intel_timeline *tl)
@ -450,106 +293,23 @@ static u32 timeline_advance(struct intel_timeline *tl)
return tl->seqno += 1 + tl->has_initial_breadcrumb;
}
static void timeline_rollback(struct intel_timeline *tl)
{
tl->seqno -= 1 + tl->has_initial_breadcrumb;
}
static noinline int
__intel_timeline_get_seqno(struct intel_timeline *tl,
struct i915_request *rq,
u32 *seqno)
{
struct intel_timeline_cacheline *cl;
unsigned int cacheline;
struct i915_vma *vma;
void *vaddr;
int err;
u32 next_ofs = offset_in_page(tl->hwsp_offset + TIMELINE_SEQNO_BYTES);
might_lock(&tl->gt->ggtt->vm.mutex);
GT_TRACE(tl->gt, "timeline:%llx wrapped\n", tl->fence_context);
/* w/a: bit 5 needs to be zero for MI_FLUSH_DW address. */
if (TIMELINE_SEQNO_BYTES <= BIT(5) && (next_ofs & BIT(5)))
next_ofs = offset_in_page(next_ofs + BIT(5));
/*
* If there is an outstanding GPU reference to this cacheline,
* such as it being sampled by a HW semaphore on another timeline,
* we cannot wraparound our seqno value (the HW semaphore does
* a strict greater-than-or-equals compare, not i915_seqno_passed).
* So if the cacheline is still busy, we must detach ourselves
* from it and leave it inflight alongside its users.
*
* However, if nobody is watching and we can guarantee that nobody
* will, we could simply reuse the same cacheline.
*
* if (i915_active_request_is_signaled(&tl->last_request) &&
* i915_active_is_signaled(&tl->hwsp_cacheline->active))
* return 0;
*
* That seems unlikely for a busy timeline that needed to wrap in
* the first place, so just replace the cacheline.
*/
vma = hwsp_alloc(tl, &cacheline);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto err_rollback;
}
err = i915_ggtt_pin(vma, NULL, 0, PIN_HIGH);
if (err) {
__idle_hwsp_free(vma->private, cacheline);
goto err_rollback;
}
cl = cacheline_alloc(vma->private, cacheline);
if (IS_ERR(cl)) {
err = PTR_ERR(cl);
__idle_hwsp_free(vma->private, cacheline);
goto err_unpin;
}
GEM_BUG_ON(cl->hwsp->vma != vma);
/*
* Attach the old cacheline to the current request, so that we only
* free it after the current request is retired, which ensures that
* all writes into the cacheline from previous requests are complete.
*/
err = i915_active_ref(&tl->hwsp_cacheline->active,
tl->fence_context,
&rq->fence);
if (err)
goto err_cacheline;
cacheline_release(tl->hwsp_cacheline); /* ownership now xfered to rq */
cacheline_free(tl->hwsp_cacheline);
i915_vma_unpin(tl->hwsp_ggtt); /* binding kept alive by old cacheline */
i915_vma_put(tl->hwsp_ggtt);
tl->hwsp_ggtt = i915_vma_get(vma);
vaddr = page_mask_bits(cl->vaddr);
tl->hwsp_offset = cacheline * CACHELINE_BYTES;
tl->hwsp_seqno =
memset(vaddr + tl->hwsp_offset, 0, CACHELINE_BYTES);
tl->hwsp_offset += i915_ggtt_offset(vma);
GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n",
tl->fence_context, tl->hwsp_offset);
cacheline_acquire(cl, tl->hwsp_offset);
tl->hwsp_cacheline = cl;
tl->hwsp_offset = i915_ggtt_offset(tl->hwsp_ggtt) + next_ofs;
tl->hwsp_seqno = tl->hwsp_map + next_ofs;
intel_timeline_reset_seqno(tl);
*seqno = timeline_advance(tl);
GEM_BUG_ON(i915_seqno_passed(*tl->hwsp_seqno, *seqno));
return 0;
err_cacheline:
cacheline_free(cl);
err_unpin:
i915_vma_unpin(vma);
err_rollback:
timeline_rollback(tl);
return err;
}
int intel_timeline_get_seqno(struct intel_timeline *tl,
@ -559,51 +319,52 @@ int intel_timeline_get_seqno(struct intel_timeline *tl,
*seqno = timeline_advance(tl);
/* Replace the HWSP on wraparound for HW semaphores */
if (unlikely(!*seqno && tl->hwsp_cacheline))
return __intel_timeline_get_seqno(tl, rq, seqno);
if (unlikely(!*seqno && tl->has_initial_breadcrumb))
return __intel_timeline_get_seqno(tl, seqno);
return 0;
}
static int cacheline_ref(struct intel_timeline_cacheline *cl,
struct i915_request *rq)
{
return i915_active_add_request(&cl->active, rq);
}
int intel_timeline_read_hwsp(struct i915_request *from,
struct i915_request *to,
u32 *hwsp)
{
struct intel_timeline_cacheline *cl;
struct intel_timeline *tl;
int err;
GEM_BUG_ON(!rcu_access_pointer(from->hwsp_cacheline));
rcu_read_lock();
cl = rcu_dereference(from->hwsp_cacheline);
if (i915_request_signaled(from)) /* confirm cacheline is valid */
goto unlock;
if (unlikely(!i915_active_acquire_if_busy(&cl->active)))
goto unlock; /* seqno wrapped and completed! */
if (unlikely(__i915_request_is_complete(from)))
goto release;
tl = rcu_dereference(from->timeline);
if (i915_request_signaled(from) ||
!i915_active_acquire_if_busy(&tl->active))
tl = NULL;
if (tl) {
/* hwsp_offset may wraparound, so use from->hwsp_seqno */
*hwsp = i915_ggtt_offset(tl->hwsp_ggtt) +
offset_in_page(from->hwsp_seqno);
}
/* ensure we wait on the right request, if not, we completed */
if (tl && __i915_request_is_complete(from)) {
i915_active_release(&tl->active);
tl = NULL;
}
rcu_read_unlock();
err = cacheline_ref(cl, to);
if (err)
if (!tl)
return 1;
/* Can't do semaphore waits on kernel context */
if (!tl->has_initial_breadcrumb) {
err = -EINVAL;
goto out;
}
err = i915_active_add_request(&tl->active, to);
*hwsp = cl->ggtt_offset;
out:
i915_active_release(&cl->active);
i915_active_release(&tl->active);
return err;
release:
i915_active_release(&cl->active);
unlock:
rcu_read_unlock();
return 1;
}
void intel_timeline_unpin(struct intel_timeline *tl)
@ -612,8 +373,7 @@ void intel_timeline_unpin(struct intel_timeline *tl)
if (!atomic_dec_and_test(&tl->pin_count))
return;
cacheline_release(tl->hwsp_cacheline);
i915_active_release(&tl->active);
__i915_vma_unpin(tl->hwsp_ggtt);
}
@ -622,8 +382,11 @@ void __intel_timeline_free(struct kref *kref)
struct intel_timeline *timeline =
container_of(kref, typeof(*timeline), kref);
intel_timeline_fini(timeline);
kfree_rcu(timeline, rcu);
GEM_BUG_ON(atomic_read(&timeline->pin_count));
GEM_BUG_ON(!list_empty(&timeline->requests));
GEM_BUG_ON(timeline->retire);
call_rcu(&timeline->rcu, intel_timeline_fini);
}
void intel_gt_fini_timelines(struct intel_gt *gt)
@ -631,7 +394,6 @@ void intel_gt_fini_timelines(struct intel_gt *gt)
struct intel_gt_timelines *timelines = &gt->timelines;
GEM_BUG_ON(!list_empty(&timelines->active_list));
GEM_BUG_ON(!list_empty(&timelines->hwsp_free_list));
}
void intel_gt_show_timelines(struct intel_gt *gt,

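The cacheline-stealing machinery removed above existed because of how seqno comparisons behave across a u32 wrap: a HW semaphore does a plain unsigned greater-or-equal on the value in the HWSP, while the driver's software checks use a wrap-safe comparison (assumed here to be roughly (s32)(a - b) >= 0, as i915_seqno_passed() is defined elsewhere in the driver). A small standalone illustration of why the plain compare cannot be trusted once the seqno wraps, which is why the timeline now simply moves to a fresh slot instead:

#include <stdint.h>
#include <stdio.h>

/* Wrap-safe "a has passed b"; stand-in for i915_seqno_passed(). */
static int seqno_passed(uint32_t a, uint32_t b)
{
        return (int32_t)(a - b) >= 0;
}

int main(void)
{
        uint32_t hwsp = 0xfffffffeu; /* last value written before the wrap */
        uint32_t wait = 2;           /* first seqno handed out after the wrap */

        /* Plain >= claims the wait is already satisfied: wrong. */
        printf("plain >=  : %d\n", hwsp >= wait);

        /* Wrap-safe compare still reports "not passed yet": right. */
        printf("wrap-safe : %d\n", seqno_passed(hwsp, wait));

        return 0;
}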

@ -117,4 +117,6 @@ intel_timeline_is_last(const struct intel_timeline *tl,
return list_is_last_rcu(&rq->link, &tl->requests);
}
I915_SELFTEST_DECLARE(int intel_timeline_pin_map(struct intel_timeline *tl));
#endif


@ -18,7 +18,6 @@
struct i915_vma;
struct i915_syncmap;
struct intel_gt;
struct intel_timeline_hwsp;
struct intel_timeline {
u64 fence_context;
@ -45,12 +44,11 @@ struct intel_timeline {
atomic_t pin_count;
atomic_t active_count;
void *hwsp_map;
const u32 *hwsp_seqno;
struct i915_vma *hwsp_ggtt;
u32 hwsp_offset;
struct intel_timeline_cacheline *hwsp_cacheline;
bool has_initial_breadcrumb;
/**
@ -67,6 +65,8 @@ struct intel_timeline {
*/
struct i915_active_fence last_request;
struct i915_active active;
/** A chain of completed timelines ready for early retirement. */
struct intel_timeline *retire;
@ -90,15 +90,4 @@ struct intel_timeline {
struct rcu_head rcu;
};
struct intel_timeline_cacheline {
struct i915_active active;
struct intel_timeline_hwsp *hwsp;
void *vaddr;
u32 ggtt_offset;
struct rcu_head rcu;
};
#endif /* __I915_TIMELINE_TYPES_H__ */


@ -2213,10 +2213,15 @@ static int engine_wa_list_verify(struct intel_context *ce,
if (err)
goto err_pm;
err = i915_vma_pin_ww(vma, &ww, 0, 0,
i915_vma_is_ggtt(vma) ? PIN_GLOBAL : PIN_USER);
if (err)
goto err_unpin;
rq = i915_request_create(ce);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto err_unpin;
goto err_vma;
}
err = i915_request_await_object(rq, vma->obj, true);
@ -2257,6 +2262,8 @@ static int engine_wa_list_verify(struct intel_context *ce,
err_rq:
i915_request_put(rq);
err_vma:
i915_vma_unpin(vma);
err_unpin:
intel_context_unpin(ce);
err_pm:
@ -2267,7 +2274,6 @@ static int engine_wa_list_verify(struct intel_context *ce,
}
i915_gem_ww_ctx_fini(&ww);
intel_engine_pm_put(ce->engine);
i915_vma_unpin(vma);
i915_vma_put(vma);
return err;
}


@ -32,9 +32,20 @@
#include "mock_engine.h"
#include "selftests/mock_request.h"
static void mock_timeline_pin(struct intel_timeline *tl)
static int mock_timeline_pin(struct intel_timeline *tl)
{
int err;
if (WARN_ON(!i915_gem_object_trylock(tl->hwsp_ggtt->obj)))
return -EBUSY;
err = intel_timeline_pin_map(tl);
i915_gem_object_unlock(tl->hwsp_ggtt->obj);
if (err)
return err;
atomic_inc(&tl->pin_count);
return 0;
}
static void mock_timeline_unpin(struct intel_timeline *tl)
@ -152,6 +163,8 @@ static void mock_context_destroy(struct kref *ref)
static int mock_context_alloc(struct intel_context *ce)
{
int err;
ce->ring = mock_ring(ce->engine);
if (!ce->ring)
return -ENOMEM;
@ -162,7 +175,12 @@ static int mock_context_alloc(struct intel_context *ce)
return PTR_ERR(ce->timeline);
}
mock_timeline_pin(ce->timeline);
err = mock_timeline_pin(ce->timeline);
if (err) {
intel_timeline_put(ce->timeline);
ce->timeline = NULL;
return err;
}
return 0;
}


@ -88,8 +88,8 @@ static int __live_context_size(struct intel_engine_cs *engine)
if (err)
goto err;
vaddr = i915_gem_object_pin_map(ce->state->obj,
i915_coherent_map_type(engine->i915));
vaddr = i915_gem_object_pin_map_unlocked(ce->state->obj,
i915_coherent_map_type(engine->i915));
if (IS_ERR(vaddr)) {
err = PTR_ERR(vaddr);
intel_context_unpin(ce);


@ -42,6 +42,9 @@ static int perf_end(struct intel_gt *gt)
static int write_timestamp(struct i915_request *rq, int slot)
{
struct intel_timeline *tl =
rcu_dereference_protected(rq->timeline,
!i915_request_signaled(rq));
u32 cmd;
u32 *cs;
@ -54,7 +57,7 @@ static int write_timestamp(struct i915_request *rq, int slot)
cmd++;
*cs++ = cmd;
*cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(rq->engine->mmio_base));
*cs++ = i915_request_timeline(rq)->hwsp_offset + slot * sizeof(u32);
*cs++ = tl->hwsp_offset + slot * sizeof(u32);
*cs++ = 0;
intel_ring_advance(rq, cs);
@ -73,7 +76,7 @@ static struct i915_vma *create_empty_batch(struct intel_context *ce)
if (IS_ERR(obj))
return ERR_CAST(obj);
cs = i915_gem_object_pin_map(obj, I915_MAP_WB);
cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
if (IS_ERR(cs)) {
err = PTR_ERR(cs);
goto err_put;
@ -209,7 +212,7 @@ static struct i915_vma *create_nop_batch(struct intel_context *ce)
if (IS_ERR(obj))
return ERR_CAST(obj);
cs = i915_gem_object_pin_map(obj, I915_MAP_WB);
cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
if (IS_ERR(cs)) {
err = PTR_ERR(cs);
goto err_put;


@ -989,7 +989,7 @@ static int live_timeslice_preempt(void *arg)
goto err_obj;
}
vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
if (IS_ERR(vaddr)) {
err = PTR_ERR(vaddr);
goto err_obj;
@ -1297,7 +1297,7 @@ static int live_timeslice_queue(void *arg)
goto err_obj;
}
vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
if (IS_ERR(vaddr)) {
err = PTR_ERR(vaddr);
goto err_obj;
@ -1544,7 +1544,7 @@ static int live_busywait_preempt(void *arg)
goto err_ctx_lo;
}
map = i915_gem_object_pin_map(obj, I915_MAP_WC);
map = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
if (IS_ERR(map)) {
err = PTR_ERR(map);
goto err_obj;
@ -2714,7 +2714,7 @@ static int create_gang(struct intel_engine_cs *engine,
if (err)
goto err_obj;
cs = i915_gem_object_pin_map(obj, I915_MAP_WC);
cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
if (IS_ERR(cs)) {
err = PTR_ERR(cs);
goto err_obj;
@ -2997,7 +2997,7 @@ static int live_preempt_gang(void *arg)
* it will terminate the next lowest spinner until there
* are no more spinners and the gang is complete.
*/
cs = i915_gem_object_pin_map(rq->batch->obj, I915_MAP_WC);
cs = i915_gem_object_pin_map_unlocked(rq->batch->obj, I915_MAP_WC);
if (!IS_ERR(cs)) {
*cs = 0;
i915_gem_object_unpin_map(rq->batch->obj);
@ -3062,7 +3062,7 @@ create_gpr_user(struct intel_engine_cs *engine,
return ERR_PTR(err);
}
cs = i915_gem_object_pin_map(obj, I915_MAP_WC);
cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
if (IS_ERR(cs)) {
i915_vma_put(vma);
return ERR_CAST(cs);
@ -3269,7 +3269,7 @@ static int live_preempt_user(void *arg)
if (IS_ERR(global))
return PTR_ERR(global);
result = i915_gem_object_pin_map(global->obj, I915_MAP_WC);
result = i915_gem_object_pin_map_unlocked(global->obj, I915_MAP_WC);
if (IS_ERR(result)) {
i915_vma_unpin_and_release(&global, 0);
return PTR_ERR(result);
@ -3658,7 +3658,7 @@ static int live_preempt_smoke(void *arg)
goto err_free;
}
cs = i915_gem_object_pin_map(smoke.batch, I915_MAP_WB);
cs = i915_gem_object_pin_map_unlocked(smoke.batch, I915_MAP_WB);
if (IS_ERR(cs)) {
err = PTR_ERR(cs);
goto err_batch;
@ -4197,8 +4197,9 @@ static int preserved_virtual_engine(struct intel_gt *gt,
int err = 0;
u32 *cs;
scratch = __vm_create_scratch_for_read(&siblings[0]->gt->ggtt->vm,
PAGE_SIZE);
scratch =
__vm_create_scratch_for_read_pinned(&siblings[0]->gt->ggtt->vm,
PAGE_SIZE);
if (IS_ERR(scratch))
return PTR_ERR(scratch);
@ -4262,7 +4263,7 @@ static int preserved_virtual_engine(struct intel_gt *gt,
goto out_end;
}
cs = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
cs = i915_gem_object_pin_map_unlocked(scratch->obj, I915_MAP_WB);
if (IS_ERR(cs)) {
err = PTR_ERR(cs);
goto out_end;


@ -80,15 +80,15 @@ static int hang_init(struct hang *h, struct intel_gt *gt)
}
i915_gem_object_set_cache_coherency(h->hws, I915_CACHE_LLC);
vaddr = i915_gem_object_pin_map(h->hws, I915_MAP_WB);
vaddr = i915_gem_object_pin_map_unlocked(h->hws, I915_MAP_WB);
if (IS_ERR(vaddr)) {
err = PTR_ERR(vaddr);
goto err_obj;
}
h->seqno = memset(vaddr, 0xff, PAGE_SIZE);
vaddr = i915_gem_object_pin_map(h->obj,
i915_coherent_map_type(gt->i915));
vaddr = i915_gem_object_pin_map_unlocked(h->obj,
i915_coherent_map_type(gt->i915));
if (IS_ERR(vaddr)) {
err = PTR_ERR(vaddr);
goto err_unpin_hws;
@ -149,7 +149,7 @@ hang_create_request(struct hang *h, struct intel_engine_cs *engine)
return ERR_CAST(obj);
}
vaddr = i915_gem_object_pin_map(obj, i915_coherent_map_type(gt->i915));
vaddr = i915_gem_object_pin_map_unlocked(obj, i915_coherent_map_type(gt->i915));
if (IS_ERR(vaddr)) {
i915_gem_object_put(obj);
i915_vm_put(vm);


@ -27,7 +27,7 @@
static struct i915_vma *create_scratch(struct intel_gt *gt)
{
return __vm_create_scratch_for_read(&gt->ggtt->vm, PAGE_SIZE);
return __vm_create_scratch_for_read_pinned(&gt->ggtt->vm, PAGE_SIZE);
}
static bool is_active(struct i915_request *rq)
@ -627,7 +627,7 @@ static int __live_lrc_gpr(struct intel_engine_cs *engine,
goto err_rq;
}
cs = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
cs = i915_gem_object_pin_map_unlocked(scratch->obj, I915_MAP_WB);
if (IS_ERR(cs)) {
err = PTR_ERR(cs);
goto err_rq;
@ -921,7 +921,7 @@ store_context(struct intel_context *ce, struct i915_vma *scratch)
if (IS_ERR(batch))
return batch;
cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
cs = i915_gem_object_pin_map_unlocked(batch->obj, I915_MAP_WC);
if (IS_ERR(cs)) {
i915_vma_put(batch);
return ERR_CAST(cs);
@ -1085,7 +1085,7 @@ static struct i915_vma *load_context(struct intel_context *ce, u32 poison)
if (IS_ERR(batch))
return batch;
cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
cs = i915_gem_object_pin_map_unlocked(batch->obj, I915_MAP_WC);
if (IS_ERR(cs)) {
i915_vma_put(batch);
return ERR_CAST(cs);
@ -1199,29 +1199,29 @@ static int compare_isolation(struct intel_engine_cs *engine,
u32 *defaults;
int err = 0;
A[0] = i915_gem_object_pin_map(ref[0]->obj, I915_MAP_WC);
A[0] = i915_gem_object_pin_map_unlocked(ref[0]->obj, I915_MAP_WC);
if (IS_ERR(A[0]))
return PTR_ERR(A[0]);
A[1] = i915_gem_object_pin_map(ref[1]->obj, I915_MAP_WC);
A[1] = i915_gem_object_pin_map_unlocked(ref[1]->obj, I915_MAP_WC);
if (IS_ERR(A[1])) {
err = PTR_ERR(A[1]);
goto err_A0;
}
B[0] = i915_gem_object_pin_map(result[0]->obj, I915_MAP_WC);
B[0] = i915_gem_object_pin_map_unlocked(result[0]->obj, I915_MAP_WC);
if (IS_ERR(B[0])) {
err = PTR_ERR(B[0]);
goto err_A1;
}
B[1] = i915_gem_object_pin_map(result[1]->obj, I915_MAP_WC);
B[1] = i915_gem_object_pin_map_unlocked(result[1]->obj, I915_MAP_WC);
if (IS_ERR(B[1])) {
err = PTR_ERR(B[1]);
goto err_B0;
}
lrc = i915_gem_object_pin_map(ce->state->obj,
lrc = i915_gem_object_pin_map_unlocked(ce->state->obj,
i915_coherent_map_type(engine->i915));
if (IS_ERR(lrc)) {
err = PTR_ERR(lrc);


@ -75,11 +75,12 @@ static int live_mocs_init(struct live_mocs *arg, struct intel_gt *gt)
if (flags & (HAS_GLOBAL_MOCS | HAS_ENGINE_MOCS))
arg->mocs = table;
arg->scratch = __vm_create_scratch_for_read(&gt->ggtt->vm, PAGE_SIZE);
arg->scratch =
__vm_create_scratch_for_read_pinned(&gt->ggtt->vm, PAGE_SIZE);
if (IS_ERR(arg->scratch))
return PTR_ERR(arg->scratch);
arg->vaddr = i915_gem_object_pin_map(arg->scratch->obj, I915_MAP_WB);
arg->vaddr = i915_gem_object_pin_map_unlocked(arg->scratch->obj, I915_MAP_WB);
if (IS_ERR(arg->vaddr)) {
err = PTR_ERR(arg->vaddr);
goto err_scratch;


@ -35,7 +35,7 @@ static struct i915_vma *create_wally(struct intel_engine_cs *engine)
return ERR_PTR(err);
}
cs = i915_gem_object_pin_map(obj, I915_MAP_WC);
cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
if (IS_ERR(cs)) {
i915_gem_object_put(obj);
return ERR_CAST(cs);
@ -212,7 +212,7 @@ static int __live_ctx_switch_wa(struct intel_engine_cs *engine)
if (IS_ERR(bb))
return PTR_ERR(bb);
result = i915_gem_object_pin_map(bb->obj, I915_MAP_WC);
result = i915_gem_object_pin_map_unlocked(bb->obj, I915_MAP_WC);
if (IS_ERR(result)) {
intel_context_put(bb->private);
i915_vma_unpin_and_release(&bb, 0);


@ -35,10 +35,31 @@ static unsigned long hwsp_cacheline(struct intel_timeline *tl)
{
unsigned long address = (unsigned long)page_address(hwsp_page(tl));
return (address + tl->hwsp_offset) / CACHELINE_BYTES;
return (address + offset_in_page(tl->hwsp_offset)) / TIMELINE_SEQNO_BYTES;
}
#define CACHELINES_PER_PAGE (PAGE_SIZE / CACHELINE_BYTES)
static int selftest_tl_pin(struct intel_timeline *tl)
{
struct i915_gem_ww_ctx ww;
int err;
i915_gem_ww_ctx_init(&ww, false);
retry:
err = i915_gem_object_lock(tl->hwsp_ggtt->obj, &ww);
if (!err)
err = intel_timeline_pin(tl, &ww);
if (err == -EDEADLK) {
err = i915_gem_ww_ctx_backoff(&ww);
if (!err)
goto retry;
}
i915_gem_ww_ctx_fini(&ww);
return err;
}
/* Only half of seqno's are usable, see __intel_timeline_get_seqno() */
#define CACHELINES_PER_PAGE (PAGE_SIZE / TIMELINE_SEQNO_BYTES / 2)
struct mock_hwsp_freelist {
struct intel_gt *gt;
@ -59,6 +80,7 @@ static void __mock_hwsp_record(struct mock_hwsp_freelist *state,
tl = xchg(&state->history[idx], tl);
if (tl) {
radix_tree_delete(&state->cachelines, hwsp_cacheline(tl));
intel_timeline_unpin(tl);
intel_timeline_put(tl);
}
}
@ -78,6 +100,12 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state,
if (IS_ERR(tl))
return PTR_ERR(tl);
err = selftest_tl_pin(tl);
if (err) {
intel_timeline_put(tl);
return err;
}
cacheline = hwsp_cacheline(tl);
err = radix_tree_insert(&state->cachelines, cacheline, tl);
if (err) {
@ -85,6 +113,7 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state,
pr_err("HWSP cacheline %lu already used; duplicate allocation!\n",
cacheline);
}
intel_timeline_unpin(tl);
intel_timeline_put(tl);
return err;
}
@ -452,17 +481,24 @@ static int emit_ggtt_store_dw(struct i915_request *rq, u32 addr, u32 value)
}
static struct i915_request *
tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
checked_tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
{
struct i915_request *rq;
int err;
err = intel_timeline_pin(tl, NULL);
err = selftest_tl_pin(tl);
if (err) {
rq = ERR_PTR(err);
goto out;
}
if (READ_ONCE(*tl->hwsp_seqno) != tl->seqno) {
pr_err("Timeline created with incorrect breadcrumb, found %x, expected %x\n",
*tl->hwsp_seqno, tl->seqno);
intel_timeline_unpin(tl);
return ERR_PTR(-EINVAL);
}
rq = intel_engine_create_kernel_request(engine);
if (IS_ERR(rq))
goto out_unpin;
@ -484,25 +520,6 @@ tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value)
return rq;
}
static struct intel_timeline *
checked_intel_timeline_create(struct intel_gt *gt)
{
struct intel_timeline *tl;
tl = intel_timeline_create(gt);
if (IS_ERR(tl))
return tl;
if (READ_ONCE(*tl->hwsp_seqno) != tl->seqno) {
pr_err("Timeline created with incorrect breadcrumb, found %x, expected %x\n",
*tl->hwsp_seqno, tl->seqno);
intel_timeline_put(tl);
return ERR_PTR(-EINVAL);
}
return tl;
}
static int live_hwsp_engine(void *arg)
{
#define NUM_TIMELINES 4096
@ -535,13 +552,13 @@ static int live_hwsp_engine(void *arg)
struct intel_timeline *tl;
struct i915_request *rq;
tl = checked_intel_timeline_create(gt);
tl = intel_timeline_create(gt);
if (IS_ERR(tl)) {
err = PTR_ERR(tl);
break;
}
rq = tl_write(tl, engine, count);
rq = checked_tl_write(tl, engine, count);
if (IS_ERR(rq)) {
intel_timeline_put(tl);
err = PTR_ERR(rq);
@ -608,14 +625,14 @@ static int live_hwsp_alternate(void *arg)
if (!intel_engine_can_store_dword(engine))
continue;
tl = checked_intel_timeline_create(gt);
tl = intel_timeline_create(gt);
if (IS_ERR(tl)) {
err = PTR_ERR(tl);
goto out;
}
intel_engine_pm_get(engine);
rq = tl_write(tl, engine, count);
rq = checked_tl_write(tl, engine, count);
intel_engine_pm_put(engine);
if (IS_ERR(rq)) {
intel_timeline_put(tl);
@ -666,10 +683,10 @@ static int live_hwsp_wrap(void *arg)
if (IS_ERR(tl))
return PTR_ERR(tl);
if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline)
if (!tl->has_initial_breadcrumb)
goto out_free;
err = intel_timeline_pin(tl, NULL);
err = selftest_tl_pin(tl);
if (err)
goto out_free;
@ -816,13 +833,13 @@ static int setup_watcher(struct hwsp_watcher *w, struct intel_gt *gt)
if (IS_ERR(obj))
return PTR_ERR(obj);
w->map = i915_gem_object_pin_map(obj, I915_MAP_WB);
w->map = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
if (IS_ERR(w->map)) {
i915_gem_object_put(obj);
return PTR_ERR(w->map);
}
vma = i915_gem_object_ggtt_pin_ww(obj, NULL, NULL, 0, 0, 0);
vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0);
if (IS_ERR(vma)) {
i915_gem_object_put(obj);
return PTR_ERR(vma);
@ -833,12 +850,26 @@ static int setup_watcher(struct hwsp_watcher *w, struct intel_gt *gt)
return 0;
}
static void switch_tl_lock(struct i915_request *from, struct i915_request *to)
{
/* some light mutex juggling required; think co-routines */
if (from) {
lockdep_unpin_lock(&from->context->timeline->mutex, from->cookie);
mutex_unlock(&from->context->timeline->mutex);
}
if (to) {
mutex_lock(&to->context->timeline->mutex);
to->cookie = lockdep_pin_lock(&to->context->timeline->mutex);
}
}
static int create_watcher(struct hwsp_watcher *w,
struct intel_engine_cs *engine,
int ringsz)
{
struct intel_context *ce;
struct intel_timeline *tl;
ce = intel_context_create(engine);
if (IS_ERR(ce))
@ -851,11 +882,8 @@ static int create_watcher(struct hwsp_watcher *w,
return PTR_ERR(w->rq);
w->addr = i915_ggtt_offset(w->vma);
tl = w->rq->context->timeline;
/* some light mutex juggling required; think co-routines */
lockdep_unpin_lock(&tl->mutex, w->rq->cookie);
mutex_unlock(&tl->mutex);
switch_tl_lock(w->rq, NULL);
return 0;
}
@ -864,15 +892,13 @@ static int check_watcher(struct hwsp_watcher *w, const char *name,
bool (*op)(u32 hwsp, u32 seqno))
{
struct i915_request *rq = fetch_and_zero(&w->rq);
struct intel_timeline *tl = rq->context->timeline;
u32 offset, end;
int err;
GEM_BUG_ON(w->addr - i915_ggtt_offset(w->vma) > w->vma->size);
i915_request_get(rq);
mutex_lock(&tl->mutex);
rq->cookie = lockdep_pin_lock(&tl->mutex);
switch_tl_lock(NULL, rq);
i915_request_add(rq);
if (i915_request_wait(rq, 0, HZ) < 0) {
@ -901,10 +927,7 @@ static int check_watcher(struct hwsp_watcher *w, const char *name,
static void cleanup_watcher(struct hwsp_watcher *w)
{
if (w->rq) {
struct intel_timeline *tl = w->rq->context->timeline;
mutex_lock(&tl->mutex);
w->rq->cookie = lockdep_pin_lock(&tl->mutex);
switch_tl_lock(NULL, w->rq);
i915_request_add(w->rq);
}
@ -942,7 +965,7 @@ static struct i915_request *wrap_timeline(struct i915_request *rq)
}
i915_request_put(rq);
rq = intel_context_create_request(ce);
rq = i915_request_create(ce);
if (IS_ERR(rq))
return rq;
@ -977,7 +1000,7 @@ static int live_hwsp_read(void *arg)
if (IS_ERR(tl))
return PTR_ERR(tl);
if (!tl->hwsp_cacheline)
if (!tl->has_initial_breadcrumb)
goto out_free;
for (i = 0; i < ARRAY_SIZE(watcher); i++) {
@ -999,7 +1022,7 @@ static int live_hwsp_read(void *arg)
do {
struct i915_sw_fence *submit;
struct i915_request *rq;
u32 hwsp;
u32 hwsp, dummy;
submit = heap_fence_create(GFP_KERNEL);
if (!submit) {
@ -1017,14 +1040,26 @@ static int live_hwsp_read(void *arg)
goto out;
}
/* Skip to the end, saving 30 minutes of nops */
tl->seqno = -10u + 2 * (count & 3);
WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
ce->timeline = intel_timeline_get(tl);
rq = intel_context_create_request(ce);
/* Ensure timeline is mapped, done during first pin */
err = intel_context_pin(ce);
if (err) {
intel_context_put(ce);
goto out;
}
/*
* Start at a new wrap, and set seqno right before another wrap,
* saving 30 minutes of nops
*/
tl->seqno = -12u + 2 * (count & 3);
__intel_timeline_get_seqno(tl, &dummy);
rq = i915_request_create(ce);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
intel_context_unpin(ce);
intel_context_put(ce);
goto out;
}
@ -1034,32 +1069,35 @@ static int live_hwsp_read(void *arg)
GFP_KERNEL);
if (err < 0) {
i915_request_add(rq);
intel_context_unpin(ce);
intel_context_put(ce);
goto out;
}
mutex_lock(&watcher[0].rq->context->timeline->mutex);
switch_tl_lock(rq, watcher[0].rq);
err = intel_timeline_read_hwsp(rq, watcher[0].rq, &hwsp);
if (err == 0)
err = emit_read_hwsp(watcher[0].rq, /* before */
rq->fence.seqno, hwsp,
&watcher[0].addr);
mutex_unlock(&watcher[0].rq->context->timeline->mutex);
switch_tl_lock(watcher[0].rq, rq);
if (err) {
i915_request_add(rq);
intel_context_unpin(ce);
intel_context_put(ce);
goto out;
}
mutex_lock(&watcher[1].rq->context->timeline->mutex);
switch_tl_lock(rq, watcher[1].rq);
err = intel_timeline_read_hwsp(rq, watcher[1].rq, &hwsp);
if (err == 0)
err = emit_read_hwsp(watcher[1].rq, /* after */
rq->fence.seqno, hwsp,
&watcher[1].addr);
mutex_unlock(&watcher[1].rq->context->timeline->mutex);
switch_tl_lock(watcher[1].rq, rq);
if (err) {
i915_request_add(rq);
intel_context_unpin(ce);
intel_context_put(ce);
goto out;
}
@ -1068,6 +1106,7 @@ static int live_hwsp_read(void *arg)
i915_request_add(rq);
rq = wrap_timeline(rq);
intel_context_unpin(ce);
intel_context_put(ce);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
@ -1107,8 +1146,8 @@ static int live_hwsp_read(void *arg)
3 * watcher[1].rq->ring->size)
break;
} while (!__igt_timeout(end_time, NULL));
WRITE_ONCE(*(u32 *)tl->hwsp_seqno, 0xdeadbeef);
} while (!__igt_timeout(end_time, NULL) &&
count < (PAGE_SIZE / TIMELINE_SEQNO_BYTES - 1) / 2);
pr_info("%s: simulated %lu wraps\n", engine->name, count);
err = check_watcher(&watcher[1], "after", cmp_gte);
@ -1153,9 +1192,7 @@ static int live_hwsp_rollover_kernel(void *arg)
}
GEM_BUG_ON(i915_active_fence_isset(&tl->last_request));
tl->seqno = 0;
timeline_rollback(tl);
timeline_rollback(tl);
tl->seqno = -2u;
WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
for (i = 0; i < ARRAY_SIZE(rq); i++) {
@ -1235,11 +1272,14 @@ static int live_hwsp_rollover_user(void *arg)
goto out;
tl = ce->timeline;
if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline)
if (!tl->has_initial_breadcrumb)
goto out;
timeline_rollback(tl);
timeline_rollback(tl);
err = intel_context_pin(ce);
if (err)
goto out;
tl->seqno = -4u;
WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno);
for (i = 0; i < ARRAY_SIZE(rq); i++) {
@ -1248,7 +1288,7 @@ static int live_hwsp_rollover_user(void *arg)
this = intel_context_create_request(ce);
if (IS_ERR(this)) {
err = PTR_ERR(this);
goto out;
goto out_unpin;
}
pr_debug("%s: create fence.seqnp:%d\n",
@ -1267,17 +1307,18 @@ static int live_hwsp_rollover_user(void *arg)
if (i915_request_wait(rq[2], 0, HZ / 5) < 0) {
pr_err("Wait for timeline wrap timed out!\n");
err = -EIO;
goto out;
goto out_unpin;
}
for (i = 0; i < ARRAY_SIZE(rq); i++) {
if (!i915_request_completed(rq[i])) {
pr_err("Pre-wrap request not completed!\n");
err = -EINVAL;
goto out;
goto out_unpin;
}
}
out_unpin:
intel_context_unpin(ce);
out:
for (i = 0; i < ARRAY_SIZE(rq); i++)
i915_request_put(rq[i]);
@ -1319,13 +1360,13 @@ static int live_hwsp_recycle(void *arg)
struct intel_timeline *tl;
struct i915_request *rq;
tl = checked_intel_timeline_create(gt);
tl = intel_timeline_create(gt);
if (IS_ERR(tl)) {
err = PTR_ERR(tl);
break;
}
rq = tl_write(tl, engine, count);
rq = checked_tl_write(tl, engine, count);
if (IS_ERR(rq)) {
intel_timeline_put(tl);
err = PTR_ERR(rq);

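The CACHELINES_PER_PAGE change near the top of this selftest (PAGE_SIZE / TIMELINE_SEQNO_BYTES / 2, "only half of seqno's are usable") follows from how __intel_timeline_get_seqno() now steps to the next 8-byte slot on wrap while skipping any offset with bit 5 set, per the MI_FLUSH_DW workaround in the intel_timeline.c hunk. A quick userspace sketch of that stepping, assuming a 4KiB page for the demo:

#include <stdint.h>
#include <stdio.h>

#define TIMELINE_SEQNO_BYTES 8
#define DEMO_PAGE_MASK 0xfffu /* assume 4KiB pages for this demo */

static uint32_t next_slot(uint32_t ofs)
{
        uint32_t next = (ofs + TIMELINE_SEQNO_BYTES) & DEMO_PAGE_MASK;

        /* w/a: bit 5 must be zero in the MI_FLUSH_DW address. */
        if (next & 0x20)
                next = (next + 0x20) & DEMO_PAGE_MASK;

        return next;
}

int main(void)
{
        uint32_t ofs = 0;
        int i;

        /* Prints 8 16 24 64 72 80 88 128: four slots used per 64 bytes. */
        for (i = 0; i < 8; i++) {
                ofs = next_slot(ofs);
                printf("%u ", ofs);
        }
        printf("\n");

        return 0; /* hence only half of a page's 8-byte slots are usable */
}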

@ -112,7 +112,7 @@ read_nonprivs(struct intel_context *ce)
i915_gem_object_set_cache_coherency(result, I915_CACHE_LLC);
cs = i915_gem_object_pin_map(result, I915_MAP_WB);
cs = i915_gem_object_pin_map_unlocked(result, I915_MAP_WB);
if (IS_ERR(cs)) {
err = PTR_ERR(cs);
goto err_obj;
@ -218,7 +218,7 @@ static int check_whitelist(struct intel_context *ce)
i915_gem_object_lock(results, NULL);
intel_wedge_on_timeout(&wedge, engine->gt, HZ / 5) /* safety net! */
err = i915_gem_object_set_to_cpu_domain(results, false);
i915_gem_object_unlock(results);
if (intel_gt_is_wedged(engine->gt))
err = -EIO;
if (err)
@ -246,6 +246,7 @@ static int check_whitelist(struct intel_context *ce)
i915_gem_object_unpin_map(results);
out_put:
i915_gem_object_unlock(results);
i915_gem_object_put(results);
return err;
}
@ -490,7 +491,7 @@ static int check_dirty_whitelist(struct intel_context *ce)
u32 *cs, *results;
sz = (2 * ARRAY_SIZE(values) + 1) * sizeof(u32);
scratch = __vm_create_scratch_for_read(ce->vm, sz);
scratch = __vm_create_scratch_for_read_pinned(ce->vm, sz);
if (IS_ERR(scratch))
return PTR_ERR(scratch);
@ -502,6 +503,7 @@ static int check_dirty_whitelist(struct intel_context *ce)
for (i = 0; i < engine->whitelist.count; i++) {
u32 reg = i915_mmio_reg_offset(engine->whitelist.list[i].reg);
struct i915_gem_ww_ctx ww;
u64 addr = scratch->node.start;
struct i915_request *rq;
u32 srm, lrm, rsvd;
@ -517,6 +519,29 @@ static int check_dirty_whitelist(struct intel_context *ce)
ro_reg = ro_register(reg);
i915_gem_ww_ctx_init(&ww, false);
retry:
cs = NULL;
err = i915_gem_object_lock(scratch->obj, &ww);
if (!err)
err = i915_gem_object_lock(batch->obj, &ww);
if (!err)
err = intel_context_pin_ww(ce, &ww);
if (err)
goto out;
cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
if (IS_ERR(cs)) {
err = PTR_ERR(cs);
goto out_ctx;
}
results = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
if (IS_ERR(results)) {
err = PTR_ERR(results);
goto out_unmap_batch;
}
/* Clear non priv flags */
reg &= RING_FORCE_TO_NONPRIV_ADDRESS_MASK;
@ -528,12 +553,6 @@ static int check_dirty_whitelist(struct intel_context *ce)
pr_debug("%s: Writing garbage to %x\n",
engine->name, reg);
cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
if (IS_ERR(cs)) {
err = PTR_ERR(cs);
goto out_batch;
}
/* SRM original */
*cs++ = srm;
*cs++ = reg;
@ -580,11 +599,12 @@ static int check_dirty_whitelist(struct intel_context *ce)
i915_gem_object_flush_map(batch->obj);
i915_gem_object_unpin_map(batch->obj);
intel_gt_chipset_flush(engine->gt);
cs = NULL;
rq = intel_context_create_request(ce);
rq = i915_request_create(ce);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto out_batch;
goto out_unmap_scratch;
}
if (engine->emit_init_breadcrumb) { /* Be nice if we hang */
@ -593,20 +613,16 @@ static int check_dirty_whitelist(struct intel_context *ce)
goto err_request;
}
i915_vma_lock(batch);
err = i915_request_await_object(rq, batch->obj, false);
if (err == 0)
err = i915_vma_move_to_active(batch, rq, 0);
i915_vma_unlock(batch);
if (err)
goto err_request;
i915_vma_lock(scratch);
err = i915_request_await_object(rq, scratch->obj, true);
if (err == 0)
err = i915_vma_move_to_active(scratch, rq,
EXEC_OBJECT_WRITE);
i915_vma_unlock(scratch);
if (err)
goto err_request;
@ -622,13 +638,7 @@ static int check_dirty_whitelist(struct intel_context *ce)
pr_err("%s: Futzing %x timedout; cancelling test\n",
engine->name, reg);
intel_gt_set_wedged(engine->gt);
goto out_batch;
}
results = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
if (IS_ERR(results)) {
err = PTR_ERR(results);
goto out_batch;
goto out_unmap_scratch;
}
GEM_BUG_ON(values[ARRAY_SIZE(values) - 1] != 0xffffffff);
@ -639,7 +649,7 @@ static int check_dirty_whitelist(struct intel_context *ce)
pr_err("%s: Unable to write to whitelisted register %x\n",
engine->name, reg);
err = -EINVAL;
goto out_unpin;
goto out_unmap_scratch;
}
} else {
rsvd = 0;
@ -705,15 +715,27 @@ static int check_dirty_whitelist(struct intel_context *ce)
err = -EINVAL;
}
out_unpin:
out_unmap_scratch:
i915_gem_object_unpin_map(scratch->obj);
out_unmap_batch:
if (cs)
i915_gem_object_unpin_map(batch->obj);
out_ctx:
intel_context_unpin(ce);
out:
if (err == -EDEADLK) {
err = i915_gem_ww_ctx_backoff(&ww);
if (!err)
goto retry;
}
i915_gem_ww_ctx_fini(&ww);
if (err)
break;
}
if (igt_flush_test(engine->i915))
err = -EIO;
out_batch:
i915_vma_unpin_and_release(&batch, 0);
out_scratch:
i915_vma_unpin_and_release(&scratch, 0);
@ -847,7 +869,7 @@ static int scrub_whitelisted_registers(struct intel_context *ce)
if (IS_ERR(batch))
return PTR_ERR(batch);
cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
cs = i915_gem_object_pin_map_unlocked(batch->obj, I915_MAP_WC);
if (IS_ERR(cs)) {
err = PTR_ERR(cs);
goto err_batch;
@ -982,11 +1004,11 @@ check_whitelisted_registers(struct intel_engine_cs *engine,
u32 *a, *b;
int i, err;
a = i915_gem_object_pin_map(A->obj, I915_MAP_WB);
a = i915_gem_object_pin_map_unlocked(A->obj, I915_MAP_WB);
if (IS_ERR(a))
return PTR_ERR(a);
b = i915_gem_object_pin_map(B->obj, I915_MAP_WB);
b = i915_gem_object_pin_map_unlocked(B->obj, I915_MAP_WB);
if (IS_ERR(b)) {
err = PTR_ERR(b);
goto err_a;
@ -1030,14 +1052,14 @@ static int live_isolated_whitelist(void *arg)
for (i = 0; i < ARRAY_SIZE(client); i++) {
client[i].scratch[0] =
__vm_create_scratch_for_read(gt->vm, 4096);
__vm_create_scratch_for_read_pinned(gt->vm, 4096);
if (IS_ERR(client[i].scratch[0])) {
err = PTR_ERR(client[i].scratch[0]);
goto err;
}
client[i].scratch[1] =
__vm_create_scratch_for_read(gt->vm, 4096);
__vm_create_scratch_for_read_pinned(gt->vm, 4096);
if (IS_ERR(client[i].scratch[1])) {
err = PTR_ERR(client[i].scratch[1]);
i915_vma_unpin_and_release(&client[i].scratch[0], 0);


@ -39,7 +39,7 @@ struct file *shmem_create_from_object(struct drm_i915_gem_object *obj)
return file;
}
ptr = i915_gem_object_pin_map(obj, I915_MAP_WB);
ptr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
if (IS_ERR(ptr))
return ERR_CAST(ptr);


@ -682,7 +682,7 @@ int intel_guc_allocate_and_map_vma(struct intel_guc *guc, u32 size,
if (IS_ERR(vma))
return PTR_ERR(vma);
vaddr = i915_gem_object_pin_map(vma->obj, I915_MAP_WB);
vaddr = i915_gem_object_pin_map_unlocked(vma->obj, I915_MAP_WB);
if (IS_ERR(vaddr)) {
i915_vma_unpin_and_release(&vma, 0);
return PTR_ERR(vaddr);


@ -335,7 +335,7 @@ static int guc_log_map(struct intel_guc_log *log)
* buffer pages, so that we can directly get the data
* (up-to-date) from memory.
*/
vaddr = i915_gem_object_pin_map(log->vma->obj, I915_MAP_WC);
vaddr = i915_gem_object_pin_map_unlocked(log->vma->obj, I915_MAP_WC);
if (IS_ERR(vaddr))
return PTR_ERR(vaddr);
@ -744,7 +744,7 @@ int intel_guc_log_dump(struct intel_guc_log *log, struct drm_printer *p,
if (!obj)
return 0;
map = i915_gem_object_pin_map(obj, I915_MAP_WC);
map = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
if (IS_ERR(map)) {
DRM_DEBUG("Failed to pin object\n");
drm_puts(p, "(log data unaccessible)\n");


@ -82,7 +82,7 @@ static int intel_huc_rsa_data_create(struct intel_huc *huc)
if (IS_ERR(vma))
return PTR_ERR(vma);
vaddr = i915_gem_object_pin_map(vma->obj, I915_MAP_WB);
vaddr = i915_gem_object_pin_map_unlocked(vma->obj, I915_MAP_WB);
if (IS_ERR(vaddr)) {
i915_vma_unpin_and_release(&vma, 0);
return PTR_ERR(vaddr);


@ -539,7 +539,7 @@ int intel_uc_fw_init(struct intel_uc_fw *uc_fw)
if (!intel_uc_fw_is_available(uc_fw))
return -ENOEXEC;
err = i915_gem_object_pin_pages(uc_fw->obj);
err = i915_gem_object_pin_pages_unlocked(uc_fw->obj);
if (err) {
DRM_DEBUG_DRIVER("%s fw pin-pages err=%d\n",
intel_uc_fw_type_repr(uc_fw->type), err);


@ -218,7 +218,7 @@ static struct drm_i915_gem_object *vgpu_create_gem(struct drm_device *dev,
drm_gem_private_object_init(dev, &obj->base,
roundup(info->size, PAGE_SIZE));
i915_gem_object_init(obj, &intel_vgpu_gem_ops, &lock_class);
i915_gem_object_init(obj, &intel_vgpu_gem_ops, &lock_class, 0);
i915_gem_object_set_readonly(obj);
obj->read_domains = I915_GEM_DOMAIN_GTT;


@ -293,18 +293,13 @@ static struct active_node *__active_lookup(struct i915_active *ref, u64 idx)
static struct i915_active_fence *
active_instance(struct i915_active *ref, u64 idx)
{
struct active_node *node, *prealloc;
struct active_node *node;
struct rb_node **p, *parent;
node = __active_lookup(ref, idx);
if (likely(node))
return &node->base;
/* Preallocate a replacement, just in case */
prealloc = kmem_cache_alloc(global.slab_cache, GFP_KERNEL);
if (!prealloc)
return NULL;
spin_lock_irq(&ref->tree_lock);
GEM_BUG_ON(i915_active_is_idle(ref));
@ -314,10 +309,8 @@ active_instance(struct i915_active *ref, u64 idx)
parent = *p;
node = rb_entry(parent, struct active_node, node);
if (node->timeline == idx) {
kmem_cache_free(global.slab_cache, prealloc);
if (node->timeline == idx)
goto out;
}
if (node->timeline < idx)
p = &parent->rb_right;
@ -325,7 +318,14 @@ active_instance(struct i915_active *ref, u64 idx)
p = &parent->rb_left;
}
node = prealloc;
/*
* XXX: We should preallocate this before i915_active_ref() is ever
* called, but we cannot call into fs_reclaim() anyway, so use GFP_ATOMIC.
*/
node = kmem_cache_alloc(global.slab_cache, GFP_ATOMIC);
if (!node)
goto out;
__i915_active_fence_init(&node->base, NULL, node_retire);
node->ref = ref;
node->timeline = idx;

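The active_instance() rework above drops the GFP_KERNEL preallocation that used to happen before taking ref->tree_lock and instead allocates the node while the spinlock is held; as the XXX comment notes, this path must not enter fs_reclaim, so the allocation has to be GFP_ATOMIC and may fail. The general shape of that trade-off, sketched with placeholder names (my_cache, my_lock and the lookup/insert helpers are not driver symbols):

/* Sketch only: every identifier prefixed 'my_' is a placeholder. */
static struct my_node *my_get_or_create(u64 idx)
{
        struct my_node *node;

        spin_lock_irq(&my_lock);

        node = my_lookup_locked(idx); /* placeholder for the rbtree walk */
        if (!node) {
                /*
                 * GFP_KERNEL can sleep and recurse into fs_reclaim, which is
                 * not allowed under a spinlock; GFP_ATOMIC cannot sleep, at
                 * the cost of a higher chance of returning NULL.
                 */
                node = kmem_cache_alloc(my_cache, GFP_ATOMIC);
                if (node)
                        my_insert_locked(idx, node); /* placeholder insert */
        }

        spin_unlock_irq(&my_lock);

        return node;
}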

@ -1144,38 +1144,20 @@ find_reg(const struct intel_engine_cs *engine, u32 addr)
/* Returns a vmap'd pointer to dst_obj, which the caller must unmap */
static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
struct drm_i915_gem_object *src_obj,
unsigned long offset, unsigned long length)
unsigned long offset, unsigned long length,
void *dst, const void *src)
{
bool needs_clflush;
void *dst, *src;
int ret;
dst = i915_gem_object_pin_map(dst_obj, I915_MAP_WB);
if (IS_ERR(dst))
return dst;
ret = i915_gem_object_pin_pages(src_obj);
if (ret) {
i915_gem_object_unpin_map(dst_obj);
return ERR_PTR(ret);
}
needs_clflush =
bool needs_clflush =
!(src_obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ);
src = ERR_PTR(-ENODEV);
if (needs_clflush && i915_has_memcpy_from_wc()) {
src = i915_gem_object_pin_map(src_obj, I915_MAP_WC);
if (!IS_ERR(src)) {
i915_unaligned_memcpy_from_wc(dst,
src + offset,
length);
i915_gem_object_unpin_map(src_obj);
}
}
if (IS_ERR(src)) {
unsigned long x, n, remain;
if (src) {
GEM_BUG_ON(!needs_clflush);
i915_unaligned_memcpy_from_wc(dst, src + offset, length);
} else {
struct scatterlist *sg;
void *ptr;
unsigned int x, sg_ofs;
unsigned long remain;
/*
* We can avoid clflushing partial cachelines before the write
@ -1192,23 +1174,31 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
ptr = dst;
x = offset_in_page(offset);
for (n = offset >> PAGE_SHIFT; remain; n++) {
int len = min(remain, PAGE_SIZE - x);
sg = i915_gem_object_get_sg(src_obj, offset >> PAGE_SHIFT, &sg_ofs, false);
src = kmap_atomic(i915_gem_object_get_page(src_obj, n));
if (needs_clflush)
drm_clflush_virt_range(src + x, len);
memcpy(ptr, src + x, len);
kunmap_atomic(src);
while (remain) {
unsigned long sg_max = sg->length >> PAGE_SHIFT;
ptr += len;
remain -= len;
x = 0;
for (; remain && sg_ofs < sg_max; sg_ofs++) {
unsigned long len = min(remain, PAGE_SIZE - x);
void *map;
map = kmap_atomic(nth_page(sg_page(sg), sg_ofs));
if (needs_clflush)
drm_clflush_virt_range(map + x, len);
memcpy(ptr, map + x, len);
kunmap_atomic(map);
ptr += len;
remain -= len;
x = 0;
}
sg_ofs = 0;
sg = sg_next(sg);
}
}
i915_gem_object_unpin_pages(src_obj);
memset32(dst + length, 0, (dst_obj->base.size - length) / sizeof(u32));
/* dst_obj is returned with vmap pinned */
@ -1370,9 +1360,6 @@ static int check_bbstart(u32 *cmd, u32 offset, u32 length,
if (target_cmd_index == offset)
return 0;
if (IS_ERR(jump_whitelist))
return PTR_ERR(jump_whitelist);
if (!test_bit(target_cmd_index, jump_whitelist)) {
DRM_DEBUG("CMD: BB_START to 0x%llx not a previously executed cmd\n",
jump_target);
@ -1382,10 +1369,14 @@ static int check_bbstart(u32 *cmd, u32 offset, u32 length,
return 0;
}
static unsigned long *alloc_whitelist(u32 batch_length)
unsigned long *intel_engine_cmd_parser_alloc_jump_whitelist(u32 batch_length,
bool trampoline)
{
unsigned long *jmp;
if (trampoline)
return NULL;
/*
* We expect batch_length to be less than 256KiB for known users,
* i.e. we need at most an 8KiB bitmap allocation which should be
@ -1423,14 +1414,16 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
unsigned long batch_offset,
unsigned long batch_length,
struct i915_vma *shadow,
bool trampoline)
unsigned long *jump_whitelist,
void *shadow_map,
const void *batch_map)
{
u32 *cmd, *batch_end, offset = 0;
struct drm_i915_cmd_descriptor default_desc = noop_desc;
const struct drm_i915_cmd_descriptor *desc = &default_desc;
unsigned long *jump_whitelist;
u64 batch_addr, shadow_addr;
int ret = 0;
bool trampoline = !jump_whitelist;
GEM_BUG_ON(!IS_ALIGNED(batch_offset, sizeof(*cmd)));
GEM_BUG_ON(!IS_ALIGNED(batch_length, sizeof(*cmd)));
@ -1438,16 +1431,8 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
batch->size));
GEM_BUG_ON(!batch_length);
cmd = copy_batch(shadow->obj, batch->obj, batch_offset, batch_length);
if (IS_ERR(cmd)) {
DRM_DEBUG("CMD: Failed to copy batch\n");
return PTR_ERR(cmd);
}
jump_whitelist = NULL;
if (!trampoline)
/* Defer failure until attempted use */
jump_whitelist = alloc_whitelist(batch_length);
cmd = copy_batch(shadow->obj, batch->obj, batch_offset, batch_length,
shadow_map, batch_map);
shadow_addr = gen8_canonical_addr(shadow->node.start);
batch_addr = gen8_canonical_addr(batch->node.start + batch_offset);
@ -1548,9 +1533,6 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
i915_gem_object_flush_map(shadow->obj);
if (!IS_ERR_OR_NULL(jump_whitelist))
kfree(jump_whitelist);
i915_gem_object_unpin_map(shadow->obj);
return ret;
}


@ -904,10 +904,10 @@ i915_drop_caches_set(void *data, u64 val)
fs_reclaim_acquire(GFP_KERNEL);
if (val & DROP_BOUND)
i915_gem_shrink(i915, LONG_MAX, NULL, I915_SHRINK_BOUND);
i915_gem_shrink(NULL, i915, LONG_MAX, NULL, I915_SHRINK_BOUND);
if (val & DROP_UNBOUND)
i915_gem_shrink(i915, LONG_MAX, NULL, I915_SHRINK_UNBOUND);
i915_gem_shrink(NULL, i915, LONG_MAX, NULL, I915_SHRINK_UNBOUND);
if (val & DROP_SHRINK_ALL)
i915_gem_shrink_all(i915);


@ -1691,7 +1691,7 @@ static const struct drm_ioctl_desc i915_ioctls[] = {
DRM_IOCTL_DEF_DRV(I915_VBLANK_SWAP, drm_noop, DRM_AUTH),
DRM_IOCTL_DEF_DRV(I915_HWS_ADDR, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF_DRV(I915_GEM_INIT, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF_DRV(I915_GEM_EXECBUFFER, i915_gem_execbuffer_ioctl, DRM_AUTH),
DRM_IOCTL_DEF_DRV(I915_GEM_EXECBUFFER, drm_invalid_op, DRM_AUTH),
DRM_IOCTL_DEF_DRV(I915_GEM_EXECBUFFER2_WR, i915_gem_execbuffer2_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_PIN, i915_gem_reject_pin_ioctl, DRM_AUTH|DRM_ROOT_ONLY),
DRM_IOCTL_DEF_DRV(I915_GEM_UNPIN, i915_gem_reject_pin_ioctl, DRM_AUTH|DRM_ROOT_ONLY),


@ -554,12 +554,13 @@ struct i915_gem_mm {
struct notifier_block vmap_notifier;
struct shrinker shrinker;
#ifdef CONFIG_MMU_NOTIFIER
/**
* Workqueue to fault in userptr pages, flushed by the execbuf
* when required but otherwise left to userspace to try again
* on EAGAIN.
* notifier_lock for mmu notifiers, memory may not be allocated
* while holding this lock.
*/
struct workqueue_struct *userptr_wq;
spinlock_t notifier_lock;
#endif
/* shrinker accounting, also useful for userland debugging */
u64 shrink_memory;
@ -938,8 +939,6 @@ struct drm_i915_private {
struct i915_ggtt ggtt; /* VM representing the global address space */
struct i915_gem_mm mm;
DECLARE_HASHTABLE(mm_structs, 7);
spinlock_t mm_lock;
/* Kernel Modesetting */
@ -1946,12 +1945,17 @@ const char *i915_cache_level_str(struct drm_i915_private *i915, int type);
int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv);
int intel_engine_init_cmd_parser(struct intel_engine_cs *engine);
void intel_engine_cleanup_cmd_parser(struct intel_engine_cs *engine);
unsigned long *intel_engine_cmd_parser_alloc_jump_whitelist(u32 batch_length,
bool trampoline);
int intel_engine_cmd_parser(struct intel_engine_cs *engine,
struct i915_vma *batch,
unsigned long batch_offset,
unsigned long batch_length,
struct i915_vma *shadow,
bool trampoline);
unsigned long *jump_whitelist,
void *shadow_map,
const void *batch_map);
#define I915_CMD_PARSER_TRAMPOLINE_SIZE 8
/* intel_device_info.c */


@ -204,7 +204,6 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj,
{
unsigned int needs_clflush;
unsigned int idx, offset;
struct dma_fence *fence;
char __user *user_data;
u64 remain;
int ret;
@@ -213,19 +212,17 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj,
if (ret)
return ret;
ret = i915_gem_object_prepare_read(obj, &needs_clflush);
if (ret) {
i915_gem_object_unlock(obj);
return ret;
}
ret = i915_gem_object_pin_pages(obj);
if (ret)
goto err_unlock;
ret = i915_gem_object_prepare_read(obj, &needs_clflush);
if (ret)
goto err_unpin;
fence = i915_gem_object_lock_fence(obj);
i915_gem_object_finish_access(obj);
i915_gem_object_unlock(obj);
if (!fence)
return -ENOMEM;
remain = args->size;
user_data = u64_to_user_ptr(args->data_ptr);
offset = offset_in_page(args->offset);
@@ -243,7 +240,13 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj,
offset = 0;
}
i915_gem_object_unlock_fence(obj, fence);
i915_gem_object_unpin_pages(obj);
return ret;
err_unpin:
i915_gem_object_unpin_pages(obj);
err_unlock:
i915_gem_object_unlock(obj);
return ret;
}
@@ -271,6 +274,83 @@ gtt_user_read(struct io_mapping *mapping,
return unwritten;
}
static struct i915_vma *i915_gem_gtt_prepare(struct drm_i915_gem_object *obj,
struct drm_mm_node *node,
bool write)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct i915_ggtt *ggtt = &i915->ggtt;
struct i915_vma *vma;
struct i915_gem_ww_ctx ww;
int ret;
i915_gem_ww_ctx_init(&ww, true);
retry:
vma = ERR_PTR(-ENODEV);
ret = i915_gem_object_lock(obj, &ww);
if (ret)
goto err_ww;
ret = i915_gem_object_set_to_gtt_domain(obj, write);
if (ret)
goto err_ww;
if (!i915_gem_object_is_tiled(obj))
vma = i915_gem_object_ggtt_pin_ww(obj, &ww, NULL, 0, 0,
PIN_MAPPABLE |
PIN_NONBLOCK /* NOWARN */ |
PIN_NOEVICT);
if (vma == ERR_PTR(-EDEADLK)) {
ret = -EDEADLK;
goto err_ww;
} else if (!IS_ERR(vma)) {
node->start = i915_ggtt_offset(vma);
node->flags = 0;
} else {
ret = insert_mappable_node(ggtt, node, PAGE_SIZE);
if (ret)
goto err_ww;
GEM_BUG_ON(!drm_mm_node_allocated(node));
vma = NULL;
}
ret = i915_gem_object_pin_pages(obj);
if (ret) {
if (drm_mm_node_allocated(node)) {
ggtt->vm.clear_range(&ggtt->vm, node->start, node->size);
remove_mappable_node(ggtt, node);
} else {
i915_vma_unpin(vma);
}
}
err_ww:
if (ret == -EDEADLK) {
ret = i915_gem_ww_ctx_backoff(&ww);
if (!ret)
goto retry;
}
i915_gem_ww_ctx_fini(&ww);
return ret ? ERR_PTR(ret) : vma;
}
static void i915_gem_gtt_cleanup(struct drm_i915_gem_object *obj,
struct drm_mm_node *node,
struct i915_vma *vma)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct i915_ggtt *ggtt = &i915->ggtt;
i915_gem_object_unpin_pages(obj);
if (drm_mm_node_allocated(node)) {
ggtt->vm.clear_range(&ggtt->vm, node->start, node->size);
remove_mappable_node(ggtt, node);
} else {
i915_vma_unpin(vma);
}
}
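i915_gem_gtt_prepare() above is the ww transaction idiom this pull converts the driver to, worth isolating since it recurs in most of the following hunks. A minimal skeleton of the pattern:

/* The ww transaction skeleton that i915_gem_gtt_prepare() follows;
 * the body comment stands in for whatever needs the object lock. */
static int ww_transaction_example(struct drm_i915_gem_object *obj)
{
        struct i915_gem_ww_ctx ww;
        int err;

        i915_gem_ww_ctx_init(&ww, true);        /* true: interruptible */
retry:
        err = i915_gem_object_lock(obj, &ww);
        if (err)
                goto out;

        /* ... pin, map, change domains, etc. under the dma_resv lock ... */

out:
        if (err == -EDEADLK) {
                /* Lost the lock ordering to another thread: drop all
                 * held locks, wait for the contended one, run again. */
                err = i915_gem_ww_ctx_backoff(&ww);
                if (!err)
                        goto retry;
        }
        i915_gem_ww_ctx_fini(&ww);
        return err;
}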
static int
i915_gem_gtt_pread(struct drm_i915_gem_object *obj,
const struct drm_i915_gem_pread *args)
@@ -279,44 +359,17 @@ i915_gem_gtt_pread(struct drm_i915_gem_object *obj,
struct i915_ggtt *ggtt = &i915->ggtt;
intel_wakeref_t wakeref;
struct drm_mm_node node;
struct dma_fence *fence;
void __user *user_data;
struct i915_vma *vma;
u64 remain, offset;
int ret;
int ret = 0;
wakeref = intel_runtime_pm_get(&i915->runtime_pm);
vma = ERR_PTR(-ENODEV);
if (!i915_gem_object_is_tiled(obj))
vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0,
PIN_MAPPABLE |
PIN_NONBLOCK /* NOWARN */ |
PIN_NOEVICT);
if (!IS_ERR(vma)) {
node.start = i915_ggtt_offset(vma);
node.flags = 0;
} else {
ret = insert_mappable_node(ggtt, &node, PAGE_SIZE);
if (ret)
goto out_rpm;
GEM_BUG_ON(!drm_mm_node_allocated(&node));
}
ret = i915_gem_object_lock_interruptible(obj, NULL);
if (ret)
goto out_unpin;
ret = i915_gem_object_set_to_gtt_domain(obj, false);
if (ret) {
i915_gem_object_unlock(obj);
goto out_unpin;
}
fence = i915_gem_object_lock_fence(obj);
i915_gem_object_unlock(obj);
if (!fence) {
ret = -ENOMEM;
goto out_unpin;
vma = i915_gem_gtt_prepare(obj, &node, false);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
goto out_rpm;
}
user_data = u64_to_user_ptr(args->data_ptr);
@@ -353,14 +406,7 @@ i915_gem_gtt_pread(struct drm_i915_gem_object *obj,
offset += page_length;
}
i915_gem_object_unlock_fence(obj, fence);
out_unpin:
if (drm_mm_node_allocated(&node)) {
ggtt->vm.clear_range(&ggtt->vm, node.start, node.size);
remove_mappable_node(ggtt, &node);
} else {
i915_vma_unpin(vma);
}
i915_gem_gtt_cleanup(obj, &node, vma);
out_rpm:
intel_runtime_pm_put(&i915->runtime_pm, wakeref);
return ret;
@@ -378,10 +424,17 @@ int
i915_gem_pread_ioctl(struct drm_device *dev, void *data,
struct drm_file *file)
{
struct drm_i915_private *i915 = to_i915(dev);
struct drm_i915_gem_pread *args = data;
struct drm_i915_gem_object *obj;
int ret;
/* PREAD is disallowed for all platforms after TGL-LP. This also
* covers all platforms with local memory.
*/
if (INTEL_GEN(i915) >= 12 && !IS_TIGERLAKE(i915))
return -EOPNOTSUPP;
if (args->size == 0)
return 0;
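From userspace, the gate above surfaces as a new errno rather than a silent behaviour change. A hedged sketch of the fallback a client would need (read_via_mmap is a hypothetical helper):

#include <errno.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

/* Hypothetical fallback path for platforms where pread is gone. */
static int read_via_mmap(int fd, struct drm_i915_gem_pread *args);

static int read_bo(int fd, struct drm_i915_gem_pread *args)
{
        if (ioctl(fd, DRM_IOCTL_I915_GEM_PREAD, args) == 0)
                return 0;
        /* On Gen12+ other than Tigerlake the kernel now returns
         * EOPNOTSUPP, so read through a mapping instead. */
        if (errno == EOPNOTSUPP)
                return read_via_mmap(fd, args);
        return -errno;
}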
@@ -400,6 +453,11 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
}
trace_i915_gem_object_pread(obj, args->offset, args->size);
ret = -ENODEV;
if (obj->ops->pread)
ret = obj->ops->pread(obj, args);
if (ret != -ENODEV)
goto out;
ret = -ENODEV;
if (obj->ops->pread)
@@ -413,15 +471,10 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
if (ret)
goto out;
ret = i915_gem_object_pin_pages(obj);
if (ret)
goto out;
ret = i915_gem_shmem_pread(obj, args);
if (ret == -EFAULT || ret == -ENODEV)
ret = i915_gem_gtt_pread(obj, args);
i915_gem_object_unpin_pages(obj);
out:
i915_gem_object_put(obj);
return ret;
@@ -469,11 +522,10 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
struct intel_runtime_pm *rpm = &i915->runtime_pm;
intel_wakeref_t wakeref;
struct drm_mm_node node;
struct dma_fence *fence;
struct i915_vma *vma;
u64 remain, offset;
void __user *user_data;
int ret;
int ret = 0;
if (i915_gem_object_has_struct_page(obj)) {
/*
@@ -491,37 +543,10 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
wakeref = intel_runtime_pm_get(rpm);
}
vma = ERR_PTR(-ENODEV);
if (!i915_gem_object_is_tiled(obj))
vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0,
PIN_MAPPABLE |
PIN_NONBLOCK /* NOWARN */ |
PIN_NOEVICT);
if (!IS_ERR(vma)) {
node.start = i915_ggtt_offset(vma);
node.flags = 0;
} else {
ret = insert_mappable_node(ggtt, &node, PAGE_SIZE);
if (ret)
goto out_rpm;
GEM_BUG_ON(!drm_mm_node_allocated(&node));
}
ret = i915_gem_object_lock_interruptible(obj, NULL);
if (ret)
goto out_unpin;
ret = i915_gem_object_set_to_gtt_domain(obj, true);
if (ret) {
i915_gem_object_unlock(obj);
goto out_unpin;
}
fence = i915_gem_object_lock_fence(obj);
i915_gem_object_unlock(obj);
if (!fence) {
ret = -ENOMEM;
goto out_unpin;
vma = i915_gem_gtt_prepare(obj, &node, true);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
goto out_rpm;
}
i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU);
@@ -570,14 +595,7 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
intel_gt_flush_ggtt_writes(ggtt->vm.gt);
i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU);
i915_gem_object_unlock_fence(obj, fence);
out_unpin:
if (drm_mm_node_allocated(&node)) {
ggtt->vm.clear_range(&ggtt->vm, node.start, node.size);
remove_mappable_node(ggtt, &node);
} else {
i915_vma_unpin(vma);
}
i915_gem_gtt_cleanup(obj, &node, vma);
out_rpm:
intel_runtime_pm_put(rpm, wakeref);
return ret;
@@ -617,7 +635,6 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj,
unsigned int partial_cacheline_write;
unsigned int needs_clflush;
unsigned int offset, idx;
struct dma_fence *fence;
void __user *user_data;
u64 remain;
int ret;
@@ -626,19 +643,17 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj,
if (ret)
return ret;
ret = i915_gem_object_prepare_write(obj, &needs_clflush);
if (ret) {
i915_gem_object_unlock(obj);
return ret;
}
ret = i915_gem_object_pin_pages(obj);
if (ret)
goto err_unlock;
ret = i915_gem_object_prepare_write(obj, &needs_clflush);
if (ret)
goto err_unpin;
fence = i915_gem_object_lock_fence(obj);
i915_gem_object_finish_access(obj);
i915_gem_object_unlock(obj);
if (!fence)
return -ENOMEM;
/* If we don't overwrite a cacheline completely we need to be
* careful to have up-to-date data by first clflushing. Don't
* overcomplicate things and flush the entire page.
@@ -666,8 +681,14 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj,
}
i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU);
i915_gem_object_unlock_fence(obj, fence);
i915_gem_object_unpin_pages(obj);
return ret;
err_unpin:
i915_gem_object_unpin_pages(obj);
err_unlock:
i915_gem_object_unlock(obj);
return ret;
}
@@ -683,10 +704,17 @@ int
i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
struct drm_file *file)
{
struct drm_i915_private *i915 = to_i915(dev);
struct drm_i915_gem_pwrite *args = data;
struct drm_i915_gem_object *obj;
int ret;
/* PWRITE is disallowed for all platforms after TGL-LP. This also
* covers all platforms with local memory.
*/
if (INTEL_GEN(i915) >= 12 && !IS_TIGERLAKE(i915))
return -EOPNOTSUPP;
if (args->size == 0)
return 0;
@@ -724,10 +752,6 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
if (ret)
goto err;
ret = i915_gem_object_pin_pages(obj);
if (ret)
goto err;
ret = -EFAULT;
/* We can only do the GTT pwrite on untiled buffers, as otherwise
* it would end up going through the fenced access, and we'll get
@@ -748,7 +772,6 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
ret = i915_gem_shmem_pwrite(obj, args);
}
i915_gem_object_unpin_pages(obj);
err:
i915_gem_object_put(obj);
return ret;
@@ -909,7 +932,11 @@ i915_gem_object_ggtt_pin_ww(struct drm_i915_gem_object *obj,
return ERR_PTR(ret);
}
ret = i915_vma_pin_ww(vma, ww, size, alignment, flags | PIN_GLOBAL);
if (ww)
ret = i915_vma_pin_ww(vma, ww, size, alignment, flags | PIN_GLOBAL);
else
ret = i915_vma_pin(vma, size, alignment, flags | PIN_GLOBAL);
if (ret)
return ERR_PTR(ret);
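The NULL-context fallback makes the _ww helper safe for both kinds of callers. A short usage sketch under that assumption:

struct i915_vma *vma;

/* Inside a ww transaction: the pin participates in the acquire
 * context and may return ERR_PTR(-EDEADLK), which the caller's
 * backoff-and-retry loop handles. */
vma = i915_gem_object_ggtt_pin_ww(obj, &ww, NULL, 0, 0, PIN_MAPPABLE);

/* One-off caller with no transaction: passing NULL now selects a
 * plain i915_vma_pin() instead of dereferencing a NULL context. */
vma = i915_gem_object_ggtt_pin_ww(obj, NULL, NULL, 0, 0, PIN_MAPPABLE);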
@@ -949,7 +976,7 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
if (!obj)
return -ENOENT;
err = mutex_lock_interruptible(&obj->mm.lock);
err = i915_gem_object_lock_interruptible(obj, NULL);
if (err)
goto out;
@@ -995,8 +1022,8 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
i915_gem_object_truncate(obj);
args->retained = obj->mm.madv != __I915_MADV_PURGED;
mutex_unlock(&obj->mm.lock);
i915_gem_object_unlock(obj);
out:
i915_gem_object_put(obj);
return err;
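This hunk is the clearest statement of the obj->mm.lock removal pattern: whatever the per-object mutex used to guard is now covered by the object's dma_resv lock. A condensed before/after sketch of just the locking calls:

/* Before: a dedicated mutex guarded obj->mm state. */
err = mutex_lock_interruptible(&obj->mm.lock);
if (err)
        goto out;
/* ... madv bookkeeping ... */
mutex_unlock(&obj->mm.lock);

/* After: the object lock (its dma_resv ww-mutex) is taken instead. */
err = i915_gem_object_lock_interruptible(obj, NULL);
if (err)
        goto out;
/* ... madv bookkeeping ... */
i915_gem_object_unlock(obj);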
@@ -1050,10 +1077,8 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
err_unlock:
i915_gem_drain_workqueue(dev_priv);
if (ret != -EIO) {
if (ret != -EIO)
intel_uc_cleanup_firmwares(&dev_priv->gt.uc);
i915_gem_cleanup_userptr(dev_priv);
}
if (ret == -EIO) {
/*
@@ -1110,7 +1135,6 @@ void i915_gem_driver_release(struct drm_i915_private *dev_priv)
intel_wa_list_free(&dev_priv->gt_wa_list);
intel_uc_cleanup_firmwares(&dev_priv->gt.uc);
i915_gem_cleanup_userptr(dev_priv);
i915_gem_drain_freed_objects(dev_priv);

drivers/gpu/drm/i915/i915_gem_gtt.c

@@ -44,7 +44,7 @@ int i915_gem_gtt_prepare_pages(struct drm_i915_gem_object *obj,
* the DMA remapper, i915_gem_shrink will return 0.
*/
GEM_BUG_ON(obj->mm.pages == pages);
} while (i915_gem_shrink(to_i915(obj->base.dev),
} while (i915_gem_shrink(NULL, to_i915(obj->base.dev),
obj->base.size >> PAGE_SHIFT, NULL,
I915_SHRINK_BOUND |
I915_SHRINK_UNBOUND));

Some files were not shown because too many files have changed in this diff.