As we never expose context objects directly to userspace, we can forgo
allocating a first-class GEM object for them and prefer to use the
limited resource of reserved/stolen memory for them. Note this means
that their initial contents are undefined.
However, a downside of using stolen objects for execlists is that we
cannot access the physical address directly (thanks MCH!) which prevents
their use.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The problem is we're going to switch to a new context, which could be
the default context. The plan was to use restore inhibit, which would be
fine, except if we are using dynamic page tables (which we will). If we
use dynamic page tables and we don't load new page tables, the previous
page tables might go away, and future operations will fault.
1. CTXA runs.
2. Switch to default, restore inhibit.
3. CTXA dies and has its address space taken away.
4. Run CTXB, which tries to save using context A's address space - this
fails.
The general solution is to make sure every context has its own state,
and its own address space. For cases when we must restore inhibit, the
first thing we do is load a valid address space. I thought this would be
enough, but apparently there are references within the context itself
which will refer to the old address space - therefore, we also must
reinitialize.
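An illustrative sketch of the resulting do_switch() flow; the helper and field names here are approximations based on the description above, not the literal patch:

static int do_switch_pd_sketch(struct intel_engine_cs *ring,
                               struct intel_context *to)
{
        u32 hw_flags = 0;
        int ret;

        /* Always load a valid address space first, even (especially)
         * when the restore will be inhibited. */
        if (to->ppgtt) {
                ret = to->ppgtt->switch_mm(to->ppgtt, ring);
                if (ret)
                        return ret;
        }

        /* The saved context image may still reference the old address
         * space, so an uninitialized context must be reinitialized
         * rather than restored. */
        if (!to->legacy_hw_ctx.initialized)
                hw_flags |= MI_RESTORE_INHIBIT;

        return mi_set_context(ring, to, hw_flags);
}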
v2: to->ppgtt is only valid in full ppgtt.
v3: Rebased.
v4: Make post PDP update clearer.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch was formerly known as, "Force pd restore when PDEs change,
gen6-7." I had to change the name because it is needed for GEN8 too.
The real issue this is trying to solve is when a new object is mapped
into the current address space. The GPU does not snoop the new mapping
so we must do the gen specific action to reload the page tables.
GEN8 and GEN7 do differ in the way they load page tables for the RCS.
GEN8 does so with the context restore, while GEN7 requires the proper
load commands in the command streamer. Non-render is similar for both.
Caveat for GEN7:
The docs say you cannot change the PDEs of a currently running context.
We never map new PDEs of a running context, and expect them to be
present - so I think this is okay. (We can unmap, but this should also
be okay since we only unmap unreferenced objects that the GPU shouldn't
be tryingto va->pa xlate.) The MI_SET_CONTEXT command does have a flag
to signal that even if the context is the same, force a reload. It's
unclear exactly what this does, but I have a hunch it's the right thing
to do.
The logic assumes that we always emit a context switch after mapping new
PDEs, and before we submit a batch. This is the case today, and has been
the case since the inception of hardware contexts. A note in the comment
lets the user know.
It's not just for gen8: if the current context's mappings change, we
need a context reload to switch to the new page tables.
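A sketch of the dirty tracking this describes, assuming the pd_dirty_rings bitmask on i915_hw_ppgtt mentioned in the v4 note below; illustrative, not the literal patch:

/* Called after new PDEs are allocated for an address space: every ring
 * must reload its page directories before the next batch it runs in
 * this ppgtt. */
static void mark_tlbs_dirty(struct i915_hw_ppgtt *ppgtt)
{
        ppgtt->pd_dirty_rings = INTEL_INFO(ppgtt->base.dev)->ring_mask;
}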
v2: Rebased after ppgtt clean up patches. Split the warning for aliasing
and true ppgtt options. And do not break aliasing ppgtt, where to->ppgtt
is always null.
v3: Invalidate PPGTT TLBs inside alloc_va_range.
v4: Rename ppgtt_invalidate_tlbs to mark_tlbs_dirty and move
pd_dirty_rings from i915_address_space to i915_hw_ppgtt. Fixes when
neither ctx->ppgtt and aliasing_ppgtt exist.
v5: Removed references to teardown_va_range.
v6: Updated needs_pd_load_pre/post.
v7: Fix pd_dirty_rings check in needs_pd_load_post, and update/move
comment about updated PDEs to object_pin/bind (Mika).
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In Gen8, PDPs are saved and restored with legacy contexts (legacy contexts
only exist on the render ring). So change the ordering of LRI vs MI_SET_CONTEXT
for the initialization of the context. Also the only cases in which we
need to manually update the PDPs are when MI_RESTORE_INHIBIT has been
set in MI_SET_CONTEXT (i.e. when the context is not yet initialized or
it is the default context).
Legacy submission is not available post GEN8, so it isn't necessary to
add extra checks for newer generations.
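Roughly, the post-switch condition described here looks like the following; the helper name follows the v4 note below and the exact checks are an approximation:

/* A manual PDP reload (via LRI) after MI_SET_CONTEXT is only needed
 * when the restore was inhibited; a normal restore brings the PDPs
 * back as part of the context image. */
static bool needs_pd_load_post(struct intel_engine_cs *ring,
                               struct intel_context *to, u32 hw_flags)
{
        if (!to->ppgtt)
                return false;

        if (INTEL_INFO(ring->dev)->gen < 8)
                return false;

        return hw_flags & MI_RESTORE_INHIBIT;
}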
v2: Use new functions to replace the logic right away (Daniel)
v3: Add missing pd load logic.
v4: Add warning in case pd_load_pre & pd_load_post are true, and add
missing trace_switch_mm. Cleaned up pd_load conditions. Add more
information about when pd_load_post is needed. (Mika)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v2+)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Work was getting left behind in LRC contexts during reset. This causes a hang
if the GPU is reset when HEAD==TAIL because the context's ringbuffer head and
tail don't get reset and retiring a request doesn't alter them, so the ring
still appears full.
Added a function intel_lr_context_reset() to reset head and tail on an LRC and
its ringbuffer.
Call intel_lr_context_reset() for each context in i915_gem_context_reset() when
in execlists mode.
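In outline, the new function does something like this; ringbuffer and context field names are assumptions based on the description above:

void intel_lr_context_reset(struct drm_device *dev,
                            struct intel_context *ctx)
{
        struct drm_i915_private *dev_priv = dev->dev_private;
        struct intel_engine_cs *ring;
        int i;

        for_each_ring(ring, dev_priv, i) {
                struct intel_ringbuffer *ringbuf =
                        ctx->engine[ring->id].ringbuf;

                if (!ringbuf)
                        continue;

                /* Forget any work left behind so the ring no longer
                 * appears full after the reset. */
                ringbuf->head = 0;
                ringbuf->tail = 0;
        }
}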
Testcase: igt/pm_rps --run-subtest reset #bdw
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88096
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
[danvet: Flatten control flow in the lrc reset code a notch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Conflicts:
drivers/gpu/drm/i915/intel_runtime_pm.c
Separate branch so that Takashi can also pull just this refactoring
into sound-next.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Sometimes we wish to tweak how an individual context behaves. Since we
always create a context for every filp, this means that individual
processes can fine tune their behaviour even if they do not explicitly
create a context.
The first example parameter here is to enable multi-process GPU testing,
but the interface should be able to cope with passing arbitrarily complex
parameters.
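For illustration, a userspace sketch of setting the first such parameter (the ban period) through the new get/setparam interface; the uapi names and include path are assumptions based on this series:

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static int context_set_ban_period(int drm_fd, uint32_t ctx_id,
                                  uint64_t period_secs)
{
        struct drm_i915_gem_context_param p = {
                .ctx_id = ctx_id,
                .param = I915_CONTEXT_PARAM_BAN_PERIOD,
                .value = period_secs,
        };

        return ioctl(drm_fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &p);
}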
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Testcase: igt/gem_reset_stats/ban-period-*
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This will allow us to set per-file, or even per-context, periods in the
future.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
There exists a current workaround to prevent a hang on context switch
should the ring go to sleep in the middle of the restore,
WaProgramMiArbOnOffAroundMiSetContext (applicable to all gen7+). In
spite of disabling arbitration (which prevents the ring from powering
down during the critical section) we were still hitting hangs that had
the hallmarks of the known erratum. That is, we are still seeing hangs
"on the last instruction in the context restore". By comparing -nightly
(broken) with requests (working), we were able to deduce that it was the
semaphore LRI cross-talk that reproduced the original failure. The key
was that requests implemented deferred semaphore signalling, and
disabling that, i.e. emitting the semaphore signal to every other ring
after every batch restored the frequent hang. Explicitly disabling PSMI
sleep on the RCS ring was insufficient, all the rings had to be awake to
prevent the hangs. Fortunately, we can reduce the wakelock to the
MI_SET_CONTEXT operation itself, and so should be able to limit the extra
power implications.
Since the MI_ARB_ON_OFF workaround is listed for all gen7 and above
products, we should apply this extra hammer for all of the same
platforms, even though so far we have only been able to reproduce the
hang on certain ivb and hsw models. The last question is whether we want
to always use the extra hammer or only when we know semaphores are in
operation. At the moment, we only use LRI on non-RCS rings for
semaphores, but that may change in the future with the possibility of
reintroducing this bug under subtle conditions.
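Schematically, the extension emits PSMI sleep-disable LRIs for every other ring just inside the existing MI_ARB_ON_OFF bracket around MI_SET_CONTEXT. A helper-style sketch follows; register and bit names are assumptions based on the workaround described, not the literal patch:

static void emit_psmi_sleep_lris(struct intel_engine_cs *ring, bool disable)
{
        struct drm_i915_private *dev_priv = ring->dev->dev_private;
        const int num_rings = hweight32(INTEL_INFO(ring->dev)->ring_mask) - 1;
        struct intel_engine_cs *signaller;
        int i;

        if (num_rings <= 0)
                return;

        /* Keep (or allow) the other rings awake across the restore by
         * toggling their PSMI sleep messages. */
        intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_rings));
        for_each_ring(signaller, dev_priv, i) {
                if (signaller == ring)
                        continue;
                intel_ring_emit(ring, RING_PSMI_CTL(signaller->mmio_base));
                intel_ring_emit(ring, disable ?
                                _MASKED_BIT_ENABLE(GEN6_PSMI_SLEEP_MSG_DISABLE) :
                                _MASKED_BIT_DISABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
        }
}

In this sketch it would be called with disable=true right after MI_ARB_DISABLE and with disable=false right before MI_ARB_ENABLE, so the wakelock covers only the MI_SET_CONTEXT itself.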
v2: Make it explicit that the PSMI LRI are an extension to the original
workaround for the other rings.
v3: Bikeshedding variable names and whitespacing
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=80660
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=83677
Cc: Simon Farnsworth <simon@farnz.org.uk>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: Peter Frühberger <fritsch@xbmc.org>
Reviewed-by: Daniel Vetter <daniel@ffwll.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Things like reliable GGTT mappings and mirrored 2d-on-3d display will need
to map objects into the same address space multiple times.
Added a GGTT view concept and linked it with the VMA to distinguish between
multiple instances per address space.
New objects and GEM functions which do not take this new view as a parameter
assume the default of zero (I915_GGTT_VIEW_NORMAL) which preserves the
previous behaviour.
This now means that objects can have multiple VMA entries, so the code which
assumed there would only be one also had to be modified.
Alternative GGTT views are supposed to borrow DMA addresses from obj->pages
which is DMA mapped on first VMA instantiation and unmapped on the last one
going away.
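To make the view concept concrete, a rough sketch of how the view type becomes part of a VMA's identity; names follow the description above, only the normal view exists at this point, and the lookup shown is an approximation:

enum i915_ggtt_view_type {
        I915_GGTT_VIEW_NORMAL = 0,
};

struct i915_ggtt_view {
        enum i915_ggtt_view_type type;
        /* Alternative views keep their own sg table, borrowing DMA
         * addresses from obj->pages. */
        struct sg_table *pages;
};

/* Lookup now has to match both the address space and the view. */
struct i915_vma *
i915_gem_obj_to_ggtt_view(struct drm_i915_gem_object *obj,
                          const struct i915_ggtt_view *view)
{
        struct i915_vma *vma;

        list_for_each_entry(vma, &obj->vma_list, vma_link)
                if (i915_is_ggtt(vma->vm) &&
                    vma->ggtt_view.type == view->type)
                        return vma;

        return NULL;
}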
v2:
* Removed per view special casing in i915_gem_ggtt_prepare /
finish_object in favour of creating and destroying DMA mappings
on first VMA instantiation and last VMA destruction. (Daniel Vetter)
* Simplified i915_vma_unbind which does not need to count the GGTT views.
(Daniel Vetter)
* Also moved obj->map_and_fenceable reset under the same check.
* Checkpatch cleanups.
v3:
* Only retire objects once the last VMA is unbound.
v4:
* Keep scatter-gather table for alternative views persistent for the
lifetime of the VMA.
* Propagate binding errors to callers and handle appropriately.
v5:
* Explicitly look for normal GGTT view in i915_gem_obj_bound to align
usage in i915_gem_object_ggtt_unpin. (Michel Thierry)
* Change to single if statement in i915_gem_obj_to_ggtt. (Michel Thierry)
* Removed stray semi-colon in i915_gem_object_set_cache_level.
For: VIZ-4544
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
[danvet: Drop hunk from i915_gem_shrink since it's just prettification
but upsets a __must_check warning.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Similar to a patch from Thomas Daniel for lrc contexts. This keeps
both sides somewhat in sync and should make Dave Gordon happy.
Note that both the wa and the golden context init code suffer a bit
from an insufficient split into driver load and hw init code, which
means we have a bunch of tests all over the place to check whether the
one-time initialization has been done already or not.
All that one-time code should be moved into the one-time ring setup
code, but that's work for later.
Cc: Dave Gordon <david.s.gordon@intel.com>
Cc: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A previous commit introduced engine init changes:
commit 372ee59699d9 ("drm/i915: Only init engines once")
This broke execlists as intel_lr_context_render_state_init was trying to emit
commands to the RCS for the default context before the ring->init_hw was called.
Made a new gen8_init_rcs_context function and assigned it to the render ring's
init_context. Moved the call to intel_logical_ring_workarounds_emit into
gen8_init_rcs_context to maintain previous functionality.
Moved call to render_state_init from lr_context_deferred_create into
gen8_init_rcs_context, and modified deferred_create to call ring->init_context
for non-default contexts.
Modified i915_gem_context_enable to call ring->init_context for the default
context.
So init_context will now always be called when the hw is ready - in
i915_gem_context_enable for the default context and in lr_context_deferred_create
for other contexts.
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The ring member of the object structure was always updated with the
last_read_seqno member. Thus with the conversion to last_read_req, obj->ring is
now a direct copy of obj->last_read_req->ring. This makes it somewhat redundant
and potentially misleading (especially as there was no comment to explain its
purpose).
This checkin removes the redundant field. Many uses were simply testing for
non-null to see if the object is active on the GPU. Some of these have been
converted to check 'obj->active' instead. Others (where the last_read_req is
about to be used anyway) have been changed to check obj->last_read_req. The rest
simply pull the ring out from the request structure and proceed as before.
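For illustration, the conversion boils down to something like this hypothetical helper (not code from the patch):

/* obj->ring used to cache this; now derive it from the request when it
 * is actually needed, and use obj->active for the simple "is it busy"
 * checks. */
static inline struct intel_engine_cs *
i915_gem_object_last_read_ring(struct drm_i915_gem_object *obj)
{
        return obj->last_read_req ? obj->last_read_req->ring : NULL;
}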
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Following the legacy ring submission example, update the
ring->init_context() hook to support the execlist submission mode.
v2: update to use the new workaround macros and cleanup unused code.
This takes care of both bdw and chv workarounds.
v2.1: Add missing call to init_context() during deferred context creation.
v3: Split init_context (emit) in legacy/lrc modes. For lrc, get the ringbuf
from the context (Mika/Daniel).
v4: Merge init_context interfaces back, the legacy mode only needs the ring,
but the lrc mode needs the ring and context (Mika).
Issue: VIZ-4092
Issue: GMIN-3475
Change-Id: Ie3d093b2542ab0e2a44b90460533e2f979788d6c
Cc: Deepak S <deepak.s@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
[danvet: Align function paramater lists properly.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
- ppgtt init/release: these tracepoints are useful for observing the
creation and destruction of Full PPGTTs.
- ctx create/free: we can use the ctx_free trace in combination with the
ppgtt_release one to be sure that the ppgtt doesn't stay alive for too
long after the ctx is destroyed. ctx_create is there for symmetry
- switch_mm: important point in the lifetime of the vm
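As a rough sketch of the switch_mm tracepoint, with the field selection here being illustrative rather than the exact tracepoint added:

TRACE_EVENT(switch_mm,
        TP_PROTO(struct intel_engine_cs *ring, struct intel_context *to),
        TP_ARGS(ring, to),
        TP_STRUCT__entry(
                __field(u32, ring_id)
                __field(u32, ctx)
        ),
        TP_fast_assign(
                __entry->ring_id = ring->id;
                __entry->ctx = to->user_handle;
        ),
        TP_printk("ring=%u, ctx=%u", __entry->ring_id, __entry->ctx)
);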
v4: add DOC information
v5: pull the DOC in drm.tmpl
v6: clean ppgtt init/release traces + add ctx create/free and switch_mm
tracepoints (Chris)
v7: drop execlist_submit_context tracepoint
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If these flags are on the object level it will be more difficult to allow
for multiple VMAs per object.
v2: Simplification and cleanup after code review comments (Chris Wilson).
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For BDW workarounds are currently initialized in init_clock_gating() but
they are lost during reset, suspend/resume etc; this patch moves the WAs
that are part of register state context to render ring init fn otherwise
default context ends up with incorrect values as they don't get initialized
until init_clock_gating fn.
v2: Add workarounds to golden render state
This method has its own issues: first of all, it is different for
each gen and it is generated using a tool, so adding new workarounds
and maintaining them across gens is not a straightforward process.
v3: Use LRIs to emit these workarounds (Ville)
Instead of modifying the golden render state the same LRIs are
emitted from within the driver.
v4: Use abstract name when exporting gen specific routines (Chris)
For: VIZ-4092
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
These two functions make no sense in an Logical Ring Context & Execlists
world.
v2: We got rid of lrc_enabled and centralized everything in the sanitized
i915.enable_execlists instead.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
v3: Rebased. Corrected a typo in comment for i915_switch_context and
added a comment that it should not be called in execlist mode. Added
WARN_ON if i915_switch_context is called in execlist mode. Moved check
for execlist mode out of i915_switch_context and into callers. Added
comment in context_reset explaining why nothing is done in execlist
mode.
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
[danvet: Simplify the patch subject so I can understand it.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch is to address Daniel's concerns over different code during reset:
http://lists.freedesktop.org/archives/intel-gfx/2014-June/047758.html
"The reason for aiming as hard as possible to use the exact same code for
driver load, gpu reset and runtime pm/system resume is that we've simply
seen too many bugs due to slight variations and unintended omissions."
Tested using igt drv_hangman.
V2: Cleaner way of preventing check_wedge returning -EAGAIN
V3: Clean the last_context during reset, to ensure do_switch() does the MI_SET_CONTEXT. As per review.
Signed-off-by: McAulay, Alistair <alistair.mcaulay@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
[danvet: Rebase over ctx->ppgtt rework and extend the comment in
check_wedge a bit.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that all the flow is streamlined the rule is simple: We create
a new ppgtt for a new context when we have full ppgtt enabled.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
There's a bit of confusion since we track the global gtt,
the aliasing and real ppgtt in the ctx->vm pointer. And not
all callers really bother to check for the different cases and just
presume that it points to a real ppgtt.
Now looking closely we don't actually need ->vm to always point at an
address space - the only place that cares actually has fixup code
already to decide whether to look at the per-process or the global
address space.
So switch to just tracking the ppgtt directly and ditch all the
extraneous code.
v2: Fixup the ppgtt debugfs file to not oops on a NULL ctx->ppgtt.
Also drop the early exit - without aliasing ppgtt we want to dump all
the ppgtts of the contexts if we have full ppgtt.
v3: Actually git add the compile fix.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Cc: "Thierry, Michel" <michel.thierry@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
OTC-Jira: VIZ-3724
[danvet: Resolve conflicts with execlist patches while applying.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Stuffing this into the context setup code doesn't make a lot of sense.
Also reusing the real ppgtt setup code makes even less sense since the
aliasing ppgtt isn't a real address space. Leaving all that stuff
uninitialized will make sure that we catch any abusers promptly.
This is also a prep work to clean up the context->ppgtt link.
v2: Fix up the logic fail, I fumbled it so badly that it completely
disabled ppgtt on gen6. Spotted by Ville and Michel. Also move around
the pde write into the gen6 init function, since otherwise it won't
work at all.
v3: Only initialize the aliasing ppgtt when we actually enable it.
Cc: "Thierry, Michel" <michel.thierry@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
[danvet: Squash in fixup from Fengguang Wu.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Currently we abuse the aliasing ppgtt to set up the ppgtt support in
general. Which is a bit backwards since with full ppgtt we don't ever
need the aliasing ppgtt.
So untangle this and separate the ppgtt init from the aliasing
ppgtt. While at it drag it out of the context enabling (which just
does a switch to the default context).
Note that we still have the differentiation between synchronous and
asynchronous ppgtt setup, but that will soon vanish. So also correctly
wire up the return value handling to be prepared for when ->switch_mm
drops the synchronous parameter and could start to fail.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Hardware contexts reference a ppgtt, not the other way round. And the
only user of this (in debugfs) actually only cares about which file
the ppgtt is associated with. So give it what it wants.
While at it give the ppgtt create function a proper name&place.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
So when reviewing Michel's patch I've noticed a few things and cleaned
them up:
- The early checks in ppgtt_release are now redundant: The inactive
list should always be empty now, so we can ditch these checks. Even
for the aliasing ppgtt (though that's a different confusion) since
we tear that down after all the objects are gone.
- The ppgtt handling functions are splattered all over. Consolidate
them in i915_gem_gtt.c, give them OCD prefixes and add wrappers for
get/put.
- There was a bit of confusion in ppgtt_release about whether it cares
about the active or inactive list. It should care about them both,
so augment the WARNINGs to check for both.
There's still create_vm_for_ctx left to do, but that is blocked on the
removal of ppgtt->ctx. Once that's done we can rename it to
i915_ppgtt_create and move it to its siblings for handling ppgtts.
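A minimal sketch of the intended get/put wrappers, assuming the ppgtt grows a kref named ref and a release callback; names follow the description above, not the literal code:

void i915_ppgtt_release(struct kref *kref);

static inline void i915_ppgtt_get(struct i915_hw_ppgtt *ppgtt)
{
        if (ppgtt)
                kref_get(&ppgtt->ref);
}

static inline void i915_ppgtt_put(struct i915_hw_ppgtt *ppgtt)
{
        if (ppgtt)
                kref_put(&ppgtt->ref, i915_ppgtt_release);
}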
v2: Move the ppgtt checks into the inline get/put functions as
suggested by Chris.
v3: Inline the now redundant ppgtt local variable.
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
VMAs should take a reference of the address space they use.
Now, when the fd is closed, it will release the ref that the context was
holding, but it will still be referenced by any vmas that are still
active.
ppgtt_release() should then only be called when the last thing referencing
it releases the ref, and it can just call the base cleanup and free the
ppgtt.
Note that with this we will extend the lifetime of ppgtts which
contain shared objects. But all the non-shared objects will get
removed as soon as they drop off the active list, and for the shared
ones the shrinker can eventually reap them. Since we currently can't
evict ppgtt pagetables either I don't think that temporary leak is
important.
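Roughly, the release path then reduces to the following; this assumes kref-based get/put wrappers and that base.cleanup() tears down the page tables, so treat it as a sketch rather than the literal code:

static void i915_ppgtt_release(struct kref *kref)
{
        struct i915_hw_ppgtt *ppgtt =
                container_of(kref, struct i915_hw_ppgtt, ref);

        /* No vma or context may still reference us at this point. */
        ppgtt->base.cleanup(&ppgtt->base);
        kfree(ppgtt);
}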
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
[danvet: Add note about potential ppgtt leak with this approach.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Allocate and populate the default LRC for every ring, call
gen-specific init/cleanup, init/fini the command parser and
set the status page (now inside the LRC object). These are
things all engines/rings have in common.
Stopping the ring before cleanup and initializing the seqnos
is left as a TODO task (we need more infrastructure in place
before we can achieve this).
v2: Check the ringbuffer backing obj for ring_is_initialized,
instead of the context backing obj (similar, but not exactly
the same).
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The backing objects and ringbuffers for contexts created via open
fd are actually empty until the user starts sending execbuffers to
them. At that point, we allocate & populate them. We do this because,
at create time, we really don't know which engine is going to be used
with the context later on (and we don't want to waste memory on
objects that we might never use).
v2: As contexts created via ioctl can only be used with the render
ring, we have enough information to allocate & populate them right
away.
v3: Defer the creation always, even with ioctl-created contexts, as
requested by Daniel Vetter.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that we have the ability to allocate our own context backing objects
and we have multiplexed one of them per engine inside the context structs,
we can finally allocate and free them correctly.
Regarding the context size, reading the register to calculate the sizes
can work, I think, however the docs are very clear about the actual
context sizes on GEN8, so just hardcode that and use it.
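For reference, the hardcoded sizes end up looking something like this; the macro and helper names are assumptions modelled on the description above:

#define GEN8_LR_CONTEXT_RENDER_SIZE     (20 * PAGE_SIZE)
#define GEN8_LR_CONTEXT_OTHER_SIZE      (2 * PAGE_SIZE)

static uint32_t get_lr_context_size(struct intel_engine_cs *ring)
{
        switch (ring->id) {
        case RCS:
                return GEN8_LR_CONTEXT_RENDER_SIZE;
        default:
                return GEN8_LR_CONTEXT_OTHER_SIZE;
        }
}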
v2: Rebased on top of the Full PPGTT series. It is important to notice
that at this point we have one global default context per engine, all
of them using the aliasing PPGTT (as opposed to the single global
default context we have with legacy HW contexts).
v3:
- Go back to one single global default context, this time with multiple
backing objects inside.
- Use different context sizes for non-render engines, as suggested by
Damien (still hardcoded, since the information about the context size
registers in the BSpec is, well, *lacking*).
- Render ctx size is 20 (or 19) pages, but not 21 (caught by Damien).
- Move default context backing object creation to intel_init_ring (so
that we don't waste memory in rings that might not get initialized).
v4:
- Reuse the HW legacy context init/fini.
- Create a separate free function.
- Rename the functions with an intel_ prefix.
v5: Several rebases to account for the changes in the previous patches.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net> (v1)
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For the moment this is just a placeholder, but it shows one of the
main differences between the good ol' HW contexts and the shiny
new Logical Ring Contexts: LR contexts allocate and free their
own backing objects. Another difference is that the allocation is
deferred (as the create function name suggests), but that does not
happen in this patch yet, because for the moment we are only dealing
with the default context.
Early in the series we had our own gen8_gem_context_init/fini
functions, but the truth is they now look almost the same as the
legacy hw context init/fini functions. We can always split them
later if this ceases to be the case.
Also, we do not fall back to legacy ringbuffers when logical ring
context initialization fails (not very likely to happen and, even
if it does, hw contexts would probably fail as well).
v2: Daniel says "explain, do not showcase".
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: s/BUG_ON/WARN_ON/.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The comment [which was mine] is wrong. The context object can never be
bound in a PPGTT because it is only capable of living in the Global GTT.
So, remove the comment, and reorder the unref. What's nice about the
latter is it keeps the context object alive past the PPGTT. This makes
the destroy ordering symmetric with the creation ordering.
Create:
1. Create context
2. Create PPGTT
Destroy:
1. Destroy PPGTT
2. Destroy context
As far as I know, this does not fix a bug. The code previously kept the
context data structure, only the object was gone. As the code was,
nothing tried to use the object after this point.
NOTE: If in the future we have cases where the PPGTT can/should outlive
the context (which doesn't occur today, but the code permits it), this
ordering does not matter. Even if this occurs, as it stands now, we do
not expect that to be the normal case, and having this order makes
debugging a bit easier if we're tracking object lifetimes for the
context vs. the ppgtt.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Resolve conflict with Oscar's execlist prep patches.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is an Execlists preparatory patch, since they make context ID become an
overloaded term:
- In the software, it was used to distinguish which context userspace was
trying to use.
- In the BSpec, the term is used to describe the 20-bit field the
hardware uses to discriminate the contexts that are submitted to
the ELSP and inform the driver about their current status (via Context
Switch Interrupts and Context Status Buffers).
Initially, I tried to make the different meanings converge, but it proved
impossible:
- The software ctx->id is per-filp, while the hardware one needs to be
globally unique.
- Also, we multiplex several backing states objects per intel_context,
and all of them need unique HW IDs.
- I tried adding a per-filp ID and then composing the HW context ID as:
ctx->id + file_priv->id + ring->id, but the fact that the hardware only
uses 20 bits means we have to artificially limit the number of filps or
contexts the userspace can create.
The ctx->user_handle renaming bits are done with this Cocci patch (plus
manual frobbing of the struct declaration):
@@
struct intel_context c;
@@
- (c).id
+ c.user_handle
@@
struct intel_context *c;
@@
- (c)->id
+ c->user_handle
Also, while we are at it, s/DEFAULT_CONTEXT_ID/DEFAULT_CONTEXT_HANDLE and
change the type to unsigned 32 bits.
v2: s/handle/user_handle and change the type to uint32_t as suggested by
Chris Wilson.
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> (v1)
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We have already mentioned that Logical Ring Contexts have their own kind
of backing objects, but everything will be better explained in the Execlists
series. For now, suffice it to say that the current backing object is only
ever used with the render ring, so we're making this fact more explicit
(which is a good reason on its own).
As for the is_initialized flag, we only use it to signify that the render state
has been initialized (a.k.a. golden context, a.k.a. null context). It doesn't
mean anything for the other engines, so make that distinction obvious.
Done with the following Coccinelle patch (plus manual frobbing of the struct):
@@
struct intel_context c;
@@
- (c).obj
+ c.legacy_hw_ctx.rcs_state
@@
struct intel_context *c;
@@
- (c)->obj
+ c->legacy_hw_ctx.rcs_state
@@
struct intel_context c;
@@
- (c).is_initialized
+ c.legacy_hw_ctx.initialized
@@
struct intel_context *c;
@@
- (c)->is_initialized
+ c->legacy_hw_ctx.initialized
This Execlists prep-work patch has been suggested by Chris Wilson and Daniel
Vetter separately.
Initially, it was two separate patches:
drm/i915: Rename ctx->obj to ctx->rcs_state
drm/i915: Make it obvious that ctx->id is merely a user handle
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: s/id/is_initialized/ to fix the subject and resolve a
conflict in i915_gem_context_reset. Also introduce a new lctx local
variable to avoid overtly long lines.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is preparatory work for Execlists: we plan to use it later to
allocate our own context objects (since Logical Ring Contexts do
not have the same kind of backing objects).
No functional changes.
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We're forgetting to unpin the last_context from the ggtt at GPU reset
time. This leads to the vma pin_count leaking at every reset if the
last context wasn't the ring default context. Further use of the same
context will trigger the pin_count check in i915_gem_object_pin() and
userspace will be faced with EBUSY as a result.
This plagues kms_flip rather badly since it performs lots of resets,
and every fd has its own default context these days.
Fix the problem by properly unpinning the last context at reset.
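The fix, in outline (a sketch of the per-ring reset path, using the legacy_hw_ctx naming from this series):

        /* In i915_gem_context_reset(), for each ring: */
        struct intel_context *lctx = ring->last_context;

        if (lctx) {
                /* Drop the pin taken on the context state when it was
                 * switched to, then forget it so the next do_switch()
                 * pins and emits MI_SET_CONTEXT afresh. */
                if (lctx->legacy_hw_ctx.rcs_state && ring->id == RCS)
                        i915_gem_object_ggtt_unpin(lctx->legacy_hw_ctx.rcs_state);

                i915_gem_context_unreference(lctx);
                ring->last_context = NULL;
        }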
This regression seems to go back to
commit acce9ffa48
Author: Ben Widawsky <ben@bwidawsk.net>
Date: Fri Dec 6 14:11:03 2013 -0800
drm/i915: Better reset handling for contexts
Testcase: igt/gem_ctx_exec/reset-pin-leak
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Merge tag 'v3.16-rc4' into drm-intel-next-queued
Due to Dave's vacation drm-next hasn't opened yet for 3.17 so I
couldn't move my drm-intel-next queue forward yet like I usually do.
Just pull in the latest upstream -rc to unblock patch merging - I
don't want to needlessly rebase my current patch pile really and void
all the testing we've done already.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Fallout from
commit 46470fc932
Author: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Date: Wed May 21 19:01:06 2014 +0300
drm/i915: Add null state batch to active list
undid the earlier fix of only marking the ctx as initialised after it is
saved by the hardware during a SET_CONTEXT operation:
commit ad1d219974
Author: Ben Widawsky <benjamin.widawsky@intel.com>
Date: Sat Dec 28 13:31:49 2013 -0800
drm/i915: set ctx->initialized only after RCS
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Damien Lespiau <damien.lespiau@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[Jani: add reference to the earlier fix in the commit message.]
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
The original commit that introduced it said:
commit 0009e46cd5
Author: Ben Widawsky <ben@bwidawsk.net>
Date: Fri Dec 6 14:11:02 2013 -0800
drm/i915: Track which ring a context ran on
Previously we dropped the association of a context to a ring. It is
however very important to know which ring a context ran on (we could
have reused the other member, but I was nitpicky).
This is very important when we switch address spaces, which unlike
context objects, do change per ring.
As an example, if we have:
        RCS     BCS
ctx A
ctx A
ctx B
        ctx B
Without tracking the last ring B ran on, we wouldn't know to switch the
address space on BCS in the last row.
But this is not really true, because we are already checking to != from (with
"from" being = ring->last_context) and that should be enough to make sure we
switch to the right address space.
We would have a problem if we switched the context object for every ring (since
then we would fail to do it in some situations) but we only switch it for the
render ring, so we don't care.
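That per-ring check is, roughly, the early-out in do_switch():

        struct intel_context *from = ring->last_context;

        /* Nothing to do if this ring is already running the target
         * context (modulo L3 slice remapping). */
        if (from == to && !to->remap_slice)
                return 0;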
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It's barely alive now anyway, so give it the "coup de grâce".
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Up until now, contexts had one (and only one) backing object that was
used by the hardware to save/restore render ring contexts (via the
MI_SET_CONTEXT command). Other rings did not have or need this, so
our i915_hw_context struct had a 1:1 relationship with a real HW
context.
With Logical Ring Contexts and Execlists, this is not possible anymore:
all rings need a backing object, and it cannot be reused. To prepare
for that, rename our contexts to the more generic term intel_context.
No functional changes.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In the upcoming patches we plan to break the correlation between
engine command streamers (a.k.a. rings) and ringbuffers, so it
makes sense to refactor the code and make the change obvious.
No functional changes.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
for proper refcounting to take place as we use
i915_add_request() for it.
i915_add_request() also takes the context for the request
from ring->last_context so move the null state batch
submission after the ring context has been set.
v2: we need to check for correct ring now (Ville Syrjälä)
v3: no need to expose i915_gem_move_object_to_active (Chris Wilson)
v4: cargoculted vma/active/inactive error handling removed (Chris Wilson)
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We implement the following workarounds:
* WaDisableAsyncFlipPerfMode:chv
* WaProgramMiArbOnOffAroundMiSetContext:chv
v2: Drop WaDisableSemaphoreAndSyncFlipWait note
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Since commit 691e6415c8
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Apr 9 09:07:36 2014 +0100
drm/i915: Always use kref tracking for all contexts.
we have contexts everywhere, and so we must be careful to distinguish
fake contexts, which do not have an associated bo, from real ones, which
do. In particular, we now need to be careful not to dereference NULL
pointers.
This is one such example, as the commit highlighted above failed to move
the unpinning of the default ctx object into the real-context-only
branch.
Reported-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=78792
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
HW guys say that it is not a cool idea to let the device
go into rc6 without proper 3d pipeline state.
For each new uninitialized context, generate a
valid null render state to be run on context
creation.
This patch introduces a skeleton with empty states.
v2: - No need to vmap (Chris Wilson)
- use .c files for state (Daniel Vetter)
- no need to flush as i915_add_request does it
- remove parameter for batch alloc size
- don't wait for the init (Ben Widawsky)
v3: - move to cpu/gpu (Chris Wilson)
Tested-by: Kristen Carlson Accardi <kristen@linux.intel.com> (v1)
Tested-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
drm-intel-next-2014-04-16:
- vlv infoframe fixes from Jesse
- dsi/mipi fixes from Shobhit
- gen8 pageflip fixes for LRI/SRM from Damien
- cmd parser fixes from Brad Volkin
- some prep patches for CHV, DRRS, ...
- and tons of little things all over
drm-intel-next-2014-04-04:
- cmd parser for gen7 but only in enforcing and not yet granting mode - the
batch copying stuff is still missing. Also performance is a bit ... rough
(Brad Volkin + OACONTROL fix from Ken).
- deprecate UMS harder (i.e. CONFIG_BROKEN)
- interrupt rework from Paulo Zanoni
- runtime PM support for bdw and snb, again from Paulo
- a pile of refactorings from various people all over the place to prep for new
stuff (irq reworks, power domain polish, ...)
Conflicts:
drivers/gpu/drm/i915/i915_gem_context.c
If we always initialize kref for the context, even if we are using fake
contexts for hangstats when there is no hw support, we can forgo the
dance to dereference the ctx->obj and inspect whether we are permitted
to use kref inside i915_gem_context_reference() and _unreference().
My ulterior motive here is to improve the debugging of a use-after-free
of ctx->obj. This patch avoids the dereference here and instead forces
the assertion checks associated with kref.
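With kref always initialized, the helpers reduce to plain kref operations; a minimal sketch, using the struct and free-callback names as they were before the later intel_context rename:

static inline void i915_gem_context_reference(struct i915_hw_context *ctx)
{
        kref_get(&ctx->ref);
}

static inline void i915_gem_context_unreference(struct i915_hw_context *ctx)
{
        kref_put(&ctx->ref, i915_gem_context_free);
}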
v2: Refactor the fake contexts to being even more like the real
contexts, so that there is much less duplicated and special case code.
v3: Tweaks.
v4: Tweaks, minor.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=76671
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: lu hua <huax.lu@intel.com>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[Jani: tiny change to backport to drm-intel-fixes.]
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
We don't do CPU access to GPU contexts so making the GPU access snoop
the CPU caches seems silly, and potentially expensive.
v2: Use !IS_VALLEYVIEW instead of HAS_LLC as this is really
about what the PTEs can represent.
Add a comment clarifying the situation.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We have been setting the bit which was originally BIOS dependent since:
commit f05bb0c7b6
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Sun Jan 20 16:33:32 2013 +0000
drm/i915: GFX_MODE Flush TLB Invalidate Mode must be '1' for scanline waits
Therefore, we do not need to try to figure it out dynamically and we can
just always invalidate the TLBs.
It's a partial revert of:
commit 12b0286f49
Author: Ben Widawsky <ben@bwidawsk.net>
Date: Mon Jun 4 14:42:50 2012 -0700
drm/i915: possibly invalidate TLB before context switch
The original commit attempted to only invalidate when necessary
(very much a relic from the old days). Now, we can just always invalidate.
I guess the old TODO still exists. Since we seem to have abandoned ILK
contexts however, there isn't much point in even remembering.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>