Commit Graph

480 Commits

Author SHA1 Message Date
Chris Wilson d6f328bfeb drm/i915: Hack and slash, throttle execbuffer hogs
Apply backpressure to hogs that emit requests faster than the GPU can
process them by waiting for their ring to be less than half-full before
proceeding with taking the struct_mutex.

This is a gross hack to apply throttling backpressure, the long term
goal is to remove the struct_mutex contention so that each client
naturally waits, preferably in an asynchronous, nonblocking fashion
(pipelined operations for the win), for their own resources and never
blocks another client within the driver at least. (Realtime priority
goals would extend to ensuring that resource contention favours high
priority clients as well.)

This patch only limits excessive request production and does not attempt
to throttle clients that block waiting for eviction (either global GTT or
system memory) or any other global resources, see above for the long term
goal.

No microbenchmarks are harmed (to the best of my knowledge).
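
A rough sketch of the backpressure idea (the waitqueue and helper name are
illustrative, not the exact code in the patch):

/*
 * Sketch: before taking struct_mutex for execbuf, wait until this
 * client's ring is less than half full, so a request-spamming client
 * stalls on its own ring rather than on the shared lock.
 */
static int eb_throttle_ring(struct intel_ring *ring)
{
        /* ring->space grows back as the GPU retires older requests */
        return wait_event_interruptible(ring->space_wq, /* hypothetical wq */
                                        ring->space >= ring->size / 2);
}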

Testcase: igt/gem_exec_schedule/pi-ringfull-*
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190207071829.5574-1-chris@chris-wilson.co.uk
2019-02-07 16:13:21 +00:00
Tvrtko Ursulin 26a11deea6 drm/i915/pmu: Fix enable count array size and bounds checking
Enable count array is supposed to have one counter for each possible
engine sampler. As such, the array sizing and bounds checking are not correct
and would blow up the asserts if more samplers were added.

No ill effects in the current code base, but let's fix it for correctness.

At the same time tidy the assert for readability and robustness.

v2:
 * One check per assert. (Chris Wilson)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Fixes: b46a33e271 ("drm/i915/pmu: Expose a PMU interface for perf queries")
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190205130353.21105-1-tvrtko.ursulin@linux.intel.com
2019-02-06 10:22:47 +00:00
Chris Wilson 3d7a64b992 drm/i915: Allow normal clients to always preempt idle priority clients
When first enabling preemption, we hesitated to make it a free-for-all
where every higher priority client would force a preempt-to-idle cycle
and take over from all lower priority clients. We hesitated because we
were uncertain just how well preemption would work in practice, whether
the preemption latency itself would detract from the latency gains for
higher priority tasks and whether it would work at all. Since
introducing preemption, we have been enabling it for more common tasks,
even giving normal clients a small preemptive boost when they first
start (to aid fairness and improve interactivity). Now let's take one
step further and give permission for all normal (priority:0) clients to
preempt any idle (priority:<0) task so that users running long compute
jobs do not overly impact other jobs (i.e. their desktop) and the system
remains responsive under such idle loads.
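
In effect, the policy reduces to a comparison like the following (a
simplified sketch, not the driver's actual helper):

/* Sketch: a normal (priority 0) request may preempt idle-priority work. */
static bool may_preempt(int active_prio, int new_prio)
{
        /* idle work is queued below I915_PRIORITY_NORMAL (0) */
        if (active_prio < 0 && new_prio >= 0)
                return true;

        /* otherwise only a strictly higher priority preempts */
        return new_prio > active_prio;
}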

References: f6322eddaf ("drm/i915/preemption: Allow preemption between submission ports")
References: b16c765122 ("drm/i915: Priority boost for new clients")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: "Bloomfield, Jon" <jon.bloomfield@intel.com>
Cc: "Stead, Alan" <alan.stead@intel.com>
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190204084116.3013-1-chris@chris-wilson.co.uk
2019-02-04 09:47:28 +00:00
Chris Wilson 789659f430 drm/i915: Drop fake breadcrumb irq
Missed breadcrumb detection is defunct due to the tight coupling with
dma_fence signaling and the myriad ways we may signal fences from
everywhere but from an interrupt, i.e. we frequently signal a fence
before we even see its interrupt. This means that even if we miss an
interrupt for a fence, it still is signaled before our breadcrumb
hangcheck fires, so simplify the breadcrumb hangchecking by moving it
into the GPU hangcheck and forgo fake interrupts.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190129205230.19056-3-chris@chris-wilson.co.uk
2019-01-29 21:45:30 +00:00
Chris Wilson 52c0fdb25c drm/i915: Replace global breadcrumbs with per-context interrupt tracking
A few years ago, see commit 688e6c7258 ("drm/i915: Slaughter the
thundering i915_wait_request herd"), the issue of handling multiple
clients waiting in parallel was brought to our attention. The
requirement was that every client should be woken immediately upon its
request being signaled, without incurring any cpu overhead.

Handling certain fragility of our hw meant that we could not do a
simple check inside the irq handler (some generations required almost
unbounded delays before we could be sure of seqno coherency) and so
request completion checking required delegation.

Before commit 688e6c7258, the solution was simple. Every client
waiting on a request would be woken on every interrupt and each would do
a heavyweight check to see if their request was complete. Commit
688e6c7258 introduced an rbtree so that only the earliest waiter on
the global timeline would be woken, and would then wake the next and so on.
(Along with various complications to handle requests being reordered
along the global timeline, and also a requirement for a kthread to provide
a delegate for fence signaling that had no process context.)

The global rbtree depends on knowing the execution timeline (and global
seqno). Without knowing that order, we must instead check all contexts
queued to the HW to see which may have advanced. We trim that list by
only checking queued contexts that are being waited on, but still we
keep a list of all active contexts and their active signalers that we
inspect from inside the irq handler. By moving the waiters onto the fence
signal list, we can combine the client wakeup with the dma_fence
signaling (a dramatic reduction in complexity, but does require the HW
being coherent, the seqno must be visible from the cpu before the
interrupt is raised - we keep a timer backup just in case).

Having previously fixed all the issues with irq-seqno serialisation (by
inserting delays onto the GPU after each request instead of random delays
on the CPU after each interrupt), we can rely on the seqno state to
perform direct wakeups from the interrupt handler. This allows us to
preserve our single context switch behaviour of the current routine,
with the only downside that we lose the RT priority sorting of wakeups.
In general, direct wakeup latency of multiple clients is about the same
(about 10% better in most cases) with a reduction in total CPU time spent
in the waiter (about 20-50% depending on gen). Average herd behaviour is
improved, but at the cost of not delegating wakeups on task_prio.
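
The shape of the new signaling path is roughly as follows (a sketch only;
the struct members and lock names are illustrative):

/*
 * Sketch: on a user interrupt, walk the contexts that have signalers and
 * complete any fences whose seqno is now visible in that context's HWSP,
 * instead of waking a global rbtree of waiters.
 */
static void signal_breadcrumbs(struct intel_engine_cs *engine)
{
        struct intel_context *ce;
        struct i915_request *rq, *rn;

        spin_lock(&engine->signal_lock);                /* illustrative lock */
        list_for_each_entry(ce, &engine->signalers, signal_link) {
                u32 seqno = READ_ONCE(*ce->hwsp_seqno); /* CPU-coherent HWSP */

                list_for_each_entry_safe(rq, rn, &ce->signals, signal_link) {
                        if (!i915_seqno_passed(seqno, rq->fence.seqno))
                                break;

                        list_del(&rq->signal_link);
                        dma_fence_signal(&rq->fence);   /* wakes the client */
                }
        }
        spin_unlock(&engine->signal_lock);
}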

v2: Capture fence signaling state for error state and add comments to
warm even the most cold of hearts.
v3: Check if the request is still active before busywaiting
v4: Reduce the amount of pointer misdirection with list_for_each_safe
and using a local i915_request variable inside the loops
v5: Add a missing pluralisation to a purely informative selftest message.

References: 688e6c7258 ("drm/i915: Slaughter the thundering i915_wait_request herd")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190129205230.19056-2-chris@chris-wilson.co.uk
2019-01-29 21:45:22 +00:00
Chris Wilson c9a6462288 drm/i915/execlists: Suppress preempting self
In order to avoid preempting ourselves, we currently refuse to schedule
the tasklet if we reschedule an inflight context. However, this glosses
over a few issues such as what happens after a CS completion event and
we then preempt the newly executing context with itself, or if something
else causes a tasklet_schedule triggering the same evaluation to
preempt the active context with itself.

However, when we avoid preempting ELSP[0], we still retain the preemption
value, as it may match a second preemption request within the same time
period that we need to resolve after the next CS event. Since we only
store the maximum preemption priority seen, it may not match the
subsequent event and so we should double check whether or not we
actually do need to trigger a preempt-to-idle by comparing the top
priorities from each queue. Later, this gives us a hook for finer
control over deciding whether the preempt-to-idle is justified.

The sequence of events where we end up preempting to no avail is:

1. Queue requests/contexts A, B
2. Priority boost A; no preemption as it is executing, but keep hint
3. After CS switch, B is less than hint, force preempt-to-idle
4. Resubmit B after idling

v2: We can simplify a bunch of tests based on the knowledge that PI will
ensure that earlier requests along the same context will have the highest
priority.
v3: Demonstrate the stale preemption hint with a selftest

References: a2bf92e8cc ("drm/i915/execlists: Avoid kicking priority on the current context")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190129185452.20989-4-chris@chris-wilson.co.uk
2019-01-29 20:00:05 +00:00
Chris Wilson 4d97cbe019 drm/i915: Rename execlists->queue_priority to queue_priority_hint
Having noticed that we trigger preemption events for currently executing
requests, as well as for requests that complete before the preemption, and
having attempted to suppress those preemption events, it is wise not to
consider the queue_priority to be authoritative. As we only track the
maximum priority seen between dequeue passes, if the maximum priority
request is no longer available for dequeuing (it completed or is even
executing on another engine), we have no knowledge of the previous
queue_priority as it would require us to keep a full history of enqueued
requests -- but we already have that history in the priolists!

Rename the queue_priority to queue_priority_hint so that we do not
mistake it for the exact maximum priority in the queue, but merely
an indication that we have seen a new maximum priority value and as such
we should check whether it should preempt the currently running request.
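
Conceptually the hint is consumed like this (simplified sketch; the real
helper also handles preemption being unsupported, etc.):

/*
 * Sketch: the hint only says a new maximum priority may have been queued;
 * before preempting, re-check against what is actually executing.
 */
static bool need_preempt(const struct intel_engine_execlists *execlists,
                         int active_prio)
{
        if (execlists->queue_priority_hint <= active_prio)
                return false;   /* nothing queued can beat the active request */

        /* the hint may be stale: confirm against the real top of the queue */
        return queue_top_priority(execlists) > active_prio; /* illustrative */
}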

v2: s/preempt_priority_hint/queue_priority_hint/ as preempt implies it
being only used for the singular task of preemption and not the wider
question of waking up due to a change in the queue.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190129185452.20989-3-chris@chris-wilson.co.uk
2019-01-29 20:00:03 +00:00
Chris Wilson 8547444137 drm/i915: Identify active requests
To allow requests to forgo a common execution timeline, one question we
need to be able to answer is "is this request running?". To track
whether a request has started on HW, we can emit a breadcrumb at the
beginning of the request and check its timeline's HWSP to see if the
breadcrumb has advanced past the start of this request. (This is in
contrast to the global timeline where we need only ask if we are on the
global timeline and if the timeline has advanced past the end of the
previous request.)

There is still confusion from a preempted request, which has already
started but relinquished the HW to a high priority request. For the
common case, this discrepancy should be negligible. However, for
identification of hung requests, knowing which one was running at the
time of the hang will be much more important.
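
The check amounts to something like the following (sketch; helper and
member names are illustrative):

/*
 * Sketch: a request has started once the breadcrumb emitted at its head
 * has landed in its timeline's HWSP, i.e. the HWSP seqno has advanced
 * past the end of the previous request (this request's seqno - 1).
 */
static bool request_started(const struct i915_request *rq)
{
        u32 hwsp = READ_ONCE(*rq->hwsp_seqno); /* per-timeline status page */

        return i915_seqno_passed(hwsp, rq->fence.seqno - 1);
}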

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190129185452.20989-2-chris@chris-wilson.co.uk
2019-01-29 19:59:59 +00:00
Chris Wilson 52954edd1f drm/i915: Allocate a status page for each timeline
Allocate a page for use as a status page by a group of timelines; as we
only need a dword of storage for each (rounded up to the cacheline for
safety), we can pack multiple timelines into the same page. Each timeline
will then be able to track its own HW seqno.
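
The packing works out to handing out cacheline-sized slots within a shared
page, roughly as below (a sketch, not the exact allocator in the patch):

/*
 * Sketch: one cacheline per timeline from a shared page, so a 4KiB page
 * can back up to PAGE_SIZE / CACHELINE_BYTES timeline seqnos.
 */
#define CACHELINE_BYTES 64

static int hwsp_alloc_slot(unsigned long *bitmap, unsigned int *offset)
{
        unsigned int slot =
                find_first_zero_bit(bitmap, PAGE_SIZE / CACHELINE_BYTES);

        if (slot == PAGE_SIZE / CACHELINE_BYTES)
                return -ENOSPC;                 /* page full, grab a new one */

        __set_bit(slot, bitmap);
        *offset = slot * CACHELINE_BYTES;       /* byte offset of this seqno */
        return 0;
}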

v2: Reuse the common per-engine HWSP for the solitary ringbuffer
timeline, so that we do not have to emit (using per-gen specialised
vfuncs) the breadcrumb into the distinct timeline HWSP and instead can
keep on using the common MI_STORE_DWORD_INDEX. However, to maintain the
sleight-of-hand for the global/per-context seqno switchover, we will
store both temporarily (and so use a custom offset for the shared timeline
HWSP until the switch over).

v3: Keep things simple and allocate a page for each timeline, page
sharing comes next.

v4: I was caught repeating the same MI_STORE_DWORD_IMM over and over
again in selftests.

v5: And caught red handed copying create timeline + check.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190128181812.22804-3-chris@chris-wilson.co.uk
2019-01-28 19:07:02 +00:00
Chris Wilson 0ca88ba0d6 drm/i915: Always allocate an object/vma for the HWSP
Currently we only allocate an object and vma if we are using a GGTT
virtual HWSP, and a plain struct page for a physical HWSP. For
convenience later on with global timelines, it will be useful to always
have the status page being tracked by a struct i915_vma. Make it so.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190128102356.15037-4-chris@chris-wilson.co.uk
2019-01-28 16:24:19 +00:00
Chris Wilson eb8d0f5af4 drm/i915: Remove GPU reset dependence on struct_mutex
Now that the submission backends are controlled via their own spinlocks,
with a wave of a magic wand we can lift the struct_mutex requirement
around GPU reset. That is we allow the submission frontend (userspace)
to keep on submitting while we process the GPU reset as we can suspend
the backend independently.

The major change is around the backoff/handoff strategy for performing
the reset. With no mutex deadlock, we no longer have to coordinate with
any waiter, and just perform the reset immediately.

Testcase: igt/gem_mmap_gtt/hang # regresses
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190125132230.22221-3-chris@chris-wilson.co.uk
2019-01-25 14:27:22 +00:00
Chris Wilson 832a67bdb2 drm/i915: Compute the HWS offsets explicitly
Simplify by using sizeof(u32) to convert from the index inside the HWSP
to the byte offset. This has the advantage of not only being shorter
(and so not upsetting checkpatch!) but also that it matches usage where we
write to byte addresses using commands other than MI_STORE_DWORD_IMM.
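
For example (the index value here is illustrative):

/* DWORD index within the HWSP -> byte offset usable by any command */
#define I915_GEM_HWS_INDEX              0x30
#define I915_GEM_HWS_INDEX_ADDR         (I915_GEM_HWS_INDEX * sizeof(u32))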

v2: Drop the now superfluous MI_STORE_DWORD_INDEX_SHIFT, it appears to
be a local invention so keeping it after the final use does not help to
clarify the GPU instruction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190125120005.25191-2-chris@chris-wilson.co.uk
2019-01-25 12:53:15 +00:00
Chris Wilson 9fa4973e91 drm/i915: Remove manual breadcrumb counting
Now that we know we measure the size of the engine->emit_breadcrumb()
correctly, we can remove the previous manual counting.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190125120005.25191-1-chris@chris-wilson.co.uk
2019-01-25 12:53:13 +00:00
Chris Wilson e1a73a54a9 drm/i915: Measure the required reserved size for request emission
Instead of tediously and fragilely counting up the number of dwords
required to emit the breadcrumb to seal a request, fake a request and
measure it automatically once during engine setup.

The downside is that this requires a fair amount of mocking to create a
proper breadcrumb. Still, should be less error prone in future as the
breadcrumb size fluctuates!

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190125100520.20163-1-chris@chris-wilson.co.uk
2019-01-25 11:19:39 +00:00
Chris Wilson 293f8c0f2b drm/i915: Use b->irq_enable() as predicate for mock engine
Since commit d4ccceb055 ("drm/i915/icl: Ringbuffer interrupt handling")
we have required a mechanism to avoid touching the interrupt hardware
for breadcrumbs, superseding our mock interface for selftests.

The residual problem (ideas welcome) is in probing the mock ring
registers for ring_is_idle. Hmm, maybe we should just install
mock handlers for i915->uncore.mmio__write and friends? Only problem
is that we would need to truly mock some expected reads. :(

References: d4ccceb055 ("drm/i915/icl: Ringbuffer interrupt handling")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190118112225.13780-1-chris@chris-wilson.co.uk
2019-01-18 12:05:29 +00:00
Jani Nikula 739f3abdbf drm/i915: small isolated c99 types to kernel types switch
Mixed C99 and kernel types use is getting ugly. Prefer kernel types.

sed -i 's/\buint\(8\|16\|32\|64\)_t\b/u\1/g'

Minor checkpatch fixes sprinkled on top of the changed lines.

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/14ed72e7f04c9340a057855c5950b54811f8a477.1547629303.git.jani.nikula@intel.com
2019-01-17 09:02:00 +02:00
Chris Wilson 55277e1f31 drm/i915: Always try to reset the GPU on takeover
When we first introduced the reset to sanitize the GPU on taking over
from the BIOS and before returning control to third parties (the BIOS!),
we restricted it to only systems utilizing HW contexts as we were
uncertain of how stable our reset mechanism truly was. We now have
reasonable coverage across all machines that expose a GPU reset method,
and so we should be safe to sanitize the GPU state everywhere.

v2: We _have_ to skip the reset if it would clobber the display.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190103112104.19561-1-chris@chris-wilson.co.uk
2019-01-03 12:40:42 +00:00
Jani Nikula 0258404f9d drm/i915: start moving runtime device info to a separate struct
First move the low hanging fruit, the fields that are only initialized
runtime. Use RUNTIME_INFO() exclusively to access the fields.

Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/c24fe7a4b0492a888690c46814c0ff21ce2f12b1.1546267488.git.jani.nikula@intel.com
2019-01-02 12:46:29 +02:00
Chris Wilson 1216e3c3af drm/i915: Drop unused engine->irq_seqno_barrier w/a
Now that we have eliminated the CPU-side irq_seqno_barrier by moving the
delays on the GPU before emitting the MI_USER_INTERRUPT, we can remove
the engine->irq_seqno_barrier infrastructure. Though intentionally
slowing down the GPU is nasty, so is the code we can now remove!

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181228171641.16531-6-chris@chris-wilson.co.uk
2018-12-31 15:35:45 +00:00
Chris Wilson ed2922c025 drm/i915: Remove redundant trailing request flush
Now that we perform the request flushing inline with emitting the
breadcrumb, we can remove the now redundant manual flush. And we can
also remove the infrastructure that remained only for its purpose.

v2: emit_breadcrumb_sz is in dwords, but rq->reserved_space is in bytes

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181228171641.16531-1-chris@chris-wilson.co.uk
2018-12-31 15:35:45 +00:00
Chris Wilson 6a6237293d drm/i915/execlists: Pull the render flush into breadcrumb emission
In preparation for removing the manual EMIT_FLUSH prior to emitting the
breadcrumb implement the flush inline with writing the breadcrumb for
execlists. Using one command to both flush and write the breadcrumb is
naturally a tiny bit faster than splitting it into two.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181228153114.4948-1-chris@chris-wilson.co.uk
2018-12-28 16:36:55 +00:00
Chris Wilson 6faf5916e6 drm/i915: Remove HW semaphores for gen7 inter-engine synchronisation
The writing is on the wall for the existence of a single execution queue
along each engine, and as a consequence we will not be able to track
dependencies along the HW queue itself, i.e. we will not be able to use
HW semaphores on gen7 as they use a global set of registers (and unlike
gen8+ we can not effectively target memory to keep per-context seqno and
dependencies).

On the positive side, when we implement request reordering for gen7 we
also can not presume a simple execution queue and would also require
removing the current semaphore generation code. So this brings us another
step closer to request reordering for ringbuffer submission!

The negative side is that using interrupts to drive inter-engine
synchronisation is much slower (4us -> 15us to do a nop on each of the 3
engines on ivb). This is much better than it was at the time of introducing
the HW semaphores and, equally important, userspace has weaned itself off
intermixing dependent BLT/RENDER operations (the prime culprit was glyph
rendering in UXA). So while we regress the microbenchmarks, it should not
impact the user.

References: https://bugs.freedesktop.org/show_bug.cgi?id=108888
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181228140736.32606-2-chris@chris-wilson.co.uk
2018-12-28 14:43:27 +00:00
Chris Wilson 060f23225d drm/i915: Apply missed interrupt after reset w/a to all ringbuffer gen
Having completed a test run of gem_eio across all machines in CI we also
observe the phenomenon (of lost interrupts after resetting the GPU) on
gen3 machines as well as the previously sighted gen6/gen7. Let's apply
the same HWSTAM workaround that was effective for gen6+ for all, as
although we haven't seen the same failure on gen4/5 it seems prudent to
keep the code the same.

As a consequence we can remove the extra setting of HWSTAM and apply the
register from a single site.

v2: Delazy and move the HWSTAM into its own function
v3: Mask off all HWSP writes on driver unload and engine cleanup.
v4: And what about the physical hwsp?
v5: No, engine->init_hw() is not called from driver_init_hw(), don't be
daft. Really scrub HWSTAM as early as we can in driver_init_mmio()
v6: Rename set_hwsp as it was setting the mask not the hwsp register.
v7: Ville pointed out that although vcs(bsd) was introduced for g4x/ilk,
per-engine HWSTAM was not introduced until gen6!

References: https://bugs.freedesktop.org/show_bug.cgi?id=108735
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181218102712.11058-1-chris@chris-wilson.co.uk
2018-12-18 14:24:46 +00:00
Lucas De Marchi cf819eff90 drm/i915: replace IS_GEN<N> with IS_GEN(..., N)
Define IS_GEN() similarly to our IS_GEN_RANGE(), but use gen instead of
gen_mask to do the comparison. Now callers can pass the gen as a parameter,
so we don't require one macro for each gen.

The following spatch was used to convert the users of these macros:

@@
expression e;
@@
(
- IS_GEN2(e)
+ IS_GEN(e, 2)
|
- IS_GEN3(e)
+ IS_GEN(e, 3)
|
- IS_GEN4(e)
+ IS_GEN(e, 4)
|
- IS_GEN5(e)
+ IS_GEN(e, 5)
|
- IS_GEN6(e)
+ IS_GEN(e, 6)
|
- IS_GEN7(e)
+ IS_GEN(e, 7)
|
- IS_GEN8(e)
+ IS_GEN(e, 8)
|
- IS_GEN9(e)
+ IS_GEN(e, 9)
|
- IS_GEN10(e)
+ IS_GEN(e, 10)
|
- IS_GEN11(e)
+ IS_GEN(e, 11)
)

v2: use IS_GEN rather than GT_GEN and compare to info.gen rather than
    using the bitmask
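
The new macro is essentially a parameterised comparison, i.e. something
along the lines of (simplified sketch of the definition):

/* Sketch: compare directly against the device's gen number. */
#define IS_GEN(dev_priv, n)     (INTEL_GEN(dev_priv) == (n))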

Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181212181044.15886-2-lucas.demarchi@intel.com
2018-12-12 16:52:10 -08:00
Chris Wilson 5179749925 drm/i915: Allocate a common scratch page
Currently we allocate a scratch page for each engine, but since we only
ever write into it for post-sync operations, it is not exposed to
userspace nor do we care for coherency. As we then do not care about its
contents, we can use one page for all, reducing our allocations and
avoid complications by not assuming per-engine isolation.

For later use, it simplifies engine initialisation (by removing the
allocation that required struct_mutex!) and means that we can always rely
on there being a scratch page.

v2: Check that we allocated a large enough scratch for I830 w/a

Fixes: 06e562e7f515 ("drm/i915/ringbuffer: Delay after EMIT_INVALIDATE for gen4/gen5") # v4.18.20
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108850
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181204141522.13640-1-chris@chris-wilson.co.uk
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: <stable@vger.kernel.org> # v4.18.20+
2018-12-04 15:57:08 +00:00
Tvrtko Ursulin 452420d22d drm/i915: Fuse per-context workaround handling with the common framework
Convert the per context workaround handling code to run against the newly
introduced common workaround framework and fuse the two to use the
existing smarter list add helper, the one which does the sorted insert and
merges registers where possible.

This completes migration of all four classes of workarounds onto the
common framework.

Existing macros are kept untouched for smaller code churn.

v2:
 * Rename to list name ctx_wa_list and move from dev_priv to engine.

v3:
 * API rename and parameters tweaking. (Chris Wilson)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20181203133357.10341-1-tvrtko.ursulin@linux.intel.com
2018-12-04 12:23:22 +00:00
Tvrtko Ursulin 69bcdecf1a drm/i915: Move register white-listing to the common workaround framework
Instead of having a separate list of white-listed registers we can
trivially move this to the common workarounds framework.

This brings us one step closer to the goal of driving all workaround
classes using the same code.

v2:
 * Use GEM_DEBUG_WARN_ON for the sanity check. (Chris Wilson)

v3:
 * API rename. (Chris Wilson)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20181203125014.3219-6-tvrtko.ursulin@linux.intel.com
2018-12-04 12:23:21 +00:00
Tvrtko Ursulin 4a15c75c42 drm/i915: Introduce per-engine workarounds
We stopped re-applying the GT workarounds after engine reset since commit
59b449d5c8 ("drm/i915: Split out functions for different kinds of
workarounds").

Issue with this is that some of the GT workarounds live in the MMIO space
which gets lost during engine resets. So far the registers in 0x2xxx and
0xbxxx address range have been identified to be affected.

This losing of applied workarounds has obvious negative effects and can
even lead to hard system hangs (see the linked Bugzilla).

Rather than just restoring this re-application, because we have also
observed that it is not safe to just re-write all GT workarounds after
engine resets (GPU might be live and weird hardware states can happen),
we introduce a new class of per-engine workarounds and move only the
affected GT workarounds over.

Using the framework introduced in the previous patch, we therefore, after
an engine reset, re-apply only the workarounds living in the affected MMIO
address ranges.

v2:
 * Move Wa_1406609255:icl to engine workarounds as well.
 * Rename API. (Chris Wilson)
 * Drop redundant IS_KABYLAKE. (Chris Wilson)
 * Re-order engine wa/ init so latest platforms are first. (Rodrigo Vivi)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Bugzilla: https://bugzilla.freedesktop.org/show_bug.cgi?id=107945
Fixes: 59b449d5c8 ("drm/i915: Split out functions for different kinds of workarounds")
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: intel-gfx@lists.freedesktop.org
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20181203133341.10258-1-tvrtko.ursulin@linux.intel.com
2018-12-04 12:23:16 +00:00
Chris Wilson 46592892e1 drm/i915/vgpu: Disallow loading on old vGPU hosts
Since commit fd8526e509 ("drm/i915/execlists: Trust the CSB") we
actually broke the force-mmio mode for our execlists implementation. No
one noticed, ergo no one is actually using an old vGPU host (where we
required the older method), and so we can simply remove the broken support.

v2: csb_read can go as well (Mika)

Reported-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Fixes: fd8526e509 ("drm/i915/execlists: Trust the CSB")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181130125954.11924-1-chris@chris-wilson.co.uk
2018-12-03 16:08:26 +00:00
Jonathan Gray 77c8fdae25 drm/i915/ringbuffer: change header SPDX identifier to MIT
Commit b24413180f ("License cleanup: add SPDX GPL-2.0 license
identifier to files with no license") added "SPDX-License-Identifier:
GPL-2.0" to files which previously had no license, change this to MIT
for intel_ringbuffer.h matching the license text of intel_ringbuffer.c.

Signed-off-by: Jonathan Gray <jsg@jsg.id.au>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181031005331.20775-1-jsg@jsg.id.au
2018-10-31 11:57:11 +02:00
Rodrigo Vivi 9e7833758b drm/i915: Prefer IS_GEN<n> check with bitmask.
Whenever possible we should stick with IS_GEN<n> checks.

The bitmask was introduced in commit ae7617f0ef ("drm/i915:
Allow optimized platform checks") for efficiency.

Let's stick with it whenever possible.

This patch was generated with coccinelle:

spatch -sp_file is_gen.cocci *{c,h} --in-place

is_gen.cocci:
@gen2@ expression e; @@
-INTEL_GEN(e) == 2
+IS_GEN2(e)
@gen3@ expression e; @@
-INTEL_GEN(e) == 3
+IS_GEN3(e)
@gen4@ expression e; @@
-INTEL_GEN(e) == 4
+IS_GEN4(e)
@gen5@ expression e; @@
-INTEL_GEN(e) == 5
+IS_GEN5(e)
@gen6@ expression e; @@
-INTEL_GEN(e) == 6
+IS_GEN6(e)
@gen7@ expression e; @@
-INTEL_GEN(e) == 7
+IS_GEN7(e)
@gen8@ expression e; @@
-INTEL_GEN(e) == 8
+IS_GEN8(e)
@gen9@ expression e; @@
-INTEL_GEN(e) == 9
+IS_GEN9(e)
@gen10@ expression e; @@
-INTEL_GEN(e) == 10
+IS_GEN10(e)
@gen11@ expression e; @@
-INTEL_GEN(e) == 11
+IS_GEN11(e)

Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181026195143.20353-1-rodrigo.vivi@intel.com
2018-10-29 10:44:11 -07:00
Chris Wilson e2f3496e93 drm/i915: Pull scheduling under standalone lock
Currently, the backend scheduling code abuses struct_mutex in order to
have a global lock to manipulate a temporary list (without widespread
allocation) and to protect against list modifications. This is an
extraneous coupling to struct_mutex and further can not extend beyond
the local device.

Pull all the code that needs to be under the one true lock into
i915_scheduler.c, and make it so.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181001144755.7978-2-chris@chris-wilson.co.uk
2018-10-01 20:34:21 +01:00
Chris Wilson 85f5e1f385 drm/i915: Combine multiple internal plists into the same i915_priolist bucket
As we are about to allow ourselves to slightly bump the user priority
into a few different sublevels, packthose internal priority lists
into the same i915_priolist to keep the rbtree compact and avoid having
to allocate the default user priority even after the internal bumping.
The downside to having a requests[] rather than a node per active list,
is that we then have to walk over the empty higher priority lists. To
compensate, we track the active buckets and use a small bitmap to skip
over any inactive ones.
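
The resulting structure looks roughly like this (a sketch; the bucket
count and helper are illustrative):

#define I915_PRIOLIST_BUCKETS   4       /* illustrative number of sublevels */

struct i915_priolist {
        struct rb_node node;
        struct list_head requests[I915_PRIOLIST_BUCKETS];
        unsigned long used;     /* bitmap of non-empty buckets */
        int priority;           /* user priority shared by all buckets */
};

/* Sketch: dequeue from the highest non-empty bucket, skipping empty ones. */
static struct list_head *first_nonempty_bucket(struct i915_priolist *p)
{
        if (!p->used)
                return NULL;

        /* bit 0 corresponds to the highest internal sublevel */
        return &p->requests[__ffs(p->used)];
}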

v2: Use MASK of internal levels to simplify our usage.
v3: Prevent overflow when SHIFT is zero.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181001123204.23982-4-chris@chris-wilson.co.uk
2018-10-01 15:26:20 +01:00
Dave Airlie 2dc7bad71c drm-misc-next for 4.20:

Merge tag 'drm-misc-next-2018-09-13' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 4.20:

UAPI Changes:
- Add host endian variants for the most common formats (Gerd)
- Fail ADDFB2 for big-endian drivers that don't advertise BE quirk (Gerd)
- clear smem_start in fbdev for drm drivers to avoid leaking fb addr (Daniel)

Cross-subsystem Changes:

Core Changes:
- fix drm_mode_addfb() on big endian machines (Gerd)
- add timeline point to syncobj find+replace (Chunming)
- more drmP.h removal effort (Daniel)
- split uapi portions of drm_atomic.c into drm_atomic_uapi.c (Daniel)

Driver Changes:
- bochs: Convert open-coded portions to use helpers (Peter)
- vkms: Add cursor support (Haneen)
- udmabuf: Lots of fixups (mostly cosmetic afaict) (Gerd)
- qxl: Convert to use fbdev helper (Peter)

Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Chunming Zhou <david1.zhou@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Peter Wu <peter@lekensteyn.nl>
Cc: Haneen Mohammed <hamohammed.sa@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Sean Paul <sean@poorly.run>
Link: https://patchwork.freedesktop.org/patch/msgid/20180913130254.GA156437@art_vandelay
2018-09-14 09:43:16 +10:00
Daniel Vetter d78aa65067 drm: Add drm/drm_util.h header file
We have a bunch of neat little macros all over the place which should
move to kernel.h. But some of them died in bikesheds on lkml, and we
need a decent home for them.

Start out by moving the for_each_if macro there.

v2: Rename to drm_util.h instead (Dave&Sean)
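
For reference, for_each_if is just a tiny conditional wrapper, essentially
as defined in the drm headers:

/*
 * Skip the loop body when the condition is false, without the
 * dangling-else pitfalls of a bare "if (cond)" inside an iterator macro.
 */
#define for_each_if(condition) if (!(condition)) {} else

A typical use is filtering inside an iterator macro, e.g. tacking
for_each_if(obj->active) onto a for_each_*() loop (illustrative).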

Cc: Sean Paul <seanpaul@chromium.org>
Acked-by: Sean Paul <seanpaul@chromium.org>
Cc: Dave Airlie <airlied@gmail.com>
Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180905135711.28370-1-daniel.vetter@ffwll.ch
2018-09-09 14:18:11 +02:00
Chris Wilson a99b32a6ff drm/i915: Clear stop-engine for a pardoned reset
If we pardon a per-engine reset, we may leave the STOP_RING bit asserted
in RING_MI_MODE resulting in the engine hanging. Unconditionally clear
it on the per-engine exit path as we know that either we skipped the
reset and so need the cancellation, or the reset was successful and the
cancellation is a no-op, or there was an error and we will follow up
with a full-reset or wedging (both of which will stop the engines again
as required).
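
The cancellation is a single masked register write along these lines
(sketch; wrapped in an illustrative helper, register macros as used
elsewhere in the driver):

static void cancel_stop_ring(struct intel_engine_cs *engine)
{
        struct drm_i915_private *dev_priv = engine->i915;

        /* Clear STOP_RING whether or not the reset was actually performed. */
        I915_WRITE_FW(RING_MI_MODE(engine->mmio_base),
                      _MASKED_BIT_DISABLE(STOP_RING));
}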

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107188
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=106560
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180814171857.24673-1-chris@chris-wilson.co.uk
2018-08-15 10:15:28 +01:00
Chris Wilson 97f0615800 drm/i915: Pull seqno started checks together
We have a few instances of checking seqno-1 to see if the HW has started
the request. Pull those together under a helper.

v2: Pull the !seqno assertion higher, as given seqno==1 we may indeed
check to see if we have started using seqno==0.

Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180806112605.20725-1-chris@chris-wilson.co.uk
2018-08-07 12:43:00 +01:00
Lucas De Marchi 08e3e21a24 drm/i915: kill resource streamer support
After disabling resource streamer on ICL (due to it actually not
existing there), I got feedback that there have been some experimental
patches for mesa to use RS years ago, but nothing ever landed or shipped
because there was no performance improvement.

This removes it from the kernel, keeping the uapi defines around for
compatibility.

v2: - re-add the inadvertent removal of CTX_CTRL_INHIBIT_SYN_CTX_SWITCH
    - don't bother trying to document removed params on uapi header:
      applications should know that from the query.
      (from Chris)

v3: - disable CTX_CTRL_RS_CTX_ENABLE instead of removing it
    - reword commit message after Daniele confirmed no performance
      regression on his machine
    - reword commit message to make clear RS is being removed due to
      never been used
v4: - move I915_EXEC_RESOURCE_STREAMER to __I915_EXEC_ILLEGAL_FLAGS so
      the check on ioctl() is made much earlier by
      i915_gem_check_execbuffer() (suggested by Tvrtko)

Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Acked-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180803232443.17193-1-lucas.demarchi@intel.com
2018-08-06 17:19:51 +01:00
Chris Wilson 5503cb0dec drm/i915: Drop unneed i915 parameter from intel_ring_pin()
As we now have a ring->vma available, we can just lookup our i915
pointer from inside the vm, and so not require the unsightly parameter.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180727155501.18963-1-chris@chris-wilson.co.uk
2018-07-27 18:22:08 +01:00
Jakub Bartmiński 496bcce3c9 drm/i915: Remove unnecessary ggtt_offset_bias from i915_gem_context
Since ggtt_offset_bias is now stored in ggtt.pin_bias, it is duplicated
inside i915_gem_context, and can instead be accessed directly from ggtt.

v3:
Added a helper function to retrieve the ggtt.pin_bias from the vma.

v4:
Moved the helper function to the previous patch in the series.
Dropped the bias from intel_ring_pin. This introduces a slight functional
change since we are always pinning the ring a bit higher if GuC is present
even though we don't really need to.

v8:
Fixed patch not applying on the most recent upstream.

Signed-off-by: Jakub Bartmiński <jakub.bartminski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180727141148.30874-4-jakub.bartminski@intel.com
2018-07-27 16:07:37 +01:00
Chris Wilson 0f6b79fa13 drm/i915/selftests: Force a preemption hang
Inject a failure into preemption completion to pretend as if the HW
didn't successfully handle preemption and we are forced to do a reset in
the middle.

v2: Wait for preemption, to force testing with the missed preemption.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180716132154.12539-1-chris@chris-wilson.co.uk
2018-07-16 17:17:27 +01:00
Chris Wilson 0051163ab3 drm/i915/execlists: Always clear preempt status on cancelling all
On reset/wedging, we cancel all pending replies from the HW and we also
want to cancel an outstanding preemption event. Since we use the same
function to cancel the pending replies for reset and for a preemption
event, we can simply clear the active tracking for all.

v2: Keep execlists_user_end() markup for wedging
v3: Move assignment to inline to hide the bare assignment.

Fixes: 60a9432454 ("drm/i915/execlists: Drop clear_gtiir() on GPU reset")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180716125424.5715-1-chris@chris-wilson.co.uk
2018-07-16 17:17:27 +01:00
Chris Wilson 655250a8d1 drm/i915/execlists: Switch to rb_root_cached
The kernel recently gained an augmented rbtree with the purpose of
cacheing the leftmost element of the rbtree, a frequent optimisation to
avoid calls to rb_first() which is also employed by the
execlists->queue. Switch from our open-coded cache to the library.
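
The library keeps the leftmost node cached for us, so usage looks roughly
like the following (standard <linux/rbtree.h> API; the rb_link_node() step
is omitted for brevity):

static struct rb_root_cached queue = RB_ROOT_CACHED;

static void queue_insert(struct rb_node *node, bool leftmost)
{
        /* the library tracks whether this node is the new leftmost */
        rb_insert_color_cached(node, &queue, leftmost);
}

static struct rb_node *queue_peek(void)
{
        /* O(1) replacement for the old open-coded rb_first() cache */
        return rb_first_cached(&queue);
}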

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180629075348.27358-9-chris@chris-wilson.co.uk
2018-07-11 14:38:45 +01:00
Chris Wilson fd8526e509 drm/i915/execlists: Trust the CSB
Now that we use the CSB stored in the CPU friendly HWSP, we do not need
to track interrupts for when the mmio CSB registers are valid and can
just check where we read up to last from the cached HWSP. This means we
can forgo the atomic bit tracking from interrupt, and in the next patch
it means we can check the CSB at any time.

v2: Change the splitting inside reset_prepare, we only want to lose
testing the interrupt in this patch, the next patch requires the change
in locking

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-8-chris@chris-wilson.co.uk
2018-06-28 22:55:09 +01:00
Chris Wilson f4b58f0438 drm/i915/execlists: Reset CSB write pointer after reset
On HW reset, the HW clears the write pointer (to 0). But since it also
writes its first CSB entry to slot 0, we need to reset the write pointer
back to the element before (so the first entry we read is 0).

This is required for the next patch, where we trust the CSB completely!

v2: Use _MASKED_FIELD
v3: Store the reset value, so that we differentiate between mmio/hwsp
transparently and without pretense.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-6-chris@chris-wilson.co.uk
2018-06-28 22:55:07 +01:00
Chris Wilson bc4237ec8d drm/i915/execlists: Unify CSB access pointers
Following the removal of the last workarounds, the only CSB mmio access
is for the old vGPU interface. The mmio registers presented by vGPU do
not require forcewake and can be treated as ordinary volatile memory,
i.e. they behave just like the HWSP access just at a different location.
We can reduce the CSB access to a set of read/write/buffer pointers and
treat the various paths identically and not worry about forcewake.
(Forcewake is a nightmare for worst-case latency, and we want to process
this all with irqsoff -- no latency allowed!)

v2: Comments, comments, comments. Well, 2 bonus comments.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-5-chris@chris-wilson.co.uk
2018-06-28 22:55:06 +01:00
Chris Wilson e3be4079ea drm/i915: Only signal from interrupt when requested
Avoid calling dma_fence_signal() from inside the interrupt if we haven't
enabled signaling on the request.
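
In the interrupt path this is just a flag test before signaling, along
these lines (sketch):

#include <linux/dma-fence.h>

/* Sketch: only bother with dma_fence_signal() if signaling was enabled. */
static void notify_fence(struct dma_fence *fence)
{
        if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags))
                dma_fence_signal(fence);
}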

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627201304.15817-4-chris@chris-wilson.co.uk
2018-06-28 20:56:35 +01:00
Chris Wilson 78796877c3 drm/i915: Move the irq_counter inside the spinlock
Rather than have multiple locked instructions inside the notify_ring()
irq handler, move them inside the spinlock and reduce their intrinsic
locking.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627201304.15817-3-chris@chris-wilson.co.uk
2018-06-28 20:56:35 +01:00
Chris Wilson 4fdd5b4e9a drm/i915: Fix fallout of fake reset along resume
commit b2209e62a4 ("drm/i915/execlists: Reset the CSB head tracking on
reset/sanitization") and commit 1288786b18 ("drm/i915: Move GEM sanitize
from resume_early to resume") show the conflicting requirements on the
code. We must reset the GPU before trashing live state on a fast resume
(hibernation debug, or error paths), but we must only reset our state
tracking iff the GPU is reset (or power cycled). This is tricky if we
are disabling GPU reset to simulate broken hardware; we reset our state
tracking but the GPU is left intact and recovers from its stale state.

v2: Again without the assertion for forcewake, no longer required since
commit b3ee09a4de ("drm/i915/ringbuffer: Fix context restore upon reset")
as the contexts are reset from the CS ensuring everything is powered up.

Fixes: b2209e62a4 ("drm/i915/execlists: Reset the CSB head tracking on reset/sanitization")
Fixes: 1288786b18 ("drm/i915: Move GEM sanitize from resume_early to resume")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180616202534.18767-1-chris@chris-wilson.co.uk
2018-06-18 10:14:54 +01:00
Chris Wilson 1fd00c0fae drm/i915: Declare the driver wedged if hangcheck makes no progress
Hangcheck is our back up in case the GPU or the driver gets stuck. It
detects when the GPU is not making any progress and issues a GPU reset.
However, if the driver is failing to make any progress, we can get
ourselves into a situation where we continually try resetting the GPU to
no avail. Employ a second timeout such that if we continue to see the
same seqno (the stalled engine has made no progress at all) over the
course of several hangchecks, declare the driver wedged and attempt to
start afresh.
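
The "no progress" test reduces to remembering the last seqno seen and
counting how many consecutive hangchecks it survives (sketch; the struct
and threshold are illustrative):

#define WEDGE_AFTER_STALLED_HANGCHECKS  8       /* illustrative threshold */

struct hangcheck_progress {             /* illustrative */
        u32 last_seqno;
        unsigned int stalled;
};

/* Sketch: wedge the driver if a stalled engine shows no progress at all. */
static bool hangcheck_should_wedge(struct hangcheck_progress *hc, u32 seqno)
{
        if (seqno != hc->last_seqno) {
                hc->last_seqno = seqno; /* some progress, start counting anew */
                hc->stalled = 0;
                return false;
        }

        return ++hc->stalled >= WEDGE_AFTER_STALLED_HANGCHECKS;
}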

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180602104853.17140-1-chris@chris-wilson.co.uk
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
2018-06-14 19:20:33 +01:00