There is no need to hold the GPU lock while freeing the submit
object. Only move the retired submits from the GPU active list to
a temporary retire list under the GPU lock.
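For illustration, the retire path then follows roughly the pattern sketched below (a minimal sketch, not the literal patch; the list, fence and helper names are assumptions):

  static void etnaviv_gpu_retire_worker(struct etnaviv_gpu *gpu)
  {
          struct etnaviv_gem_submit *submit, *tmp;
          LIST_HEAD(retire_list);

          /* only the list manipulation needs the GPU lock */
          mutex_lock(&gpu->lock);
          list_for_each_entry_safe(submit, tmp, &gpu->active_submit_list, node)
                  if (dma_fence_is_signaled(submit->out_fence))
                          list_move_tail(&submit->node, &retire_list);
          mutex_unlock(&gpu->lock);

          /* freeing the submit objects happens without the GPU lock */
          list_for_each_entry_safe(submit, tmp, &retire_list, node) {
                  list_del(&submit->node);
                  etnaviv_submit_put(submit);
          }
  }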
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Now that the PMR lifetime issues are solved, we can safely re-enable
performance counter profiling support.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
As long as there is an active submit, we want the GPU to stay awake. This
is slightly complicated by the fact that we really want to wake the GPU
at the last possible moment to achieve maximum power savings.
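A minimal sketch of the intended runtime PM handling, using the usual pm_runtime helpers; the function names are illustrative, not the actual driver entry points:

  static int etnaviv_gpu_submit_start(struct etnaviv_gpu *gpu)
  {
          /* wake the GPU only when a submit is actually about to run */
          int ret = pm_runtime_get_sync(gpu->dev);

          return ret < 0 ? ret : 0;
  }

  static void etnaviv_gpu_submit_retired(struct etnaviv_gpu *gpu)
  {
          /*
           * Drop the reference once the submit has retired; autosuspend
           * then powers the GPU down when it finally goes idle.
           */
          pm_runtime_mark_last_busy(gpu->dev);
          pm_runtime_put_autosuspend(gpu->dev);
  }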
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The active count is used to check if the BO is idle, where idle is defined
as not active on the GPU and all VM mappings and reference counts dropped
to the initial state. As the idling of the mappings and references now only
happens in the submit cleanup, the active state handling must be moved to
the same location in order to keep the userspace semantics.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Fewer dynamic allocations, and the cmdbuf object is slimmed down to only
the required information, as everything else is already available in the
submit object.
This also simplifies buffer and mapping lifetime management, as they
are now exclusively attached to the submit object and not additionally
to the cmdbuf.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The GPU exec state may have changed by the time the perfmon sampling
is done, as it reflects the state of the last submission, not the current
GPU execution state.
For proper sampling we must therefore use the submit exec_state.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
We'll need this in some places where only the submit is available. This
is also a first step toward slimming down the cmdbuf object.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
To make them available to the event worker even after the actual
command stream execution has finished.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The submit object lifetime will get extended to the actual GPU
execution. As multiple users will depend on this, add a kref to
properly control destruction of the object.
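The kref handling boils down to the following sketch (illustrative only; the cleanup contents and the etnaviv_submit_put name are assumptions):

  struct etnaviv_gem_submit {
          struct kref refcount;
          /* remaining members unchanged */
  };

  static void submit_cleanup(struct kref *kref)
  {
          struct etnaviv_gem_submit *submit =
                          container_of(kref, struct etnaviv_gem_submit, refcount);

          /* unpin objects, drop references, then free the submit */
          kfree(submit);
  }

  static void etnaviv_submit_put(struct etnaviv_gem_submit *submit)
  {
          kref_put(&submit->refcount, submit_cleanup);
  }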
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
The acquire_ctx is special in that it needs to be released from the same
thread that initialized it. This collides with the intention to extend the
submit lifetime beyond the gem_submit function, with potentially other
threads doing the final cleanup.
Move the ww_acquire_ctx to the function local stack as suggested in the
documentation.
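Sketch of the resulting layout, with the acquire context confined to the submitting thread's stack (submit_lock_objects is an assumed helper, not necessarily the real one):

  static int etnaviv_gem_do_submit(struct etnaviv_gem_submit *submit)
  {
          struct ww_acquire_ctx ticket;   /* lives on this thread's stack */
          int ret;

          ww_acquire_init(&ticket, &reservation_ww_class);
          ret = submit_lock_objects(submit, &ticket);     /* assumed helper */
          if (!ret)
                  ww_acquire_done(&ticket);

          /* ... queue the submit for execution; later cleanup may run in
           * another thread and no longer needs the acquire context ... */

          /* released by the same thread that initialized it */
          ww_acquire_fini(&ticket);

          return ret;
  }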
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
This is safe to call in all paths, as the BO_PINNED flag tells us if the BO
needs unpinning.
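For reference, the unpin path is then safe to call unconditionally, along the lines of this sketch (field and helper names are assumptions):

  static void submit_unpin_objects(struct etnaviv_gem_submit *submit)
  {
          int i;

          for (i = 0; i < submit->nr_bos; i++) {
                  /* BO_PINNED tells us whether this BO was pinned at all,
                   * so calling this from any cleanup path is safe */
                  if (submit->bos[i].flags & BO_PINNED)
                          etnaviv_gem_mapping_unreference(submit->bos[i].mapping);

                  submit->bos[i].flags &= ~BO_PINNED;
          }
  }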
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Simplifies the cleanup path and moves fence waiting to a central location.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
The object fencing has nothing to do with the actual GPU buffer submit,
so move it to the gem submit path to have a cleaner split.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Use kzalloc so other code doesn't need to worry about uninitialized members.
Drop the non-standard GFP flags, as we really don't want to fail the submit
when under slight memory pressure. Remove one level of indentation by using
an early return if the allocation failed. Also remove the unused drm device
member.
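The allocation then follows this general shape (a sketch, with an assumed submit_create signature):

  static struct etnaviv_gem_submit *submit_create(struct etnaviv_gpu *gpu,
                                                  unsigned int nr_bos)
  {
          struct etnaviv_gem_submit *submit;
          size_t sz = sizeof(*submit) + nr_bos * sizeof(submit->bos[0]);

          /* zeroed allocation, plain GFP_KERNEL, early return on failure */
          submit = kzalloc(sz, GFP_KERNEL);
          if (!submit)
                  return NULL;

          submit->gpu = gpu;
          submit->nr_bos = nr_bos;

          return submit;
  }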
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
When manipulating the kernel command buffer, the GPU mutex must be held, as
otherwise different callers might try to replace the same part of the
buffer, wreaking havoc in the GPU execution.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Inserting the END command when suspending the GPU changes the
command buffer state, which requires the GPU mutex to be held.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
While the etnaviv workqueue needs to be ordered, as we rely on work items
being executed in queuing order, this is only true for a single GPU.
Having a shared workqueue for all GPUs in the system limits concurrency
artificially.
Giving each GPU its own ordered workqueue still meets our ordering
expectations and enables retire workers to run concurrently.
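Sketched setup (the workqueue name and struct members are illustrative):

  static int etnaviv_gpu_create_wq(struct etnaviv_gpu *gpu)
  {
          /* one ordered workqueue per GPU: work for this GPU still runs
           * in queuing order, but GPUs no longer serialize each other */
          gpu->wq = alloc_ordered_workqueue("%s", 0, dev_name(gpu->dev));
          if (!gpu->wq)
                  return -ENOMEM;

          return 0;
  }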
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
There is no need to store this in the gpu struct. MMU flushes are triggered
correctly in reaction to MMU maps and unmaps, independent of the current ctx.
Any required pipe switches can be inferred from the current and the desired
GPU exec state.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
There is no need to synchronize with outstanding retire jobs if the object
has gone idle. Retire jobs only ever change the object state from active to
idle, not the other way around.
The IOVA put race is not critical, as the GEM_WAIT ioctl itself holds
a reference to the GEM object, so the retire worker will not pull the
object into the CPU domain, which is the thing we are trying to guard
against with etnaviv_gpu_wait_obj_inactive. The ordering of the various
counts and waits may change a bit, but the userspace-visible behavior at
the bounds of the syscall is unchanged.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Flush and prefetch are properly handled in the buffer code; data endianness
would need much wider changes than adding something to this single function.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Now that the userptr BO handling doesn't rely on the userspace restarting
the submit after object population, there is no need to special case the
-EAGAIN return value anymore.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
All code paths which populate userptr BOs are fine with the get_pages
function taking the mmap_sem lock. This allows us to get rid of the fairly
involved architecture where a worker is scheduled if the mmap_sem needs
to be taken, and instead call GUP directly, allowing it to take the lock
if necessary.
This simplifies the code a lot and removes the possibility of this
function returning -EAGAIN, which complicates object population
handling at the callers.
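The direct GUP call path then looks roughly like this sketch (structure member names are assumptions and error handling is abbreviated):

  static int etnaviv_gem_userptr_get_pages(struct etnaviv_gem_object *etnaviv_obj)
  {
          unsigned long npages = etnaviv_obj->base.size >> PAGE_SHIFT;
          struct page **pages;
          long got, i;

          pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
          if (!pages)
                  return -ENOMEM;

          /* take mmap_sem right here instead of deferring to a worker */
          down_read(&current->mm->mmap_sem);
          got = get_user_pages(etnaviv_obj->userptr.ptr, npages,
                               FOLL_WRITE, pages, NULL);
          up_read(&current->mm->mmap_sem);

          if (got != npages) {
                  for (i = 0; i < got; i++)
                          put_page(pages[i]);
                  kvfree(pages);
                  return got < 0 ? got : -EFAULT;
          }

          etnaviv_obj->pages = pages;
          return 0;
  }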
A notable change in behavior is that we no longer allow a process to populate
objects with user pages from a foreign MM. This would have been an
invalid use before, as it breaks the assumptions made in the etnaviv kernel
driver to enforce cache coherence. We now disallow this by rejecting the
request to populate those objects. Well-behaved userspace is unaffected by
this change.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
This function never fails, as it does nothing more than add the GEM
object to the global device list. Making this explicit through the void
return type allows us to drop some unnecessary error handling.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
This function has only one caller, and it isn't expected that there will
be any more in the future. Folding this function into the caller helps
readability.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
The current userptr page population will defer work to a work item if
needed to avoid ever taking the mmap_sem in the direct call path. With
the more fine-grained locking in etnaviv this isn't needed anymore, so
a future commit will simplify this code.
Add a lockdep annotation to validate the assumption that the mmap_sem
can be taken in the direct call path.
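The annotation amounts to a single statement in the direct call path, roughly as follows (the exact placement in the driver may differ):

          /* tell lockdep that taking the mmap_sem here must be valid,
           * even while the current code still defers to a worker */
          might_lock_read(&current->mm->mmap_sem);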
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Userptr, prime and shmem buffer objects have different lock ordering
requirements. This is mostly due to the fact that we don't allow mmapping
of userptr buffers, so we won't ever end up in our fault handler for those,
and some of the code paths are never called with the mmap_sem held.
To avoid lockdep false positives, split them up into different lock classes.
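A sketch of the split (class key and member names are assumptions):

  static struct lock_class_key etnaviv_shm_lock_class;
  static struct lock_class_key etnaviv_userptr_lock_class;
  static struct lock_class_key etnaviv_prime_lock_class;

  static void etnaviv_gem_set_lock_class(struct etnaviv_gem_object *etnaviv_obj,
                                         struct lock_class_key *key)
  {
          /* userptr BOs are never mmapped, so their lock nesting differs
           * from shmem/prime BOs; give each type its own lockdep class */
          lockdep_set_class(&etnaviv_obj->lock, key);
  }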
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
If the FE is restarted before the sync point event is cleared, the GPU
might trigger a completion IRQ for the next sync point, corrupting
the state of the currently running worker.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
The Exynos DRM IPP subsystem is in fact non-functional and, frankly speaking,
dead code. This patch clearly marks that the Exynos DRM IPP subsystem is
broken and was never really functional. It will be replaced by a completely
rewritten API.
The Exynos DRM IPP user-space API can be obsoleted for the following
reasons:
1. The Exynos DRM IPP user-space API is optional in Exynos DRM, so
userspace should not rely on it always being available and should have
a software fallback in case it is not there.
2. The only mode which was initially semi-working was memory-to-memory
image processing. The remaining modes (LCD-"writeback" and "output")
were never operational due to missing code (both in mainline and even
vendor kernels).
3. Exynos DRM IPP mainline user-space API compatibility for
memory-to-memory got broken very early by commit 083500baef ("drm:
remove DRM_FORMAT_NV12MT"), which removed support for tiled formats,
the main feature which made this API somehow useful on Exynos platforms
(the video codec at that time produced only tiled frames; to implement
xvideo or any other video overlay, one had to de-tile them for proper
display).
4. Broken drivers. Especially once IOMMU support was added, it became
apparent that the drivers don't configure DMA operations properly and in
many cases operate outside the provided buffers, trashing the surrounding
memory.
5. Need for external patches. Although the IPP user-space API has been used
in some vendor kernels, those kernels carried additional patches (like
reverting the mentioned 083500baef commit), which means that userspace
apps which might use it still won't work with the mainline kernel.
We don't have time machines, so we cannot change it, but the Exynos DRM IPP
extension should never have been merged to mainline in that form.
The Exynos IPP subsystem and user-space API will be rewritten, so remove
the current IPP core code and mark the existing drivers as BROKEN.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Daniel Stone <daniels@collabora.com>
Acked-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Although the header is included only once, having an include guard is
still good practice. To avoid confusion, add an SoC prefix to the existing
Exynos5433 header include guard.
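For illustration, the guard then reads along these lines (the exact macro name is illustrative and may differ):

  #ifndef EXYNOS5433_DECON_H
  #define EXYNOS5433_DECON_H

  /* Exynos5433 DECON register definitions */

  #endif /* EXYNOS5433_DECON_H */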
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
The DECON headers contain only register defines. No other drivers use
them, so they should be kept local to the Exynos DRM driver. Keeping
headers local helps manage the code.
Suggested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
amdgpu_vm_frag_ptes will call amdgpu_vm_update_ptes, and a buffer
object that has a shadow buffer needs twice the number of commands.
Signed-off-by: Emily Deng <Emily.Deng@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
If the BO shares the same reservation object, do not lock it again
at swapout time, to make it possible to swap it out.
v2: refine the commit message
Reviewed-by: Thomas Hellström <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Extract a function, ttm_bo_evict_swapout_allowable, since eviction and
swapout can share the same logic.
v2: modify commit message and add description in the code
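A sketch of the shared helper's idea (field and function names follow the TTM interfaces of that time and are assumptions here):

  static bool ttm_bo_evict_swapout_allowable(struct ttm_buffer_object *bo,
                                             struct ttm_operation_ctx *ctx,
                                             bool *locked)
  {
          *locked = false;

          if (bo->resv == ctx->resv) {
                  /* the BO shares the caller's reservation object and is
                   * already locked; don't try to lock it again */
                  return ctx->allow_reserved_eviction;
          }

          /* otherwise only proceed if the lock can be taken without blocking */
          *locked = reservation_object_trylock(bo->resv);
          return *locked;
  }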
Reviewed-by: Thomas Hellström <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Forward the operation context to ttm_tt_bind as well; the ultimate
goal is swapout enablement for reserved BOs.
v2: use a common term rather than an amd-specific one
Reviewed-by: Thomas Hellström <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Forward the operation context to ttm_tt_populate as well; the ultimate
goal is swapout enablement for reserved BOs.
v2: squash in fix for vboxvideo
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Merge tag 'drm-intel-next-2017-12-22' of git://anongit.freedesktop.org/drm/drm-intel into drm-next
- Allow internal page allocation to fail (Chris)
- More improvements on logs, dumps, and trace (Chris, Michal)
- Coffee Lake important fix for stolen memory (Lucas)
- Continue to make GPU reset more robust as well
improving selftest coverage for it (Chris)
- Unifying debugfs return codes (Michal)
- Using existing helper for testing obj pages (Matthew)
- Organize and improve gem_request tracepoints (Lionel)
- Protect DDI port to DPLL map from theoretical race (Rodrigo)
- ... and consequently fixing the indentation on this DDI clk selection function (Chris)
- ... and consequently properly serializing non-blocking modesets (Ville)
- Add support for horizontal plane flipping on Cannonlake (Joonas)
- Two Cannonlake Workarounds for better stability (Rafael)
- Fix mess around PSR registers (DK)
- More Coffee Lake PCI IDs (Rodrigo)
- Remove CSS modifiers on pipe C of Geminilake (Krisman)
- Disable all planes for load detection (Ville)
- Reorg on i915 display headers (Michal)
- Avoid enabling movntdqa optimization on hypervisor guest (Changbin)
GVT:
- more mmio switch optimization (Weinan)
- cleanup i915_reg_t vs. offset usage (Zhenyu)
- move write protect handler out of mmio handler (Zhenyu)
* tag 'drm-intel-next-2017-12-22' of git://anongit.freedesktop.org/drm/drm-intel: (55 commits)
drm/i915: Update DRIVER_DATE to 20171222
drm/i915: Show HWSP in intel_engine_dump()
drm/i915: Assert that the request is on the execution queue before being removed
drm/i915/execlists: Show preemption progress in GEM_TRACE
drm/i915: Put all non-blocking modesets onto an ordered wq
drm/i915: Disable GMBUS clock gating around GMBUS transfers on gen9+
drm/i915: Clean up the PNV bit banging vs. GMBUS clock gating w/a
drm/i915: No need to power up PG2 for GMBUS on BXT
drm/i915: Disable DC states around GMBUS on GLK
drm/i915: Do not enable movntdqa optimization in hypervisor guest
drm/i915: Dump device info at once
drm/i915: Add pretty printer for runtime part of intel_device_info
drm/i915: Update intel_device_info_runtime_init() parameter
drm/i915: Move intel_device_info definitions to its own header
drm/i915: Move opregion definitions to dedicated intel_opregion.h
drm/i915: Move display related definitions to dedicated header
drm/i915: Move some utility functions to i915_util.h
drm/i915/gvt: move write protect handler out of mmio emulation function
drm/i915/gvt: cleanup usage for typed mmio reg vs. offset
drm/i915/gvt: Fix pipe A enable as default for vgpu
...
Forward the operation context to ttm_mem_global_alloc_page as well;
the ultimate goal is swapout enablement for reserved BOs.
Here reserved BOs refer to all the BOs which share the same reservation object.
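Sketch of how a driver would pass such a context (the ttm_operation_ctx field names follow the TTM interfaces of that time and are assumptions here):

  static int validate_with_reserved_swapout(struct ttm_buffer_object *tbo,
                                            struct ttm_placement *placement,
                                            struct reservation_object *resv)
  {
          struct ttm_operation_ctx ctx = {
                  .interruptible           = true,
                  .no_wait_gpu             = false,
                  /* BOs whose reservation object matches .resv count as
                   * "reserved" and may be swapped out even while locked */
                  .resv                    = resv,
                  .allow_reserved_eviction = true,
          };

          return ttm_bo_validate(tbo, placement, &ctx);
  }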
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Forward the operation context to ttm_mem_global_alloc as well; the
ultimate goal is swapout enablement for reserved BOs.
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Remove the extra indirection, as we have only one implementation anyway.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Roger He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Default handling for this interface has been moved into the framework, so
remove the redundant default setting from each driver module.
Signed-off-by: Tan Xiaojun <tanxiaojun@huawei.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Default handling for this interface has been moved into the framework, so
remove the redundant default setting from each driver module.
Signed-off-by: Tan Xiaojun <tanxiaojun@huawei.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Default handling for this interface has been moved into the framework, so
remove the redundant default setting from each driver module.
Signed-off-by: Tan Xiaojun <tanxiaojun@huawei.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Default handling for this interface has been moved into the framework, so
remove the redundant default setting from each driver module.
Signed-off-by: Tan Xiaojun <tanxiaojun@huawei.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Default handling for this interface has been moved into the framework, so
remove the redundant default setting from each driver module.
Signed-off-by: Tan Xiaojun <tanxiaojun@huawei.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Default handling for this interface has been moved into the framework, so
remove the redundant default setting from each driver module.
Signed-off-by: Tan Xiaojun <tanxiaojun@huawei.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Default handling for this interface has been moved into the framework, so
remove the redundant default setting from each driver module.
Signed-off-by: Tan Xiaojun <tanxiaojun@huawei.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Default handling for this interface has been moved into the framework, so
remove the redundant default setting from each driver module.
Signed-off-by: Tan Xiaojun <tanxiaojun@huawei.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Default handling for this interface has been moved into the framework, so
remove the redundant default setting from each driver module.
Signed-off-by: Tan Xiaojun <tanxiaojun@huawei.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>