A cleanup left behind an unused variable that we have to remove
in order to avoid this harmless warning:
drivers/gpu/drm/msm/adreno/a5xx_gpu.c: In function 'a5xx_zap_shader_init':
drivers/gpu/drm/msm/adreno/a5xx_gpu.c:493:19: error: unused variable 'a5xx_gpu' [-Werror=unused-variable]
Fixes: 8d6f08272b ("drm/msm: Remove uneeded platform dev members")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Nearly all of the in-kernel buffer allocations allocate a buffer object,
virtual address and GPU iova at the same time. Make a helper function to
handle the details.
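A minimal sketch of the pattern such a helper collapses; the helper name and
the exact msm_gem_* signatures below are approximations, not the literal patch:

  /* Hypothetical helper: allocate a BO, pin a GPU iova and map it into
   * the kernel in one call, returning the kernel vaddr.
   */
  static void *alloc_kernel_bo(struct drm_device *dev, size_t size,
                               uint32_t flags,
                               struct msm_gem_address_space *aspace,
                               struct drm_gem_object **bo, uint64_t *iova)
  {
          struct drm_gem_object *obj;
          void *vaddr;
          int ret;

          obj = msm_gem_new(dev, size, flags);
          if (IS_ERR(obj))
                  return ERR_CAST(obj);

          ret = msm_gem_get_iova(obj, aspace, iova);
          if (ret) {
                  drm_gem_object_put(obj);
                  return ERR_PTR(ret);
          }

          vaddr = msm_gem_get_vaddr(obj);
          if (IS_ERR(vaddr)) {
                  drm_gem_object_put(obj);
                  return vaddr;
          }

          *bo = obj;
          return vaddr;
  }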
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
[dropped msm_fbdev conversion to new helper, since it interferes with
display-handover work, where we want to separate allocation and mapping]
Signed-off-by: Rob Clark <robdclark@gmail.com>
Currently the GPU MMU is attached in the adreno_gpu code, but as
more and more of the GPU initialization moves to the generic
GPU path we need to map and use GPU memory earlier and
earlier. There isn't any reason to defer attaching the MMU
until later, so attach it right after the address space is
created so it can be used immediately.
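Roughly, the intended ordering is the following (a sketch; function names
follow the msm driver but the details are approximate, and 'iommu' is an
already-configured IOMMU domain for the GPU):

  struct msm_gem_address_space *aspace;
  int ret;

  aspace = msm_gem_address_space_create(&pdev->dev, iommu, "gpu");
  if (IS_ERR(aspace))
          return PTR_ERR(aspace);

  /* Attach right away so GPU memory can be mapped immediately,
   * rather than deferring the attach to a later init step.
   */
  ret = aspace->mmu->funcs->attach(aspace->mmu, NULL, 0);
  if (ret) {
          msm_gem_address_space_put(aspace);
          return ret;
  }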
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
The A5XX GPU has really good hardware fault detection that can
detect an abnormal hardware condition and fire an interrupt in
a matter of milliseconds, which is a lot better than waiting for
the hangcheck timer.
Enable the interrupt and log information before kicking off
recovery.
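A sketch of what the fault handler boils down to (field and register names
follow the msm driver; the body is illustrative):

  static void a5xx_fault_detect_irq(struct msm_gpu *gpu)
  {
          struct drm_device *dev = gpu->dev;
          struct msm_drm_private *priv = dev->dev_private;

          /* Log whatever state we can before the GPU gets reset */
          dev_err(dev->dev, "gpu fault: status=%8.8X\n",
                  gpu_read(gpu, REG_A5XX_RBBM_STATUS));

          /* ...then hand off to the normal recovery path */
          queue_work(priv->wq, &gpu->recover_work);
  }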
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Commit eeb754746b ("drm/msm/gpu: use pm-runtime") adds a pointer
for the GPU platform device to the msm_gpu struct so we can
happily remove the same pointers from the individual GPU
structs.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
In zap_shader_load_mdt(), we pass a pointer to a phys_addr_t
into dmam_alloc_coherent, which the compiler warns about:
drivers/gpu/drm/msm/adreno/a5xx_gpu.c: In function 'zap_shader_load_mdt':
drivers/gpu/drm/msm/adreno/a5xx_gpu.c:54:50: error: passing argument 3 of 'dmam_alloc_coherent' from incompatible pointer type [-Werror=incompatible-pointer-types]
The returned DMA address is later passed on to a function that
takes a phys_addr_t, so it's clearly wrong to use the DMA
mapping interface here: the memory may be uncached, or the
address may be completely wrong if there is an IOMMU connected
to the device. What the code actually wants to do is to get
the physical address from the reserved-mem node. It goes through
the dma-mapping interfaces for obscure reasons, and this
apparently only works by chance, relying on specific bugs
in the error handling of the arm64 dma-mapping implementation.
The same problem existed in the "venus" media driver, which has
since been fixed by Stanimir Varbanov after long discussions.
In order to make some progress here, I have now ported his
approach over to the adreno driver. The patch is currently
untested, and should get a good review, but it is now much
simpler than the original, and it should be obvious what
goes wrong if I made a mistake in the port.
See also: a6e2d36bf6 ("media: venus: don't abuse dma_alloc for non-DMA allocations")
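In outline, the reserved-mem lookup that replaces the dma-mapping call looks
something like this (modelled on the venus fix, assuming the zap shader
region is referenced by a "memory-region" phandle; error handling trimmed):

  struct device_node *np;
  struct resource r;
  phys_addr_t mem_phys;
  size_t mem_size;
  int ret;

  np = of_parse_phandle(dev->of_node, "memory-region", 0);
  if (!np)
          return -EINVAL;

  ret = of_address_to_resource(np, 0, &r);
  of_node_put(np);
  if (ret)
          return ret;

  /* Physical address and size of the reserved region; no DMA API involved */
  mem_phys = r.start;
  mem_size = resource_size(&r);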
Cc: Stanimir Varbanov <stanimir.varbanov@linaro.org>
Fixes: 7c65817e6d ("drm/msm: gpu: Enable zap shader for A5XX")
Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Acked-and-Tested-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Rob Clark <robdclark@gmail.com>
When compile-testing for something other than ARCH_QCOM,
we run into a link error:
drivers/gpu/drm/msm/adreno/a5xx_gpu.o: In function `a5xx_hw_init':
a5xx_gpu.c:(.text.a5xx_hw_init+0x600): undefined reference to `qcom_mdt_get_size'
a5xx_gpu.c:(.text.a5xx_hw_init+0x93c): undefined reference to `qcom_mdt_load'
There is already an #ifdef that tries to check for CONFIG_QCOM_MDT_LOADER,
but that symbol is only meaningful when building for ARCH_QCOM.
This adds a compile-time check for ARCH_QCOM, and clarifies the
Kconfig select statement so we don't even try it for other targets.
The check for CONFIG_QCOM_MDT_LOADER can then go away, which also
improves compile-time coverage and makes the code a little nicer
to read.
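The shape of the fix is roughly the following (a sketch, not the literal
diff): the Kconfig side becomes something like "select QCOM_MDT_LOADER if
ARCH_QCOM", and the code relies on dead-code elimination instead of #ifdefs:

  static int zap_shader_load_mdt(struct device *dev, const char *fwname)
  {
          /* Resolves at compile time; on non-QCOM builds the qcom_mdt_*
           * calls below are dropped entirely, avoiding the link error.
           */
          if (!IS_ENABLED(CONFIG_ARCH_QCOM))
                  return -EINVAL;

          /* ... qcom_mdt_get_size()/qcom_mdt_load() as before ... */
          return 0;
  }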
Fixes: 7c65817e6d ("drm/msm: gpu: Enable zap shader for A5XX")
Acked-by: Jordan Crouse <jcrouse@codeaurora.org>
Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Rob Clark <robdclark@gmail.com>
On the A5XX GPU, hardware clock gating needs to be turned off before
reading certain GPU registers via AHB. Turn off HWCG before calling
adreno_show() to safely dump all the registers without a system hang.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
There are some use cases wherein we need to turn off hardware clock
gating before reading certain registers. Modify the A5XX HWCG function
to allow the caller to enable or disable clock gating at will.
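Something along these lines (the function name matches the prose; the call
site is illustrative):

  /* Let callers toggle hardware clock gating explicitly */
  void a5xx_set_hwcg(struct msm_gpu *gpu, bool state);

  static void a5xx_show(struct msm_gpu *gpu, struct seq_file *m)
  {
          /* HWCG must be off while adreno_show() reads registers over AHB */
          a5xx_set_hwcg(gpu, false);
          adreno_show(gpu, m);
          a5xx_set_hwcg(gpu, true);
  }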
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
The 0xf400 and 0xf800 ranges are in the RBBM_SECVID block which may
be protected from CPU access. Skip dumping them since they are minimally
useful for debugging and they aren't worth a system hang.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Buffer-object-specific resources like pages, domains and the sg list
need not be protected with struct_mutex. They can be protected
with a buffer-object-level lock. This simplifies locking and
makes it easier to avoid potential recursive locking scenarios
for SVM involving mmap_sem and struct_mutex. It also removes
unnecessary serialization when creating buffer objects, and
between buffer object creation and GPU command submission.
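A rough sketch of the direction (struct layout and helper trimmed;
illustrative rather than the actual diff):

  struct msm_gem_object {
          struct drm_gem_object base;
          struct mutex lock;      /* protects pages, sgt, vmap, iova state */
          struct page **pages;
          struct sg_table *sgt;
          /* ... */
  };

  static struct page **get_pages(struct drm_gem_object *obj)
  {
          struct msm_gem_object *msm_obj = to_msm_bo(obj);

          /* Per-object lock instead of dev->struct_mutex */
          mutex_lock(&msm_obj->lock);
          if (!msm_obj->pages) {
                  /* allocate and map the backing pages ... */
          }
          mutex_unlock(&msm_obj->lock);

          return msm_obj->pages;
  }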
Signed-off-by: Sushmita Susheelendra <ssusheel@codeaurora.org>
[robclark: squash in handling new locking for shrinker]
Signed-off-by: Rob Clark <robdclark@gmail.com>
No functional change; that will come later. But this will make it
easier to deal with dynamically created address spaces (i.e. per-
process pagetables for the GPU).
Signed-off-by: Rob Clark <robdclark@gmail.com>
Most, but not all, paths were calling this with struct_mutex held. The
fast-path in msm_gem_get_iova() (plus some sub-code-paths that only run
the first time) was masking this issue.
So let's just always hold struct_mutex for hw_init(), and sprinkle some
WARN_ON()s and might_lock()s to avoid this sort of problem in the
future.
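Concretely, the idea is (sketch; 'dev', 'gpu' and 'obj' come from the
surrounding code):

  /* Callers now take struct_mutex around hw_init()... */
  mutex_lock(&dev->struct_mutex);
  ret = gpu->funcs->hw_init(gpu);
  mutex_unlock(&dev->struct_mutex);

  /* ...and paths that depend on it assert the lock, e.g. */
  WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));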
Signed-off-by: Rob Clark <robdclark@gmail.com>
There isn't any generic code that uses ->idle so remove it.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
The A5XX GPU powers on in "secure" mode. In secure mode the GPU can
only render to buffers that are marked as secure and inaccessible
to the kernel and user through a series of hardware protections. In
practice secure mode is used to draw things like a UI on a secure
video frame.
In order to switch out of secure mode the GPU executes a special
shader that clears out the GMEM and other sensitive registers and
then writes a register. Because the kernel can't be trusted, the
shader binary is signed, verified and programmed by the
secure world. To do this we need to read the MDT header and the
segments from the firmware location, put them in memory and
present them for approval.
For targets without secure support there is an out: if the
secure world doesn't support secure mode then there are no hardware
protections and we can freely write the SECVID_TRUST register from
the CPU. We don't have 100% confidence that we can query the
secure capabilities at run time, but we have enough calls that
need to go right to give us some confidence that we're at least doing
something useful.
Of course, if we guess wrong you trigger a permissions violation,
which usually ends up in a system crash, but that's a problem
that shows up immediately.
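The load path then looks roughly like this (a sketch; the firmware name,
PAS id and the exact qcom_mdt_load() argument list are approximations, and
cleanup is trimmed):

  const struct firmware *fw;
  ssize_t mem_size;
  void *mem_region;
  int ret;

  ret = request_firmware(&fw, fwname, dev);
  if (ret)
          return ret;

  /* How much memory do the MDT segments need? */
  mem_size = qcom_mdt_get_size(fw);
  if (mem_size < 0) {
          ret = mem_size;
          goto out;
  }

  mem_region = memremap(mem_phys, mem_size, MEMREMAP_WC);
  if (!mem_region) {
          ret = -ENOMEM;
          goto out;
  }

  /* Load the segments and ask the secure world to authenticate them */
  ret = qcom_mdt_load(dev, fw, fwname, pas_id, mem_region,
                      mem_phys, mem_size);
  out:
  release_firmware(fw);
  return ret;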
[v2: use child device per Bjorn]
[v3: use generic MDT loader per Bjorn]
[v4: use managed dma functions and ifdefs for the MDT loader]
[v5: Add depends for QCOM_MDT_LOADER]
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
[robclark: fix Kconfig to use select instead of depends + #if IS_ENABLED()]
Signed-off-by: Rob Clark <robdclark@gmail.com>
The interrupt status was being cleared before processing the handlers.
a5xx_rbbm_err_irq() was checking the interrupt status again, which would
likely go wrong because by then the interrupt status would be 0 (or at
least different). Pass the original status to the function instead.
Also, skip clearing RBBM_AHB_ERROR from the interrupt status. The interrupt
will keep firing until the error source is cleared. Skip the clear to
avoid a storm until the error is cleared in a5xx_rbbm_err_irq().
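In sketch form the handler becomes (register and bit names as in
a5xx.xml.h; the other interrupt sources are omitted):

  static irqreturn_t a5xx_irq(struct msm_gpu *gpu)
  {
          u32 status = gpu_read(gpu, REG_A5XX_RBBM_INT_0_STATUS);

          /* Clear everything except the AHB error, which keeps asserting
           * until a5xx_rbbm_err_irq() clears the underlying error source.
           */
          gpu_write(gpu, REG_A5XX_RBBM_INT_CLEAR_CMD,
                    status & ~A5XX_RBBM_INT_0_MASK_RBBM_AHB_ERROR);

          if (status & A5XX_RBBM_INT_0_MASK_RBBM_AHB_ERROR)
                  a5xx_rbbm_err_irq(gpu, status);

          /* ... handle the remaining sources from 'status' ... */

          return IRQ_HANDLED;
  }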
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Instead of checking for a5xx_gpu->gpmu_iova during destroy we
accidentally check a5xx_gpu->gpmu_bo.
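i.e. the destroy path should be keyed on the mapping, roughly (calls
approximate):

  if (a5xx_gpu->gpmu_bo) {
          if (a5xx_gpu->gpmu_iova)
                  msm_gem_put_iova(a5xx_gpu->gpmu_bo, gpu->aspace);
          drm_gem_object_put(a5xx_gpu->gpmu_bo);
  }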
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
The newly added a5xx support fails to build when debugfs is disabled:
drivers/gpu/drm/msm/adreno/a5xx_gpu.c:849:4: error: 'struct msm_gpu_funcs' has no member named 'show'
drivers/gpu/drm/msm/adreno/a5xx_gpu.c:849:11: error: 'a5xx_show' undeclared here (not in a function); did you mean 'a5xx_irq'?
This adds a missing #ifdef.
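The missing guard in the funcs table is roughly:

  static const struct msm_gpu_funcs funcs = {
          /* ... */
  #ifdef CONFIG_DEBUG_FS
          .show = a5xx_show,
  #endif
  };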
Fixes: b5f103ab98 ("drm/msm: gpu: Add A5XX target support")
Cc: stable@vger.kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Each of the per-generation callbacks was doing this. Let's just simplify
and move it into the toplevel show() function.
Signed-off-by: Rob Clark <robdclark@gmail.com>
This was never documented or used in an upstream dtb. It is used by
downstream bindings from android device kernels. But the quirks are
a property of the GPU revision, and as such it is redundant to list
them separately in dt. Instead, move the quirks to the device table.
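For example, a device-table entry can carry them directly (field and flag
names as in the adreno device table; the values are illustrative):

  {
          .rev = ADRENO_REV(5, 3, 0, ANY_ID),
          .revn = 530,
          .name = "A530",
          .quirks = ADRENO_QUIRK_TWO_PASS_USE_WFI |
                    ADRENO_QUIRK_FAULT_DETECT_MASK,
          /* ... */
  },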
Signed-off-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Most 5XX targets have a GPMU (Graphics Power Management Unit) that
handles a lot of the heavy lifting for power management, including
thermal and limits management and dynamic power collapse. While
the GPMU itself is optional, it is usually necessary to hit
aggressive power targets.
The GPMU firmware needs to be loaded into the GPMU at init time via a
shared hardware block of registers. Using the GPU to write the microcode
is more efficient than using the CPU, so at first load create an indirect
buffer that can be executed during subsequent initialization sequences.
After loading, the GPMU gets initialized through a shared register
interface and then we mostly get out of its way and let it do
its thing.
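In rough outline (function names follow the driver; details omitted):

  /* First load only: stash the GPMU microcode in a BO formatted as an
   * indirect buffer of register writes.
   */
  if (!a5xx_gpu->gpmu_bo)
          a5xx_gpmu_ucode_init(gpu);

  /* Subsequent inits just point the CP at that IB instead of having
   * the CPU rewrite the microcode registers, then bring the GPMU up
   * through the shared register interface.
   */
  ret = a5xx_gpmu_init(gpu);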
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>