This replaces the repetitive GPL-2.0 license text in code and header files
with SPDX tags. Generated hardware headers are left untouched, as any changes
there need to be made in the upstream rnndb repository.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
MMUv2 supports up to 40 bits of physical address by folding the upper
8 bits into bits [4:11] of the PTE.
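A minimal sketch of that encoding (the helper name and the present bit are
assumptions, not taken from the patch):

  #include <linux/bits.h>
  #include <linux/kernel.h>
  #include <linux/types.h>

  /* Fold a 40-bit physical address into a 32-bit PTE: PA[31:12] lands in
   * PTE[31:12], PA[39:32] in PTE[11:4].  BIT(0) assumed as present bit.
   */
  static u32 mmuv2_pte_encode(dma_addr_t paddr)
  {
  	u32 pte = lower_32_bits(paddr) & ~GENMASK(11, 0);

  	pte |= (upper_32_bits(paddr) & 0xff) << 4;
  	return pte | BIT(0);
  }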
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
With etnaviv no longer tied into the IOMMU framework, the MMU functions
are only called under sleeping locks. Thus we are able to allocate the
memory for the second-level page tables on demand, without having to deal
with memory allocation in atomic context.
This speeds up driver initialization on MMUv2 GPU cores, as we don't
need to preallocate all the page table memory. It also reduces memory
consumption for most workloads, as most of them won't use the full
GPU virtual address space.
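A sketch of the resulting on-demand path (structure and field names are
illustrative, not the driver's exact ones):

  /* Lazily allocate the second-level table for one first-level slot;
   * GFP_KERNEL is fine now that we only run under sleeping locks.
   */
  static int etnaviv_iommuv2_ensure_stlb(struct etnaviv_iommuv2 *v2, int mtlb)
  {
  	if (v2->stlb_cpu[mtlb])		/* already populated */
  		return 0;

  	v2->stlb_cpu[mtlb] = dma_alloc_wc(v2->dev, SZ_4K,
  					  &v2->stlb_dma[mtlb], GFP_KERNEL);
  	if (!v2->stlb_cpu[mtlb])
  		return -ENOMEM;

  	/* hook the new table into its first-level slot (assumed present bit) */
  	v2->mtlb_cpu[mtlb] = lower_32_bits(v2->stlb_dma[mtlb]) | BIT(0);
  	return 0;
  }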
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
We are likely to write multiple page entries at once and already ensure
proper write buffer flushing before GPU submit, so this improves CPU
time usage in the submit path without any downsides.
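For illustration, a hypothetical helper writing a run of PTEs back-to-back,
with the single flush left to the submit path:

  /* Illustrative only: fill contiguous PTEs for one chunk without a
   * per-entry flush; the submit path performs one write-buffer flush
   * before the GPU consumes the tables.
   */
  static void stlb_write_run(u32 *stlb, unsigned int first,
  			     unsigned int npages, u32 pte_base, u32 prot)
  {
  	unsigned int i;

  	for (i = 0; i < npages; i++)
  		stlb[first + i] = (pte_base + i * SZ_4K) | prot;
  }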
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
I'm not aware of any case where tracing GPU register manipulation at the
kernel level would have been useful. It only adds another layer of
indirection and increases code size.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
This was useful on MMUv1 GPUs, which don't generate proper faults, back
when the GPU write caches weren't fully understood and properly handled
by the kernel driver. As this has been fixed for quite some time,
cycling through the MMU address space now only needlessly spreads out
the MMU mappings.
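With the cycling gone, allocation can simply pack mappings low in the
address space; a sketch against the drm_mm API (the insert mode is my
assumption, not quoted from the patch):

  #include <drm/drm_mm.h>

  /* Prefer the lowest suitable hole instead of cycling a start offset. */
  static int etnaviv_iommu_insert(struct drm_mm *mm, struct drm_mm_node *node,
  				  u64 size)
  {
  	return drm_mm_insert_node_in_range(mm, node, size, 0, 0,
  					   0, U64_MAX, DRM_MM_INSERT_LOW);
  }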
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The old way clamped the jiffy conversion and thus caused the timeouts
to become negative after some time. It also didn't work with userspace
that actually fills the upper 32 bits of the 64-bit timestamp value.
clock_gettime() returns a 32-bit seconds value on 32-bit architectures.
Using 64-bit timespec math, as this commit does, means that when a wrap
occurs, the specified timeout goes into the past and we can't request a
timeout in the future. As the Linux implementation of CLOCK_MONOTONIC is
reasonable and starts at 0, the first such timer wrap will occur after
approx. 68 years of system uptime.
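A sketch of the 64-bit conversion (close in spirit to the patch; exact
types and names in the driver may differ):

  #include <linux/jiffies.h>
  #include <linux/time64.h>
  #include <linux/timekeeping.h>

  /* Convert an absolute CLOCK_MONOTONIC timeout to relative jiffies,
   * doing all math in 64-bit timespecs.
   */
  static unsigned long etnaviv_timeout_to_jiffies(const struct timespec64 *to)
  {
  	struct timespec64 now, rel;

  	ktime_get_ts64(&now);

  	/* a timeout at or before "now" has already expired */
  	if (timespec64_compare(to, &now) <= 0)
  		return 0;

  	rel = timespec64_sub(*to, now);
  	return timespec64_to_jiffies(&rel);
  }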
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Insert wait-for-gr-idle in the places where RM appears to do it; this
seems to prevent some random MMIO timeouts on Quadro GV100.
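Roughly what such a wait looks like in nvkm (the register and mask below
are placeholders, not values from this patch):

  /* Poll until GR reports idle, bailing out after 2s. */
  static int gr_wait_idle(struct nvkm_device *device)
  {
  	if (nvkm_msec(device, 2000,
  		if (!(nvkm_rd32(device, 0x400700) & 0x0000000f))
  			break;
  	) < 0)
  		return -ETIMEDOUT;

  	return 0;
  }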
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
It's better to use "list_for_each_entry_from_reverse" for iterating over a
list than an open-coded "for" loop, as it makes the code clearer to read.
This patch replaces the "for" loop with "list_for_each_entry_from_reverse"
and the "start" variable with "cstate", which helps in refactoring
the code; the "cstate" name is also the one more commonly used in the other
functions.
Changes in v2:
The "start" variable is removed instead of "cstate"; as "cstate" is the
more common name, it is preferred over "start".
Signed-off-by: Arushi Singhal <arushisinghal19971997@gmail.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
An NV34 GPU was seeing temp and pwm entries in hwmon, which would error
out when read. These should not have been visible; beyond that, the whole
hwmon object should just not have been registered in the first place.
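The shape of the fix, sketched with hypothetical helpers (the actual
attribute plumbing in nouveau differs in detail):

  /* Skip hwmon registration entirely when the board has no sensors. */
  static int nouveau_hwmon_init(struct device *dev, void *drm,
  			        const struct hwmon_chip_info *info)
  {
  	/* hypothetical capability checks */
  	if (!board_has_temp_sensor(drm) && !board_has_pwm_fan(drm))
  		return 0;

  	return PTR_ERR_OR_ZERO(devm_hwmon_device_register_with_info(
  				dev, "nouveau", drm, info, NULL));
  }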
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The method struct vga_switcheroo_handler::get_client_id() is defined
as returning an 'enum vga_switcheroo_client_id' but the implementation
in this driver, nouveau_dsm_get_client_id(), returns an 'int'.
Fix this by returning 'enum vga_switcheroo_client_id' in this driver too.
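The fix is just the declared return type (decision logic elided here):

  #include <linux/pci.h>
  #include <linux/vga_switcheroo.h>

  static enum vga_switcheroo_client_id
  nouveau_dsm_get_client_id(struct pci_dev *pdev)
  {
  	/* unchanged logic choosing between IGD and DIS, stubbed here */
  	return VGA_SWITCHEROO_DIS;
  }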
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The method struct drm_connector_helper_funcs::mode_valid is defined
as returning an 'enum drm_mode_status' but the driver implementation
for this method uses an 'int' for it.
Fix this by using 'enum drm_mode_status' in the driver too.
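Likewise, sketched (the validation logic itself is elided):

  #include <drm/drm_connector.h>
  #include <drm/drm_modes.h>

  static enum drm_mode_status
  nouveau_connector_mode_valid(struct drm_connector *connector,
  			       struct drm_display_mode *mode)
  {
  	/* unchanged mode checks, stubbed here */
  	return MODE_OK;
  }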
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
VEID support hacked in here, as it's the most convenient place for now.
Will be refined once it's better understood.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
When only the position of a window changes, there's no need to submit
an image update as well.
This will be required to support the overlays, as well as Volta windows.
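Sketched with field names modeled on the window atom state (treat them as
assumptions):

  /* A pure move programs only the new position; the image methods are
   * emitted only when the framebuffer actually changed.
   */
  static void wndw_prepare(struct nv50_wndw_atom *armw,	/* armed */
  			   struct nv50_wndw_atom *asyw)	/* new */
  {
  	asyw->set.point = true;

  	if (armw->state.fb != asyw->state.fb)
  		asyw->set.image = true;
  }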
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Window visibility is going to become a little more complicated with the
upcoming LUT changes, so store the calculated value to avoid having to
recalculate the armed state.
Instead of windows returning their core channel interlock mask if they
know core has been modified, it's recorded unconditionally and used if
required when update methods are emitted.
This will be required to support Volta.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Fences attached to deferred client work items now originate from channels
belonging to the client, meaning we can be certain they've been signalled
before we destroy a client.
This closes a race that could happen if the dma_fence_wait_timeout() call
didn't succeed. When the fence was later signalled, a use-after-free was
possible.
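The hazard, sketched in simplified form:

  #include <linux/dma-fence.h>
  #include <linux/jiffies.h>

  /* If this bounded wait fails and the client is torn down anyway, the
   * pending fence callback later runs against freed memory.  Fences
   * from the client's own channels must signal before teardown, which
   * is what removes the race.
   */
  static bool client_fence_done(struct dma_fence *fence)
  {
  	return dma_fence_wait_timeout(fence, false,
  				      msecs_to_jiffies(100)) > 0;
  }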
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
As VMAs are per-client, unlike buffers, this allows us to avoid referencing
foreign fences (those that belong to another client/driver) from the client
deferred work handler, and prevent some not-fun race conditions that can be
triggered when a fence stalls.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
We previously only did this for push buffers, but an upcoming patch will
need to attach fences to all VMAs to resolve another issue.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
These were missed the first time around because the driver version I
traced was still using the older registers.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
There are differences on GM200 and newer too, but we can't fix them there
as they come from firmware packages.
A request has been made to NVIDIA to release updated firmware.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
There are a number of places that require this data, so let's separate out
the calculations to ensure they remain consistent.
This is incorrect for GM200 and newer, but will produce the same results
as before.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
There are also a couple of hardcoded tables for some very specific
configurations that NVGPU's algorithm didn't work for.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The algorithm for GM200 and newer matches RM for all the boards I have, but
I don't have enough data to figure something out for earlier boards, so
these will still write zeroes to the table, as before.
The code in NVGPU isn't helpful here; it appears to handle only specific
cases.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
I don't think this is done after Fermi: NVGPU used to do it but removed
the code, and I've not seen RM traces touching it either.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
I haven't yet been able to find a fully programmatic way of calculating the
same mapping as NVIDIA for GF100-GF119, so the algorithm partially depends
on data tables for specific configurations.
I couldn't find traces for every possibility, so the algorithm will switch
to a mapping similar to what GK104-GM10x use if it encounters a
configuration it has no table for. We did the wrong thing before anyway,
so this shouldn't matter too much.
The algorithm used in the GK104 implementation was ported from NVGPU.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Newer HW doesn't appear to send this event, which will cause long delays
in runlist updates if they don't complete immediately.
RM doesn't use these events anywhere, and an NVGPU commit message notes
that polling is the preferred method even on HW that supports the event.
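Polling then looks roughly like this (register offset and mask are
placeholders, not from the patch):

  /* Wait up to 2s for the runlist commit to land. */
  static int runlist_commit_wait(struct nvkm_device *device, int runl)
  {
  	if (nvkm_msec(device, 2000,
  		if (!(nvkm_rd32(device, 0x002284 + runl * 8) & 0x00100000))
  			break;
  	) < 0)
  		return -ETIMEDOUT;

  	return 0;
  }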
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
We previously weren't aware that runlist and engine IDs aren't the same
thing, or that there was such variability in configuration between GPUs.
By exposing this information to a client, and giving it explicit control
of which runlist it's allocating a channel on, we're able to make better
choices.
The immediate effect of this is that on GPUs where CE0 is the "GRCE", we
will now be allocating a copy engine running asynchronously to GR for BO
migrations - as intended.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
We need to fetch data from GPU-specific sub-devices that is not tied to
any particular engine object.
This commit provides the framework to support such queries.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This will be required to support Volta, but also allows us to remove code
that's already duplicated for each channel type.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>