Currently drivers get called to move a buffer, but if they have to
move it temporarily through another space (SYSTEM->VRAM via TT)
then they can end up with a lot of ttm->driver->ttm call stacks,
if the temporary space move requires eviction.
Instead of letting the driver do all the placement/space handling for the
temporary, allow it to report back (-EMULTIHOP) and a placement (hop)
to the move code, which will then do the temporary move and the
correct placement move afterwards.
This removes a lot of code from drivers, at the expense of
adding some midlayering. I've some further ideas on how to turn
it inside out, but I think this is a good solution to the call
stack problems.
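As a minimal, illustrative sketch (assuming the post-series move() callback
signature, with the SYSTEM->VRAM-via-TT case from above; the my_* name is
made up), a driver could report the hop like this:

/* Illustrative sketch of a driver move() callback requesting a hop. */
static int my_bo_move(struct ttm_buffer_object *bo, bool evict,
		      struct ttm_operation_ctx *ctx,
		      struct ttm_resource *new_mem,
		      struct ttm_place *hop)
{
	struct ttm_resource *old_mem = &bo->mem;

	if (old_mem->mem_type == TTM_PL_SYSTEM &&
	    new_mem->mem_type == TTM_PL_VRAM) {
		/* Can't copy directly, ask TTM to bounce through TT first. */
		memset(hop, 0, sizeof(*hop));
		hop->mem_type = TTM_PL_TT;
		return -EMULTIHOP;
	}

	/* ... perform the actual move ... */
	return 0;
}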
v2: separate out the driver patches, add WARN for getting
-EMULTIHOP in paths we shouldn't (Daniel)
v3: use memset (Christian)
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201109005432.861936-2-airlied@gmail.com
The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
address space. The mapping's address is returned as struct dma_buf_map.
Each function is a simplified version of TTM's existing kmap code. Both
functions respect the memory's location and/or writecombine flags.
On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
two helpers that convert a GEM object into the TTM BO and forward the call
to TTM's vmap/vunmap. These helpers can be dropped into the respective GEM object
callbacks.
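A minimal sketch of using the new helpers, assuming the vmap/vunmap
signatures added by this series (the my_* names are illustrative):

/* Illustrative: map a TTM BO into kernel address space. */
static int my_access_bo(struct ttm_buffer_object *bo)
{
	struct dma_buf_map map;
	int ret;

	ret = ttm_bo_vmap(bo, &map);
	if (ret)
		return ret;

	/* use map.vaddr (system memory) or map.vaddr_iomem (I/O memory) */

	ttm_bo_vunmap(bo, &map);
	return 0;
}

/* Illustrative: drop the GEM helpers into the object's callback table. */
static const struct drm_gem_object_funcs my_gem_object_funcs = {
	.free	= my_gem_object_free,	/* illustrative driver callback */
	.vmap	= drm_gem_ttm_vmap,
	.vunmap	= drm_gem_ttm_vunmap,
};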
v5:
* use size_t for storing mapping size (Christian)
* ignore premapped memory areas correctly in ttm_bo_vunmap()
* rebase onto latest TTM interfaces (Christian)
* remove BUG() from ttm_bo_vmap() (Christian)
v4:
* drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
Christian)
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20201103093015.1063-6-tzimmermann@suse.de
The ttm_operation_ctx structure has a mixture of flags and bools. Drop the
flags and replace them with bools as well.
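Callers then initialize plain bool members instead of OR'ing flag bits;
a minimal sketch, assuming the post-series member names:

/* Sketch: designated initializers instead of TTM_OPT_FLAG_* bits. */
static int my_validate(struct ttm_buffer_object *bo,
		       struct ttm_placement *placement)
{
	struct ttm_operation_ctx ctx = {
		.interruptible = true,
		.no_wait_gpu = false,
		.allow_res_evict = false,	/* was TTM_OPT_FLAG_ALLOW_RES_EVICT */
		.force_alloc = false,		/* was TTM_OPT_FLAG_FORCE_ALLOC */
	};

	return ttm_bo_validate(bo, placement, &ctx);
}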
v2: fix typos, improve comments
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/398686/
During eviction we do want to trigger the OOM killer.
Only while doing new allocations should we try to avoid that and
return -ENOMEM to the application.
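Illustratively, with the (renamed) bool in ttm_operation_ctx, a new
allocation on behalf of userspace sets it while the eviction path does not:

/* Sketch: userspace allocations avoid the OOM killer, eviction doesn't. */
struct ttm_operation_ctx alloc_ctx = {
	.interruptible = true,
	.gfp_retry_mayfail = true,	/* fail with -ENOMEM instead of OOM */
};

struct ttm_operation_ctx evict_ctx = {
	.interruptible = false,
	/* gfp_retry_mayfail left false: eviction may trigger the OOM killer */
};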
v2: rename the flag to gfp_retry_mayfail.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/398685/
Not used any more.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Madhav Chauhan <madhav.chauhan@amd.com>
Tested-by: Huang Rui <ray.huang@amd.com>
Link: https://patchwork.freedesktop.org/patch/397087/?series=83051&rev=1
Provide the necessary parameters from all drivers and use the new pool allocator
when no driver-specific function is provided.
v2: fix the GEM VRAM helpers
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Madhav Chauhan <madhav.chauhan@amd.com>
Tested-by: Huang Rui <ray.huang@amd.com>
Link: https://patchwork.freedesktop.org/patch/397081/?series=83051&rev=1
This replaces the spaghetti code in the two existing page pools.
First of all, depending on the allocation size it is between 3 (1GiB) and
5 (1MiB) times faster than the old implementation.
It makes better use of buddy pages to allow for larger physical contiguous
allocations which should result in better TLB utilization at least for
amdgpu.
Instead of the completely braindead approach of filling the pool with one
CPU while another one is trying to shrink it, we now only give back freed
pages.
This also results in much less locking contention and a trylock free MM
shrinker callback, so we can guarantee that pages are given back to the
system when needed.
The downside of this is that it takes longer for many small allocations until
the pool is filled up. We could address this, but I couldn't find a use
case where this actually matters. We also don't bother freeing large
chunks of pages any more since the CPU overhead in that path isn't really
that important.
The sysfs files are replaced with a single module parameter, allowing
users to override how many pages should be globally pooled in TTM. This
unfortunately breaks the UAPI slightly, but as far as we know nobody ever
depended on this.
Zeroing memory coming from the pool was handled inconsistently. The
alloc_pages() based pool was zeroing it, the dma_alloc_attr() based one
wasn't. For now the new implementation isn't zeroing pages from the pool
either and only sets the __GFP_ZERO flag when necessary.
The implementation has only 768 lines of code compared to the over 2600
of the old one, and also allows for saving quite a bunch of code in the
drivers since we don't need specialized handling there any more based on
kernel config.
In addition to all of that, there was a neat bug with IOMMU, coherent DMA
mappings and huge pages which is now fixed in the new code as well.
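A rough, illustrative sketch of the allocation strategy (not the actual
pool code): prefer the largest buddy order that still fits, fall back to
smaller orders, and set __GFP_ZERO only when the caller needs zeroed memory:

#include <linux/bitops.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/* Illustrative only: fill 'pages' with 'num_pages' pages, preferring
 * large buddy orders for better TLB use.
 */
static int fill_pages(struct page **pages, unsigned long num_pages,
		      bool need_zeroed)
{
	gfp_t gfp = GFP_KERNEL | (need_zeroed ? __GFP_ZERO : 0);
	unsigned long done = 0;

	while (done < num_pages) {
		unsigned int order = min_t(unsigned int, MAX_ORDER - 1,
					   __fls(num_pages - done));
		struct page *p = NULL;
		unsigned long i;

		while (!p) {
			gfp_t try_gfp = gfp;

			/* Don't retry hard or warn for the higher orders. */
			if (order)
				try_gfp |= __GFP_NOWARN | __GFP_NORETRY;

			p = alloc_pages(try_gfp, order);
			if (!p) {
				if (!order)
					return -ENOMEM;
				--order;	/* fall back to a smaller order */
			}
		}

		/* A buddy allocation gives 1 << order contiguous pages. */
		for (i = 0; i < (1UL << order); ++i)
			pages[done + i] = p + i;
		done += 1UL << order;
	}
	return 0;
}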
v2: make ttm_pool_apply_caching static as reported by the kernel bot, add
some more checks
v3: fix some more checkpatch.pl warnings
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Madhav Chauhan <madhav.chauhan@amd.com>
Tested-by: Huang Rui <ray.huang@amd.com>
Link: https://patchwork.freedesktop.org/patch/397080/?series=83051&rev=1
It makes no difference to kmalloc if the structure
is 48 or 64 bytes in size.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/396950/
We can still allocate 16TiB with that.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/396946/
Neither page allocation backend nor the driver should mess with that.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Madhav Chauhan <madhav.chauhan@amd.com>
Link: https://patchwork.freedesktop.org/patch/396948/
The move notify callback is only used in one place; it should
be removed in the future, but for now just rename it to its use
case, which is to notify the driver that the GPU memory is to be
deleted.
Drivers can be cleaned up after this separately.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201021044031.1752624-2-airlied@gmail.com
This moves the to-system move into the drivers, and moves all
the unbinds in the move path under driver control.
Note: radeon/nouveau already wait so don't duplicate it.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201020010319.1692445-4-airlied@gmail.com
Changing the caching on the fly never really worked
flawlessly.
So stop this completely and just let drivers specify the
desired caching in the tt or bus object.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Link: https://patchwork.freedesktop.org/patch/394256/
Instead of the placement flags use the caching of the bus
mapping or tt object for the page protection flags.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Link: https://patchwork.freedesktop.org/patch/394255/
And implement setting it up correctly in the drivers.
This allows getting rid of the placement flags for this.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Link: https://patchwork.freedesktop.org/patch/394254/
All drivers can determine the tt caching state at creation time,
no need to do this on the fly during every validation.
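A minimal sketch of a driver's tt backend picking the caching once at
creation, assuming the post-series ttm_tt_init() signature (the my_*
names are illustrative):

/* Sketch: choose the caching attribute when the tt object is created. */
static struct ttm_tt *my_ttm_tt_create(struct ttm_buffer_object *bo,
				       uint32_t page_flags)
{
	enum ttm_caching caching = ttm_cached;
	struct ttm_tt *tt;

	/* e.g. a device that wants write-combined system memory */
	if (my_device_wants_wc(bo))
		caching = ttm_write_combined;

	tt = kzalloc(sizeof(*tt), GFP_KERNEL);
	if (!tt)
		return NULL;

	if (ttm_tt_init(tt, bo, page_flags, caching)) {
		kfree(tt);
		return NULL;
	}
	return tt;
}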
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Link: https://patchwork.freedesktop.org/patch/394253/
This is not something drivers should use.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Link: https://patchwork.freedesktop.org/patch/393430/
That was missed during the cleanup.
v2: fix comment in vmwgfx as well
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Link: https://patchwork.freedesktop.org/patch/394092/
It is the sole user of this.
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/393498/
We can always access the global state.
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/393499/
Make it more clear what the resource manager function
does and nuke the wrapper function.
v2: nuke the wrapper
v3: fix typo in radeon, rebased
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> (v2)
Link: https://patchwork.freedesktop.org/patch/393914/
As an alternative to the placement flag add a
pin count to the ttm buffer object.
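A minimal sketch of using the pin count, assuming the ttm_bo_pin()/
ttm_bo_unpin() helpers added by this series; the reservation lock must
be held around them:

/* Sketch: pinning is now a counted operation under the reservation lock
 * rather than a no-evict placement flag.
 */
static int my_pin_bo(struct ttm_buffer_object *bo)
{
	int ret;

	ret = dma_resv_lock(bo->base.resv, NULL);
	if (ret)
		return ret;

	ttm_bo_pin(bo);		/* increments bo->pin_count */
	/* ... and later, under the lock again: ttm_bo_unpin(bo); */

	dma_resv_unlock(bo->base.resv);
	return 0;
}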
v2: add dma_resv_assert_held() calls
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Link: https://patchwork.freedesktop.org/patch/391596/?series=81973&rev=1
When we swapout/in a BO we try to change the caching
attributes of the pages before/after doing the copy.
On x86 this is done by calling set_pages_uc(),
set_memory_wc() or set_pages_wb() for non-highmem pages
to update the linear mapping of the page.
On all other platforms we do exactly nothing.
Now on x86 this is unnecessary because copy_highpage() will
either create a temporary mapping of the page, which is wb
anyway and destroyed again immediately, or use the linear
mapping with the correct caching attributes.
So stop this nonsense and just keep the caching as it is and
return an error when a driver tries to change the caching of
an already populated TT object.
This is much more defensive since changing caching
attributes is platform and driver specific and usually
doesn't work after the page was initially allocated.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://patchwork.freedesktop.org/patch/391293/
map_page_into_agp() and unmap_page_from_agp() are only defined on x86.
On all other platforms they are defined as noops. So this code doesn't have any effect at all.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://patchwork.freedesktop.org/patch/391292/
This moves the generic tracking into the drivers and protects
against reentrancy in the drivers. It fixes up radeon and agp
to be able to query the bound status as that is required.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200917043040.146575-2-airlied@gmail.com
Extern is the default attribute for functions anyway.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/390972/
Unexport ttm_check_under_lowerlimit.
Make ttm_bo_acc_size static and unexport it.
Remove ttm_get_kernel_zone_memory_size.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/390515/
Move bound up into the bo object, and keep populated with the tt
object.
The ghost object handling needs to follow the flags at the bo
level now instead of them being part of the ttm tt object.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200915024007.67163-7-airlied@gmail.com
This adds 2 getters and 4 setters; however, unbound and populated
are currently the same thing (this will change). It also drops
a BUG_ON that seems not that useful.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200915024007.67163-2-airlied@gmail.com