mirror of https://gitee.com/openkylin/linux.git
32 Commits
Author | SHA1 | Message | Date |
---|---|---|---|
Chris Wilson | f4457ae71f |
drm/i915: Prevent leaking of -EIO from i915_wait_request()
Reporting -EIO from i915_wait_request() has proven very problematic over the years, with numerous hard-to-reproduce bugs cropping up in the corner case where a reset occurs and the code wasn't expecting such an error. If we reset the GPU or have detected a hang and wish to reset the GPU, the request is forcibly complete and the wait broken. Currently, we report either -EAGAIN or -EIO in order for the caller to retreat and restart the wait (if appropriate) after dropping and then reacquiring the struct_mutex (essential to allow the GPU reset to proceed). However, if we take the view that the request is complete (no further work will be done on it by the GPU because it is dead and soon to be reset), then we can proceed with the task at hand and then drop the struct_mutex allowing the reset to occur. This transfers the burden of checking whether it is safe to proceed to the caller, which in all but one instance is safe - completely eliminating the source of all spurious -EIO. Of note, we only have two API entry points where we expect that userspace can observe an EIO. First is when submitting an execbuf, if the GPU is terminally wedged, then the operation cannot succeed and an -EIO is reported. Secondly, existing userspace uses the throttle ioctl to detect an already wedged GPU before starting to use HW acceleration (or to confirm that the GPU is wedged after an error condition). So if the GPU is wedged when the user calls throttle, also report -EIO. v2: Split more carefully the change to i915_wait_request() and assorted ABI from the reset handling. v3: Add a couple of WARN_ON(EIO) to the interruptible modesetting code so that we don't start to leak EIO there in future (and break our hang resistant modesetting). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1460565315-7748-9-git-send-email-chris@chris-wilson.co.uk Link: http://patchwork.freedesktop.org/patch/msgid/1460565315-7748-1-git-send-email-chris@chris-wilson.co.uk |
|
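The policy above boils down to a small amount of control flow. A minimal sketch under assumed, illustrative names (the example_* types and the wedged flag are not the driver's real symbols): waits on a dead GPU simply complete, and only the execbuf and throttle paths translate a terminal wedge into -EIO.

```c
/* Illustrative only: types, fields and helpers here are assumptions. */
struct example_gpu {
        bool wedged;                    /* set when the GPU is terminally wedged */
};

struct example_request {
        struct example_gpu *gpu;
};

static int example_wait_request(struct example_request *rq)
{
        if (rq->gpu->wedged)
                return 0;               /* forcibly complete: caller just proceeds */
        /* ... otherwise sleep until the request completes ... */
        return 0;
}

static int example_throttle_ioctl(struct example_gpu *gpu)
{
        /* one of only two places userspace may still observe -EIO */
        return gpu->wedged ? -EIO : 0;
}
```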
Chris Wilson | 299259a3a9 |
drm/i915: Store the reset counter when constructing a request
As the request is only valid during the same global reset epoch, we can record the current reset_counter when constructing the request and reuse it when waiting upon that request in future. This removes a very hairy atomic check serialised by the struct_mutex at the time of waiting and allows us to transfer those waits to a central dispatcher for all waiters and all requests. PS: With per-engine resets, we obviously cannot assume a global reset epoch for the requests - a per-engine epoch makes the most sense. The challenge then is how to handle checking in the waiter for when to break the wait, as the fine-grained reset may also want to requeue the request (i.e. the assumption that just because the epoch changes the request is completed may be broken - or we just avoid breaking that assumption with the fine-grained resets). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1460565315-7748-7-git-send-email-chris@chris-wilson.co.uk |
|
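A sketch of the bookkeeping this describes, with illustrative types (the real driver keeps the value in its request structure): the reset epoch is sampled once when the request is built, and the waiter compares only its stored copy, so no locked re-check of the reset state is needed at wait time.

```c
#include <linux/atomic.h>

/* Illustrative types; not the driver's actual definitions. */
struct example_gpu {
        atomic_t reset_counter;                 /* bumped on every GPU reset */
};

struct example_request {
        struct example_gpu *gpu;
        unsigned int reset_counter;             /* epoch sampled at construction */
};

static void example_request_init(struct example_request *rq,
                                 struct example_gpu *gpu)
{
        rq->gpu = gpu;
        rq->reset_counter = atomic_read(&gpu->reset_counter);
}

static int example_request_wait(struct example_request *rq)
{
        /* A reset since construction means this request will never advance. */
        if (rq->reset_counter != atomic_read(&rq->gpu->reset_counter))
                return 0;
        /* ... otherwise block until the request completes ... */
        return 0;
}
```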
Chris Wilson | f470b19095 |
drm/i915/userptr: Store i915 backpointer for i915_mm_struct
Since we only ever use the drm_i915_private from the stored i915_mm_struct->dev, save some electrons by storing the right backpointer. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Michał Winiarski <michal.winiarski@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1459864801-28606-3-git-send-email-chris@chris-wilson.co.uk |
|
Chris Wilson | 40313f0cd0 |
drm/i915/userptr: Hold mmref whilst calling get-user-pages
Holding a reference to the containing task_struct is not sufficient to prevent the mm_struct from being reaped under memory pressure. If this happens whilst we are calling get_user_pages(), explosions erupt - sometimes an immediate GPF, sometimes page flag corruption. To prevent the target mm from being reaped as we are reading from it, acquire a reference before we begin. Testcase: igt/gem_shrink/*userptr Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Michał Winiarski <michal.winiarski@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1459864801-28606-2-git-send-email-chris@chris-wilson.co.uk |
|
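The fix amounts to pinning the mm itself around the page lookup. A sketch of that pattern for the v4.6-era API follows; the exact get_user_pages*() signature and the mm_users helpers have changed repeatedly across kernel versions, so treat the calls below as assumptions rather than the driver's literal code.

```c
#include <linux/mm.h>
#include <linux/sched.h>

static long pin_target_pages(struct task_struct *tsk, struct mm_struct *mm,
                             unsigned long start, unsigned long nr_pages,
                             struct page **pages)
{
        long pinned;

        /* Pin the mm itself; a task_struct reference alone cannot stop
         * the mm from being reaped under memory pressure. */
        if (!atomic_inc_not_zero(&mm->mm_users))
                return -EFAULT;

        down_read(&mm->mmap_sem);
        pinned = get_user_pages_remote(tsk, mm, start, nr_pages,
                                       1 /* write */, 0 /* force */,
                                       pages, NULL);
        up_read(&mm->mmap_sem);

        mmput(mm);      /* balance the mm_users reference taken above */
        return pinned;
}
```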
Chris Wilson | 393afc2c3f |
drm/i915/userptr: Flush cancellations before mmu-notifier invalidate returns
In order to ensure that all invalidations are completed before the operation returns to userspace (i.e. before the munmap() syscall returns) we need to wait upon the outstanding operations. We are allowed to block inside the invalidate_range_start callback, and as struct_mutex is the inner lock with mmap_sem we can wait upon the struct_mutex without provoking lockdep into warning about a deadlock. However, we don't actually want to wait upon outstanding rendering whilst holding the struct_mutex if we can help it otherwise we also block other processes from submitting work to the GPU. So first we do a wait without the lock and then when we reacquire the lock, we double check that everything is ready for removing the invalidated pages. Finally to wait upon the outstanding unpinning tasks, we create a private workqueue as a means to conveniently wait upon all at once. The drawback is that this workqueue is per-mm, so any threads concurrently invalidating objects will wait upon each other. The advantage of using the workqueue is that we can wait in parallel for completion of rendering and unpinning of several objects (of particular importance if the process terminates with a whole mm full of objects). v2: Apply a cup of tea to the changelog. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94699 Testcase: igt/gem_userptr_blits/sync-unmap-cycles Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1459864801-28606-1-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> |
|
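Structurally, the scheme looks like the sketch below (illustrative names): each per-mm structure owns a private workqueue, cancellation work is queued onto it, and invalidate_range_start() flushes the queue so everything has finished before the syscall returns.

```c
#include <linux/workqueue.h>

struct example_mm {
        struct workqueue_struct *wq;    /* private per-mm cancellation queue */
};

static int example_mm_init(struct example_mm *emm)
{
        emm->wq = alloc_workqueue("example-userptr", WQ_UNBOUND, 0);
        return emm->wq ? 0 : -ENOMEM;
}

static void example_invalidate_range_start(struct example_mm *emm,
                                           unsigned long start,
                                           unsigned long end)
{
        /* ... queue one cancellation work per object overlapping [start, end) ... */

        /* Blocking is allowed here; wait for all queued cancellations at once. */
        flush_workqueue(emm->wq);
}
```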
Daniel Vetter | 3970285319 |
Merge tag 'v4.6-rc3' into drm-intel-next-queued
Linux 4.6-rc3 Backmerge requested by Chris Wilson to make his patches apply cleanly. Tiny conflict in vmalloc.c with the (properly acked and all) patch in drm-intel-next: commit |
|
Chris Wilson | f2a85e1975 |
drm,i915: Introduce drm_malloc_gfp()
I have instances where I want to use drm_malloc_ab() but with a custom gfp mask. And with those, where I want a temporary allocation, I want to try a high-order kmalloc() before using a vmalloc(). So refactor my usage into drm_malloc_gfp(). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: dri-devel@lists.freedesktop.org Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Acked-by: Dave Airlie <airlied@redhat.com> Link: http://patchwork.freedesktop.org/patch/msgid/1460113874-17366-6-git-send-email-chris@chris-wilson.co.uk |
|
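The shape of the helper is roughly as follows - an approximation of the pattern, not a verbatim copy of the DRM implementation: overflow-check the product, serve small requests straight from kmalloc(), try a quiet high-order kmalloc() with the caller's gfp mask for larger ones, and fall back to vmalloc.

```c
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *malloc_gfp_sketch(size_t nmemb, size_t size, gfp_t gfp)
{
        void *ptr;

        if (size != 0 && nmemb > SIZE_MAX / size)
                return NULL;                    /* multiplication would overflow */

        if (nmemb * size <= PAGE_SIZE)
                return kmalloc(nmemb * size, gfp);

        /* Attempt a high-order kmalloc, but fail fast and quietly. */
        ptr = kmalloc(nmemb * size, gfp | __GFP_NORETRY | __GFP_NOWARN);
        if (ptr)
                return ptr;

        /* v4.6-era __vmalloc() still takes a pgprot_t argument. */
        return __vmalloc(nmemb * size, gfp, PAGE_KERNEL);
}
```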
Kirill A. Shutemov | 09cbfeaf1a |
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced *long* time ago with promise that one day it will be possible to implement page cache with bigger chunks than PAGE_SIZE. This promise never materialized. And unlikely will. We have many places where PAGE_CACHE_SIZE assumed to be equal to PAGE_SIZE. And it's constant source of confusion on whether PAGE_CACHE_* or PAGE_* constant should be used in a particular case, especially on the border between fs and mm. Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause to much breakage to be doable. Let's stop pretending that pages in page cache are special. They are not. The changes are pretty straight-forward: - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>; - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>; - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN}; - page_cache_get() -> get_page(); - page_cache_release() -> put_page(); This patch contains automated changes generated with coccinelle using script below. For some reason, coccinelle doesn't patch header files. I've called spatch for them manually. The only adjustment after coccinelle is revert of changes to PAGE_CAHCE_ALIGN definition: we are going to drop it later. There are few places in the code where coccinelle didn't reach. I'll fix them manually in a separate patch. Comments and documentation also will be addressed with the separate patch. virtual patch @@ expression E; @@ - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT) + E @@ expression E; @@ - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) + E @@ @@ - PAGE_CACHE_SHIFT + PAGE_SHIFT @@ @@ - PAGE_CACHE_SIZE + PAGE_SIZE @@ @@ - PAGE_CACHE_MASK + PAGE_MASK @@ expression E; @@ - PAGE_CACHE_ALIGN(E) + PAGE_ALIGN(E) @@ expression E; @@ - page_cache_get(E) + get_page(E) @@ expression E; @@ - page_cache_release(E) + put_page(E) Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Acked-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
|
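To make the mechanical nature of the conversion concrete, here is a hypothetical helper before and after the change; because PAGE_CACHE_SHIFT equals PAGE_SHIFT, the compensating shifts simply disappear.

```c
#include <linux/mm.h>
#include <linux/pagemap.h>

/* Before: old page-cache macro spellings. */
static void hold_page_old(struct page *page, pgoff_t index, loff_t *pos)
{
        page_cache_get(page);
        *pos = (loff_t)index << PAGE_CACHE_SHIFT;
        page_cache_release(page);
}

/* After: plain PAGE_* and get_page()/put_page(). */
static void hold_page_new(struct page *page, pgoff_t index, loff_t *pos)
{
        get_page(page);
        *pos = (loff_t)index << PAGE_SHIFT;
        put_page(page);
}
```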
Linus Torvalds | 266c73b777 |
Merge branch 'drm-next' of git://people.freedesktop.org/~airlied/linux
Pull drm updates from Dave Airlie: "This is the main drm pull request for 4.6 kernel. Overall the coolest thing here for me is the nouveau maxwell signed firmware support from NVidia, it's taken a long while to extract this from them. I also wish the ARM vendors just designed one set of display IP, ARM display block proliferation is definitely increasing. Core: - drm_event cleanups - Internal API cleanup making mode_fixup optional. - Apple GMUX vga switcheroo support. - DP AUX testing interface Panel: - Refactoring of DSI core for use over more transports. New driver: - ARM hdlcd driver i915: - FBC/PSR (framebuffer compression, panel self refresh) enabled by default. - Ongoing atomic display support work - Ongoing runtime PM work - Pixel clock limit checks - VBT DSI description support - GEM fixes - GuC firmware scheduler enhancements amdkfd: - Deferred probing fixes to avoid make file or link ordering. amdgpu/radeon: - ACP support for i2s audio support. - Command Submission/GPU scheduler/GPUVM optimisations - Initial GPU reset support for amdgpu vmwgfx: - Support for DX10 gen mipmaps - Pageflipping and other fixes. exynos: - Exynos5420 SoC support for FIMD - Exynos5422 SoC support for MIPI-DSI nouveau: - GM20x secure boot support - adds acceleration for Maxwell GPUs. - GM200 support - GM20B clock driver support - Power sensors work etnaviv: - Correctness fixes for GPU cache flushing - Better support for i.MX6 systems. imx-drm: - VBlank IRQ support - Fence support - OF endpoint support msm: - HDMI support for 8996 (snapdragon 820) - Adreno 430 support - Timestamp queries support virtio-gpu: - Fixes for Android support. rockchip: - Add support for Innosilicion HDMI rcar-du: - Support for 4 crtcs - R8A7795 support - RCar Gen 3 support omapdrm: - HDMI interlace output support - dma-buf import support - Refactoring to remove a lot of legacy code. tilcdc: - Rewrite of pageflipping code - dma-buf support - pinctrl support vc4: - HDMI modesetting bug fixes - Significant 3D performance improvement. fsl-dcu (FreeScale): - Lots of fixes tegra: - Two small fixes sti: - Atomic support for planes - Improved HDMI support" * 'drm-next' of git://people.freedesktop.org/~airlied/linux: (1063 commits) drm/amdgpu: release_pages requires linux/pagemap.h drm/sti: restore mode_fixup callback drm/amdgpu/gfx7: add MTYPE definition drm/amdgpu: removing BO_VAs shouldn't be interruptible drm/amd/powerplay: show uvd/vce power gate enablement for tonga. drm/amd/powerplay: show uvd/vce power gate info for fiji drm/amdgpu: use sched fence if possible drm/amdgpu: move ib.fence to job.fence drm/amdgpu: give a fence param to ib_free drm/amdgpu: include the right version of gmc header files for iceland drm/radeon: fix indentation. drm/amd/powerplay: add uvd/vce dpm enabling flag to fix the performance issue for CZ drm/amdgpu: switch back to 32bit hw fences v2 drm/amdgpu: remove amdgpu_fence_is_signaled drm/amdgpu: drop the extra fence range check v2 drm/amdgpu: signal fences directly in amdgpu_fence_process drm/amdgpu: cleanup amdgpu_fence_wait_empty v2 drm/amdgpu: keep all fences in an RCU protected array v2 drm/amdgpu: add number of hardware submissions to amdgpu_fence_driver_init_ring drm/amdgpu: RCU protected amd_sched_fence_release ... |
|
Tvrtko Ursulin | ca377809d6 |
drm/i915: Avoid snooping with userptr where not supported
commit
|
|
Chris Wilson | 1c7f4bca5a |
drm/i915: Rename vma->*_list to *_link for consistency
Elsewhere we have adopted the convention of using '_link' to denote elements in the list (and '_list' for the actual list_head itself), and that the name should indicate which list the link belongs to (and preferably not just where the link is being stored). s/vma_link/obj_link/ (we iterate over obj->vma_list) s/mm_list/vm_link/ (we iterate over vm->[in]active_list) Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> |
|
Dave Hansen | 1e9877902d |
mm/gup: Introduce get_user_pages_remote()
For protection keys, we need to understand whether protections should be enforced in software or not. In general, we enforce protections when working on our own task, but not when on others. We call these "current" and "remote" operations. This patch introduces a new get_user_pages() variant: get_user_pages_remote() Which is a replacement for when get_user_pages() is called on non-current tsk/mm. We also introduce a new gup flag: FOLL_REMOTE which can be used for the "__" gup variants to get this new behavior. The uprobes is_trap_at_addr() location holds mmap_sem and calls get_user_pages(current->mm) on an instruction address. This makes it a pretty unique gup caller. Being an instruction access and also really originating from the kernel (vs. the app), I opted to consider this a 'remote' access where protection keys will not be enforced. Without protection keys, this patch should not change any behavior. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave@sr71.net> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: jack@suse.cz Cc: linux-mm@kvack.org Link: http://lkml.kernel.org/r/20160212210154.3F0E51EA@viggo.jf.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> |
|
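Usage-wise the split looks like the sketch below. The parameter lists shown are the v4.6-era forms; later kernels reworked both signatures, so check the tree's <linux/mm.h> before copying. Callers are assumed to hold the relevant mmap_sem for read.

```c
#include <linux/mm.h>
#include <linux/sched.h>

/* current task/mm: protection keys are enforced */
static long pin_own_page(unsigned long addr, struct page **pages)
{
        return get_user_pages(addr, 1, 0 /* write */, 0 /* force */,
                              pages, NULL);
}

/* foreign tsk/mm: the new remote variant, protection keys not enforced */
static long pin_other_page(struct task_struct *tsk, struct mm_struct *mm,
                           unsigned long addr, struct page **pages)
{
        return get_user_pages_remote(tsk, mm, addr, 1,
                                     0 /* write */, 0 /* force */,
                                     pages, NULL);
}
```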
Chris Wilson | 93232aeb30 |
drm/i915: Allow i915_gem_object_get_page() on userptr as well
commit |
|
Chris Wilson | de4726649b |
drm/i915: Allow i915_gem_object_get_page() on userptr as well
commit
|
|
Chris Wilson | 768e159f43 |
drm/i915: Improve handling of overlapping objects
The generic interval tree we use to speed up range invalidation is an augmented rbtree that can report all overlapping intervals for a given range. Therefore we do not need to degrade to a linear list if we find overlapping objects. Oops. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1453397563-2848-1-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Michał Winiarski <michal.winiarski@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> |
|
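The point can be seen from the generic API itself: interval_tree_iter_first()/interval_tree_iter_next() already walk every node overlapping a range, so no linear-list fallback is needed. A minimal sketch (the embedding object and what "invalidate" means are left abstract):

```c
#include <linux/interval_tree.h>

static void for_each_overlapping(struct rb_root *root,
                                 unsigned long start, unsigned long last)
{
        struct interval_tree_node *it;

        for (it = interval_tree_iter_first(root, start, last);
             it;
             it = interval_tree_iter_next(it, start, last)) {
                /* resolve the owning object with container_of(it, ...)
                 * and invalidate it */
        }
}
```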
Linus Torvalds | 3e82806b97 |
Merge branch 'drm-next' of git://people.freedesktop.org/~airlied/linux
Pull drm updates from Dave Airlie: "I Was Almost Tempted To Capitalise Every Word, but then I decided I couldn't read it myself! I've also got one pull request for the sti driver outstanding. It relied on a commit in Greg's tree and I didn't find out in time, that commit is in your tree now so I might send that along once this is merged. I also had the accidental misfortune to have access to a Skylake on my desk for a few days, and I've had to encourage Intel to try harder, which seems to be happening now. Here is the main drm-next pull request for 4.4. Highlights: New driver: vc4 driver for the Rasberry Pi VPU. (From Eric Anholt at Broadcom.) Core: Atomic fbdev support Atomic helpers for runtime pm dp/aux i2c STATUS_UPDATE handling struct_mutex usage cleanups. Generic of probing support. Documentation: Kerneldoc for VGA switcheroo code. Rename to gpu instead of drm to reflect scope. i915: Skylake GuC firmware fixes HPD A support VBT backlight fallbacks Fastboot by default for some systems FBC work BXT/SKL workarounds Skylake deeper sleep state fixes amdgpu: Enable GPU scheduler by default New atombios opcodes GPUVM debugging options Stoney support. Fencing cleanups. radeon: More efficient CS checking nouveau: gk20a instance memory handling improvements. Improved PGOB detection and GK107 support Kepler GDDR5 PLL statbility improvement G8x/GT2xx reclock improvements new userspace API compatiblity fixes. virtio-gpu: Add 3D support - qemu 2.5 has it merged for it's gtk backend. msm: Initial msm88896 (snapdragon 8200) exynos: HDMI cleanups Enable mixer driver byt default Add DECON-TV support vmwgfx: Move to using memremap + fixes. rcar-du: Add support for R8A7793/4 DU armada: Remove support for non-component mode Improved plane handling Power savings while in DPMS off. tda998x: Remove unused slave encoder support Use more HDMI helpers Fix EDID read handling dwhdmi: Interlace video mode support for ipu-v3/dw_hdmi Hotplug state fixes Audio driver integration imx: More color formats support. tegra: Minor fixes/improvements" [ Merge fixup: remove unused variable 'dev' that had all uses removed in commit 4e270f088011: "drm/gem: Drop struct_mutex requirement from drm_gem_mmap_obj" ] * 'drm-next' of git://people.freedesktop.org/~airlied/linux: (764 commits) drm/vmwgfx: Relax irq locking somewhat drm/vmwgfx: Properly flush cursor updates and page-flips drm/i915/skl: disable display side power well support for now drm/i915: Extend DSL readout fix to BDW and SKL. drm/i915: Do graphics device reset under forcewake drm/i915: Skip fence installation for objects with rotated views (v4) vga_switcheroo: Drop client power state VGA_SWITCHEROO_INIT drm/amdgpu: group together common fence implementation drm/amdgpu: remove AMDGPU_FENCE_OWNER_MOVE drm/amdgpu: remove now unused fence functions drm/amdgpu: fix fence fallback check drm/amdgpu: fix stoping the scheduler timeout drm/amdgpu: cleanup on error in amdgpu_cs_ioctl() drm/i915: Fix locking around GuC firmware load drm/amdgpu: update Fiji's Golden setting drm/amdgpu: update Fiji's rev id drm/amdgpu: extract common code in vi_common_early_init drm/amd/scheduler: don't oops on failure to load drm/amdgpu: don't oops on failure to load (v2) drm/amdgpu: don't VT switch on suspend ... |
|
Chris Wilson | cc917ab435 |
drm/i915: Deny wrapping an userptr into a framebuffer
Pinning a userptr onto the hardware raises interesting questions about the lifetime of such a surface as the framebuffer extends that life beyond the client's address space. That is the hardware will need to keep scanning out from the backing storage even after the client wants to remap its address space. As the hardware pins the backing storage, the userptr becomes invalid and this raises a WARN when the clients tries to unmap its address space. The situation can be even more complicated when the buffer is passed between processes, between a client and display server, where the lifetime and hardware access is even more confusing. Deny it. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> |
|
Chris Wilson | 380996aab5 |
drm/i915: Use a task to cancel the userptr on invalidate_range
Whilst discussing possible ways to trigger an invalidate_range on a userptr with an aliased GGTT mmapping (and so cause a struct_mutex deadlock), the conclusion is that we can, and we must, prevent any possible deadlock by avoiding taking the mutex at all during invalidate_range. This has numerous advantages all of which stem from avoid the sleeping function from inside the unknown context. In particular, it simplifies the invalidate_range because we no longer have to juggle the spinlock/mutex and can just hold the spinlock for the entire walk. To compensate, we have to make get_pages a bit more complicated in order to serialise with a pending cancel_userptr worker. As we hold the struct_mutex, we have no choice but to return EAGAIN and hope that the worker is then flushed before we retry after reacquiring the struct_mutex. The important caveat is that the invalidate_range itself is no longer synchronous. There exists a small but definite period in time in which the old PTE's page remain accessible via the GPU. Note however that the physical pages themselves are not invalidated by the mmu_notifier, just the CPU view of the address space. The impact should be limited to a delay in pages being flushed, rather than a possibility of writing to the wrong pages. The only race condition that this worsens is remapping an userptr active on the GPU where fresh work may still reference the old pages due to struct_mutex contention. Given that userspace is racing with the GPU, it is fair to say that the results are undefined. v2: Only queue (and importantly only take one refcnt) the worker once. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> |
|
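The structure of the approach, reduced to a sketch with illustrative names: the notifier path holds only its own spinlock and queues a cancellation task exactly once, leaving all struct_mutex work to that task.

```c
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct example_obj {
        struct work_struct cancel_work; /* does the struct_mutex work later */
        bool queued;
};

static void example_invalidate(spinlock_t *lock, struct example_obj *obj)
{
        spin_lock(lock);
        if (!obj->queued) {             /* queue (and reference) only once */
                obj->queued = true;
                schedule_work(&obj->cancel_work);
        }
        spin_unlock(lock);
}
```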
Chris Wilson | e4b946bfe1 |
drm/i915: Fix userptr deadlock with aliased GTT mmappings
Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counter act the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change. That is the removal serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock, if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v2: Grammar fixes v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active->flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> |
|
Chris Wilson | 68d6c84059 |
drm/i915: Only update the current userptr worker
The userptr worker allows for a slight race condition whereupon there may be two or more threads calling get_user_pages for the same object. When we have the array of pages, then we serialise the update of the object. However, the worker should overwrite the obj->userptr.work pointer if and only if it is the active one. Currently we clear it for a secondary worker with the effect that we may rarely force a second lookup. v2: Rebase and rename a variable to avoid 80cols v3: Mention v2 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> |
|
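In sketch form (illustrative fields, not the driver's layout): only the worker still recorded as active may publish or clear the result; a superseded worker leaves the state alone.

```c
#include <linux/workqueue.h>

struct example_userptr {
        struct work_struct *active;     /* the one worker allowed to publish */
};

static void example_worker_finish(struct example_userptr *u,
                                  struct work_struct *me)
{
        if (u->active != me)
                return;                 /* superseded: do not touch the state */

        /* ... publish the page array ... */
        u->active = NULL;               /* only the active worker clears it */
}
```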
Michel Thierry | c6d576cc57 |
drm/i915/userptr: Kill user_size limit check
GTT was only 32b and its max value is 4GB. In order to allow objects bigger than 4GB in 48b PPGTT, in i915_gem_userptr_ioctl we could check against the max 48b range (1ULL << 48). But since the check no longer applies, just kill the limit. v2: Use the default ctx to infer the ppgtt max size (Akash). v3: Just kill the limit, it was only there for early detection of an error when used for execbuffer (Chris). Cc: Akash Goel <akash.goel@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Michel Thierry <michel.thierry@intel.com> Reviewed-by: Akash Goel <akash.goel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> |
|
Imre Deak | e227330223 |
drm/i915: avoid leaking DMA mappings
We have 3 types of DMA mappings for GEM objects: 1. physically contiguous for stolen and for objects needing contiguous memory 2. DMA-buf mappings imported via a DMA-buf attach operation 3. SG DMA mappings for shmem backed and userptr objects For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the corresponding backing pages and so in practice we create/release the mapping in the object's get_pages/put_pages callback. For 3. the lifetime of the mapping matches that of any existing GPU binding of the object, so we'll create the mapping when the object is bound to the first vma and release the mapping when the object is unbound from its last vma. Since the object can be bound to multiple vmas, we can end up creating a new DMA mapping in the 3. case even if the object already had one. This is not allowed by the DMA API and can lead to leaked mapping data and IOMMU memory space starvation in certain cases. For example HW IOMMU drivers (intel_iommu) allocate a new range from their memory space whenever a mapping is created, silently overriding a pre-existing mapping. Fix this by moving the creation/removal of DMA mappings to the object's get_pages/put_pages callbacks. These callbacks already check for and do an early return in case of any nested calls. This way objects of the 3. case also become more like the other object types. I noticed this issue by enabling DMA debugging, which got disabled after a while due to its internal mapping tables getting full. It also reported errors in connection to random other drivers that did a DMA mapping for an address that was previously mapped by i915 but was never released. Besides these diagnostic messages and the memory space starvation problem for IOMMUs, I'm not aware of this causing a real issue. The fix is based on a patch from Chris. v2: - move the DMA mapping create/remove calls to the get_pages/put_pages callbacks instead of adding new callbacks for these (Chris) v3: - also fix the get_page cache logic on the userptr async path (Chris) Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> |
|
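The resulting structure is sketched below (illustrative, not the driver's exact callbacks): the mapping is created once when the backing pages are acquired and released once when they are dropped, never per bind/unbind.

```c
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_get_pages(struct device *dev, struct sg_table *st)
{
        /* ... allocate and populate st with the backing pages ... */

        if (!dma_map_sg(dev, st->sgl, st->nents, DMA_BIDIRECTIONAL))
                return -ENOMEM;         /* exactly one mapping per page set */
        return 0;
}

static void example_put_pages(struct device *dev, struct sg_table *st)
{
        dma_unmap_sg(dev, st->sgl, st->nents, DMA_BIDIRECTIONAL);
        /* ... release the backing pages ... */
}
```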
Chris Wilson | 281400ff0f |
drm/i915: Use uninterruptible mutex_lock for userptr bo creation
Mika encountered one pathological scenario under X where acquiring all the mm locks (required to insert an mmu notifier) was very slow, so slow that by the time we tried to lock the struct_mutex with the usual call to i915_mutex_lock_interruptible(), X's signal timer had fired, causing us to restart the ioctl (and so loop indefinitely). While I suspect this is the result of another bug (something leaking mm perhaps?) we can forgo the error checking and interruptible nature of the lock here so we only have to pay the expense once and get on with it. This does expose the userptr creation routine to a driver livelock though by not being interruptible. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@intel.com> [danvet: Init ret to avoid issues reported by PRTS.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> |
|
Maarten Lankhorst | b588c92b66 |
drm/i915: get rid of -Iinclude/drm
This results in a warning when building out of tree: "cc1: warning: include/drm: No such file or directory [enabled by default]" Most code already uses #include <drm/foo.h> correctly, so fix the instances that don't. Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> |
|
Michał Winiarski | 460822b0b1 |
drm/i915: Prevent use-after-free in invalidate_range_start callback
It's possible for invalidate_range_start mmu notifier callback to race against userptr object release. If the gem object was released prior to obtaining the spinlock in invalidate_range_start we're hitting null pointer dereference. Testcase: igt/gem_userptr_blits/stress-mm-invalidate-close Testcase: igt/gem_userptr_blits/stress-mm-invalidate-close-overlap Cc: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org [Jani: added code comment suggested by Chris] Signed-off-by: Jani Nikula <jani.nikula@intel.com> |
|
Tvrtko Ursulin | c479f4383e |
drm/i915: Do not leak pages when freeing userptr objects
sg_alloc_table_from_pages() can build us a table with coalesced ranges which means we need to iterate over pages and not sg table entries when releasing page references. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: "Barbalho, Rafael" <rafael.barbalho@intel.com> Tested-by: Rafael Barbalho <rafael.barbalho@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org [danvet: Remove unused local variable sg.] Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> |
|
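The distinction is between walking sg entries and walking pages; for_each_sg_page() does the latter regardless of how entries were coalesced. A sketch of the release path (put_page() here stands in for page_cache_release() of that era):

```c
#include <linux/mm.h>
#include <linux/scatterlist.h>

static void release_all_pages(struct sg_table *st)
{
        struct sg_page_iter sg_iter;

        /* One put per page, even when several pages share one sg entry. */
        for_each_sg_page(st->sgl, &sg_iter, st->nents, 0)
                put_page(sg_page_iter_page(&sg_iter));
}
```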
Chris Wilson | e9681366ea |
drm/i915: Do not store the error pointer for a failed userptr registration
If we fail to create our mmu notification, we report the error back and
currently store the error inside the i915_mm_struct. This not only causes
subsequent registrations of the same mm to fail (an issue if the first
was interrupted by a signal and needed to be restarted) but also causes
us to eventually try and free the error pointer.
[ 73.419599] BUG: unable to handle kernel NULL pointer dereference at 000000000000004c
[ 73.419831] IP: [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130
[ 73.420065] PGD 8650c067 PUD 870bb067 PMD 0
[ 73.420319] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
[ 73.420580] CPU: 0 PID: 42 Comm: kworker/0:1 Tainted: G W 3.17.0-rc6+ #1561
[ 73.420837] Hardware name: Intel Corporation SandyBridge Platform/LosLunas CRB, BIOS ASNBCPT1.86C.0075.P00.1106281639 06/28/2011
[ 73.421405] Workqueue: events __i915_mm_struct_free__worker
[ 73.421724] task: ffff880088a81220 ti: ffff880088168000 task.ti: ffff880088168000
[ 73.422051] RIP: 0010:[<ffffffff8114af33>] [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130
[ 73.422410] RSP: 0018:ffff88008816bd50 EFLAGS: 00010286
[ 73.422765] RAX: 0000000000000003 RBX: ffff880086485400 RCX: 0000000000000000
[ 73.423137] RDX: ffff88016d80ee90 RSI: ffff880086485400 RDI: 0000000000000044
[ 73.423513] RBP: ffff88008816bd70 R08: 0000000000000001 R09: 0000000000000000
[ 73.423895] R10: 0000000000000320 R11: 0000000000000001 R12: 0000000000000044
[ 73.424282] R13: ffff880166e5f008 R14: ffff88016d815200 R15: ffff880166e5f040
[ 73.424682] FS: 0000000000000000(0000) GS:ffff88016d800000(0000) knlGS:0000000000000000
[ 73.425099] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 73.425537] CR2: 000000000000004c CR3: 0000000087f5f000 CR4: 00000000000407f0
[ 73.426157] Stack:
[ 73.426597] ffff880088a81248 ffff880166e5f038 fffffffffffffffc ffff880166e5f008
[ 73.427096] ffff88008816bd98 ffffffff814a75f2 ffff880166e5f038 ffff8800880f8a28
[ 73.427603] ffff88016d812ac0 ffff88008816be00 ffffffff8106321a ffffffff810631af
[ 73.428119] Call Trace:
[ 73.428606] [<ffffffff814a75f2>] __i915_mm_struct_free__worker+0x42/0x80
[ 73.429116] [<ffffffff8106321a>] process_one_work+0x1ba/0x610
[ 73.429632] [<ffffffff810631af>] ? process_one_work+0x14f/0x610
[ 73.430153] [<ffffffff810636db>] worker_thread+0x6b/0x4a0
[ 73.430671] [<ffffffff8108d67d>] ? trace_hardirqs_on+0xd/0x10
[ 73.431501] [<ffffffff81063670>] ? process_one_work+0x610/0x610
[ 73.432030] [<ffffffff8106a206>] kthread+0xf6/0x110
[ 73.432561] [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80
[ 73.433100] [<ffffffff8169c22c>] ret_from_fork+0x7c/0xb0
[ 73.433644] [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80
[ 73.434194] Code: 0f 1f 84 00 00 00 00 00 66 66 66 66 90 8b 46 4c 85 c0 0f 8e 10 01 00 00 55 48 89 e5 41 55 41 54 53 48 89 f3 49 89 fc 48 83 ec 08 <48> 83 7f 08 00 0f 84 b1 00 00 00 48 c7 c7 40 e6 ac 82 e8 26 65
[ 73.435942] RIP [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130
[ 73.437017] RSP <ffff88008816bd50>
[ 73.437704] CR2: 000000000000004c
Fixes regression from commit
|
|
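The fix pattern is simply to keep the error local and never cache an ERR_PTR in the shared structure. A hypothetical sketch (example_* names are assumptions):

```c
#include <linux/err.h>
#include <linux/mmu_notifier.h>

struct example_mm {
        struct mmu_notifier *mn;        /* valid pointer or NULL, never ERR_PTR */
};

static int example_cache_notifier(struct example_mm *emm,
                                  struct mmu_notifier *mn)
{
        if (IS_ERR(mn))
                return PTR_ERR(mn);     /* report the error, cache nothing */

        emm->mn = mn;                   /* a later retry can succeed normally */
        return 0;
}
```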
Chris Wilson | ad46cb533d |
drm/i915: Prevent recursive deadlock on releasing a busy userptr
During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which close the vma which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling to mm_count/mmdrop() for the shared i915_mm_struct. INFO: task test_surfaces:1632 blocked for more than 120 seconds. Tainted: GF O 3.14.5+ #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. test_surfaces D 0000000000000000 0 1632 1590 0x00000082 ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010 0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010 ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824 Call Trace: [<ffffffff81582499>] schedule+0x29/0x70 [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10 [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220 [<ffffffff81583c53>] mutex_lock+0x23/0x40 [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm] [<ffffffff8115a483>] remove_vma+0x33/0x70 [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170 [<ffffffff8104d6eb>] mmput+0x6b/0x100 [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915] [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915] [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm] [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm] [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm] [<ffffffff8127ffeb>] idr_for_each+0xab/0x100 [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm] [<ffffffff81583c46>] ? mutex_lock+0x16/0x40 [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm] [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm] [<ffffffff8118d482>] __fput+0xb2/0x260 [<ffffffff8118d6de>] ____fput+0xe/0x10 [<ffffffff8106f27f>] task_work_run+0x8f/0xf0 [<ffffffff81052228>] do_exit+0x1a8/0x480 [<ffffffff81052551>] do_group_exit+0x51/0xc0 [<ffffffff810525d7>] SyS_exit_group+0x17/0x20 [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b v2: Incorporate feedback from Tvrtko and remove the unnessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com> |
|
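Both ideas reduce to a small amount of code, sketched here with illustrative names: the shared structure pins the mm allocation with mm_count (released via mmdrop(), which never runs exit_mmap()), and the final put happens in a worker rather than under struct_mutex.

```c
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct example_mm {
        struct mm_struct *mm;           /* pinned via mm_count, not mm_users */
        struct work_struct free_work;
};

static void example_mm_free_worker(struct work_struct *work)
{
        struct example_mm *emm =
                container_of(work, struct example_mm, free_work);

        mmdrop(emm->mm);        /* mm_count put: cannot recurse into exit_mmap() */
        kfree(emm);
}

static void example_mm_release(struct example_mm *emm)
{
        /* May be called under struct_mutex, so defer the final put. */
        schedule_work(&emm->free_work);
}
```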
Chris Wilson | 487777673e |
drm/i915/userptr: Keep spin_lock/unlock in the same block
Move the code around in order to acquire and release the spinlock in the same function and in the same block. This keeps static analysers happy and the reader sane. Suggested-by: Julia Lawall <julia.lawall@lip6.fr> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Julia Lawall <julia.lawall@lip6.fr> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> |
|
Chris Wilson | ec8b0dd51c |
drm/i915: Allow overlapping userptr objects
Whilst I strongly advise against doing so for the implicit coherency issues between the multiple buffer objects accessing the same backing store, it nevertheless is a valid use case, akin to mmaping the same file multiple times. The reason why we forbade it earlier was that our use of the interval tree for fast invalidation upon vma changes excluded overlapping objects. So in the case where the user wishes to create such pairs of overlapping objects, we degrade the range invalidation to walking the linear list of objects associated with the mm. A situation where overlapping objects could arise is the lax implementation of MIT-SHM Pixmaps in the xserver. A second situation is where the user wishes to have different access modes to a region of memory (e.g. access through a read-only userptr buffer and through a normal userptr buffer). v2: Compile for mmu-notifiers after tweaking v3: Rename is_linear/has_linear Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: "Li, Victor Y" <victor.y.li@intel.com> Cc: "Kelley, Sean V" <sean.v.kelley@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> |
|
Chris Wilson | 6c308fecb4 |
drm/i915: Initialise userptr mmu_notifier serial to 1
During the range invalidate, we walk the list of buffers associated with the mmu_notifier and find the ones that overlap the range. An optimisation is made to speed up the iteration by assuming the previous iter is still valid whilst the tree is unmodified. This exposes a bug when a range invalidate is triggered after we have just created the mmu_notifier, but before attaching any buffers. In that case, we presume we have an unmodified list and start walking from the last iter which is NULL. Oops. The easiest fix is then to initialise the serial of the tree to 1. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Testcase: igt/gem_userptr_blits/stress-mm Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> |
|
Chris Wilson | 5cc9ed4b9a |
drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl
By exporting the ability to map user address and inserting PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL). v2: Compile with CONFIG_MMU_NOTIFIER v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users. v4: Only run the invalidate notifier if the range intercepts the bo. v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers v6: Recheck after reacquire mutex for lost mmu. v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary. v8: Fix rebasing error after forwarding porting the back port. v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself. v10: Prevent vma from being copied across fork to avoid issues with cow. v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions. v12: Use get_task_mm()/mmput() for correct refcounting of mm. v13: Use a worker to release the mmu_notifier to avoid lock inversion v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifer with its own locking and tree of objects for each mm/mmu_notifier. v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker. v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't. v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error. v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment. v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use. v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user(). v22: Use sg_alloc_table_from_pages for that chunky feeling v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley. 
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Akash Goel <akash.goel@intel.com> Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> [danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.] [danvet2: oops, forgot to git add after manual patch application] [danvet3: Appease sparse.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> |
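For reference, the userspace-visible interface this adds has roughly the following shape; this is a sketch of the uapi from a userspace point of view, so consult the kernel's include/uapi/drm/i915_drm.h for the authoritative definition and flag names.

```c
#include <stdint.h>

/* Sketch of the new ioctl argument; layout follows the uapi header. */
struct sketch_i915_gem_userptr {
        uint64_t user_ptr;      /* page-aligned start of the CPU range */
        uint64_t user_size;     /* page-aligned length in bytes */
        uint32_t flags;
#define SKETCH_USERPTR_READ_ONLY      0x1       /* reserved; rejected for now */
#define SKETCH_USERPTR_UNSYNCHRONIZED 0x80000000
        uint32_t handle;        /* GEM handle returned on success */
};
```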