mirror of https://gitee.com/openkylin/linux.git
101 Commits
Author | SHA1 | Message | Date |
---|---|---|---|
Chris Wilson | ee8efa8079 |
drm/i915: Check domains for userptr on release
When we return pages to the system, we release control over them and should defensively return them to the CPU write domain so that we catch any external writes on reacquiring them (e.g. to transparently swapout/swapin). While we did this defensive clflushing for ordinary shmem pages, it was forgotten for userptr. Fortunately, userptr objects are normally cache coherent and so oblivious to the forgotten domain tracking. References: |
|
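A minimal sketch of the defensive pattern the commit above describes, using made-up names (`example_obj`, `EXAMPLE_DOMAIN_CPU`, `release_backing_pages`) rather than the real i915 structures: flush the CPU caches while the pages are still under our control and fall back to the CPU write domain, so any external writes are caught by a fresh clflush on reacquire.

```c
#include <drm/drm_cache.h>
#include <linux/scatterlist.h>

#define EXAMPLE_DOMAIN_CPU 0x1

/* Illustrative stand-in for the GEM object; not the i915 layout. */
struct example_obj {
	struct sg_table *pages;
	unsigned int read_domains;
	unsigned int write_domain;
};

static void release_backing_pages(struct example_obj *obj)
{
	/* Flush dirty cachelines while the pages are still ours. */
	drm_clflush_sg(obj->pages);

	/*
	 * Assume the CPU may write to the pages while they are out of our
	 * control, so the next acquire treats them as CPU-dirty again.
	 */
	obj->read_domains = EXAMPLE_DOMAIN_CPU;
	obj->write_domain = EXAMPLE_DOMAIN_CPU;
}
```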
Chris Wilson | 13f1bfd3b3 |
drm/i915: Make object/vma allocation caches global
As our allocations are not device specific, we can move our slab caches to a global scope. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190228102035.5857-2-chris@chris-wilson.co.uk |
|
Chris Wilson | 9e267d286a |
drm/i915/userptr: Fix error handling of mutex_lock_killable()
mutex_lock_killable() returns -EINTR on failure, not the anticipated bool
return like trylock. (Oh no, not again.)
Fixes:
|
|
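A minimal sketch of the corrected pattern, assuming a placeholder lock and work body: unlike `mutex_trylock()`, `mutex_lock_killable()` returns 0 on success and a negative error (-EINTR) when interrupted, so its result must be propagated rather than treated as a boolean.

```c
#include <linux/mutex.h>
#include <linux/errno.h>

static int do_locked_work(struct mutex *lock)
{
	int err;

	err = mutex_lock_killable(lock);	/* 0 on success, -EINTR if killed */
	if (err)
		return err;

	/* ... perform the work that needs the lock ... */

	mutex_unlock(lock);
	return 0;
}
```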
Chris Wilson | 484d9a844d |
drm/i915/userptr: Avoid struct_mutex recursion for mmu_invalidate_range_start
Since commit |
|
Jani Nikula | 2f80d7bd8d |
drm/i915: drop all drmP.h includes
Needs just a few additional includes here and there. Cc: Sam Ravnborg <sam@ravnborg.org> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Acked-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190108082709.3748-1-jani.nikula@intel.com |
|
Linus Torvalds | 96d4f267e4 |
Remove 'type' argument from access_ok() function
Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument of the user address range verification function since we got rid of the old racy i386-only code to walk page tables by hand. It existed because the original 80386 would not honor the write protect bit when in kernel mode, so you had to do COW by hand before doing any user access. But we haven't supported that in a long time, and these days the 'type' argument is a purely historical artifact. A discussion about extending 'user_access_begin()' to do the range checking resulted this patch, because there is no way we're going to move the old VERIFY_xyz interface to that model. And it's best done at the end of the merge window when I've done most of my merges, so let's just get this done once and for all. This patch was mostly done with a sed-script, with manual fix-ups for the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form. There were a couple of notable cases: - csky still had the old "verify_area()" name as an alias. - the iter_iov code had magical hardcoded knowledge of the actual values of VERIFY_{READ,WRITE} (not that they mattered, since nothing really used it) - microblaze used the type argument for a debug printout but other than those oddities this should be a total no-op patch. I tried to fix up all architectures, did fairly extensive grepping for access_ok() uses, and the changes are trivial, but I may have missed something. Any missed conversion should be trivially fixable, though. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
|
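An illustrative before/after of the conversion this commit performs treewide; the buffer-checking helper is hypothetical, only the `access_ok()` form is the point.

```c
#include <linux/uaccess.h>
#include <linux/errno.h>

/*
 * Before:  if (!access_ok(VERIFY_WRITE, ubuf, len)) return -EFAULT;
 * After:   the VERIFY_READ/VERIFY_WRITE type argument is gone and only
 *          the address-range check remains.
 */
static int check_user_buffer(void __user *ubuf, size_t len)
{
	if (!access_ok(ubuf, len))
		return -EFAULT;
	return 0;
}
```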
Jérôme Glisse | 5d6527a784 |
mm/mmu_notifier: use structure for invalidate_range_start/end callback
Patch series "mmu notifier contextual informations", v2. This patchset adds contextual information, why an invalidation is happening, to mmu notifier callback. This is necessary for user of mmu notifier that wish to maintains their own data structure without having to add new fields to struct vm_area_struct (vma). For instance device can have they own page table that mirror the process address space. When a vma is unmap (munmap() syscall) the device driver can free the device page table for the range. Today we do not have any information on why a mmu notifier call back is happening and thus device driver have to assume that it is always an munmap(). This is inefficient at it means that it needs to re-allocate device page table on next page fault and rebuild the whole device driver data structure for the range. Other use case beside munmap() also exist, for instance it is pointless for device driver to invalidate the device page table when the invalidation is for the soft dirtyness tracking. Or device driver can optimize away mprotect() that change the page table permission access for the range. This patchset enables all this optimizations for device drivers. I do not include any of those in this series but another patchset I am posting will leverage this. The patchset is pretty simple from a code point of view. The first two patches consolidate all mmu notifier arguments into a struct so that it is easier to add/change arguments. The last patch adds the contextual information (munmap, protection, soft dirty, clear, ...). This patch (of 3): To avoid having to change many callback definition everytime we want to add a parameter use a structure to group all parameters for the mmu_notifier invalidate_range_start/end callback. No functional changes with this patch. [akpm@linux-foundation.org: fix drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c kerneldoc] Link: http://lkml.kernel.org/r/20181205053628.3210-2-jglisse@redhat.com Signed-off-by: Jérôme Glisse <jglisse@redhat.com> Acked-by: Jan Kara <jack@suse.cz> Acked-by: Jason Gunthorpe <jgg@mellanox.com> [infiniband] Cc: Matthew Wilcox <mawilcox@microsoft.com> Cc: Ross Zwisler <zwisler@kernel.org> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krcmar <rkrcmar@redhat.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Christian Koenig <christian.koenig@amd.com> Cc: Felix Kuehling <felix.kuehling@amd.com> Cc: Ralph Campbell <rcampbell@nvidia.com> Cc: John Hubbard <jhubbard@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
|
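A hedged sketch of the callback shape after this series, with a placeholder implementation: the mm, start and end arguments now arrive bundled in a `struct mmu_notifier_range`, so new fields (such as the planned event reason) can be added without touching every driver. The structure has grown extra fields in later kernels; only mm/start/end are relied on here.

```c
#include <linux/mmu_notifier.h>

static int example_invalidate_range_start(struct mmu_notifier *mn,
					  const struct mmu_notifier_range *range)
{
	/*
	 * ... tear down any mirrored mappings covering
	 * [range->start, range->end) of range->mm ...
	 */
	return 0;
}

static const struct mmu_notifier_ops example_mn_ops = {
	.invalidate_range_start = example_invalidate_range_start,
};
```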
Michal Hocko | 93065ac753 |
mm, oom: distinguish blockable mode for mmu notifiers
There are several blockable mmu notifiers which might sleep in mmu_notifier_invalidate_range_start and that is a problem for the oom_reaper because it needs to guarantee a forward progress so it cannot depend on any sleepable locks. Currently we simply back off and mark an oom victim with blockable mmu notifiers as done after a short sleep. That can result in selecting a new oom victim prematurely because the previous one still hasn't torn its memory down yet. We can do much better though. Even if mmu notifiers use sleepable locks there is no reason to automatically assume those locks are held. Moreover majority of notifiers only care about a portion of the address space and there is absolutely zero reason to fail when we are unmapping an unrelated range. Many notifiers do really block and wait for HW which is harder to handle and we have to bail out though. This patch handles the low hanging fruit. __mmu_notifier_invalidate_range_start gets a blockable flag and callbacks are not allowed to sleep if the flag is set to false. This is achieved by using trylock instead of the sleepable lock for most callbacks and continue as long as we do not block down the call chain. I think we can improve that even further because there is a common pattern to do a range lookup first and then do something about that. The first part can be done without a sleeping lock in most cases AFAICS. The oom_reaper end then simply retries if there is at least one notifier which couldn't make any progress in !blockable mode. A retry loop is already implemented to wait for the mmap_sem and this is basically the same thing. The simplest way for driver developers to test this code path is to wrap userspace code which uses these notifiers into a memcg and set the hard limit to hit the oom. This can be done e.g. after the test faults in all the mmu notifier managed memory and set the hard limit to something really small. Then we are looking for a proper process tear down. [akpm@linux-foundation.org: coding style fixes] [akpm@linux-foundation.org: minor code simplification] Link: http://lkml.kernel.org/r/20180716115058.5559-1-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Acked-by: Christian König <christian.koenig@amd.com> # AMD notifiers Acked-by: Leon Romanovsky <leonro@mellanox.com> # mlx and umem_odp Reported-by: David Rientjes <rientjes@google.com> Cc: "David (ChunMing) Zhou" <David1.Zhou@amd.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: David Airlie <airlied@linux.ie> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Doug Ledford <dledford@redhat.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Mike Marciniszyn <mike.marciniszyn@intel.com> Cc: Dennis Dalessandro <dennis.dalessandro@intel.com> Cc: Sudeep Dutt <sudeep.dutt@intel.com> Cc: Ashutosh Dixit <ashutosh.dixit@intel.com> Cc: Dimitri Sivanich <sivanich@sgi.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Juergen Gross <jgross@suse.com> Cc: "Jérôme Glisse" <jglisse@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Felix Kuehling <felix.kuehling@amd.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
|
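A minimal sketch of the trylock pattern the commit describes, with a placeholder driver struct: when the caller cannot block (e.g. the oom reaper), the callback must either make progress without sleeping or back off with an error instead of waiting.

```c
#include <linux/mutex.h>
#include <linux/types.h>
#include <linux/errno.h>

struct example_driver {
	struct mutex lock;
};

static int example_range_start(struct example_driver *drv, bool blockable)
{
	if (blockable)
		mutex_lock(&drv->lock);
	else if (!mutex_trylock(&drv->lock))
		return -EAGAIN;		/* ask the caller to retry later */

	/* ... invalidate the affected range ... */

	mutex_unlock(&drv->lock);
	return 0;
}
```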
Chris Wilson | 0b100760e3 |
drm/i915/userptr: Enable read-only support on gen8+
On gen8 and onwards, we can mark GPU accesses through the ppGTT as being read-only, that is cause any GPU write onto that page to be discarded (not triggering a fault). This is all that we need to finally support the read-only flag for userptr! v2: Check default address space for read only support as a proxy for the user context/ppgtt. Testcase: igt/gem_userptr_blits/readonly* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com> Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180712191430.9269-1-chris@chris-wilson.co.uk |
|
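A hedged sketch of the underlying mechanism, using made-up PTE bit names rather than the real gen8 encoding: a ppGTT entry without its writable bit simply discards GPU writes, which is what makes a read-only userptr safe to expose.

```c
#include <linux/types.h>

#define EXAMPLE_PTE_PRESENT	(1ULL << 0)
#define EXAMPLE_PTE_WRITABLE	(1ULL << 1)

static u64 example_encode_pte(u64 phys_addr, bool read_only)
{
	u64 pte = phys_addr | EXAMPLE_PTE_PRESENT;

	/* Leaving the writable bit clear causes GPU writes to be dropped. */
	if (!read_only)
		pte |= EXAMPLE_PTE_WRITABLE;

	return pte;
}
```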
Matthew Auld | c11c7bfd21 |
drm/i915/userptr: reject zero user_size
Operating on a zero-sized GEM userptr object will lead to explosions.
Fixes:
|
|
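A minimal sketch of the guard described above; the argument layout only loosely mirrors the userptr ioctl and the helper name is made up.

```c
#include <linux/mm.h>
#include <linux/errno.h>
#include <linux/types.h>

static int validate_userptr_args(u64 user_ptr, u64 user_size)
{
	if (user_size == 0)
		return -EINVAL;		/* the fix: no zero-sized objects */

	/* userptr objects are page-granular in both address and size */
	if (offset_in_page(user_ptr) || offset_in_page(user_size))
		return -EINVAL;

	return 0;
}
```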
Christian König | c0a51fd07b |
drm: move read_domains and write_domain into i915
i915 is the only driver using those fields in the drm_gem_object structure, so they only waste memory for all other drivers. Move the fields into drm_i915_gem_object instead and patch the i915 code with the following sed commands: sed -i "s/obj->base.read_domains/obj->read_domains/g" drivers/gpu/drm/i915/*.c drivers/gpu/drm/i915/*/*.c sed -i "s/obj->base.write_domain/obj->write_domain/g" drivers/gpu/drm/i915/*.c drivers/gpu/drm/i915/*/*.c Change is only compile tested. v2: move fields around as suggested by Chris. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20180216124338.9087-1-christian.koenig@amd.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> |
|
Chris Wilson | a5a5ae2abe |
drm/i915: Fix kerneldoc warnings for i915_gem_userptr
drivers/gpu/drm/i915/i915_gem_userptr.c:761: warning: No description found for parameter 'dev' drivers/gpu/drm/i915/i915_gem_userptr.c:761: warning: No description found for parameter 'data' drivers/gpu/drm/i915/i915_gem_userptr.c:761: warning: No description found for parameter 'file' Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180208111328.32422-1-chris@chris-wilson.co.uk |
|
Linus Torvalds | 43f462f1c2 |
previous part 2 tag + ttm regression fix, i915,vc4,core,uapi fixes
-----BEGIN PGP SIGNATURE----- iQIcBAABAgAGBQJaF5aSAAoJEAx081l5xIa+TgcP/ijY7I5K7uJXq+KwCThM2g2Z 8MW0QM8u55Mk6PdNRQafVZSP6S/tyWS3gtjW2CmB6UFazNiQzJiVdoxeuKJerwob hyciMaYiEJ1x4Z4dJUxv7dtfdDH0duqES+rPE9znCvpW/PaR+6ohobVL2tH8QVRO 884QHTvmABU8xmfzmpViiLdrjNQaZtAzNMl0mD07NlfAI3bNpE/UIVd+vm1ADDPl avZZHjyAZFgiM9anuXPGpwOcA5LSiAkUHOKZMwfj5FOhEJjAwZy0z50Jnw/Wo7OX N8ymDk7vRv/Q/stOk2m/yMuoDrEtG3os4L0cyDXFIumEVVsqE7Y5WMw5tvDULw6E WaSYr+F7t0e9OwB6w5yKRp+t97lKK1O7KZ0HA8NW0EgERHD+8/XLojr8BBAqJqxH mo3DVMfU7fmm7uOIBrjHGdkyWEni/Bqk/Vxo6rOTKVeRYWiCA4fNHvM7TN7h8DZA VlDEHB3l2k44T0ONE4vo/LgEg1Ta7B3whv0qKykYbcNK8scEBU5iV1znT+zRzJYY /cwuT+BxfTgXCKAveMi6FKvjvIohR9TLyj7BS6/QUK4mD+9V5AnERcorZoO6/8qY qiPjVDvN1BNrueyHRg162AlRXqxnvt8LFdVt2QIn8kAuXHbXOn6RMUMP49OLGlB3 g0hpJ0MOwuHUKQcnW60d =3TmE -----END PGP SIGNATURE----- Merge tag 'drm-for-v4.15-part2-fixes' of git://people.freedesktop.org/~airlied/linux Pull drm fixes from Dave Airlie: - TTM regression fix for some virt gpus (bochs vga) - a few i915 stable fixes - one vc4 fix - one uapi fix * tag 'drm-for-v4.15-part2-fixes' of git://people.freedesktop.org/~airlied/linux: drm/ttm: don't attempt to use hugepages if dma32 requested (v2) drm/vblank: Pass crtc_id to page_flip_ioctl. drm/i915: Fix init_clock_gating for resume drm/i915: Mark the userptr invalidate workqueue as WQ_MEM_RECLAIM drm/i915: Clear breadcrumb node when cancelling signaling drm/i915/gvt: ensure -ve return value is handled correctly drm/i915: Re-register PMIC bus access notifier on runtime resume drm/i915: Fix false-positive assert_rpm_wakelock_held in i915_pmic_bus_access_notifier v2 drm/edid: Don't send non-zero YQ in AVI infoframe for HDMI 1.x sinks drm/vc4: Account for interrupts in flight |
|
Chris Wilson | 457db89b53 |
drm/i915: Mark the userptr invalidate workqueue as WQ_MEM_RECLAIM
Commit |
|
Linus Torvalds | e60e1ee606 |
main drm pull request for v4.15
-----BEGIN PGP SIGNATURE----- iQIcBAABAgAGBQJaCm8RAAoJEAx081l5xIa+zX0QAJSm31kCG3vdw2CNiRx25L3q 3hcsEOgAjVJ9FQVGKFWjzb8TK35tSqtNx5kWIj0VGaIfBE5Bdg5SLLgKKUYas8rY 4LaphqICq2uxu2BNa2tpiar/sHhAnuozwQ4czpVWXzlaISnb9yYzRl7gMuyUVGkx +Gih5VUhLmQC0HsRTLJ3vaZQoUsLAl2gAjKcWa1bx57j2S+iKOPfsLaq7VYo+y1I Njc+iSGqMhJzRLXVkxL2lQKaslp7R38Bbh5K4Kvyjkm4Aq7zErOF6irpOXKMcrGl mwnr89vf1G9thjikrBaXpKnuvdbWYveoN/ORMlTdCfxkFnChHLnm3bd7NJ49RXDN Hv/Iq9YYjmZ9GTatxnx7lWtmXnZXC5he1yn1JAuz/yt7/0b/Wx+Mu/wEpBXYNFTd 1AZdD586i+AmPo3yDkqH9nBu8JC0W0AnS9VZma4LVvZOP2UfJmj5Im1CLHItbGDN FnUCkwyD/lJUUk+WgT+w/GOMJgmFHDiFFl4tFtYVVjrUirpCFVguSKG9xuv6tT8P 8iRsoP7RrcmDN9ojN2SEHwcpsAv3HnKkDv+9+GIbWnrGsSbCPq8Qm+JDSvf4h22I K5lwNpJrcpSKI+q10L7w2xliTBwb98sJkWGA/rssomrdBOWteGZAyqFRYAVgQ+mJ x/nJurIqQYh2KQN9+uLG =xVV2 -----END PGP SIGNATURE----- Merge tag 'drm-for-v4.15' of git://people.freedesktop.org/~airlied/linux Pull drm updates from Dave Airlie: "This is the main drm pull request for v4.15. Core: - Atomic object lifetime fixes - Atomic iterator improvements - Sparse/smatch fixes - Legacy kms ioctls to be interruptible - EDID override improvements - fb/gem helper cleanups - Simple outreachy patches - Documentation improvements - Fix dma-buf rcu races - DRM mode object leasing for improving VR use cases. - vgaarb improvements for non-x86 platforms. New driver: - tve200: Faraday Technology TVE200 block. This "TV Encoder" encodes a ITU-T BT.656 stream and can be found in the StorLink SL3516 (later Cortina Systems CS3516) as well as the Grain Media GM8180. New bridges: - SiI9234 support New panels: - S6E63J0X03, OTM8009A, Seiko 43WVF1G, 7" rpi touch panel, Toshiba LT089AC19000, Innolux AT043TN24 i915: - Remove Coffeelake from alpha support - Cannonlake workarounds - Infoframe refactoring for DisplayPort - VBT updates - DisplayPort vswing/emph/buffer translation refactoring - CCS fixes - Restore GPU clock boost on missed vblanks - Scatter list updates for userptr allocations - Gen9+ transition watermarks - Display IPC (Isochronous Priority Control) - Private PAT management - GVT: improved error handling and pci config sanitizing - Execlist refactoring - Transparent Huge Page support - User defined priorities support - HuC/GuC firmware refactoring - DP MST fixes - eDP power sequencing fixes - Use RCU instead of stop_machine - PSR state tracking support - Eviction fixes - BDW DP aux channel timeout fixes - LSPCON fixes - Cannonlake PLL fixes amdgpu: - Per VM BO support - Powerplay cleanups - CI powerplay support - PASID mgr for kfd - SR-IOV fixes - initial GPU reset for vega10 - Prime mmap support - TTM updates - Clock query interface for Raven - Fence to handle ioctl - UVD encode ring support on Polaris - Transparent huge page DMA support - Compute LRU pipe tweaks - BO flag to allow buffers to opt out of implicit sync - CTX priority setting API - VRAM lost infrastructure plumbing qxl: - fix flicker since atomic rework amdkfd: - Further improvements from internal AMD tree - Usermode events - Drop radeon support nouveau: - Pascal temperature sensor support - Improved BAR2 handling - MMU rework to support Pascal MMU exynos: - Improved HDMI/mixer support - HDMI audio interface support tegra: - Prep work for tegra186 - Cleanup/fixes msm: - Preemption support for a5xx - Display fixes for 8x96 (snapdragon 820) - Async cursor plane fixes - FW loading rework - GPU debugging improvements vc4: - Prep for DSI panels - fix T-format tiling scanout - New madvise ioctl Rockchip: - LVDS support omapdrm: - omap4 HDMI CEC support etnaviv: - GPU performance counters 
groundwork sun4i: - refactor driver load + TCON backend - HDMI improvements - A31 support - Misc fixes udl: - Probe/EDID read fixes. tilcdc: - Misc fixes. pl111: - Support more variants adv7511: - Improve EDID handling. - HDMI CEC support sii8620: - Add remote control support" * tag 'drm-for-v4.15' of git://people.freedesktop.org/~airlied/linux: (1480 commits) drm/rockchip: analogix_dp: Use mutex rather than spinlock drm/mode_object: fix documentation for object lookups. drm/i915: Reorder context-close to avoid calling i915_vma_close() under RCU drm/i915: Move init_clock_gating() back to where it was drm/i915: Prune the reservation shared fence array drm/i915: Idle the GPU before shinking everything drm/i915: Lock llist_del_first() vs llist_del_all() drm/i915: Calculate ironlake intermediate watermarks correctly, v2. drm/i915: Disable lazy PPGTT page table optimization for vGPU drm/i915/execlists: Remove the priority "optimisation" drm/i915: Filter out spurious execlists context-switch interrupts drm/amdgpu: use irq-safe lock for kiq->ring_lock drm/amdgpu: bypass lru touch for KIQ ring submission drm/amdgpu: Potential uninitialized variable in amdgpu_vm_update_directories() drm/amdgpu: potential uninitialized variable in amdgpu_vce_ring_parse_cs() drm/amd/powerplay: initialize a variable before using it drm/amd/powerplay: suppress KASAN out of bounds warning in vega10_populate_all_memory_levels drm/amd/amdgpu: fix evicted VRAM bo adjudgement condition drm/vblank: Tune drm_crtc_accurate_vblank_count() WARN down to a debug drm/rockchip: add CONFIG_OF dependency for lvds ... |
|
Mel Gorman | c6f92f9fbe |
mm: remove cold parameter for release_pages
All callers of release_pages claim the pages being released are cache hot. As no one cares about the hotness of pages being released to the allocator, just ditch the parameter. No performance impact is expected as the overhead is marginal. The parameter is removed simply because it is a bit stupid to have a useless parameter copied everywhere. Link: http://lkml.kernel.org/r/20171018075952.10627-7-mgorman@techsingularity.net Signed-off-by: Mel Gorman <mgorman@techsingularity.net> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Andi Kleen <ak@linux.intel.com> Cc: Dave Chinner <david@fromorbit.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Jan Kara <jack@suse.cz> Cc: Johannes Weiner <hannes@cmpxchg.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
|
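An illustrative call-site conversion; the helper is hypothetical, and the header declaring `release_pages()` has moved between kernel versions (historically `<linux/swap.h>`, later `<linux/pagemap.h>`).

```c
#include <linux/pagemap.h>	/* or <linux/swap.h> on older kernels */

static void drop_page_refs(struct page **pages, int nr)
{
	/* was: release_pages(pages, nr, false); the 'cold' hint is gone */
	release_pages(pages, nr);
}
```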
Tvrtko Ursulin | cb8d50dfb3 |
drm/i915: Fixup userptr mmu notifier registration error handling
Avoid dereferencing the error pointer and also avoid returning NULL
from i915_mmu_notifier_find since the callers do not expect that.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes:
|
|
Chris Wilson | bd3d2252f9 |
drm/i915: Rename obj->pin_display to obj->pin_global
In the next patch, we want to extend use of the global pin counter for semi-permanent pinning of context/ring objects. Given that we plan to extend the usage to encompass a disparate set of objects, we want a name that reflects both and should entail less confusion. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171013202621.7276-2-chris@chris-wilson.co.uk |
|
Chris Wilson | f1fa4f442c |
drm/i915: Refactor testing obj->mm.pages
Since we occasionally stuff an error pointer into obj->mm.pages for a semi-permanent or even permanent failure, we have to be more careful and not just test against NULL when deciding if the object has a complete set of its concurrent pages. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171013202621.7276-1-chris@chris-wilson.co.uk |
|
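A minimal sketch of the check this commit motivates, with an illustrative object type: because an ERR_PTR may be parked in the pages pointer to record a (semi-)permanent failure, "does the object have pages?" must reject both NULL and error pointers.

```c
#include <linux/err.h>
#include <linux/types.h>

struct sg_table;

struct example_obj {
	struct sg_table *mm_pages;	/* NULL, ERR_PTR(), or valid */
};

static bool example_obj_has_pages(const struct example_obj *obj)
{
	return !IS_ERR_OR_NULL(obj->mm_pages);
}
```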
Daniel Vetter | 7741b547b6 |
drm/i915: Preallocate our mmu notifier workqueue to unbreak cpu hotplug deadlock
4.14-rc1 gained the fancy new cross-release support in lockdep, which seems to have uncovered a few more rules about what is allowed and isn't. This one here seems to indicate that allocating a work-queue while holding mmap_sem is a no-go, so let's try to preallocate it. Of course another way to break this chain would be somewhere in the cpu hotplug code, since this isn't the only trace we're finding now which goes through msr_create_device. Full lockdep splat: ====================================================== WARNING: possible circular locking dependency detected 4.14.0-rc1-CI-CI_DRM_3118+ #1 Tainted: G U ------------------------------------------------------ prime_mmap/1551 is trying to acquire lock: (cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffff8109dbb7>] apply_workqueue_attrs+0x17/0x50 but task is already holding lock: (&dev_priv->mm_lock){+.+.}, at: [<ffffffffa01a7b2a>] i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #6 (&dev_priv->mm_lock){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __mutex_lock+0x86/0x9b0 mutex_lock_nested+0x1b/0x20 i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 do_vfs_ioctl+0x94/0x670 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 -> #5 (&mm->mmap_sem){++++}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __might_fault+0x68/0x90 _copy_to_user+0x23/0x70 filldir+0xa5/0x120 dcache_readdir+0xf9/0x170 iterate_dir+0x69/0x1a0 SyS_getdents+0xa5/0x140 entry_SYSCALL_64_fastpath+0x1c/0xb1 -> #4 (&sb->s_type->i_mutex_key#5){++++}: down_write+0x3b/0x70 handle_create+0xcb/0x1e0 devtmpfsd+0x139/0x180 kthread+0x152/0x190 ret_from_fork+0x27/0x40 -> #3 ((complete)&req.done){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 wait_for_common+0x58/0x210 wait_for_completion+0x1d/0x20 devtmpfs_create_node+0x13d/0x160 device_add+0x5eb/0x620 device_create_groups_vargs+0xe0/0xf0 device_create+0x3a/0x40 msr_device_create+0x2b/0x40 cpuhp_invoke_callback+0xa3/0x840 cpuhp_thread_fun+0x7a/0x150 smpboot_thread_fn+0x18a/0x280 kthread+0x152/0x190 ret_from_fork+0x27/0x40 -> #2 (cpuhp_state){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 cpuhp_issue_call+0x10b/0x170 __cpuhp_setup_state_cpuslocked+0x134/0x2a0 __cpuhp_setup_state+0x46/0x60 page_writeback_init+0x43/0x67 pagecache_init+0x3d/0x42 start_kernel+0x3a8/0x3fc x86_64_start_reservations+0x2a/0x2c x86_64_start_kernel+0x6d/0x70 verify_cpu+0x0/0xfb -> #1 (cpuhp_state_mutex){+.+.}: __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __mutex_lock+0x86/0x9b0 mutex_lock_nested+0x1b/0x20 __cpuhp_setup_state_cpuslocked+0x52/0x2a0 __cpuhp_setup_state+0x46/0x60 page_alloc_init+0x28/0x30 start_kernel+0x145/0x3fc x86_64_start_reservations+0x2a/0x2c x86_64_start_kernel+0x6d/0x70 verify_cpu+0x0/0xfb -> #0 (cpu_hotplug_lock.rw_sem){++++}: check_prev_add+0x430/0x840 __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 cpus_read_lock+0x3d/0xb0 apply_workqueue_attrs+0x17/0x50 __alloc_workqueue_key+0x1d8/0x4d9 i915_gem_userptr_init__mmu_notifier+0x1fb/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 do_vfs_ioctl+0x94/0x670 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 other info that might help us debug this: Chain exists of: cpu_hotplug_lock.rw_sem --> &mm->mmap_sem --> &dev_priv->mm_lock Possible unsafe locking scenario: CPU0 CPU1 ---- 
---- lock(&dev_priv->mm_lock); lock(&mm->mmap_sem); lock(&dev_priv->mm_lock); lock(cpu_hotplug_lock.rw_sem); *** DEADLOCK *** 2 locks held by prime_mmap/1551: #0: (&mm->mmap_sem){++++}, at: [<ffffffffa01a7b18>] i915_gem_userptr_init__mmu_notifier+0x138/0x270 [i915] #1: (&dev_priv->mm_lock){+.+.}, at: [<ffffffffa01a7b2a>] i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915] stack backtrace: CPU: 4 PID: 1551 Comm: prime_mmap Tainted: G U 4.14.0-rc1-CI-CI_DRM_3118+ #1 Hardware name: Dell Inc. XPS 8300 /0Y2MRG, BIOS A06 10/17/2011 Call Trace: dump_stack+0x68/0x9f print_circular_bug+0x235/0x3c0 ? lockdep_init_map_crosslock+0x20/0x20 check_prev_add+0x430/0x840 __lock_acquire+0x1420/0x15e0 ? __lock_acquire+0x1420/0x15e0 ? lockdep_init_map_crosslock+0x20/0x20 lock_acquire+0xb0/0x200 ? apply_workqueue_attrs+0x17/0x50 cpus_read_lock+0x3d/0xb0 ? apply_workqueue_attrs+0x17/0x50 apply_workqueue_attrs+0x17/0x50 __alloc_workqueue_key+0x1d8/0x4d9 ? __lockdep_init_map+0x57/0x1c0 i915_gem_userptr_init__mmu_notifier+0x1fb/0x270 [i915] i915_gem_userptr_ioctl+0x222/0x2c0 [i915] ? i915_gem_userptr_release+0x140/0x140 [i915] drm_ioctl_kernel+0x69/0xb0 drm_ioctl+0x2f9/0x3d0 ? i915_gem_userptr_release+0x140/0x140 [i915] ? __do_page_fault+0x2a4/0x570 do_vfs_ioctl+0x94/0x670 ? entry_SYSCALL_64_fastpath+0x5/0xb1 ? __this_cpu_preempt_check+0x13/0x20 ? trace_hardirqs_on_caller+0xe3/0x1b0 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1 RIP: 0033:0x7fbb83c39587 RSP: 002b:00007fff188dc228 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: ffffffff81492963 RCX: 00007fbb83c39587 RDX: 00007fff188dc260 RSI: 00000000c0186473 RDI: 0000000000000003 RBP: ffffc90001487f88 R08: 0000000000000000 R09: 00007fff188dc2ac R10: 00007fbb83efcb58 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000003 R14: 00000000c0186473 R15: 00007fff188dc2ac ? __this_cpu_preempt_check+0x13/0x20 Note that this also has the minor benefit of slightly reducing the critical section where we hold mmap_sem. v2: Set ret correctly when we raced with another thread. v3: Use Chris' diff. Attach the right lockdep splat. v4: Repaint in Tvrtko's colors (aka don't report ENOMEM if we race and some other thread managed to not also get an ENOMEM and successfully install the mmu notifier. Note that the kernel guarantees that small allocations succeed, so this never actually happens). Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Sasha Levin <alexander.levin@verizon.com> Cc: Marta Lofstedt <marta.lofstedt@intel.com> Cc: Tejun Heo <tj@kernel.org> References: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_3180/shard-hsw3/igt@prime_mmap@test_userptr.html Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=102939 Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171009164401.16035-1-daniel.vetter@ffwll.ch |
|
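A hedged sketch of the reordering the commit describes, with placeholder names: allocate the workqueue before taking the mm-related lock, so that `cpu_hotplug_lock` (taken inside `alloc_workqueue()`) never nests inside `mmap_sem`/`mm_lock`; if another thread won the race, the spare workqueue is simply destroyed.

```c
#include <linux/workqueue.h>
#include <linux/mutex.h>
#include <linux/errno.h>

struct example_mn {
	struct mutex mm_lock;
	struct workqueue_struct *wq;
};

static int example_mn_init(struct example_mn *mn)
{
	struct workqueue_struct *wq;

	/* Allocate outside of mm_lock to keep the lock ordering clean. */
	wq = alloc_workqueue("example-userptr-wq", WQ_UNBOUND, 0);
	if (!wq)
		return -ENOMEM;

	mutex_lock(&mn->mm_lock);
	if (!mn->wq)
		mn->wq = wq;		/* we won the race */
	else
		destroy_workqueue(wq);	/* someone else installed theirs */
	mutex_unlock(&mn->mm_lock);

	return 0;
}
```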
Matthew Auld | 84e8978e62 |
drm/i915: s/sg_mask/sg_page_sizes/
It's a little unclear what the sg_mask actually is, so prefer the more meaningful name of sg_page_sizes. Suggested-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171009110024.29114-1-matthew.auld@intel.com Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> |
|
Matthew Auld | a5c0816626 |
drm/i915: introduce page_size members
In preparation for supporting huge gtt pages for the ppgtt, we introduce page size members for gem objects. We fill in the page sizes by scanning the sg table. v2: pass the sg_mask to set_pages v3: calculate the sg_mask inline with populating the sg_table where possible, and pass to set_pages along with the pages. v4: bunch of improvements from Joonas v5: fix num_pages blunder introduce i915_sg_page_sizes helper v6: prefer GEM_BUG_ON(sizes == 0) Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel@ffwll.ch> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-7-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-6-chris@chris-wilson.co.uk |
|
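A hedged sketch of the helper idea (the real `i915_sg_page_sizes()` is essentially this shape): OR together the segment lengths of the sg_table to obtain a bitmask of the page sizes present in the backing store.

```c
#include <linux/scatterlist.h>

static unsigned int example_sg_page_sizes(struct sg_table *st)
{
	struct scatterlist *sg;
	unsigned int page_sizes = 0;
	unsigned int i;

	/* OR in each segment length; the set bits cover the page sizes used */
	for_each_sg(st->sgl, sg, st->nents, i)
		page_sizes |= sg->length;

	return page_sizes;
}
```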
Matthew Auld | b91b09eea7 |
drm/i915: push set_pages down to the callers
Each backend is now responsible for calling __i915_gem_object_set_pages upon successfully gathering its backing storage. This eliminates the inconsistency between the async and sync paths, which stands out even more when we start throwing around an sg_mask in a later patch. Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-6-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-5-chris@chris-wilson.co.uk |
|
Jani Nikula | 32f35b8634 |
Merge drm-upstream/drm-next into drm-intel-next-queued
Need MST sideband message transaction to power up/down nodes. Signed-off-by: Jani Nikula <jani.nikula@intel.com> |
|
Chris Wilson | 21cc6431e0 |
drm/i915: Mark the userptr invalidate workqueue as WQ_MEM_RECLAIM
To silence the critcs: [56532.161115] workqueue: PF_MEMALLOC task 36(khugepaged) is flushing !WQ_MEM_RECLAIM i915-userptr-release: (null) [56532.161138] ------------[ cut here ]------------ [56532.161144] WARNING: CPU: 1 PID: 36 at kernel/workqueue.c:2418 check_flush_dependency+0xe8/0xf0 [56532.161145] Modules linked in: wmi_bmof [56532.161148] CPU: 1 PID: 36 Comm: khugepaged Not tainted 4.13.0-krejzi #1 [56532.161149] Hardware name: HP HP ProBook 470 G3/8102, BIOS N78 Ver. 01.17 06/08/2017 [56532.161150] task: ffff8802371ee200 task.stack: ffffc90000174000 [56532.161152] RIP: 0010:check_flush_dependency+0xe8/0xf0 [56532.161152] RSP: 0018:ffffc900001777b8 EFLAGS: 00010286 [56532.161153] RAX: 000000000000006c RBX: ffff88022fc5a000 RCX: 0000000000000001 [56532.161154] RDX: 0000000000000000 RSI: 0000000000000086 RDI: 00000000ffffffff [56532.161155] RBP: 0000000000000000 R08: 14f038bb55f6dae0 R09: 0000000000000516 [56532.161155] R10: ffffc900001778a0 R11: 000000006c756e28 R12: ffff8802371ee200 [56532.161156] R13: 0000000000000000 R14: 000000000000000b R15: ffffc90000177810 [56532.161157] FS: 0000000000000000(0000) GS:ffff880240480000(0000) knlGS:0000000000000000 [56532.161158] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [56532.161158] CR2: 0000000004795ff8 CR3: 000000000220a000 CR4: 00000000003406e0 [56532.161159] Call Trace: [56532.161161] ? flush_workqueue+0x136/0x3e0 [56532.161178] ? _raw_spin_unlock_irqrestore+0xf/0x30 [56532.161179] ? try_to_wake_up+0x1ce/0x3b0 [56532.161183] ? i915_gem_userptr_mn_invalidate_range_start+0x13f/0x150 [56532.161184] ? _raw_spin_unlock+0xd/0x20 [56532.161186] ? i915_gem_userptr_mn_invalidate_range_start+0x13f/0x150 [56532.161189] ? __mmu_notifier_invalidate_range_start+0x4a/0x70 [56532.161191] ? try_to_unmap_one+0x5e5/0x660 [56532.161193] ? rmap_walk_file+0xe4/0x240 [56532.161195] ? __ClearPageMovable+0x10/0x10 [56532.161196] ? try_to_unmap+0x8c/0xe0 [56532.161197] ? page_remove_rmap+0x280/0x280 [56532.161199] ? page_not_mapped+0x10/0x10 [56532.161200] ? page_get_anon_vma+0x90/0x90 [56532.161202] ? migrate_pages+0x6a5/0x940 [56532.161203] ? isolate_freepages_block+0x330/0x330 [56532.161205] ? compact_zone+0x593/0x6a0 [56532.161206] ? enqueue_task_fair+0xc3/0x1180 [56532.161208] ? compact_zone_order+0x9b/0xc0 [56532.161210] ? get_page_from_freelist+0x24a/0x900 [56532.161212] ? try_to_compact_pages+0xc8/0x240 [56532.161213] ? try_to_compact_pages+0xc8/0x240 [56532.161215] ? __alloc_pages_direct_compact+0x45/0xe0 [56532.161216] ? __alloc_pages_slowpath+0x845/0xb90 [56532.161218] ? __alloc_pages_nodemask+0x176/0x1f0 [56532.161220] ? wait_woken+0x80/0x80 [56532.161222] ? khugepaged+0x29e/0x17d0 [56532.161223] ? wait_woken+0x80/0x80 [56532.161225] ? collapse_shmem.isra.39+0xa60/0xa60 [56532.161226] ? kthread+0x10d/0x130 [56532.161227] ? kthread_create_on_node+0x60/0x60 [56532.161228] ? ret_from_fork+0x22/0x30 [56532.161229] Code: 00 8b b0 10 05 00 00 48 8d 8b b0 00 00 00 48 8d 90 b8 06 00 00 49 89 e8 48 c7 c7 38 55 09 82 c6 05 f9 c6 1d 01 01 e8 0e a1 03 00 <0f> ff e9 6b ff ff ff 90 48 8b 37 40 f6 c6 04 75 1b 48 c1 ee 05 [56532.161251] ---[ end trace 2ce2b4f5f69b803b ]--- Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20170911084135.22903-2-chris@chris-wilson.co.uk Reviewed-by: Michał Winiarski <michal.winiarski@intel.com> |
|
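A minimal sketch of the change both WQ_MEM_RECLAIM commits in this log make, with an illustrative queue name: a workqueue that gets flushed from the reclaim path must be created with `WQ_MEM_RECLAIM` so a rescuer thread guarantees forward progress under memory pressure, which is exactly what the warning quoted above complains about.

```c
#include <linux/workqueue.h>

static struct workqueue_struct *create_release_wq(void)
{
	/* WQ_MEM_RECLAIM provides a rescuer so flushes from reclaim cannot hang */
	return alloc_workqueue("example-userptr-release",
			       WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
}
```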
Michal Hocko | 0ee931c4e3 |
mm: treewide: remove GFP_TEMPORARY allocation flag
GFP_TEMPORARY was introduced by commit
|
|
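An illustrative conversion, under the assumption (consistent with the treewide change) that call sites simply switch to GFP_KERNEL since GFP_TEMPORARY never carried distinct semantics; the helper itself is made up.

```c
#include <linux/slab.h>

static void *alloc_scratch(size_t count, size_t elem_size)
{
	/* was: kmalloc_array(count, elem_size, GFP_TEMPORARY) */
	return kmalloc_array(count, elem_size, GFP_KERNEL);
}
```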
Davidlohr Bueso | f808c13fd3 |
lib/interval_tree: fast overlap detection
Allow interval trees to quickly check for overlaps to avoid unnecesary tree lookups in interval_tree_iter_first(). As of this patch, all interval tree flavors will require using a 'rb_root_cached' such that we can have the leftmost node easily available. While most users will make use of this feature, those with special functions (in addition to the generic insert, delete, search calls) will avoid using the cached option as they can do funky things with insertions -- for example, vma_interval_tree_insert_after(). [jglisse@redhat.com: fix deadlock from typo vm_lock_anon_vma()] Link: http://lkml.kernel.org/r/20170808225719.20723-1-jglisse@redhat.com Link: http://lkml.kernel.org/r/20170719014603.19029-12-dave@stgolabs.net Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Signed-off-by: Jérôme Glisse <jglisse@redhat.com> Acked-by: Christian König <christian.koenig@amd.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Doug Ledford <dledford@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Cc: David Airlie <airlied@linux.ie> Cc: Jason Wang <jasowang@redhat.com> Cc: Christian Benvenuti <benve@cisco.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
|
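A hedged sketch of the API after this series: interval trees are rooted in an `rb_root_cached`, which keeps the leftmost node cached, and `interval_tree_iter_first()` takes that cached root.

```c
#include <linux/interval_tree.h>
#include <linux/rbtree.h>
#include <linux/types.h>

static struct rb_root_cached example_itree = RB_ROOT_CACHED;

static bool example_range_overlaps(unsigned long start, unsigned long last)
{
	/* returns the first overlapping node in [start, last], or NULL */
	return interval_tree_iter_first(&example_itree, start, last) != NULL;
}
```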
Tvrtko Ursulin | 5602452e4c |
drm/i915: Use __sg_alloc_table_from_pages for userptr allocations
With the addition of __sg_alloc_table_from_pages we can control the maximum coalescing size and eliminate a separate path for allocating backing store here. Similar to |
|
Chris Wilson | b8f55be644 |
drm/i915: Split obj->cache_coherent to track r/w
Another month, another story in the cache coherency saga. This time, we
come to the realisation that i915_gem_object_is_coherent() has been
reporting whether we can read from the target without requiring a cache
invalidate; but we were using it in places for testing whether we could
write into the object without requiring a cache flush. So split the
tracking into two, one to decide before reads, one after writes.
See commit
|
|
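A minimal sketch of the split described above, with illustrative flag names (the driver's real flags are spelled differently): one bit answers "can we read without an invalidate?", the other "can we write without a flush?".

```c
#include <linux/types.h>

#define EXAMPLE_COHERENT_FOR_READ	(1u << 0)
#define EXAMPLE_COHERENT_FOR_WRITE	(1u << 1)

struct example_obj {
	unsigned int cache_coherent;	/* bitmask of the flags above */
	bool cache_dirty;
};

static bool needs_clflush_before_read(const struct example_obj *obj)
{
	return !(obj->cache_coherent & EXAMPLE_COHERENT_FOR_READ);
}

static bool needs_clflush_after_write(const struct example_obj *obj)
{
	return !(obj->cache_coherent & EXAMPLE_COHERENT_FOR_WRITE);
}
```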
Chris Wilson | 8a2421bd0d |
drm/i915: Wait upon userptr get-user-pages within execbuffer
This simply hides the EAGAIN caused by userptr when userspace causes resource contention. However, it is quite beneficial with highly contended userptr users as we avoid repeating the setup costs and kernel-user context switches. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Michał Winiarski <michal.winiarski@intel.com> |
|
Chris Wilson | 7fc92e96c3 |
drm/i915: Store i915_gem_object_is_coherent() as a bit next to cache-dirty
For ease of use (i.e. avoiding a few checks and function calls), store the object's cache coherency next to the cache is dirty bit. Specifically this patch aims to reduce the frequency of no-op calls to i915_gem_object_clflush() to counter-act the increase of such calls for GPU only objects in the previous patch. v2: Replace cache_dirty & ~cache_coherent with cache_dirty && !cache_coherent as gcc generates much better code for the latter (Tvrtko) Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Dongwon Kim <dongwon.kim@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Tested-by: Dongwon Kim <dongwon.kim@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170616105455.16977-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> |
|
Chris Wilson | e27ab73d17 |
drm/i915: Mark CPU cache as dirty on every transition for CPU writes
Currently, we only mark the CPU cache as dirty if we skip a clflush.
This leads to some confusion where we have to ask if the object is in
the write domain or missed a clflush. If we always mark the cache as
dirty, this becomes a much simpler question to answer.
The goal remains to do as few clflushes as required and to do them as
late as possible, in the hope of deferring the work to a kthread and not
blocking the caller (e.g. execbuf, flips).
v2: Always call clflush before GPU execution when the cache_dirty flag
is set. This may cause some extra work on llc systems that migrate dirty
buffers back and forth - but we do try to limit that by only setting
cache_dirty at the end of the gpu sequence.
v3: Always mark the cache as dirty upon a level change, as we need to
invalidate any stale cachelines due to external writes.
Reported-by: Dongwon Kim <dongwon.kim@intel.com>
Fixes:
|
|
Michal Hocko | 2098105ec6 |
drm: drop drm_[cm]alloc* helpers
Now that drm_[cm]alloc* helpers are simple one line wrappers around kvmalloc_array and drm_free_large is just kvfree alias we can drop them and replace by their native forms. This shouldn't introduce any functional change. Changes since v1 - fix typo in drivers/gpu//drm/etnaviv/etnaviv_gem.c - noticed by 0day build robot Suggested-by: Daniel Vetter <daniel@ffwll.ch> Signed-off-by: Michal Hocko <mhocko@suse.com>drm: drop drm_[cm]alloc* helpers [danvet: Fixup vgem which grew another user very recently.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Acked-by: Christian König <christian.koenig@amd.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170517122312.GK18247@dhcp22.suse.cz |
|
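An illustrative before/after of the conversion; note that `kvmalloc_array()`/`kvfree()` have historically been declared in `<linux/mm.h>` (more recently `<linux/slab.h>`), so both headers are included here.

```c
#include <linux/mm.h>
#include <linux/slab.h>

static unsigned int *alloc_reloc_array(unsigned int count)
{
	/* was: drm_malloc_ab(count, sizeof(unsigned int)) */
	return kvmalloc_array(count, sizeof(unsigned int), GFP_KERNEL);
}

static void free_reloc_array(unsigned int *relocs)
{
	/* was: drm_free_large(relocs) */
	kvfree(relocs);
}
```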
Chris Wilson | 15c344f4d0 |
drm/i915/userptr: Reinvent GGTT self-faulting protection
lockdep doesn't like us taking the mm->mmap_sem inside the get_pages
callback for a couple of reasons. The straightforward deadlock:
[13755.434059] =============================================
[13755.434061] [ INFO: possible recursive locking detected ]
[13755.434064] 4.11.0-rc1-CI-CI_DRM_297+ #1 Tainted: G U
[13755.434066] ---------------------------------------------
[13755.434068] gem_userptr_bli/8398 is trying to acquire lock:
[13755.434070] (&mm->mmap_sem){++++++}, at: [<ffffffffa00c988a>] i915_gem_userptr_get_pages+0x5a/0x2e0 [i915]
[13755.434096]
but task is already holding lock:
[13755.434098] (&mm->mmap_sem){++++++}, at: [<ffffffff8104d485>] __do_page_fault+0x105/0x560
[13755.434105]
other info that might help us debug this:
[13755.434108] Possible unsafe locking scenario:
[13755.434110] CPU0
[13755.434111] ----
[13755.434112] lock(&mm->mmap_sem);
[13755.434115] lock(&mm->mmap_sem);
[13755.434117]
*** DEADLOCK ***
[13755.434121] May be due to missing lock nesting notation
[13755.434126] 2 locks held by gem_userptr_bli/8398:
[13755.434128] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff8104d485>] __do_page_fault+0x105/0x560
[13755.434135] #1: (&obj->mm.lock){+.+.+.}, at: [<ffffffffa00b887d>] __i915_gem_object_get_pages+0x1d/0x70 [i915]
[13755.434156]
stack backtrace:
[13755.434161] CPU: 3 PID: 8398 Comm: gem_userptr_bli Tainted: G U 4.11.0-rc1-CI-CI_DRM_297+ #1
[13755.434165] Hardware name: GIGABYTE GB-BKi7(H)A-7500/MFLP7AP-00, BIOS F4 02/20/2017
[13755.434169] Call Trace:
[13755.434174] dump_stack+0x67/0x92
[13755.434178] __lock_acquire+0x133a/0x1b50
[13755.434182] lock_acquire+0xc9/0x220
[13755.434200] ? i915_gem_userptr_get_pages+0x5a/0x2e0 [i915]
[13755.434204] down_read+0x42/0x70
[13755.434221] ? i915_gem_userptr_get_pages+0x5a/0x2e0 [i915]
[13755.434238] i915_gem_userptr_get_pages+0x5a/0x2e0 [i915]
[13755.434255] ____i915_gem_object_get_pages+0x25/0x60 [i915]
[13755.434272] __i915_gem_object_get_pages+0x59/0x70 [i915]
[13755.434288] i915_gem_fault+0x397/0x6a0 [i915]
[13755.434304] ? i915_gem_fault+0x1a1/0x6a0 [i915]
[13755.434308] ? __lock_acquire+0x449/0x1b50
[13755.434311] ? __lock_acquire+0x449/0x1b50
[13755.434315] ? vm_mmap_pgoff+0xa9/0xd0
[13755.434318] __do_fault+0x19/0x70
[13755.434321] __handle_mm_fault+0x863/0xe50
[13755.434325] handle_mm_fault+0x17f/0x370
[13755.434329] ? handle_mm_fault+0x40/0x370
[13755.434332] __do_page_fault+0x279/0x560
[13755.434336] do_page_fault+0xc/0x10
[13755.434339] page_fault+0x22/0x30
[13755.434342] RIP: 0033:0x7f5ab91b5880
[13755.434345] RSP: 002b:00007fff62922218 EFLAGS: 00010216
[13755.434348] RAX: 0000000000b74500 RBX: 00007f5ab7f81000 RCX: 0000000000000000
[13755.434352] RDX: 0000000000100000 RSI: 00007f5ab7f81000 RDI: 00007f5aba61c000
[13755.434355] RBP: 00007f5aba61c000 R08: 0000000000000007 R09: 0000000100000000
[13755.434359] R10: 000000000000037d R11: 00007f5ab91b5840 R12: 0000000000000001
[13755.434362] R13: 0000000000000005 R14: 0000000000000001 R15: 0000000000000000
and cyclic deadlocks:
[ 2566.458979] ======================================================
[ 2566.459054] [ INFO: possible circular locking dependency detected ]
[ 2566.459127] 4.11.0-rc1+ #26 Not tainted
[ 2566.459194] -------------------------------------------------------
[ 2566.459266] gem_streaming_w/759 is trying to acquire lock:
[ 2566.459334] (&obj->mm.lock){+.+.+.}, at: [<ffffffffa034bc80>] i915_gem_object_pin_pages+0x0/0xc0 [i915]
[ 2566.459605]
[ 2566.459605] but task is already holding lock:
[ 2566.459699] (&mm->mmap_sem){++++++}, at: [<ffffffff8106fd11>] __do_page_fault+0x121/0x500
[ 2566.459814]
[ 2566.459814] which lock already depends on the new lock.
[ 2566.459814]
[ 2566.459934]
[ 2566.459934] the existing dependency chain (in reverse order) is:
[ 2566.460030]
[ 2566.460030] -> #1 (&mm->mmap_sem){++++++}:
[ 2566.460139] lock_acquire+0xfe/0x220
[ 2566.460214] down_read+0x4e/0x90
[ 2566.460444] i915_gem_userptr_get_pages+0x6e/0x340 [i915]
[ 2566.460669] ____i915_gem_object_get_pages+0x8b/0xd0 [i915]
[ 2566.460900] __i915_gem_object_get_pages+0x6a/0x80 [i915]
[ 2566.461132] __i915_vma_do_pin+0x7fa/0x930 [i915]
[ 2566.461352] eb_add_vma+0x67b/0x830 [i915]
[ 2566.461572] eb_lookup_vmas+0xafe/0x1010 [i915]
[ 2566.461792] i915_gem_do_execbuffer+0x715/0x2870 [i915]
[ 2566.462012] i915_gem_execbuffer2+0x106/0x2b0 [i915]
[ 2566.462152] drm_ioctl+0x36c/0x670 [drm]
[ 2566.462236] do_vfs_ioctl+0x12c/0xa60
[ 2566.462317] SyS_ioctl+0x41/0x70
[ 2566.462399] entry_SYSCALL_64_fastpath+0x1c/0xb1
[ 2566.462477]
[ 2566.462477] -> #0 (&obj->mm.lock){+.+.+.}:
[ 2566.462587] __lock_acquire+0x1602/0x1790
[ 2566.462661] lock_acquire+0xfe/0x220
[ 2566.462893] i915_gem_object_pin_pages+0x4c/0xc0 [i915]
[ 2566.463116] i915_gem_fault+0x2c2/0x8c0 [i915]
[ 2566.463197] __do_fault+0x42/0x130
[ 2566.463276] __handle_mm_fault+0x92c/0x1280
[ 2566.463356] handle_mm_fault+0x1e2/0x440
[ 2566.463443] __do_page_fault+0x1c4/0x500
[ 2566.463529] do_page_fault+0xc/0x10
[ 2566.463613] page_fault+0x1f/0x30
[ 2566.463693]
[ 2566.463693] other info that might help us debug this:
[ 2566.463693]
[ 2566.463820] Possible unsafe locking scenario:
[ 2566.463820]
[ 2566.463918] CPU0 CPU1
[ 2566.463988] ---- ----
[ 2566.464068] lock(&mm->mmap_sem);
[ 2566.464143] lock(&obj->mm.lock);
[ 2566.464226] lock(&mm->mmap_sem);
[ 2566.464304] lock(&obj->mm.lock);
[ 2566.464378]
[ 2566.464378] *** DEADLOCK ***
[ 2566.464378]
[ 2566.464504] 1 lock held by gem_streaming_w/759:
[ 2566.464576] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff8106fd11>] __do_page_fault+0x121/0x500
[ 2566.464699]
[ 2566.464699] stack backtrace:
[ 2566.464801] CPU: 0 PID: 759 Comm: gem_streaming_w Not tainted 4.11.0-rc1+ #26
[ 2566.464881] Hardware name: GIGABYTE GB-BXBT-1900/MZBAYAB-00, BIOS F8 03/02/2016
[ 2566.464983] Call Trace:
[ 2566.465061] dump_stack+0x68/0x9f
[ 2566.465144] print_circular_bug+0x20b/0x260
[ 2566.465234] __lock_acquire+0x1602/0x1790
[ 2566.465323] ? debug_check_no_locks_freed+0x1a0/0x1a0
[ 2566.465564] ? i915_gem_object_wait+0x238/0x650 [i915]
[ 2566.465657] ? debug_lockdep_rcu_enabled.part.4+0x1a/0x30
[ 2566.465749] lock_acquire+0xfe/0x220
[ 2566.465985] ? i915_sg_trim+0x1b0/0x1b0 [i915]
[ 2566.466223] i915_gem_object_pin_pages+0x4c/0xc0 [i915]
[ 2566.466461] ? i915_sg_trim+0x1b0/0x1b0 [i915]
[ 2566.466699] i915_gem_fault+0x2c2/0x8c0 [i915]
[ 2566.466939] ? i915_gem_pwrite_ioctl+0xce0/0xce0 [i915]
[ 2566.467030] ? __lock_acquire+0x642/0x1790
[ 2566.467122] ? __lock_acquire+0x642/0x1790
[ 2566.467209] ? debug_lockdep_rcu_enabled+0x35/0x40
[ 2566.467299] ? get_unmapped_area+0x1b4/0x1d0
[ 2566.467387] __do_fault+0x42/0x130
[ 2566.467474] __handle_mm_fault+0x92c/0x1280
[ 2566.467564] ? __pmd_alloc+0x1e0/0x1e0
[ 2566.467651] ? vm_mmap_pgoff+0x160/0x190
[ 2566.467740] ? handle_mm_fault+0x111/0x440
[ 2566.467827] handle_mm_fault+0x1e2/0x440
[ 2566.467914] ? handle_mm_fault+0x5d/0x440
[ 2566.468002] __do_page_fault+0x1c4/0x500
[ 2566.468090] do_page_fault+0xc/0x10
[ 2566.468180] page_fault+0x1f/0x30
[ 2566.468263] RIP: 0033:0x557895ced32a
[ 2566.468337] RSP: 002b:00007fffd6dd8a10 EFLAGS: 00010202
[ 2566.468419] RAX: 00007f659a4db000 RBX: 0000000000000003 RCX: 00007f659ad032da
[ 2566.468501] RDX: 0000000000000000 RSI: 0000000000100000 RDI: 0000000000000000
[ 2566.468586] RBP: 0000000000000007 R08: 0000000000000003 R09: 0000000100000000
[ 2566.468667] R10: 0000000000000001 R11: 0000000000000246 R12: 0000557895ceda60
[ 2566.468749] R13: 0000000000000001 R14: 00007fffd6dd8ac0 R15: 00007f659a4db000
By checking the status of the gup worker (serialized by the
obj->mm.lock) we can determine whether it is still active, has failed or
has succeeded. If the worker is still active (or failed), we know that
it cannot be bound and so we can skip taking struct_mutex (risking
potential recursion). As we check the worker status, we mark it to
discard any partial results, forcing us to restart on the next
get_pages.
Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Fixes:
|
|
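A hedged sketch, under invented names, of the decision the closing paragraph above describes: the state of the asynchronous get-user-pages worker is inspected under the object's own lock, and anything not yet complete both skips the struct_mutex-recursive path and is marked so that partial results are discarded on the next get_pages.

```c
#include <linux/mutex.h>
#include <linux/types.h>

enum example_gup_state { GUP_ACTIVE, GUP_FAILED, GUP_DONE };

struct example_userptr {
	struct mutex pages_lock;	/* serialises with the gup worker */
	enum example_gup_state gup_state;
	bool discard_partial;
};

static bool example_can_take_locked_path(struct example_userptr *obj)
{
	bool done;

	mutex_lock(&obj->pages_lock);
	done = obj->gup_state == GUP_DONE;
	if (!done)
		obj->discard_partial = true;	/* restart cleanly next time */
	mutex_unlock(&obj->pages_lock);

	return done;
}
```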
Chris Wilson | 1c8782dd31 |
drm/i915/userptr: Disallow wrapping GTT into a userptr
If we allow the user to convert a GTT mmap address into a userptr, we may end up in recursion hell, where currently we hit a mutex deadlock but other possibilities include use-after-free during the unbind/cancel_userptr. [ 143.203989] gem_userptr_bli D 0 902 898 0x00000000 [ 143.204054] Call Trace: [ 143.204137] __schedule+0x511/0x1180 [ 143.204195] ? pci_mmcfg_check_reserved+0xc0/0xc0 [ 143.204274] schedule+0x57/0xe0 [ 143.204327] schedule_timeout+0x383/0x670 [ 143.204374] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204457] ? trace_hardirqs_on_thunk+0x1a/0x1c [ 143.204507] ? usleep_range+0x110/0x110 [ 143.204657] ? irq_exit+0x89/0x100 [ 143.204710] ? retint_kernel+0x2d/0x2d [ 143.204794] ? trace_hardirqs_on_caller+0x187/0x280 [ 143.204857] ? _raw_spin_unlock_irq+0x33/0x60 [ 143.204944] wait_for_common+0x1f0/0x2f0 [ 143.205006] ? out_of_line_wait_on_atomic_t+0x170/0x170 [ 143.205103] ? wake_up_q+0xa0/0xa0 [ 143.205159] ? flush_workqueue_prep_pwqs+0x15a/0x2c0 [ 143.205237] wait_for_completion+0x1d/0x20 [ 143.205292] flush_workqueue+0x2e9/0xbb0 [ 143.205339] ? flush_workqueue+0x163/0xbb0 [ 143.205418] ? __schedule+0x533/0x1180 [ 143.205498] ? check_flush_dependency+0x1a0/0x1a0 [ 143.205681] i915_gem_userptr_mn_invalidate_range_start+0x1c7/0x270 [i915] [ 143.205865] ? i915_gem_userptr_dmabuf_export+0x40/0x40 [i915] [ 143.205955] __mmu_notifier_invalidate_range_start+0xc6/0x120 [ 143.206044] ? __mmu_notifier_invalidate_range_start+0x51/0x120 [ 143.206123] zap_page_range_single+0x1c7/0x1f0 [ 143.206171] ? unmap_single_vma+0x160/0x160 [ 143.206260] ? unmap_mapping_range+0xa9/0x1b0 [ 143.206308] ? vma_interval_tree_subtree_search+0x75/0xd0 [ 143.206397] unmap_mapping_range+0x18f/0x1b0 [ 143.206444] ? zap_vma_ptes+0x70/0x70 [ 143.206524] ? __pm_runtime_resume+0x67/0xa0 [ 143.206723] i915_gem_release_mmap+0x1ba/0x1c0 [i915] [ 143.206846] i915_vma_unbind+0x5c2/0x690 [i915] [ 143.206925] ? __lock_is_held+0x52/0x100 [ 143.207076] i915_gem_object_set_tiling+0x1db/0x650 [i915] [ 143.207236] i915_gem_set_tiling_ioctl+0x1d3/0x3b0 [i915] [ 143.207377] ? i915_gem_set_tiling_ioctl+0x5/0x3b0 [i915] [ 143.207457] drm_ioctl+0x36c/0x670 [ 143.207535] ? debug_lockdep_rcu_enabled.part.0+0x1a/0x30 [ 143.207730] ? i915_gem_object_set_tiling+0x650/0x650 [i915] [ 143.207793] ? drm_getunique+0x120/0x120 [ 143.207875] ? __handle_mm_fault+0x996/0x14a0 [ 143.207939] ? vm_insert_page+0x340/0x340 [ 143.208028] ? up_write+0x28/0x50 [ 143.208086] ? vm_mmap_pgoff+0x160/0x190 [ 143.208163] do_vfs_ioctl+0x12c/0xa60 [ 143.208218] ? debug_lockdep_rcu_enabled+0x35/0x40 [ 143.208267] ? ioctl_preallocate+0x150/0x150 [ 143.208353] ? __do_page_fault+0x36a/0x6e0 [ 143.208400] ? mark_held_locks+0x23/0xc0 [ 143.208479] ? up_read+0x1f/0x40 [ 143.208526] ? entry_SYSCALL_64_fastpath+0x5/0xc6 [ 143.208669] ? __fget_light+0xa7/0xc0 [ 143.208747] SyS_ioctl+0x41/0x70 To prevent the possibility of a deadlock, we defer scheduling the worker until after we have proven that given the current mm, the userptr range does not overlap a GGTT mmaping. If another thread tries to remap the GGTT over the userptr before the worker is scheduled, it will be stopped by its invalidate-range flushing the current work, before the deadlock can occur. v2: Improve discussion of how we end up in the deadlock. v3: Don't forget to mark the userptr as active after a successful gup_fast. Rename overlaps_ggtt to noncontiguous_or_overlaps_ggtt. 
v4: Fix test ordering between invalid GTT mmaping and range completion (Tvrtko) Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed-invalidate-gup Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170308215903.24171-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> |
|
Chris Wilson | d151e9ce98 |
drm/i915/userptr: Only flush the workqueue if required
To avoid waiting for work from other invalidate-range threads where not required, only wait on the userptr cancel workqueue if we have added some work to it. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170307205851.32578-2-chris@chris-wilson.co.uk Reviewed-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> |
|
Chris Wilson | 42953b3c51 |
drm/i915/userptr: Deactivate a failed userptr if the worker reports an EFAULT
If the worker fails, it no longer has pages to release and can be immediately removed from the invalidate-tree. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170307205851.32578-1-chris@chris-wilson.co.uk Reviewed-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> |
|
Ingo Molnar | 6e84f31522 |
sched/headers: Prepare for new header dependencies before moving code to <linux/sched/mm.h>
We are going to split <linux/sched/mm.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files. Create a trivial placeholder <linux/sched/mm.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. The APIs that are going to be moved first are: mm_alloc() __mmdrop() mmdrop() mmdrop_async_fn() mmdrop_async() mmget_not_zero() mmput() mmput_async() get_task_mm() mm_access() mm_release() Include the new header in the files that are going to need it. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org> |
|
Vegard Nossum | 388f793455 |
mm: use mmget_not_zero() helper
We already have the helper, we can convert the rest of the kernel mechanically using: git grep -l 'atomic_inc_not_zero.*mm_users' | xargs sed -i 's/atomic_inc_not_zero(&\(.*\)->mm_users)/mmget_not_zero\(\1\)/' This is needed for a later patch that hooks into the helper, but might be a worthwhile cleanup on its own. Link: http://lkml.kernel.org/r/20161218123229.22952-3-vegard.nossum@oracle.com Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
|
Vegard Nossum | f1f1007644 |
mm: add new mmgrab() helper
Apart from adding the helper function itself, the rest of the kernel is converted mechanically using: git grep -l 'atomic_inc.*mm_count' | xargs sed -i 's/atomic_inc(&\(.*\)->mm_count);/mmgrab\(\1\);/' git grep -l 'atomic_inc.*mm_count' | xargs sed -i 's/atomic_inc(&\(.*\)\.mm_count);/mmgrab\(\&\1\);/' This is needed for a later patch that hooks into the helper, but might be a worthwhile cleanup on its own. (Michal Hocko provided most of the kerneldoc comment.) Link: http://lkml.kernel.org/r/20161218123229.22952-1-vegard.nossum@oracle.com Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
|
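An illustrative use of the new helper (its declaration has lived in `<linux/sched.h>` and later `<linux/sched/mm.h>` depending on kernel version): taking a reference on `mm_count` keeps the `mm_struct` itself alive, paired with `mmdrop()`.

```c
#include <linux/sched/mm.h>

static void keep_mm_struct_alive(struct mm_struct *mm)
{
	mmgrab(mm);	/* was: atomic_inc(&mm->mm_count) */

	/* ... use mm, then release the reference with mmdrop(mm) ... */
}
```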
Daniel Vetter | a402eae64d |
Linux 4.10-rc2
-----BEGIN PGP SIGNATURE----- iQEcBAABAgAGBQJYaYNlAAoJEHm+PkMAQRiGtCUH/18PMUJpHqRKjxL3Yscw+QZC RmGlD/hwBRLUSgiTCfURNGKP4QZv2kQW7BGsGC72oL01lmxozsU72ixUIO+wXzDY K2b0OOKGZZWzFtaVm7Qs+5JhHAEKZcT046mLD8sjJuqkrFAhmNLKdwHjihKBEkm9 J3s2tpdXdN0x/Uyga/GY9khEYIrvLPeBoKSz+JXcQKdC0iq3/+PMpWnN47QCNScr 7azojkJkj/rs2cqVdOi7Wbh6PSqIvPsl8E3qJefpaVJF/IQaU1pFdy5g8kYm4V7T fr6HgIbuN4EQWdN/5cgKrUdpQyV7D8iYx02klk4R8WgfS0QMYoUcsg+XsTd02TI= =OhGe -----END PGP SIGNATURE----- Merge tag 'v4.10-rc2' into drm-intel-next-queued Backmerge Linux 4.10-rc2 to resync with our -fixes cherry-picks. I've done the backmerge directly because Dave is on vacation. Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> |
|
Lorenzo Stoakes | 5b56d49fc3 |
mm: add locked parameter to get_user_pages_remote()
Patch series "mm: unexport __get_user_pages_unlocked()". This patch series continues the cleanup of get_user_pages*() functions taking advantage of the fact we can now pass gup_flags as we please. It firstly adds an additional 'locked' parameter to get_user_pages_remote() to allow for its callers to utilise VM_FAULT_RETRY functionality. This is necessary as the invocation of __get_user_pages_unlocked() in process_vm_rw_single_vec() makes use of this and no other existing higher level function would allow it to do so. Secondly existing callers of __get_user_pages_unlocked() are replaced with the appropriate higher-level replacement - get_user_pages_unlocked() if the current task and memory descriptor are referenced, or get_user_pages_remote() if other task/memory descriptors are referenced (having acquiring mmap_sem.) This patch (of 2): Add a int *locked parameter to get_user_pages_remote() to allow VM_FAULT_RETRY faulting behaviour similar to get_user_pages_[un]locked(). Taking into account the previous adjustments to get_user_pages*() functions allowing for the passing of gup_flags, we are now in a position where __get_user_pages_unlocked() need only be exported for his ability to allow VM_FAULT_RETRY behaviour, this adjustment allows us to subsequently unexport __get_user_pages_unlocked() as well as allowing for future flexibility in the use of get_user_pages_remote(). [sfr@canb.auug.org.au: merge fix for get_user_pages_remote API change] Link: http://lkml.kernel.org/r/20161122210511.024ec341@canb.auug.org.au Link: http://lkml.kernel.org/r/20161027095141.2569-2-lstoakes@gmail.com Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Jan Kara <jack@suse.cz> Cc: Hugh Dickins <hughd@google.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Rik van Riel <riel@redhat.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krcmar <rkrcmar@redhat.com> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
|
Tvrtko Ursulin | 187685cb90 |
drm/i915: Make GEM object alloc/free and stolen created take dev_priv
Where it is more appropriate and also to be consistent with the direction of the driver. v2: Leave out object alloc/free inlining. (Joonas Lahtinen) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> |
|
Tvrtko Ursulin | 0031fb9685 |
drm/i915: Assorted dev_priv cleanups
A small selection of macros which can only accept dev_priv from now on and a resulting trickle of fixups. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: David Weinehall <david.weinehall@linux.intel.com> |
|
Tvrtko Ursulin | 3599a91cc8 |
drm/i915: Allow shrinking of userptr objects once again
Commit |
|
Chris Wilson | 548625ee8f |
drm/i915: Improve lockdep tracking for obj->mm.lock
The shrinker may appear to recurse into obj->mm.lock as the shrinker may be called from a direct reclaim path whilst handling get_pages. We filter out recursing on the same obj->mm.lock by inspecting obj->mm.pages, but we do want to take the lock on a second object in order to reap their pages. lockdep spots the recursion on the same lockclass and needs annotation to avoid a false positive. To keep the two paths distinct, create an enum to indicate which subclass of obj->mm.lock we are using. This removes the false positive and avoids masking real bugs. Suggested-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161101121134.27504-1-chris@chris-wilson.co.uk Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> |
|
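A minimal sketch of the annotation technique, with illustrative enum names: giving the shrinker path its own lockdep subclass via `mutex_lock_nested()` tells lockdep that locking a second object's lock there is not self-recursion.

```c
#include <linux/mutex.h>

enum example_mm_lock_class {
	EXAMPLE_MM_NORMAL = 0,
	EXAMPLE_MM_SHRINKER,
};

static void lock_obj_pages_for_shrinker(struct mutex *obj_mm_lock)
{
	/* distinct subclass: not a recursive acquisition of the same class */
	mutex_lock_nested(obj_mm_lock, EXAMPLE_MM_SHRINKER);
}
```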
Chris Wilson | f0cd518206 |
drm/i915: Use lockless object free
Having moved the locked phase of freeing an object to a separate worker, we can now declare to the core that we only need the unlocked variant of driver->gem_free_object, and can use the simple unreference internally. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-20-chris@chris-wilson.co.uk |
|
Chris Wilson | 1233e2db19 |
drm/i915: Move object backing storage manipulation to its own locking
Break the allocation of the backing storage away from struct_mutex into a per-object lock. This allows parallel page allocation, provided we can do so outside of struct_mutex (i.e. set-domain-ioctl, pwrite, GTT fault), i.e. before execbuf! The increased cost of the atomic counters are hidden behind i915_vma_pin() for the typical case of execbuf, i.e. as the object is typically bound between execbufs, the page_pin_count is static. The cost will be felt around set-domain and pwrite, but offset by the improvement from reduced struct_mutex contention. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-14-chris@chris-wilson.co.uk |
|
Chris Wilson | 03ac84f183 |
drm/i915: Pass around sg_table to get_pages/put_pages backend
The plan is to move obj->pages out from under the struct_mutex into its own per-object lock. We need to prune any assumption of the struct_mutex from the get_pages/put_pages backends, and to make it easier we pass around the sg_table to operate on rather than indirectly via the obj. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-13-chris@chris-wilson.co.uk |
|
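A hedged sketch of the resulting interface shape, with made-up names rather than the real i915 vtable: the backend hands back the `sg_table` it built and receives it again on release, so neither side has to dereference `obj->mm.pages` under `struct_mutex`.

```c
#include <linux/scatterlist.h>

struct example_obj;

struct example_obj_ops {
	struct sg_table *(*get_pages)(struct example_obj *obj);
	void (*put_pages)(struct example_obj *obj, struct sg_table *pages);
};
```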
Chris Wilson | a4f5ea64f0 |
drm/i915: Refactor object page API
The plan is to make obtaining the backing storage for the object avoid struct_mutex (i.e. use its own locking). The first step is to update the API so that normal users only call pin/unpin whilst working on the backing storage. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-12-chris@chris-wilson.co.uk |
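A hedged sketch of the calling convention this refactor aims at, using hypothetical helpers: ordinary users bracket access to the backing store with pin/unpin, leaving get_pages/put_pages as backend details that can later move under their own lock.

```c
struct example_obj;

int example_obj_pin_pages(struct example_obj *obj);	/* hypothetical */
void example_obj_unpin_pages(struct example_obj *obj);	/* hypothetical */

static int use_backing_store(struct example_obj *obj)
{
	int err;

	err = example_obj_pin_pages(obj);	/* acquire + pin the pages */
	if (err)
		return err;

	/* ... operate on the object's pages while pinned ... */

	example_obj_unpin_pages(obj);		/* allow reaping again */
	return 0;
}
```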