For optimus and powerxpress muxless we really want the GPU
driver deciding when to power the GPU up/down, not userspace.
This adds the ability for a driver to dynamically power the GPU
up/down and removes the switcheroo from controlling it; the
switcheroo also reports the dynamic state to userspace.
It also adds two power domains: one for machines where the power
switch is controlled outside the GPU D3 state, so that the powerdown
ordering is done correctly, and a second for the hdmi audio
device to make sure it can resume for PCI config space accesses.
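Roughly, the driver-side usage looks like the sketch below (illustrative only,
not the actual radeon/nouveau code; it assumes a vga_switcheroo_set_dynamic_switch()
helper and a DRM_SWITCH_POWER_DYNAMIC_OFF state as part of this change):

    #include <linux/pci.h>
    #include <linux/vga_switcheroo.h>
    #include <drm/drmP.h>

    /* Illustrative stand-in for the real switcheroo client callbacks. */
    static const struct vga_switcheroo_client_ops example_switcheroo_ops;

    static void example_register_switcheroo(struct pci_dev *pdev)
    {
            /* driver_power_control = true: the driver, not userspace,
             * decides when the GPU is powered up/down. */
            vga_switcheroo_register_client(pdev, &example_switcheroo_ops, true);
    }

    static int example_runtime_suspend(struct device *dev)
    {
            struct drm_device *drm_dev = dev_get_drvdata(dev);
            struct pci_dev *pdev = to_pci_dev(dev);

            drm_dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
            /* ... driver-specific hw teardown ... */
            pci_save_state(pdev);
            pci_disable_device(pdev);
            pci_set_power_state(pdev, PCI_D3cold);
            drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;

            /* Report the dynamic state so the switcheroo can pass it on
             * to userspace. */
            vga_switcheroo_set_dynamic_switch(pdev, VGA_SWITCHEROO_OFF);
            return 0;
    }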
v1.1: fix build with switcheroo off
v2: add power domain support for radeon and v1 nvidia dsms
v2.1: fix typo in off case
v3: add audio power domain for hdmi audio + misc audio fixes
v4: use PCI_SLOT macro, drop power reference on hdmi audio resume
failure also.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Merge the MSM driver from Rob Clark
* 'drm-next' of git://people.freedesktop.org/~robclark/linux:
drm/msm: add basic hangcheck/recovery mechanism
drm/msm: add a3xx gpu support
drm/msm: add register definitions for gpu
drm/msm: basic KMS driver for snapdragon
drm/msm: add register definitions
GEM does already a good job in tracking access to gem buffers via handles
and drm_vma access management. However, TTM drivers currently do not
verify this during mmap().
TTM provides the verify_access() callback to test this. So fix all drivers
to actually call into gem+vma to verify access instead of always returning
0.
All drivers assume that user-space can only get access to TTM buffers via
GEM handles. So whenever the verify_access() callback is called from
ttm_bo_mmap(), the buffer must have a valid embedded gem object. This is
true for all TTM+GEM drivers, which is also why this patch doesn't touch
pure TTM drivers (i.e., vmwgfx).
v2: Switch to drm_vma_node_verify_access() to correctly return -EACCES if
access was denied.
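For reference, the per-driver change boils down to something like this
sketch (modelled on the radeon callback; the embedded gem object's vma_node
is the thing being checked, field names may differ per driver):

    static int radeon_verify_access(struct ttm_buffer_object *bo,
                                    struct file *filp)
    {
            struct radeon_bo *rbo = container_of(bo, struct radeon_bo, tbo);

            /* Ask the vma manager whether this open-file has been granted
             * access to the gem object backing this TTM bo; returns
             * -EACCES otherwise. */
            return drm_vma_node_verify_access(&rbo->gem_base.vma_node, filp);
    }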
Cc: Dave Airlie <airlied@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We implement automatic vma mmap() access management for all drivers using
gem_mmap. We use the vma manager to add each open-file that creates a
gem-handle to the vma-node of the underlying gem object. Once the handle
is destroyed, we drop the open-file again.
This allows us to use drm_vma_node_is_allowed() on _any_ gem object to see
whether an open-file is granted access. In drm_gem_mmap() we use this to
verify that unprivileged users cannot guess gem offsets and map arbitrary
buffers.
Note that this manages access for _all_ gem users (also TTM+GEM), but the
actual access checks are only done for drm_gem_mmap(). TTM drivers use the
TTM mmap helpers, which need to do that separately.
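The three touch points, in a simplified sketch (error handling trimmed):

    /* 1) When a gem handle is created (drm_gem_handle_create_tail): */
    ret = drm_vma_node_allow(&obj->vma_node, file_priv->filp);

    /* 2) When the handle is destroyed again: */
    drm_vma_node_revoke(&obj->vma_node, file_priv->filp);

    /* 3) In drm_gem_mmap(), after resolving the fake offset to a node: */
    if (!drm_vma_node_is_allowed(node, filp)) {
            ret = -EACCES;
            goto out_unlock;
    }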
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The VMA offset manager uses a device-global address-space. Hence, any
user can currently map any offset-node they want. They only need to guess
the right offset. If we wanted per open-file offset spaces, we'd either
need VM_NONLINEAR mappings or multiple "struct address_space" trees. As
neither really scales, we implement access management in the VMA
manager itself.
We use an rb-tree to store open-files for each VMA node. On each mmap
call, GEM, TTM or the drivers must check whether the current user is
allowed to map this file.
We add a separate lock for each node as there is no generic lock available
for the caller to protect the node easily.
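Conceptually the per-node bookkeeping looks like the sketch below
(simplified; the struct layout and field names approximate the actual
implementation):

    struct drm_vma_offset_file {
            struct rb_node vm_rb;
            struct file *vm_filp;
            unsigned long vm_count;       /* one count per open handle */
    };

    /* Simplified lookup: walk the node's rb-tree of allowed filps under
     * the per-node lock. */
    bool drm_vma_node_is_allowed(struct drm_vma_offset_node *node,
                                 struct file *filp)
    {
            struct drm_vma_offset_file *entry;
            struct rb_node *iter;

            read_lock(&node->vm_lock);
            iter = node->vm_files.rb_node;
            while (iter) {
                    entry = rb_entry(iter, struct drm_vma_offset_file, vm_rb);
                    if (filp == entry->vm_filp)
                            break;
                    iter = (filp > entry->vm_filp) ? iter->rb_right
                                                   : iter->rb_left;
            }
            read_unlock(&node->vm_lock);

            return iter != NULL;
    }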
As we currently don't know whether an object may be used for mmap(), we
have to do access management for all objects. If it turns out to slow down
handle creation/deletion significantly, we can optimize it in several
ways:
- Most times only a single filp is added per bo so we could use a static
"struct file *main_filp" which is checked/added/removed first before we
fall back to the rbtree+drm_vma_offset_file.
This could be even done lockless with rcu.
- Let user-space pass a hint whether mmap() should be supported on the
bo and avoid access-management if not.
- .. there are probably more ideas once we have benchmarks ..
v2: add drm_vma_node_verify_access() helper
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
A basic, no-frills recovery mechanism in case the gpu gets wedged. We
could try to be a bit more fancy and restart the next submit after the
one that got wedged, but for now keep it simple. This is enough to
recover things if, for example, the gpu hangs mid way through a piglit
run.
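The idea in a nutshell is a periodic timer that checks whether the GPU
retired anything since the last tick and, if it looks stuck, schedules
recovery work; a rough sketch with made-up names (not the actual msm
symbols):

    #include <linux/timer.h>
    #include <linux/workqueue.h>
    #include <linux/jiffies.h>

    #define HANGCHECK_PERIOD  msecs_to_jiffies(500)   /* arbitrary */

    struct example_gpu {
            uint32_t hangcheck_fence;     /* fence seen at the last tick */
            uint32_t submitted_fence;     /* last fence handed to the GPU */
            uint32_t (*read_completed_fence)(struct example_gpu *gpu);
            struct timer_list hangcheck_timer;
            struct work_struct recover_work;
            struct workqueue_struct *wq;
    };

    static void hangcheck_handler(unsigned long data)
    {
            struct example_gpu *gpu = (struct example_gpu *)data;
            uint32_t fence = gpu->read_completed_fence(gpu);

            if (fence != gpu->hangcheck_fence) {
                    /* some progress has been made, remember where we are */
                    gpu->hangcheck_fence = fence;
            } else if (fence < gpu->submitted_fence) {
                    /* no progress but work still outstanding => hung */
                    queue_work(gpu->wq, &gpu->recover_work);
            }

            /* re-arm while there is still pending work */
            if (gpu->hangcheck_fence < gpu->submitted_fence)
                    mod_timer(&gpu->hangcheck_timer,
                              round_jiffies_up(jiffies + HANGCHECK_PERIOD));
    }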
Signed-off-by: Rob Clark <robdclark@gmail.com>
Add initial support for a3xx 3d core.
With the hardware that I've seen to date, we can have:
+ zero, one, or two z180 2d cores
+ a3xx or a2xx 3d core, which share a common CP (the firmware
for the CP seems to implement some different PM4 packet types
but the basics of cmdstream submission are the same)
Which means that the eventual complete "class" hierarchy, once
support for all past and present hw is in place, becomes:
+ msm_gpu
+ adreno_gpu
+ a3xx_gpu
+ a2xx_gpu
+ z180_gpu
This commit splits out the parts that will eventually be common
between a2xx/a3xx into adreno_gpu, and the parts that are even
common to z180 into msm_gpu.
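In the code this hierarchy is plain struct embedding, roughly (illustrative;
the real structs obviously carry much more state):

    struct msm_gpu {
            const char *name;
            /* plus ringbuffer, irq, clocks, iommu, ... shared by all cores */
    };

    struct adreno_gpu {
            struct msm_gpu base;          /* common CP/cmdstream handling */
            /* a2xx/a3xx shared state */
    };

    struct a3xx_gpu {
            struct adreno_gpu base;
            /* a3xx-specific state */
    };

    #define to_a3xx_gpu(x) container_of(x, struct a3xx_gpu, base)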
Note that there is no cmdstream validation required. All memory access
from the GPU is via IOMMU/MMU. So as long as you don't map silly things
to the GPU, there isn't much damage that the GPU can do.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Generated from rnndb files in:
https://github.com/freedreno/envytools
Keep this split out as a separate commit to make it easier to review the
actual driver.
Signed-off-by: Rob Clark <robdclark@gmail.com>
The snapdragon chips have multiple different display controllers,
depending on which chip variant/version. (As far as I can tell, current
devices have either MDP3 or MDP4, and upcoming devices have MDSS.) And
then external to the display controller are HDMI, DSI, etc. blocks which
may be shared across devices which have different display controller
blocks.
To more easily add support for different display controller blocks, the
display controller specific bits are split out into a "kms" module,
which provides the kms plane/crtc/encoder objects.
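In practice that split is an ops table the core calls through, along these
lines (purely illustrative; the real msm kms interface differs in the
details):

    #include <linux/irqreturn.h>
    #include <drm/drm_crtc.h>

    struct example_kms;

    /* Display-controller abstraction so mdp4, mdp5/mdss, ... can sit
     * behind one interface (names here are illustrative only). */
    struct example_kms_funcs {
            int  (*hw_init)(struct example_kms *kms);
            irqreturn_t (*irq)(struct example_kms *kms);
            int  (*enable_vblank)(struct example_kms *kms, struct drm_crtc *crtc);
            void (*disable_vblank)(struct example_kms *kms, struct drm_crtc *crtc);
            void (*destroy)(struct example_kms *kms);
    };

    struct example_kms {
            const struct example_kms_funcs *funcs;
            /* the kms implementation constructs the planes/crtcs/encoders */
    };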
The external HDMI, DSI, etc. blocks are part encoder, and part connector
currently. But I think I will pull in the drm_bridge patches from
chromeos tree, and split them into a bridge+connector, with the
registers that need to be set in modeset handled by the bridge. This
would remove the 'msm_connector' base class. But some things need to be
double-checked to make sure I can get the correct ON/OFF sequencing.
This patch adds support for mdp4 crtc (including hw cursor), dtv encoder
(part of MDP4 block), and hdmi.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Generated from rnndb files in:
https://github.com/freedreno/envytools
Keep this split out as a separate commit to make it easier to review the
actual driver.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Here's some gma500 unifying and cleanups for drm-next. There is more stuff in
the pipe for 3.12 but I'd like to get these out of the way first.
* 'gma500-next' of git://github.com/patjak/drm-gma500: (35 commits)
drm/gma500/cdv: Add and hook up chip op for disabling sr
drm/gma500/cdv: Add and hook up chip op for watermarks
drm/gma500: Rename psb_intel_encoder to gma_encoder
drm/gma500: Rename psb_intel_connector to gma_connector
drm/gma500: Rename psb_intel_crtc to gma_crtc
drm/gma500/cdv: Convert to generic set_config()
drm/gma500/psb: Convert to generic set_config()
drm/gma500: Add generic set_config() function
drm/gma500/cdv: Convert to generic save/restore
drm/gma500/psb: Convert to generic save/restore
drm/gma500: Add generic crtc save/restore funcs
drm/gma500: Convert to generic encoder funcs
drm/gma500: Add generic encoder functions
drm/gma500/psb: Convert to generic cursor funcs
drm/gma500/cdv: Convert to generic cursor funcs
drm/gma500: Add generic cursor functions
drm/gma500/psb: Convert to generic crtc->destroy
drm/gma500/mdfld: Use identical generic crtc funcs
drm/gma500/oak: Use identical generic crtc funcs
drm/gma500/psb: Convert to gma_crtc_dpms()
...
... not only when the dma-buf is freshly created. In contrived
examples someone else could have exported/imported the dma-buf already
and handed us the gem object with a flink name. If such an object gets
re-exported as a dma_buf we won't have it in the handle cache already,
which breaks the guarantee that for dma-buf imports we always hand
back an existing handle if there is one.
This is exercised by igt/prime_self_import/with_one_bo_two_files
Now if we extend the locked sections just a notch more we can also
plug the racy buf/handle cache setup in handle_to_fd:
If evil userspace races a concurrent gem close against a prime export
operation we can end up tearing down the gem handle before the dma buf
handle cache is set up. When handle_to_fd gets around to adding the
handle to the cache there will be no one left to clean it up,
effectively leaking the bo (and the dma-buf, since the handle cache
holds a ref on the dma-buf):
Thread A                              Thread B

handle_to_fd:
  lookup gem object from handle
  creates new dma_buf
                                      gem_close on the same handle
                                      obj->dma_buf is set, but file priv buf
                                      handle cache has no entry
                                      obj->handle_count drops to 0
  drm_prime_add_buf_handle sets up the handle cache
-> We have a dma-buf reference in the handle cache, but since the
handle_count of the gem object already dropped to 0, no one will clean
it up. When closing the drm device fd we'll hit the WARN_ON in
drm_prime_destroy_file_private.
The important change is to extend the critical section of the
filp->prime.lock to cover the gem handle lookup. This serializes with
a concurrent gem handle close.
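After the change the export path is shaped roughly like this (heavily
simplified sketch, error paths and the actual export step omitted):

    int drm_gem_prime_handle_to_fd(struct drm_device *dev,
                                   struct drm_file *file_priv, uint32_t handle,
                                   uint32_t flags, int *prime_fd)
    {
            struct drm_gem_object *obj;
            int ret = 0;

            /* Take the per-fd prime lock _before_ the handle lookup so a
             * concurrent gem_close on the same handle can't slip in
             * between the lookup and the handle-cache setup. */
            mutex_lock(&file_priv->prime.lock);
            obj = drm_gem_object_lookup(dev, file_priv, handle);
            if (!obj) {
                    ret = -ENOENT;
                    goto out_unlock;
            }

            /* ... export (or reuse the cached dma_buf) and add it to the
             * per-fd handle cache, still under prime.lock ... */

            drm_gem_object_unreference_unlocked(obj);
    out_unlock:
            mutex_unlock(&file_priv->prime.lock);
            return ret;
    }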
This leak is exercised by igt/prime_self_import/export-vs-gem_close-race
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
... and move it to the top of the function to avoid a forward
declaration.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
With the reworked semantics and locking of the obj->dma_buf pointer,
this pointer is always set as long as there's still a gem handle
around and a dma_buf associated with this gem object.
Also, the per file-priv lookup-cache for dma-buf importing is now
unified between foreign and native objects.
Hence we don't need to special-case the cleanup any more and can simply
drop the clause which only runs for foreign objects, i.e. with
obj->import_attach set.
Note that with this change (actually with the previous one to always
set up obj->dma_buf even for foreign objects) it is no longer required
to set obj->import_attach when importing a foreign object. So update
comments accordingly, too.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The export dma-buf cache is semantically similar to an flink name, so
it makes sense to treat it the same and remove the name
(i.e. the dma_buf pointer) and its references when the last gem handle
disappears.
Again we need to be careful, but doubly so: Not only could someone
race an export with a gem close ioctl (so we need to recheck
obj->handle_count again when assigning the new name), but multiple
exports can also race against each other. This is prevented by
holding the dev->object_name_lock across the entire section which
touches obj->dma_buf.
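The export side then becomes, in a simplified sketch of the
export_and_register_object logic:

    /* Called with dev->object_name_lock held, so this can't race against
     * a concurrent gem_close dropping the last handle. */
    static struct dma_buf *export_and_register_object(struct drm_device *dev,
                                                      struct drm_gem_object *obj,
                                                      uint32_t flags)
    {
            struct dma_buf *dmabuf;

            /* the last gem handle is already gone - refuse to create the
             * dma-buf "name" for a dead object */
            if (obj->handle_count == 0)
                    return ERR_PTR(-ENOENT);

            dmabuf = dev->driver->gem_prime_export(dev, obj, flags);
            if (IS_ERR(dmabuf))
                    return dmabuf;

            /* the obj->dma_buf link holds its own reference, dropped again
             * when handle_count hits 0 */
            obj->dma_buf = dmabuf;
            get_dma_buf(obj->dma_buf);
            return dmabuf;
    }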
With the new scheme we also need to reinstate the obj->dma_buf link at
import time (in case the only reference userspace has held in-between
was through the dma-buf fd and not through any native gem handle). For
simplicity we don't check whether it's a native object but
unconditionally set up that link - with the new scheme of removing the
obj->dma_buf reference when the last handle disappears we can do that.
To make it clear that this is not just for exported buffers anymore
also rename it from export_dma_buf to dma_buf.
To make sure that no one can race a fd_to_handle or handle_to_fd with
gem_close we use the same tricks as in flink of extending the
dev->object_name_lock critical section. With this change we finally
have a guaranteed 1:1 relationship (at least for native objects)
between gem objects and dma-bufs, even accounting for races (which can
happen since the dma-buf itself holds a reference while in-flight).
This prevents igt/prime_self_import/export-vs-gem_close-race from
Oopsing the kernel. There is still a leak though since the per-file
priv dma-buf/handle cache handling is racy. That will be fixed in a
later patch.
v2: Remove the bogus dma_buf_put from the export_and_register_object
failure path if we've raced with the handle count dropping to 0.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The gem flink name holds a reference onto the object itself, and this
self-reference would prevent an flink'ed object from ever being
freed. To break that loop we remove the flink name when the last
userspace handle disappears, i.e. when obj->handle_count reaches 0.
Now in gem_open we drop the dev->object_name_lock between the flink
name lookup and actually adding the handle. This means a concurrent
gem_close of the last handle could result in the flink name getting
reaped right in between, i.e.
Thread 1                     Thread 2

gem_open                     gem_close

flink -> obj lookup
                             handle_count drops to 0
                             remove flink name
create_handle
handle_count++
If someone now flinks this object again, we'll get a new flink name.
We can close this race by removing the lock dropping and making the
entire lookup+handle_create sequence atomic. Unfortunately to still be
able to share the handle_create logic this requires a
handle_create_tail function which drops the lock - we can't hold the
object_name_lock while calling into a driver's ->gem_open callback.
Note that for flink fixing this race isn't really important, since
racing gem_open against gem_close is clearly a userspace bug. And no
matter how the race ends, we won't leak any references.
But with dma-buf where the userspace dma-buf fd itself is refcounted
this is a valid sequence and hence we should fix it. Therefore this
patch here is just a warm-up exercise (and for consistency between
flink buffer sharing and dma-buf buffer sharing with self-imports).
Also note that this extension of the critical section in gem_open
protected by dev->object_name_lock only works because it's now a
mutex: A spinlock would conflict with the potential memory allocation
in idr_preload().
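The open path, in a simplified sketch (feature checks and debug output
omitted):

    int drm_gem_open_ioctl(struct drm_device *dev, void *data,
                           struct drm_file *file_priv)
    {
            struct drm_gem_open *args = data;
            struct drm_gem_object *obj;
            u32 handle;
            int ret;

            mutex_lock(&dev->object_name_lock);
            obj = idr_find(&dev->object_name_idr, (int)args->name);
            if (obj) {
                    drm_gem_object_reference(obj);
            } else {
                    mutex_unlock(&dev->object_name_lock);
                    return -ENOENT;
            }

            /* drops dev->object_name_lock internally; the flink name can't
             * be reaped before the new handle pins the object */
            ret = drm_gem_handle_create_tail(file_priv, obj, &handle);
            drm_gem_object_unreference_unlocked(obj);
            if (ret)
                    return ret;

            args->handle = handle;
            args->size = obj->size;
            return 0;
    }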
This is exercised by igt/gem_flink_race/flink_name.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
I want to wrap the creation of a dma-buf from a gem object in it,
so that the obj->export_dma_buf cache can be atomically filled in.
Instead of creating a new mutex just for that variable I've figured
I can reuse the existing dev->object_name_lock, especially since
the new semantics will exactly mirror the flink obj->name already
protected by that lock.
v2: idr_preload/idr_preload_end is now an atomic section, so we need to
move the mutex locking outside.
[airlied: fix up conflict with patch to make debugfs use lock]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
if (!ret) implies that ret == 0, so no need to clear it again. And
explicitly check for ret == 0 to indicate that we're checking an errno
integer.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
When exporting a gem object as a dma-buf the critical section for the
per-fd prime lock is just the adding (and in case of errors, removing)
of the handle to the per-fd lookup cache.
So restrict the critical section to just that part of the function.
This simplifies later reordering.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Part of the function uses the properly-typed dmabuf variable, the
other an untyped void *buf. Kill the latter.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
No one outside of drm should use this, the official interfaces are
drm_gem_handle_create and drm_gem_handle_delete. The handle refcounting
is purely an implementation detail of gem.
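Driver-side usage keeps going through those two, e.g. a typical
dumb-buffer create path (sketch; the my_driver_* names are hypothetical):

    /* hypothetical allocation helper, stands in for a driver's own one */
    static struct drm_gem_object *my_driver_gem_create(struct drm_device *dev,
                                                       size_t size);

    static int my_driver_dumb_create(struct drm_file *file_priv,
                                     struct drm_device *dev,
                                     struct drm_mode_create_dumb *args)
    {
            struct drm_gem_object *obj;
            u32 handle;
            int ret;

            obj = my_driver_gem_create(dev, args->size);
            if (IS_ERR(obj))
                    return PTR_ERR(obj);

            /* drm_gem_handle_create() is the official interface; the
             * handle now holds its own reference on the object. */
            ret = drm_gem_handle_create(file_priv, obj, &handle);
            drm_gem_object_unreference_unlocked(obj);
            if (ret)
                    return ret;

            args->handle = handle;
            return 0;
    }

    /* ... and teardown goes through drm_gem_handle_delete(file_priv, handle). */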
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
handle_unreference only clears up the obj->name and the reference,
but would leave a dangling handle in the idr. The right thing
to do is to call handle_delete.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This is the 2nd attempt; I've always been a bit dissatisfied with the
tricky nature of the first one:
http://lists.freedesktop.org/archives/dri-devel/2012-July/025451.html
The issue is that the flink ioctl can race with calling gem_close on
the last gem handle. In that case we'll end up with a zero handle
count, but an flink name (and its corresponding reference), which
results in a neat space leak.
In my first attempt I've solved this by rechecking the handle count.
But fundamentally the issue is that ->handle_count isn't your usual
refcount - it can be resurrected from 0 among other things.
For those special beasts atomic_t often suggests way more ordering than
it actually guarantees. To prevent being tricked by those hairy
semantics take the easy way out and simply protect the handle count with the
existing dev->object_name_lock.
With that change implemented it's dead easy to fix the flink vs. gem
close race: When we try to create the name we simply have to check
whether there's still officially a gem handle around and if not refuse
to create the flink name. Since the handle count decrement and flink
name destruction is now also protected by that lock the race is gone
and we can't ever leak the flink reference again.
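In code the name creation then looks roughly like this fragment of
drm_gem_flink_ioctl() (simplified sketch, shown with the lock form used at
the time):

    idr_preload(GFP_KERNEL);
    spin_lock(&dev->object_name_lock);

    /* refuse to create a name if the last gem handle is already gone,
     * otherwise the name's reference would be leaked forever */
    if (!obj->handle_count) {
            ret = -ENOENT;
            goto err;
    }

    if (!obj->name) {
            ret = idr_alloc(&dev->object_name_idr, obj, 1, 0, GFP_NOWAIT);
            if (ret < 0)
                    goto err;
            obj->name = ret;

            /* the flink name holds a reference of its own */
            drm_gem_object_reference(obj);
    }

    args->name = (uint64_t)obj->name;
    ret = 0;

    err:
    spin_unlock(&dev->object_name_lock);
    idr_preload_end();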
Outside of the drm core only the exynos driver looks at the handle
count, and tbh I have no idea why (it's just for debug dmesg output
luckily).
I've considered inlining the drm_gem_object_handle_free, but I plan to
add more name-like things (like the exported dma_buf) to this scheme,
so it's clearer to leave the handle freeing in its own function.
This is exercised by the new gem_flink_race i-g-t testcase, which on
my snb leaks gem objects at a rate of roughly 1k objects/s.
v2: Fix up the error path handling in handle_create and make it more
robust by simply calling object_handle_unreference.
v3: Fix up the handle_unreference logic bug - atomic_dec_and_test
returns 1 for 0. Oops.
v4: Squash in inlining of drm_gem_object_handle_reference as suggested
by Dave Airlie and add a note that we now have a testcase.
Cc: Dave Airlie <airlied@gmail.com>
Cc: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Daniel writes:
New pile of stuff for -next:
- Cleanup of the old crtc helper callbacks, all encoders are now converted
to the i915 modeset infrastructure.
- Massive amount of wm patches from Ville for ilk, snb, ivb, hsw, this is
prep work to eventually get things going for nuclear pageflips where we
need to adjust watermarks on the fly.
- More vm/vma patches from Ben. This refactoring isn't yet fully rolled
out, we miss the execbuf conversion and some of the low-level
bind/unbind support code.
- Convert our hdmi infoframe code to use the new common helper functions
(Damien). This contains some bugfixes for the common infoframe helpers.
- Some cruft removal from Damien.
- Various smaller bits&pieces all over, as usual.
* tag 'drm-intel-next-2013-08-09' of git://people.freedesktop.org/~danvet/drm-intel: (105 commits)
drm/i915: Fix FB WM for HSW
drm/i915: expose HDMI connectors on port C on BYT
drm/i915: fix a limit check in hsw_compute_wm_results()
drm/i915: unbreak i915_gem_object_ggtt_unbind()
drm/i915: Make intel_set_mode() static
drm/i915: Remove intel_modeset_disable()
drm/i915: Make intel_encoder_dpms() static
drm/i915: Make i915_hangcheck_elapsed() static
drm/i915: Fix #endif comment
drm/i915: Remove i915_gem_object_check_coherency()
drm/i915: Remove stale prototypes
drm/i915: List objects allocated from stolen memory in debugfs
drm/i915: Always call intel_update_sprite_watermarks() when disabling a plane
drm/i915: Pass plane and crtc to intel_update_sprite_watermarks
drm/i915: Don't try to disable plane if it's already disabled
drm/i915: Pass crtc to our update/disable_plane hooks
drm/i915: Split plane watermark parameters into a separate struct
drm/i915: Pull some watermarks state into a separate structure
drm/i915: Calculate max watermark levels for ILK+
drm/i915: Rename hsw_lp_wm_result to intel_wm_level
...
It's only used in drm_platform.c.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This function is only used in drm_fb_cma_helper.c.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It's only used in drm_crtc.c.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The last user was removed in
commit 575dc34ee0
Author: Dave Airlie <airlied@redhat.com>
Date: Mon Sep 7 18:43:26 2009 +1000
drm/kms: remove old std mode fallback code.
The new code adds modes in the helper, which makes more sense;
I disliked the non-driver code adding modes.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This was last used by nouveau, replaced by a driver-specific property
in:
commit de69185573
Author: Ben Skeggs <bskeggs@redhat.com>
Date: Mon Oct 17 12:23:41 2011 +1000
drm/nouveau: improve dithering properties, and implement proper auto mode
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We kzalloc this structure, and for real kms devices we should never
lose track of things really.
But ums/legacy drivers rely on the drm core to clean up a bit of cruft
between lastclose and firstopen (i.e. when X is being restarted), so
keep this around. But give it a clear drm_legacy_ prefix and
conditionalize the code on !DRIVER_MODESET.
Cc: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
So almost two years ago I already tried to nuke the procfs code once
before:
http://lists.freedesktop.org/archives/dri-devel/2011-October/015707.html
The conclusion was that userspace drivers (specifically libdrm device
node detection) stopped relying on procfs in 2001. But after some
digging it turned out that the drmstat tool in libdrm is still using
those files (but only when certain options are set). So we've decided
to keep procfs.
But when I started to dig around again into what exactly this tool
does, I noticed that it tries to read the "mem", "vm", and "vma"
files from procfs. Now as far as my git history digging shows, "mem" never
did anything useful (at least in the version that first showed up in
upstream history in 2004) and the file was removed in
commit 955b12def4
Author: Ben Gamari <bgamari@gmail.com>
Date: Tue Feb 17 20:08:49 2009 -0500
drm: Convert proc files to seq_file and introduce debugfs
Which means that for over 4 years drmstat has been broken, and no one
cared. In my opinion that's proof enough that no one is actually using
drmstat, and so we can safely nuke the procfs support from drm.
While at it fix up the error case cleanup for debugfs in drm_get_minor.
v2: Fix dates, libdrm stopped relying on procfs for drm node detection
in 2001.
v3: fixup compilation warning for !CONFIG_DEBUG_FS, reported by
Fengguang Wu.
Cc: kbuild test robot <fengguang.wu@intel.com>
Cc: Dave Airlie <airlied@linux.ie>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It has way too much potential for driver writers to do stupid things
like delayed hw setup because the load sequence is somehow racy (e.g.
the imx driver in staging). So don't call it for modesetting drivers,
which reduces the complexity of the drm core -> driver interface a
notch.
v2: Don't forget to update DocBook.
v3: Go with Laurent's slightly more elaborate proposal for the DocBook
update. Add a few words on top of his diff to elaborate a bit on what
KMS drivers should and shouldn't do in lastclose. There was already a
paragraph present talking about restoring properties, I've simply
extended that one.
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
So if we survey kms drivers there's a bunch of things they commonly do
in ->lastclose
- delayed processing of vga switcheroo requests (i915, nouveau,
radeon)
- force-restoring the fbcon (most)
- resetting a bunch of properties to make fbcon work better (omap)
- disabling all outputs (vmwgfx)
In short besides the semantically important vga switcheroo stuff they
all try very hard to keep fbcon working in case X dies.
But none of them try to not do this at driver unload time, save for
vmwgfx, and digging through logs I couldn't find any reason why
vmwgfx is special.
Since ->firstopen has lots of potential for abuse with kms drivers
(like delaying driver setup to paper over races in the load sequence)
it's imo very much worth it to remove this logic so that we can
stop using the ->firstopen callback for kms drivers.
Also module unloading is rather a debug feature and developers should
know how to restore the display to a sane configuration.
Cc: Jakob Bornecrantz <jakob@vmware.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Currently, both ranges overlap. Fix the limits so both ranges are mutually
exclusive. Also use the occasion to convert whitespaces to tabs.
Signed-off-by: Kristian Høgsberg <krh@bitplanet.net>
(fixed up tabs and adjusted commit-msg accordingly)
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The idr is protected with our spinlock, if we don't hold that nothing
prevents the gem objects from disappearing from under us.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We might as well have a real ioctl function which checks for the
callbacks. This seems to be a remnant from back in the days when each
drm driver had its own complete ioctl table, with no shared core
drm table at all.
To make really sure no misguided user in a kms driver pops up again
explicitly check for that in the new ioctl implementation.
v2: Drop the unused variable I've accidentally left in the code,
spotted by David Herrmann.
Cc: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The new arch_phys_wc_add/del functions do the right thing both with
and without MTRR support in the kernel. So we can drop these
additional checks.
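For reference, the interface usage is simply (sketch):

    #include <linux/io.h>

    static void example_aperture_wc(unsigned long base, unsigned long size)
    {
            /* Returns a handle; on platforms where no MTRR is needed or
             * available this quietly does nothing. */
            int wc_cookie = arch_phys_wc_add(base, size);

            /* ... ioremap_wc() and use the aperture ... */

            arch_phys_wc_del(wc_cookie);
    }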
David Herrmann suggested also killing the DRIVER_USE_MTRR flag since
it's now unused, which spurred me to do a bit of a better audit of the
affected drivers. David helped a lot in that. Quoting our mail
discussion:
On Wed, Jul 10, 2013 at 5:41 PM, David Herrmann <dh.herrmann@gmail.com> wrote:
> On Wed, Jul 10, 2013 at 5:22 PM, Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
>> On Wed, Jul 10, 2013 at 3:51 PM, David Herrmann <dh.herrmann@gmail.com> wrote:
>>>> -#if __OS_HAS_MTRR
>>>> -static inline int drm_core_has_MTRR(struct drm_device *dev)
>>>> -{
>>>> - return drm_core_check_feature(dev, DRIVER_USE_MTRR);
>>>> -}
>>>> -#else
>>>> -#define drm_core_has_MTRR(dev) (0)
>>>> -#endif
>>>> -
>>>
>>> That was the last user of DRIVER_USE_MTRR (apart from drivers setting
>>> it in .driver_features). Any reason to keep it around?
>>
>> Yeah, I guess we could rip things out. Which will also force me to
>> properly audit drivers for the eventual behaviour change this could
>> entail (in case there's an x86 driver which did not ask for an mtrr,
>> but iirc there isn't).
>
> david@david-mb ~/dev/kernel/linux $ for i in drivers/gpu/drm/* ; do if
> test -d "$i" ; then if ! grep -q USE_MTRR -r $i ; then echo $i ; fi ;
> fi ; done
> drivers/gpu/drm/exynos
> drivers/gpu/drm/gma500
> drivers/gpu/drm/i2c
> drivers/gpu/drm/nouveau
> drivers/gpu/drm/omapdrm
> drivers/gpu/drm/qxl
> drivers/gpu/drm/rcar-du
> drivers/gpu/drm/shmobile
> drivers/gpu/drm/tilcdc
> drivers/gpu/drm/ttm
> drivers/gpu/drm/udl
> drivers/gpu/drm/vmwgfx
> david@david-mb ~/dev/kernel/linux $
>
> So for x86 gma500,nouveau,qxl,udl,vmwgfx don't set DRIVER_USE_MTRR.
> But I cannot tell whether they break if we call arch_phys_wc_add/del,
> anyway. At least nouveau seemed to work here, but it doesn't use AGP
> or drm_bufs, I guess.
Cool, thanks a lot for stitching together the list of drivers to look
at. So for real KMS drivers it's the driver's responsibility to add an
mtrr if it needs one. nouveau, radeon, mgag200, i915 and vmwgfx do that
already. Somehow the savage driver also ends up doing that, I have no
idea why.
Note that gma500 as a pure KMS driver doesn't need MTRR setup since
the platforms that it supports all support PAT. So no MTRRs needed to
get wc iomappings.
The mtrr support in the drm core is all for legacy mappings of garts,
framebuffers and registers. All legacy drivers set the USE_MTRR flag,
so we're good there.
All in all I think we can really just ditch this
/endquote
v2: Also kill DRIVER_USE_MTRR as suggested by David Herrmann
v3: Rebase on top of David Herrmann's agp setup/cleanup changes.
Cc: David Herrmann <dh.herrmann@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Andy Lutomirski <luto@amacapital.net>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Trying to drop a reference we don't have is a pretty serious bug.
Trying to paper over it is an even worse offense.
So scream into dmesg with a big WARN in case that ever happens.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Calling this function with a NULL object is simply a bug, so papering
over a NULL object is not a good idea.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We have three callers of this function now and it's neither
performance critical nor really small. So an inline function feels
like overkill and unnecessarily separates the different parts of the
code.
Since all callers of drm_gem_object_handle_free are now in drm_gem.c
we can make that static (and remove the unused EXPORT_SYMBOL). To
avoid a forward declaration move it (and drm_gem_object_free_bug) up a
bit.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
I've checked both implementations (radeon/nouveau) and they both grab
the page array from ttm simply by dereferencing it and then wrapping
it up with drm_prime_pages_to_sg in the callback and mapping it with
dma_map_sg (in the helper).
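For reference, the radeon callback is essentially of this shape (sketch
modelled on it):

    struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
    {
            struct radeon_bo *bo = gem_to_radeon_bo(obj);
            int npages = bo->tbo.num_pages;

            /* just wrap the (independently pinned) ttm page array in an
             * sg table; no gem or ttm object state is touched here */
            return drm_prime_pages_to_sg(bo->tbo.ttm->pages, npages);
    }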
Only the grabbing of the underlying page array is anything we need to
be concerned about, and either those pages are pinned independently,
or we're screwed no matter what.
And indeed, nouveau/radeon pin the backing storage in their
attach/detach functions.
Since I created this patch, cma prime support for dma_buf was added.
drm_gem_cma_prime_get_sg_table only calls kzalloc and then creates & maps
the sg table with dma_get_sgtable. It doesn't touch any gem object
state otherwise. So the cma helpers also look safe.
The only thing we might claim it does is prevent concurrent mapping of
dma_buf attachments. But a) that's not allowed and b) the current code
is racy already since it checks whether the sg mapping exists _before_
grabbing the lock.
So the dev->struct_mutex locking here does absolutely nothing useful,
but only distracts. Remove it.
This should also help Maarten's work to eventually pin the backing
storage more dynamically by preventing locking inversions around
dev->struct_mutex.
v2: Add analysis for recently added cma helper prime code.
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Note that this is slightly tricky since both drivers store their
native objects in dma_buf->priv. But both also embed the base
drm_gem_object at the first position, so the implicit cast is ok.
To use the release helper we need to export it, too.
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Intel Graphics Development <intel-gfx@lists.freedesktop.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
VMA offsets are 64bit so do not cast them to "unsigned int". Also remove
the (now useless) offset-retrieval helper. The VMA manager provides simple
enough helpers.
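i.e. the offset should be handed through with its full width, roughly
(sketch):

    /* the mmap offset is 64bit - don't truncate it on the way to userspace */
    static u64 example_gem_mmap_offset(struct drm_gem_object *obj)
    {
            return drm_vma_node_offset_addr(&obj->vma_node);
    }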
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: "Terje Bergström" <tbergstrom@nvidia.com>
Cc: Arto Merilainen <amerilainen@nvidia.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This makes it so that reloading a module does not cause all the
connector ids to change, which are user-visible and sometimes used
for configuration.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Basically just extracting some code duplicated in gma500, omapdrm, udl,
and the upcoming msm driver.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>