Render nodes provide an API for userspace to issue non-privileged GPU
commands without any running DRM-Master. This is useful for offscreen
rendering, GPGPU clients, and normal render clients which do not perform
modesetting.
Compared to legacy clients, render clients no longer need any
authentication to perform client ioctls. Instead, user-space controls
render/client access to GPUs via filesystem access-modes on the
render-node. Once a render-node has been opened, a client has full access to
the client/render operations on the GPU. However, no modesetting or ioctls
that affect global state are allowed on render nodes.
To prevent privilege-escalation, drivers must explicitly state that they
support render nodes. They must mark their render-only ioctls as
DRM_RENDER_ALLOW so render clients can use them. Furthermore, they must
support clients without any attached master.
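Roughly, the driver-side opt-in then looks like this (sketch only; the
FOO_*/foo_* names are made up for illustration):

    static const struct drm_ioctl_desc foo_ioctls[] = {
            /* safe for render clients, so marked DRM_RENDER_ALLOW */
            DRM_IOCTL_DEF_DRV(FOO_GEM_CREATE, foo_gem_create_ioctl,
                              DRM_UNLOCKED | DRM_RENDER_ALLOW),
            /* modesetting ioctls deliberately stay unmarked */
    };

    static struct drm_driver foo_driver = {
            .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_RENDER,
            .ioctls = foo_ioctls,
            .num_ioctls = ARRAY_SIZE(foo_ioctls),
    };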
If filesystem access-modes are not enough for fine-grained access control
to render nodes (very unlikely, considering the versatility of FS-ACLs),
you may still fall back to fd-passing from server to client (which allows
arbitrary access-control). However, note that revoking access is
currently impossible and unlikely to get implemented.
Note: Render clients no longer have any associated DRM-Master as they are
supposed to be independent of any server state. DRM core highly depends on
file_priv->master being non-NULL for modesetting/ctx/etc. commands.
Therefore, drivers must be very careful to not require DRM-Master if they
support DRIVER_RENDER.
So far render-nodes are protected by "drm_rnodes". As long as this
module-parameter is not set to 1, a driver will not create render nodes.
This allows us to experiment with the API a bit before we stabilize it.
v2: drop insecure GEM_FLINK to force use of dmabuf
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
... not only when the dma-buf is freshly created. In contrived
examples someone else could have exported/imported the dma-buf already
and handed us the gem object with a flink name. If such an object gets
reexported as a dma_buf we won't have it in the handle cache already,
which breaks the guarantee that for dma-buf imports we always hand
back an existing handle if there is one.
This is exercised by igt/prime_self_import/with_one_bo_two_files
Now if we extend the locked sections just a notch more we can also
plug the racy buf/handle cache setup in handle_to_fd:
If evil userspace races a concurrent gem close against a prime export
operation we can end up tearing down the gem handle before the dma buf
handle cache is set up. When handle_to_fd gets around to adding the
handle to the cache there will be no one left to clean it up,
effectively leaking the bo (and the dma-buf, since the handle cache
holds a ref on the dma-buf):
Thread A                                Thread B

handle_to_fd:
  lookup gem object from handle
  creates new dma_buf
                                        gem_close on the same handle
                                        obj->dma_buf is set, but file priv
                                        buf handle cache has no entry
                                        obj->handle_count drops to 0
  drm_prime_add_buf_handle sets up
  the handle cache
-> We have a dma-buf reference in the handle cache, but since the
handle_count of the gem object already dropped to 0 no one will clean
it up. When closing the drm device fd we'll hit the WARN_ON in
drm_prime_destroy_file_private.
The important change is to extend the critical section of the
filp->prime.lock to cover the gem handle lookup. This serializes with
a concurrent gem handle close.
This leak is exercised by igt/prime_self_import/export-vs-gem_close-race
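For illustration, the resulting shape of drm_gem_prime_handle_to_fd() is
roughly the following (heavily trimmed sketch; error handling and the
actual export/cache setup are elided) - the point is that the gem handle
lookup now happens inside filp->prime.lock:

    int drm_gem_prime_handle_to_fd(struct drm_device *dev,
                                   struct drm_file *file_priv, uint32_t handle,
                                   uint32_t flags, int *prime_fd)
    {
            struct drm_gem_object *obj;
            int ret = 0;

            mutex_lock(&file_priv->prime.lock);
            obj = drm_gem_object_lookup(dev, file_priv, handle);
            if (!obj) {
                    ret = -ENOENT;
                    goto out_unlock;
            }

            /* ... export (or re-use) the dma_buf and add it to the per-file
             * handle cache; a concurrent gem_close on the same handle can no
             * longer sneak in between the lookup and the cache setup ... */

            drm_gem_object_unreference_unlocked(obj);
    out_unlock:
            mutex_unlock(&file_priv->prime.lock);
            return ret;
    }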
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
... and move it to the top of the function to avoid a forward
declaration.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
With the reworked semantics and locking of the obj->dma_buf pointer,
this pointer is always set as long as there's still a gem handle
around and a dma_buf associated with this gem object.
The per file-priv lookup-cache for dma-buf importing is also
unified between foreign and native objects.
Hence we don't need to special-case the cleanup any more and can simply
drop the clause which only runs for foreign objects, i.e. with
obj->import_attach set.
Note that with this change (actually with the previous one to always
set up obj->dma_buf even for foreign objects) it is no longer required
to set obj->import_attach when importing a foreign object. So update
comments accordingly, too.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The export dma-buf cache is semantically similar to an flink name. So
semantically it makes sense to treat it the same and remove the name
(i.e. the dma_buf pointer) and its references when the last gem handle
disappears.
Again we need to be careful, but doubly so: not only could someone
race an export against a gem close ioctl (so we need to recheck
obj->handle_count again when assigning the new name), but multiple
exports can also race against each other. This is prevented by
holding the dev->object_name_lock across the entire section which
touches obj->dma_buf.
With the new scheme we also need to reinstate the obj->dma_buf link at
import time (in case the only reference userspace has held in-between
was through the dma-buf fd and not through any native gem handle). For
simplicity we don't check whether it's a native object but
unconditionally set up that link - with the new scheme of removing the
obj->dma_buf reference when the last handle disappears we can do that.
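In the import path that amounts to roughly the following (sketch, not the
exact diff), executed with dev->object_name_lock held:

    /* re-establish the gem object -> dma_buf link on import */
    if (obj->dma_buf) {
            WARN_ON(obj->dma_buf != dma_buf);
    } else {
            obj->dma_buf = dma_buf;
            get_dma_buf(dma_buf);
    }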
To make it clear that this is not just for exported buffers anymore
also rename it from export_dma_buf to dma_buf.
To make sure that no one can race an fd_to_handle or handle_to_fd with
gem_close we use the same tricks as in flink of extending the
dev->object_name_lock critical section. With this change we finally
have a guaranteed 1:1 relationship (at least for native objects)
between gem objects and dma-bufs, even accounting for races (which can
happen since the dma-buf itself holds a reference while in-flight).
This prevents igt/prime_self_import/export-vs-gem_close-race from
Oopsing the kernel. There is still a leak though since the per-file
priv dma-buf/handle cache handling is racy. That will be fixed in a
later patch.
v2: Remove the bogus dma_buf_put from the export_and_register_object
failure path if we've raced with the handle count dropping to 0.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The gem flink name holds a reference onto the object itself, and this
self-reference would prevent an flink'ed object from ever being
freed. To break that loop we remove the flink name when the last
userspace handle disappears, i.e. when obj->handle_count reaches 0.
Now in gem_open we drop the dev->object_name_lock between the flink
name lookup and actually adding the handle. This means a concurrent
gem_close of the last handle could result in the flink name getting
reaped right in between, i.e.
Thread 1                                Thread 2

gem_open                                gem_close

  flink -> obj lookup
                                          handle_count drops to 0
                                          remove flink name
  create_handle
  handle_count++
If someone now flinks this object again, we'll get a new flink name.
We can close this race by removing the lock dropping and making the
entire lookup+handle_create sequence atomic. Unfortunately to still be
able to share the handle_create logic this requires a
handle_create_tail function which drops the lock - we can't hold the
object_name_lock while calling into a driver's ->gem_open callback.
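The gem_open path then looks roughly like this (sketch, error unwinding
trimmed); drm_gem_handle_create_tail() drops dev->object_name_lock itself:

    mutex_lock(&dev->object_name_lock);
    obj = idr_find(&dev->object_name_idr, (int) args->name);
    if (obj) {
            drm_gem_object_reference(obj);
    } else {
            mutex_unlock(&dev->object_name_lock);
            return -ENOENT;
    }

    /* drm_gem_handle_create_tail() unlocks dev->object_name_lock */
    ret = drm_gem_handle_create_tail(file_priv, obj, &handle);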
Note that for flink fixing this race isn't really important, since
racing gem_open against gem_close is clearly a userspace bug. And no
matter how the race ends, we won't leak any references.
But with dma-buf where the userspace dma-buf fd itself is refcounted
this is a valid sequence and hence we should fix it. Therefore this
patch here is just a warm-up exercise (and for consistency between
flink buffer sharing and dma-buf buffer sharing with self-imports).
Also note that this extension of the critical section in gem_open
protected by dev->object_name_lock only works because it's now a
mutex: A spinlock would conflict with the potential memory allocation
in idr_preload().
This is exercised by igt/gem_flink_race/flink_name.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
I want to wrap the creation of a dma-buf from a gem object in it,
so that the obj->export_dma_buf cache can be atomically filled in.
Instead of creating a new mutex just for that variable I've figured
I can reuse the existing dev->object_name_lock, especially since
the new semantics will exactly mirror the flink obj->name already
protected by that lock.
v2: idr_preload/idr_preload_end is now an atomic section, so need to
move the mutex locking outside.
[airlied: fix up conflict with patch to make debugfs use lock]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
No one outside of drm should use this, the official interfaces are
drm_gem_handle_create and drm_gem_handle_delete. The handle refcounting
is purely an implementation detail of gem.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This is the 2nd attempt; I've always been a bit dissatisfied with the
tricky nature of the first one:
http://lists.freedesktop.org/archives/dri-devel/2012-July/025451.html
The issue is that the flink ioctl can race with calling gem_close on
the last gem handle. In that case we'll end up with a zero handle
count, but an flink name (and its corresponding reference), which
results in a neat space leak.
In my first attempt I've solved this by rechecking the handle count.
But fundamentally the issue is that ->handle_count isn't your usual
refcount - it can be resurrected from 0 among other things.
For those special beasts atomic_t often suggests way more ordering than
it actually guarantees. To prevent being tricked by those hairy
semantics, take the easy way out and simply protect the handle count
with the existing dev->object_name_lock.
With that change implemented it's dead easy to fix the flink vs. gem
close race: When we try to create the name we simply have to check
whether there's still officially a gem handle around and if not refuse
to create the flink name. Since the handle count decrement and flink
name destruction are now also protected by that lock, the race is gone
and we can't ever leak the flink reference again.
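The heart of the flink ioctl then becomes roughly this (sketch; the
surrounding handle lookup and error unwinding are trimmed):

    mutex_lock(&dev->object_name_lock);
    idr_preload(GFP_KERNEL);
    /* refuse to flink if a concurrent gem_close already dropped the
     * last handle */
    if (obj->handle_count == 0) {
            ret = -ENOENT;
            goto err;
    }

    if (!obj->name) {
            ret = idr_alloc(&dev->object_name_idr, obj, 1, 0, GFP_NOWAIT);
            if (ret < 0)
                    goto err;
            obj->name = ret;
            /* the flink name keeps its own reference, dropped again when
             * obj->handle_count hits 0 */
            drm_gem_object_reference(obj);
    }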
Outside of the drm core only the exynos driver looks at the handle
count, and tbh I have no idea why (it's just for debug dmesg output
luckily).
I've considered inlining the drm_gem_object_handle_free, but I plan to
add more name-like things (like the exported dma_buf) to this scheme,
so it's clearer to leave the handle freeing in its own function.
This is exercised by the new gem_flink_race i-g-t testcase, which on
my snb leaks gem objects at a rate of roughly 1k objects/s.
v2: Fix up the error path handling in handle_create and make it more
robust by simply calling object_handle_unreference.
v3: Fix up the handle_unreference logic bug - atomic_dec_and_test
returns 1 for 0. Oops.
v4: Squash in inlining of drm_gem_object_handle_reference as suggested
by Dave Airlie and add a note that we now have a testcase.
Cc: Dave Airlie <airlied@gmail.com>
Cc: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It's only used in drm_platform.c.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
A few prototypes have been left in the headers, their function friends
long gone.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
So almost two years ago I've tried to nuke the procfs code already
once before:
http://lists.freedesktop.org/archives/dri-devel/2011-October/015707.html
The conclusion was that userspace drivers (specifically libdrm device
node detection) stopped relying on procfs in 2001. But after some
digging it turned out that the drmstat tool in libdrm is still using
those files (but only when certain options are set). So we've decided
to keep procfs.
But when I started to dig around again into what exactly this tool
does I noticed that it tries to read the "mem", "vm", and "vma"
files from procfs. Now as far as my git history digging shows, "mem" never
did anything useful (at least in the version that first showed up in
upstream history in 2004) and the file was removed in
commit 955b12def4
Author: Ben Gamari <bgamari@gmail.com>
Date: Tue Feb 17 20:08:49 2009 -0500
drm: Convert proc files to seq_file and introduce debugfs
Which means that for over 4 years drmstat has been broken, and no one
cared. In my opinion that's proof enough that no one is actually using
drmstat, and so we can safely nuke the procfs support from drm.
While at it fix up the error case cleanup for debugfs in drm_get_minor.
v2: Fix dates, libdrm stopped relying on procfs for drm node detection
in 2001.
v3: fixup compilation warning for !CONFIG_DEBUG_FS, reported by
Fengguang Wu.
Cc: kbuild test robot <fengguang.wu@intel.com>
Cc: Dave Airlie <airlied@linux.ie>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We might as well have a real ioctl function which checks for the
callbacks. This seems to be a remnant from back in the days when each
drm driver had their own complete ioctl table, with no shared core
drm table at all.
To make really sure that no misguided user in a kms driver pops up
again, explicitly check for that in the new ioctl implementation.
v2: Drop the unused variable I've accidentally left in the code,
spotted by David Herrmann.
Cc: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The new arch_phys_wc_add/del functions do the right thing both with
and without MTRR support in the kernel. So we can drop these
additional checks.
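The replacement pattern is roughly the following (sketch; the foo_*
wrappers are made up, the point is just that no feature check is needed
around either call):

    static void foo_map_enable_wc(struct drm_local_map *map)
    {
            /* does the right thing with and without MTRR support */
            map->mtrr = arch_phys_wc_add(map->offset, map->size);
    }

    static void foo_map_disable_wc(struct drm_local_map *map)
    {
            arch_phys_wc_del(map->mtrr);
    }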
David Herrmann suggested to also kill the DRIVER_USE_MTRR flag since
it's now unused, which spurred me to do a somewhat better audit of the
affected drivers. David helped a lot with that. Quoting our mail
discussion:
On Wed, Jul 10, 2013 at 5:41 PM, David Herrmann <dh.herrmann@gmail.com> wrote:
> On Wed, Jul 10, 2013 at 5:22 PM, Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
>> On Wed, Jul 10, 2013 at 3:51 PM, David Herrmann <dh.herrmann@gmail.com> wrote:
>>>> -#if __OS_HAS_MTRR
>>>> -static inline int drm_core_has_MTRR(struct drm_device *dev)
>>>> -{
>>>> - return drm_core_check_feature(dev, DRIVER_USE_MTRR);
>>>> -}
>>>> -#else
>>>> -#define drm_core_has_MTRR(dev) (0)
>>>> -#endif
>>>> -
>>>
>>> That was the last user of DRIVER_USE_MTRR (apart from drivers setting
>>> it in .driver_features). Any reason to keep it around?
>>
>> Yeah, I guess we could rip things out. Which will also force me to
>> properly audit drivers for the eventual behaviour change this could
>> entail (in case there's an x86 driver which did not ask for an mtrr,
>> but iirc there isn't).
>
> david@david-mb ~/dev/kernel/linux $ for i in drivers/gpu/drm/* ; do if
> test -d "$i" ; then if ! grep -q USE_MTRR -r $i ; then echo $i ; fi ;
> fi ; done
> drivers/gpu/drm/exynos
> drivers/gpu/drm/gma500
> drivers/gpu/drm/i2c
> drivers/gpu/drm/nouveau
> drivers/gpu/drm/omapdrm
> drivers/gpu/drm/qxl
> drivers/gpu/drm/rcar-du
> drivers/gpu/drm/shmobile
> drivers/gpu/drm/tilcdc
> drivers/gpu/drm/ttm
> drivers/gpu/drm/udl
> drivers/gpu/drm/vmwgfx
> david@david-mb ~/dev/kernel/linux $
>
> So for x86 gma500,nouveau,qxl,udl,vmwgfx don't set DRIVER_USE_MTRR.
> But I cannot tell whether they break if we call arch_phys_wc_add/del,
> anyway. At least nouveau seemed to work here, but it doesn't use AGP
> or drm_bufs, I guess.
Cool, thanks a lot for stitching together the list of drivers to look
at. So for real KMS drivers it's the driver's responsibility to add an
mtrr if it needs one. nouveau, radeon, mgag200, i915 and vmwgfx do that
already. Somehow the savage driver also ends up doing that; I have no
idea why.
Note that gma500 as a pure KMS driver doesn't need MTRR setup since
the platforms that it supports all support PAT. So no MTRRs needed to
get wc iomappings.
The mtrr support in the drm core is all for legacy mappings of garts,
framebuffers and registers. All legacy drivers set the USE_MTRR flag,
so we're good there.
All in all I think we can really just ditch this
/endquote
v2: Also kill DRIVER_USE_MTRR as suggested by David Herrmann
v3: Rebase on top of David Herrmann's agp setup/cleanup changes.
Cc: David Herrmann <dh.herrmann@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Andy Lutomirski <luto@amacapital.net>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We have three callers of this function now and it's neither
performance critical nor really small. So an inline function feels
like overkill and unnecessarily separates the different parts of the
code.
Since all callers of drm_gem_object_handle_free are now in drm_gem.c
we can make that static (and remove the unused EXPORT_SYMBOL). To
avoid a forward declaration move it (and drm_gem_object_free_bug) up a
bit.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Lifetime rules seem to be solid around ->import_attach. So this patch
just properly documents them.
Note that pointing directly at the attachment might have issues for
devices that have multiple struct device *dev parts constituting the
logical gpu and so might need multiple attachment points. Similarly
for drm devices which don't need a dma attachment at all (like udl).
But fixing that up is material for different patches.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Note that this is slightly tricky since both drivers store their
native objects in dma_buf->priv. But both also embed the base
drm_gem_object at the first position, so the implicit cast is ok.
To use the release helper we need to export it, too.
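A sketch of the layout assumption (struct and field names illustrative):

    struct foo_gem_object {
            struct drm_gem_object base;     /* must stay the first member */
            /* driver-private state follows */
    };

    /* dma_buf->priv stores the driver object, but since 'base' sits at
     * offset 0 the generic release helper can treat it as a plain gem
     * object: */
    struct drm_gem_object *obj = dma_buf->priv;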
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Intel Graphics Development <intel-gfx@lists.freedesktop.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Basically just extracting some code duplicated in gma500, omapdrm, udl,
and upcoming msm driver.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Variant of drm_gem_create_mmap_offset() which doesn't make the
assumption that virtual size and physical size (obj->size) are the same.
This is needed in omapdrm to deal with tiled buffers. And lets us get
rid of a duplicated and slightly modified version of
drm_gem_create_mmap_offset() in omapdrm.
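With the new variant in place the old helper reduces to a thin wrapper,
roughly:

    int drm_gem_create_mmap_offset(struct drm_gem_object *obj)
    {
            /* common case: virtual size equals the physical size */
            return drm_gem_create_mmap_offset_size(obj, obj->size);
    }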
Signed-off-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The gma500 driver somehow set the DRIVER_IRQ_VBL flag, but since
there's no code at all to check for this we can kill it. The other two
are completely unused.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
So I've stumbled over drm_fasync and wondered what it does. Digging
that up is quite a story.
First I've had to read up on what this does and ended up being rather
bewildered why people loved signals so much back in the day that
they created SIGIO just for that ...
Then I wondered how this ever works, and what that strange "No-op."
comment right above it should mean. After all calling the core fasync
helper is pretty obviously not a noop. After reading through the
kernel's FASYNC implementation I noticed that signals are only sent
out to the processes attached with FASYNC by calling kill_fasync.
No merged drm driver has ever done that.
After more digging I've found out that the only driver that ever used
this is the so-called GAMMA driver. I've frankly never heard of such a
GPU brand before. Now FASYNC seems to not have been the only bad
thing with that driver, since Dave Airlie removed it from the drm
driver with prejudice:
commit 1430163b4bbf7b00367ea1066c1c5fe85dbeefed
Author: Dave Airlie <airlied@linux.ie>
Date: Sun Aug 29 12:04:35 2004 +0000
Drop GAMMA DRM from a great height ...
Long story short, the drm fasync support seems to be doing absolutely
nothing. And the only user of it was never merged into the upstream
kernel. And we don't need any fops->fasync callback since the fcntl
implementation in the kernel already implements the noop case
correctly.
So stop this particular cargo-cult and rip it all out.
v2: Kill drm_fasync assignments in rcar (newly added) and imx drivers
(somehow I've missed that one in staging). Also drop the reference in
the drm DocBook. ARM compile-fail reported by Rob Clark.
v3: Move the removal of dev->buf_async assignment in drm_setup to this
patch here.
v4: Actually git add ... tsk.
Cc: Dave Airlie <airlied@linux.ie>
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Rob Clark <robdclark@gmail.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
So after a lot of digging around in git histories it looks like this
has only ever been used by dri1 render clients. Hence we can fully
disable the entire thing for modesetting drivers and so greatly reduce
the attack surface for potential exploits (or at least tools like
trinity ...).
Also add the drm_legacy prefix for functions which are called from
common code. To further reduce the impact on common code also extract
all the ctx release handling into a function (instead of only
releasing individual handles) and make ctxbitmap_cleanup return void -
it can never fail.
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
And hide the checks a bit better. This was already disallowed for
modesetting drivers, so no functional change here.
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
I've decided that some clear markers for what's legacy dri1/non-gem
code is useful. I've opted to use the drm_legacy prefix and then hide
all the checks in that function for better readability in the common
code.
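The pattern, sketched with an illustrative function name - common code
calls the drm_legacy_* entry point unconditionally and the modesetting
check lives inside it:

    void drm_legacy_foo_release(struct drm_device *dev, struct drm_file *file)
    {
            if (drm_core_check_feature(dev, DRIVER_MODESET))
                    return;

            /* dri1-only cleanup goes here */
    }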
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Totally unused, so just rip it out. Anyway, we want drivers to be
fully backwards compatible; allowing them to change behaviour is just
a recipe for them to break badly.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We currently rely on gcc dead-code elimination so the drm_agp_* helpers
are not called if drm_core_has_AGP() is false. That's ugly as hell so
provide "static inline" dummies for the case that AGP is disabled.
Fixes a build-regression introduced by:
commit 28ec711cd4
Author: David Herrmann <dh.herrmann@gmail.com>
Date: Sat Jul 27 16:37:00 2013 +0200
drm/agp: move AGP cleanup paths to drm_agpsupport.c
v2: switch #ifdef -> #if (spotted by Stephen)
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Tested-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
Introduce two new helpers, drm_agp_clear() and drm_agp_destroy() which
clear all AGP mappings and destroy the AGP head. This allows us to reduce the
AGP code in core DRM and move it all to drm_agpsupport.c.
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Because there is no reason for it not to be const.
v1: original
v2: fix compile break in vmwgfx, and couple related cleanups suggested
by Ville Syrjälä
Signed-off-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We can apply the same optimisation tricks as kref_put_mutex() in our
local equivalent function. However, we have a different locking semantic
(we unlock ourselves, in kref_put_mutex() the callee unlocks) so that we
can use the same callbacks for both locked and unlocked kref_put()s and
so can not simply convert to using kref_put_mutex() directly.
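The resulting helper looks approximately like this (a sketch of the
trick, assuming struct kref is still backed by a plain atomic_t): the
atomic_add_unless() fast path skips the mutex entirely unless the count
might drop to zero, mirroring kref_put_mutex(), but the unlock stays on
our side so drm_gem_object_free() can remain the shared release callback:

    static inline void
    drm_gem_object_unreference_unlocked(struct drm_gem_object *obj)
    {
            /* fast path: only take the lock if we might be the last ref */
            if (obj && !atomic_add_unless(&obj->refcount.refcount, -1, 1)) {
                    struct drm_device *dev = obj->dev;

                    mutex_lock(&dev->struct_mutex);
                    if (likely(atomic_dec_and_test(&obj->refcount.refcount)))
                            drm_gem_object_free(&obj->refcount);
                    mutex_unlock(&dev->struct_mutex);
            }
    }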
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
All the gem based kms drivers really want the same function to
destroy a dumb framebuffer backing storage object.
So give it to them and roll it out in all drivers.
This still leaves the option open for kms drivers which don't use GEM
for backing storage, but it does decently simplify matters for gem
drivers.
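The helper itself is tiny, roughly:

    int drm_gem_dumb_destroy(struct drm_file *file, struct drm_device *dev,
                             uint32_t handle)
    {
            return drm_gem_handle_delete(file, handle);
    }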
Acked-by: Inki Dae <inki.dae@samsung.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Intel Graphics Development <intel-gfx@lists.freedesktop.org>
Cc: Ben Skeggs <skeggsb@gmail.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Cc: Alex Deucher <alexdeucher@gmail.com>
Acked-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Use the new vma manager instead of the old hashtable. Also convert all
drivers to use the new convenience helpers. This drops all the
(map_list.hash.key << PAGE_SHIFT) nonsense.
Locking and access-management is exactly the same as before with an
additional lock inside of the vma-manager, which strictly wouldn't be
needed for gem.
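The driver-visible change boils down to something like this (before/after
sketch of a driver's mmap-offset path):

    /* old: poke at the hashtable entry directly */
    offset = (u64)obj->map_list.hash.key << PAGE_SHIFT;

    /* new: ask the vma offset manager via the convenience helper */
    offset = drm_vma_node_offset_addr(&obj->vma_node);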
v2:
- rebase on drm-next
- init nodes via drm_vma_node_reset() in drm_gem.c
v3:
- fix tegra
v4:
- remove duplicate if (drm_vma_node_has_offset()) checks
- inline now trivial drm_vma_node_offset_addr() calls
v5:
- skip node-reset on gem-init due to kzalloc()
- do not allow mapping gem-objects with offsets (backwards compat)
- remove unnecessary casts
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Thierry Reding <thierry.reding@gmail.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
All users of it are now gone!
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
It's unused, everyone is using the _unlocked variant only.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
drm_gem_object_init() and drm_gem_private_object_init() do exactly the
same (except for shmem alloc) so make the first use the latter to reduce
code duplication.
Also drop the return code from drm_gem_private_object_init(). It seems
unlikely that we will extend it any time soon so no reason to keep it
around. This simplifies code paths in drivers, too.
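Sketch of the resulting drm_gem_object_init() (error handling trimmed):
the shared field setup lives in drm_gem_private_object_init() and only
the shmem backing storage is added on top:

    int drm_gem_object_init(struct drm_device *dev,
                            struct drm_gem_object *obj, size_t size)
    {
            struct file *filp;

            drm_gem_private_object_init(dev, obj, size);

            filp = shmem_file_setup("drm mm object", size, VM_NORESERVE);
            if (IS_ERR(filp))
                    return PTR_ERR(filp);

            obj->filp = filp;
            return 0;
    }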
Last but not least, fix gma500 to call drm_gem_object_release() before
freeing objects that were allocated via drm_gem_private_object_init().
That isn't actually necessary for now, but might be in the future.
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Acked-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
Only ever re-cleared in drm_setup, otherwise completely unused.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
There's no other caller from driver code, so we can fold this in.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
Only ever assigned in the context code for real, with no readers
anywhere. Remove it.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
No one ever waits on this waitqueue, so the wake_up call is wasted.
Remove it all.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
This adds the ability to call the low-level mmap() from the prime helpers.
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
GEM CMA PRIME support from Laurent.
* 'drm/next' of git://linuxtv.org/pinchartl/fbdev:
drm: GEM CMA: Add DRM PRIME support
drm: GEM CMA: Split object mapping into GEM mapping and CMA mapping
drm: GEM CMA: Split object creation into object alloc and DMA memory alloc
drm/omap: Use drm_gem_mmap_obj() to implement dma-buf mmap
drm/gem: Split drm_gem_mmap() into object search and object mapping
The structures and strings involved with various pretty-print functions
aren't meant to be modified, so make them all const. The exception is
drm_connector_enum_list which does get modified in drm_connector_init().
While at it move the drm_get_connector_status_name() prototype from
drmP.h to drm_crtc.h where it belongs.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Prevents buffers from being pinned forever.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The drm_gem_mmap() function first finds the GEM object to be mapped
based on the fake mmap offset and then maps the object. Split the object
mapping code into a standalone drm_gem_mmap_obj() function that can be
used to implement dma-buf mmap() operations.
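A driver's dma-buf mmap op can then be roughly as small as this (sketch;
the callback name is illustrative and driver-specific locking is
omitted):

    static int foo_gem_dmabuf_mmap(struct dma_buf *dma_buf,
                                   struct vm_area_struct *vma)
    {
            struct drm_gem_object *obj = dma_buf->priv;

            /* drm_gem_mmap_obj() takes the already-looked-up object plus the
             * virtual size to map, so no fake-offset lookup is needed here */
            return drm_gem_mmap_obj(obj, obj->size, vma);
    }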
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>