This allows drivers to distinguish between different types of vma_nodes.
The readonly flag was unused and is thus removed.
This is a temporary solution until i915 is completely converted to use
TTM for BOs.
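A minimal sketch of the idea; the member name and the two mmap helpers
below are placeholders for illustration, not the exact upstream code:

    /* Tag the node when the BO is set up, e.g. in the driver's init path. */
    obj->base.vma_node.driver_private = obj;    /* assumed member */

    /* Later, in a common mmap/fault path, dispatch on the tag. */
    if (node->driver_private)
        return i915_specific_mmap(node->driver_private, vma);  /* placeholder */
    return ttm_default_mmap(node, vma);                        /* placeholder */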
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Acked-by: Daniel Vetter <daniel@ffwll.ch> #irc
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210610070152.572423-4-thomas.hellstrom@linux.intel.com
After patch "drm: Use the same mmap-range offset and size for GEM and
TTM", applications failed to create BOs in system memory because the DRM
mmap range size decreased to 64GB from the original 1TB. This is not big
enough for applications. Increase the DRM mmap range size back to 1TB.
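For illustration, assuming 4 KiB pages, the two window sizes work out to
the following number of page offsets (the exact constant values vary
between kernel versions):

    /* 64 GiB window: (64ULL << 30) >> PAGE_SHIFT =  16M page offsets */
    /*  1 TiB window: ( 1ULL << 40) >> PAGE_SHIFT = 256M page offsets */
    #define DRM_FILE_PAGE_OFFSET_SIZE   ((1ULL << 40) >> PAGE_SHIFT)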
Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Philip Yang <Philip.Yang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190417221507.933-1-Philip.Yang@amd.com
(cherry picked from commit 96354b5ca4)
GEM defines DRM_FILE_PAGE_OFFSET_{START,SIZE} constants for the
mmap-able range of addresses. TTM can use them as well.
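A rough sketch of the reuse on the TTM side, assuming the constants are
made visible to TTM via a shared header, using the existing VMA-manager
init call:

    drm_vma_offset_manager_init(&bdev->vma_manager,
                                DRM_FILE_PAGE_OFFSET_START,
                                DRM_FILE_PAGE_OFFSET_SIZE);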
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Acked-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
If the user has created a read-only object, they should not be allowed
to circumvent the write protection by using a GGTT mmapping. Deny it.
Also, most machines do not support read-only GGTT PTEs, so again we have
to reject attempted writes. Fortunately, this is known a priori, so we
can at least reject in the call to create the mmap (with a sanity check
in the fault handler).
v2: Check the vma->vm_flags during mmap() to allow readonly access.
v3: Remove VM_MAYWRITE to curtail mprotect()
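A minimal sketch of the resulting checks; the readonly accessor below is a
placeholder, the real i915 code differs:

    if (i915_gem_object_is_readonly(obj)) {     /* placeholder accessor */
        /* Reject an attempt to map the object writable up front ... */
        if (vma->vm_flags & VM_WRITE)
            return -EINVAL;
        /* ... and drop VM_MAYWRITE so mprotect() cannot re-add PROT_WRITE. */
        vma->vm_flags &= ~VM_MAYWRITE;
    }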
Testcase: igt/gem_userptr_blits/readonly_mmap*
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com> #v1
Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180712185315.3288-4-chris@chris-wilson.co.uk
Rather than using "struct file*", use "struct drm_file*" as the VM tag for
BOs. This will pave the way for "struct drm_file*" without any "struct
file*" back-pointer.
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20160901124837.680-3-dh.herrmann@gmail.com
It's racy, and creating mmap offsets is a slowpath anyway, so better to
remove it and avoid drivers doing broken things.
The only user is i915, and it's ok there because everything (well
almost) is protected by dev->struct_mutex in i915-gem.
While at it, add a note to the create_mmap_offset kerneldoc that drivers
must release the offset again. I also noticed that drm_gem_object_release
entirely lacks kerneldoc.
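For reference, a sketch of the expected pairing in a driver (function
names as in drm_gem.c):

    /* Typically done lazily, when user-space first asks for an mmap offset. */
    ret = drm_gem_create_mmap_offset(obj);

    /* Must be balanced in the driver's free path, e.g. from the code that
     * ends up calling drm_gem_object_release(). */
    drm_gem_free_mmap_offset(obj);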
Cc: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1459330852-27668-14-git-send-email-daniel.vetter@ffwll.ch
Compared to wrapping the final kref_put with dev->struct_mutex this
allows us to only acquire the offset manager lock both in the final
cleanup and in the lookup. Which has the upside that no locks leak out
of the core abstractions. But it means that we need to hold a
temporary reference to the object while checking mmap constraints, to
make sure the object doesn't disappear. Extending the critical region
would have worked too, but would result in more leaky locking.
Also, this is the final bit which required dev->struct_mutex in gem
core, now modern drivers can be completely struct_mutex free!
This needs a new drm_vma_offset_exact_lookup_locked and makes both
drm_vma_offset_exact_lookup and drm_vma_offset_lookup unused.
v2: Don't leak object references in failure paths (David).
v3: Add a comment from Chris explaining how the ordering works, with
the slight adjustment that I dropped any mention of struct_mutex since
with this patch it's now immaterial to core gem.
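For reference, a sketch of the resulting lookup ordering (close to what
drm_gem_mmap() ends up doing, simplified):

    drm_vma_offset_lock_lookup(dev->vma_offset_manager);
    node = drm_vma_offset_exact_lookup_locked(dev->vma_offset_manager,
                                              vma->vm_pgoff,
                                              vma_pages(vma));
    if (node) {
        obj = container_of(node, struct drm_gem_object, vma_node);
        /* Temporary reference: skip objects already on their way out. */
        if (!kref_get_unless_zero(&obj->refcount))
            obj = NULL;
    }
    drm_vma_offset_unlock_lookup(dev->vma_offset_manager);

    if (!obj)
        return -EINVAL;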
Cc: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://mid.gmane.org/1444901623-18918-1-git-send-email-daniel.vetter@ffwll.ch
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
This snippet...
* Lock VMA manager for extended lookups. Only *_locked() VMA function calls
* are allowed while holding this lock. All other contexts are blocked from VMA
* until the lock is released via drm_vma_offset_unlock_lookup().
...causes markdown-enabled kernel-doc to barf:
debian/build/build-doc/Documentation/DocBook/gpu.aux.xml:3247: parser error : Opening and ending tag mismatch: emphasis line 3247 and function
*<function><emphasis>locked</function> VMA function calls are allowed while
^
/root/airlied/debian/build/build-doc/Documentation/DocBook/gpu.aux.xml:3249: parser error : Opening and ending tag mismatch: function line 3249 and emphasis
released via <function>drm</emphasis>vma_offset_unlock_lookup</function>.
^
unable to parse /root/airlied/debian/build/build-doc/Documentation/DocBook/gpu.aux.xml
A quick workaround is to replace *_locked() by X_locked().
Cc: Danilo Cesar Lemes de Paula <danilo.cesar@collabora.co.uk>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
[danvet: Just drop the X_ too, the usual style is _unlocked, except
that _ seems to be what annoys markdown.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With dev->anon_inode we have a global address_space ready for operation
right from the beginning. Therefore, there is no need to do a delayed
setup with TTM. Instead, set dev_mapping during initialization in
ttm_bo_device_init() and remove any "if (dev_mapping)" conditions.
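A sketch of the simplification (field names as used by TTM at the time;
the exact init signature varies by kernel version):

    /* At device init, instead of a delayed setup on first open: */
    bdev->dev_mapping = dev->anon_inode->i_mapping;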
Cc: Dave Airlie <airlied@redhat.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Alex Deucher <alexdeucher@gmail.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
The VMA offset manager uses a device-global address-space. Hence, any
user can currently map any offset-node they want. They only need to guess
the right offset. If we wanted per open-file offset spaces, we'd either
need VM_NONLINEAR mappings or multiple "struct address_space" trees. As
neither really scales, we implement access management in the VMA
manager itself.
We use an rb-tree to store open-files for each VMA node. On each mmap
call, GEM, TTM or the drivers must check whether the current user is
allowed to map this file.
We add a separate lock for each node as there is no generic lock available
for the caller to protect the node easily.
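A minimal sketch of that flow, with the helper names added by this patch
(the tag is a struct file * at this point in the series):

    /* When a handle is created for an open file: */
    ret = drm_vma_node_allow(&obj->vma_node, filp);

    /* In the mmap path, before setting up the mapping: */
    if (!drm_vma_node_is_allowed(&obj->vma_node, filp))
        return -EACCES;

    /* When the handle is destroyed or the file is closed: */
    drm_vma_node_revoke(&obj->vma_node, filp);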
As we currently don't know whether an object may be used for mmap(), we
have to do access management for all objects. If it turns out to slow down
handle creation/deletion significantly, we can optimize it in several
ways:
- Most of the time only a single filp is added per BO, so we could use a
static "struct file *main_filp" which is checked/added/removed first
before we fall back to the rbtree+drm_vma_offset_file.
This could even be done locklessly with RCU.
- Let user-space pass a hint whether mmap() should be supported on the
bo and avoid access-management if not.
- .. there are probably more ideas once we have benchmarks ..
v2: add drm_vma_node_verify_access() helper
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Instead of having TTM and GEM users unmap the nodes manually, we provide
a generic wrapper which does the correct thing for all vma-nodes.
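A sketch of the wrapper's use; it unmaps the node's whole pgoff range
from the given address_space:

    drm_vma_node_unmap(&bo->vma_node, bdev->dev_mapping);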
v2: remove bdev->dev_mapping test in ttm_bo_unmap_virtual_unlocked() as
ttm_mem_io_free_vm() does nothing in that case (io_reserved_vm is 0).
v4: Fix docbook comments
v5: use drm_vma_node_size()
Cc: Dave Airlie <airlied@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
If we want to map GPU memory into user-space, we need to linearize the
addresses to not confuse mm-core. Currently, GEM and TTM both implement
their own offset-managers to assign a pgoff to each object for user-space
CPU access. GEM uses a hash-table, TTM uses an rbtree.
This patch provides a unified implementation that can be used to replace
both. TTM allows partial mmaps with a given offset, so we cannot use
hashtables as the start address may not be known at mmap time. Hence, we
use the rbtree-implementation of TTM.
We could easily update drm_mm to use an rbtree instead of a linked list
for its object list and thus drop the rbtree from the vma-manager.
However, this would slow down drm_mm object allocation for all other
use-cases (rbtree insertion) and add another 4-8 bytes to each mm node.
Hence, use the separate tree but allow for later migration.
This is a rewrite of the 2012-proposal by David Airlie <airlied@linux.ie>
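A sketch of the resulting core API; the start/size constants below stand
in for whatever range the caller picks, addresses are page-based and the
offset handed to user-space is byte-based:

    struct drm_vma_offset_manager mgr;

    drm_vma_offset_manager_init(&mgr, DRM_FILE_PAGE_OFFSET_START,
                                DRM_FILE_PAGE_OFFSET_SIZE);

    /* Assign a pgoff range large enough for the object ... */
    ret = drm_vma_offset_add(&mgr, &obj->vma_node,
                             obj->size >> PAGE_SHIFT);

    /* ... tell user-space which byte offset to pass to mmap() ... */
    offset = drm_vma_node_offset_addr(&obj->vma_node);

    /* ... and give the range back when the object is destroyed. */
    drm_vma_offset_remove(&mgr, &obj->vma_node);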
v2:
- fix Docbook integration
- drop drm_mm_node_linked() and use drm_mm_node_allocated()
- remove unjustified likely/unlikely usage (but keep for rbtree paths)
- remove BUG_ON() as drm_mm already does that
- clarify page-based vs. byte-based addresses
- use drm_vma_node_reset() for initialization, too
v4:
- allow external locking via drm_vma_offset_un/lock_lookup()
- add locked lookup helper drm_vma_offset_lookup_locked()
v5:
- fix drm_vma_offset_lookup() to correctly validate range-mismatches
(fix (offset > start + pages))
- fix drm_vma_offset_exact_lookup() to actually do what it says
- remove redundant vm_pages member (add drm_vma_node_size() helper)
- remove unneeded goto
- fix documentation
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>